diff --git a/src/App.vue b/src/App.vue index 8e8f814..1cbf62f 100644 --- a/src/App.vue +++ b/src/App.vue @@ -3,7 +3,7 @@ - Primary Symbols + Primary Symbols | tf | diff --git a/src/meta_primary_symbol.json b/src/meta_primary_symbol.json new file mode 100644 index 0000000..e2e029b --- /dev/null +++ b/src/meta_primary_symbol.json @@ -0,0 +1 @@ +[{"name": "tf", "docs": "\nTop-level module of TensorFlow. By convention, we refer to this module as\n`tf` instead of `tensorflow`, following the common practice of importing\nTensorFlow via the command `import tensorflow as tf`.\n\nThe primary function of this module is to import all of the public TensorFlow\ninterfaces into a single place. The interfaces themselves are located in\nsub-modules, as described below.\n\nNote that the file `__init__.py` in the TensorFlow source code tree is actually\nonly a placeholder to enable test cases to run. The TensorFlow build replaces\nthis file with a file generated from [`api_template.__init__.py`](https://www.github.com/tensorflow/tensorflow/blob/master/tensorflow/api_template.__init__.py)\n", "desc": "", "type": "API"}, {"name": "tf.abs", "docs": "Computes the absolute value of a tensor.\n\n Given a tensor of integer or floating-point values, this operation returns a\n tensor of the same type, where each element contains the absolute value of the\n corresponding element in the input.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of type\n `float32` or `float64` that is the absolute value of each element in `x`. For\n a complex number \\\\(a + bj\\\\), its absolute value is computed as\n \\\\(\\sqrt{a^2 + b^2}\\\\).\n\n For example:\n\n >>> # real number\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.abs(x)\n \n\n >>> # complex number\n >>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])\n >>> tf.abs(x)\n \n\n Args:\n x: A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`,\n `int32`, `int64`, `complex64` or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`,\n with absolute values. Note, for `complex64` or `complex128` input, the\n returned `Tensor` will be of type `float32` or `float64`, respectively.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)`", "desc": "Computes the absolute value of a tensor.", "type": "API"}, {"name": "tf.acos", "docs": "Computes acos of x element-wise.\n\n Provided an input tensor, the `tf.math.acos` operation\n returns the inverse cosine of each element of the tensor.\n If `y = tf.math.cos(x)` then, `x = tf.math.acos(y)`.\n\n Input range is `[-1, 1]` and the output has a range of `[0, pi]`.\n\n For example:\n\n >>> x = tf.constant([1.0, -0.5, 3.4, 0.2, 0.0, -2], dtype = tf.float32)\n >>> tf.math.acos(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Computes acos of x element-wise.", "type": "API"}, {"name": "tf.acosh", "docs": "Computes inverse hyperbolic cosine of x element-wise.\n\n Given an input tensor, the function computes inverse hyperbolic cosine of every element.\n Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.\n\n ```python\n x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.add", "docs": "Returns x + y element-wise.\n\n Example usages below.\n\n Add a scalar and a list:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.add(x, y)\n \n\n Note that binary `+` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x + y\n \n\n Add a tensor and a list of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([1, 2, 3, 4, 5])\n >>> tf.add(x, y)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**7 + 1, 2**7 + 2]\n >>> tf.add(x, y)\n \n\n When adding two input values of different shapes, `Add` follows NumPy\n broadcasting rules. The two input array shapes are compared element-wise.\n Starting with the trailing dimensions, the two dimensions either have to be\n equal or one of them needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(1, 2, 1, 3)\n >>> y = np.ones(6).reshape(2, 1, 3, 1)\n >>> tf.add(x, y).shape.as_list()\n [2, 2, 3, 3]\n\n Another example with two arrays of different dimension.\n\n >>> x = np.ones([1, 2, 1, 4])\n >>> y = np.ones([3, 4])\n >>> tf.add(x, y).shape.as_list()\n [1, 2, 3, 4]\n\n The reduction version of this elementwise operation is `tf.math.reduce_sum`\n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: bfloat16, half,\n float32, float64, uint8, int8, int16, int32, int64, complex64, complex128,\n string.\n y: A `tf.Tensor`. Must have the same type as x.\n name: A name for the operation (optional)\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.add_n", "docs": "Adds all input tensors element-wise.\n\n `tf.math.add_n` performs the same operation as `tf.math.accumulate_n`.\n\n This op does not [broadcast](\n https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)\n its inputs. If you need broadcasting, use `tf.math.add` (or the `+` operator)\n instead.\n\n For example:\n\n >>> a = tf.constant([[3, 5], [4, 8]])\n >>> b = tf.constant([[1, 6], [2, 9]])\n >>> tf.math.add_n([a, b, a])\n \n\n Args:\n inputs: A list of `tf.Tensor` or `tf.IndexedSlices` objects, each with the\n same shape and type. `tf.IndexedSlices` objects will be converted into\n dense tensors prior to adding.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Adds all input tensors element-wise.", "type": "API"}, {"name": "tf.AggregationMethod", "docs": "A class listing aggregation methods used to combine gradients.\n\n Computing partial derivatives can require aggregating gradient\n contributions. This class lists the various methods that can\n be used to combine gradients in the graph.\n\n The following aggregation methods are part of the stable API for\n aggregating gradients:\n\n * `ADD_N`: All of the gradient terms are summed as part of one\n operation using the \"AddN\" op (see `tf.add_n`). This\n method has the property that all gradients must be ready and\n buffered separately in memory before any aggregation is performed.\n * `DEFAULT`: The system-chosen default aggregation method.\n\n The following aggregation methods are experimental and may not\n be supported in future releases:\n\n * `EXPERIMENTAL_TREE`: Gradient terms are summed in pairs using\n the \"AddN\" op. This method of summing gradients may reduce\n performance, but it can improve memory utilization because the\n gradients can be released earlier.\n\n ", "desc": "A class listing aggregation methods used to combine gradients.", "type": "API"}, {"name": "tf.argmax", "docs": "Returns the index with the largest value across axes of a tensor.\n\n In case of identity returns the smallest index.\n\n For example:\n\n >>> A = tf.constant([2, 20, 30, 3, 6])\n >>> tf.math.argmax(A) # A[2] is maximum in tensor A\n \n >>> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8],\n ... [14, 45, 23, 5, 27]])\n >>> tf.math.argmax(B, 0)\n \n >>> tf.math.argmax(B, 1)\n \n >>> C = tf.constant([0, 0, 0, 0])\n >>> tf.math.argmax(C) # Returns smallest index in case of ties\n \n\n Args:\n input: A `Tensor`.\n axis: An integer, the axis to reduce across. Default to 0.\n output_type: An optional output dtype (`tf.int32` or `tf.int64`). Defaults\n to `tf.int64`.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the largest value across axes of a tensor.", "type": "API"}, {"name": "tf.argmin", "docs": "Returns the index with the smallest value across axes of a tensor.\n\n Returns the smallest index in case of ties.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`,\n `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`,\n `uint64`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to\n `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n ", "desc": "Returns the index with the smallest value across axes of a tensor.", "type": "API"}, {"name": "tf.argsort", "docs": "Returns the indices of a tensor that give its sorted order along an axis.\n\n >>> values = [1, 10, 26.9, 2.8, 166.32, 62.3]\n >>> sort_order = tf.argsort(values)\n >>> sort_order.numpy()\n array([0, 3, 1, 2, 5, 4], dtype=int32)\n\n For a 1D tensor:\n\n >>> sorted = tf.gather(values, sort_order)\n >>> assert tf.reduce_all(sorted == tf.sort(values))\n\n For higher dimensions, the output has the same shape as\n `values`, but along the given axis, values represent the index of the sorted\n element in that slice of the tensor at the given position.\n\n >>> mat = [[30,20,10],\n ... [20,10,30],\n ... [10,30,20]]\n >>> indices = tf.argsort(mat)\n >>> indices.numpy()\n array([[2, 1, 0],\n [1, 0, 2],\n [0, 2, 1]], dtype=int32)\n\n If `axis=-1` these indices can be used to apply a sort using `tf.gather`:\n\n >>> tf.gather(mat, indices, batch_dims=-1).numpy()\n array([[10, 20, 30],\n [10, 20, 30],\n [10, 20, 30]], dtype=int32)\n\n See also:\n\n * `tf.sort`: Sort along an axis.\n * `tf.math.top_k`: A partial sort that returns a fixed number of top values\n and corresponding indices.\n\n Args:\n values: 1-D or higher **numeric** `Tensor`.\n axis: The axis along which to sort. The default is -1, which sorts the last\n axis.\n direction: The direction in which to sort the values (`'ASCENDING'` or\n `'DESCENDING'`).\n stable: If True, equal elements in the original tensor will not be\n re-ordered in the returned order. Unstable sort is not yet implemented,\n but will eventually be the default for performance reasons. If you require\n a stable order, pass `stable=True` for forwards compatibility.\n name: Optional name for the operation.\n\n Returns:\n An int32 `Tensor` with the same shape as `values`. The indices that would\n sort each slice of the given `values` along the given `axis`.\n\n Raises:\n ValueError: If axis is not a constant scalar, or the direction is invalid.\n tf.errors.InvalidArgumentError: If the `values.dtype` is not a `float` or\n `int` type.\n ", "desc": "Returns the indices of a tensor that give its sorted order along an axis.", "type": "API"}, {"name": "tf.as_dtype", "docs": "Converts the given `type_value` to a `DType`.\n\n Note: `DType` values are interned. When passed a new `DType` object,\n `as_dtype` always returns the interned value.\n\n Args:\n type_value: A value that can be converted to a `tf.DType` object. This may\n currently be a `tf.DType` object, a [`DataType`\n enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),\n a string type name, or a [`numpy.dtype`](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html).\n\n Returns:\n A `DType` corresponding to `type_value`.\n\n Raises:\n TypeError: If `type_value` cannot be converted to a `DType`.\n ", "desc": "Converts the given `type_value` to a `DType`.", "type": "API"}, {"name": "tf.as_string", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.asin", "docs": "Computes the trignometric inverse sine of x element-wise.\n\n The `tf.math.asin` operation returns the inverse of `tf.math.sin`, such that\n if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`.\n\n **Note**: The output of `tf.math.asin` will lie within the invertible range\n of sine, i.e [-pi/2, pi/2].\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.sin(x) # [0.8659266, 0.7068252]\n\n tf.math.asin(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse sine of x element-wise.", "type": "API"}, {"name": "tf.asinh", "docs": "Computes inverse hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic sine\n for every element in the tensor. Both input and output has a range of\n `[-inf, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.Assert", "docs": "Asserts that the given condition is true.\n\nIf `condition` evaluates to false, print the list of tensors in `data`.\n`summarize` determines how many entries of the tensors to print.\n\nArgs:\n condition: The condition to evaluate.\n data: The tensors to print out when condition is false.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional).\n\nReturns:\n assert_op: An `Operation` that, when executed, raises a\n `tf.errors.InvalidArgumentError` if `condition` is not true.\n @compatibility(eager)\n returns None\n @end_compatibility\n\nRaises:\n @compatibility(TF1)\n When in TF V1 mode (that is, outside `tf.function`) Assert needs a control\n dependency on the output to ensure the assertion executes:\n\n```python\n# Ensure maximum element of x is smaller or equal to 1\nassert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])\nwith tf.control_dependencies([assert_op]):\n ... code using x ...\n```\n\n @end_compatibility\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Asserts that the given condition is true.", "type": "API"}, {"name": "tf.assert_equal", "docs": "Assert the condition `x == y` holds element-wise.\n\n This Op checks that `x[i] == y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` and `y` are not equal, `message`, as well as the first `summarize`\n entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x == y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x == y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x == y` holds element-wise.", "type": "API"}, {"name": "tf.assert_greater", "docs": "Assert the condition `x > y` holds element-wise.\n\n This Op checks that `x[i] > y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not greater than `y` element-wise, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is\n raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_greater\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x > y` holds element-wise.", "type": "API"}, {"name": "tf.assert_less", "docs": "Assert the condition `x < y` holds element-wise.\n\n This Op checks that `x[i] < y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not less than `y` element-wise, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is\n raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_less\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < y` is False.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x < y` holds element-wise.", "type": "API"}, {"name": "tf.assert_rank", "docs": "Assert that `x` has rank equal to `rank`.\n\n This Op checks that the rank of `x` is equal to `rank`.\n\n If `x` has a different rank, `message`, as well as the shape of `x` are\n printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: `Tensor`.\n rank: Scalar integer `Tensor`.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to\n \"assert_rank\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x` does not have rank `rank`. The check can be performed immediately\n during eager execution or if the shape of `x` is statically known.\n ", "desc": "Assert that `x` has rank equal to `rank`.", "type": "API"}, {"name": "tf.atan", "docs": "Computes the trignometric inverse tangent of x element-wise.\n\n The `tf.math.atan` operation returns the inverse of `tf.math.tan`, such that\n if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`.\n\n **Note**: The output of `tf.math.atan` will lie within the invertible range\n of tan, i.e (-pi/2, pi/2).\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.tan(x) # [1.731261, 0.99920404]\n\n tf.math.atan(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse tangent of x element-wise.", "type": "API"}, {"name": "tf.atan2", "docs": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.\n\n This is the angle \\\\( \\theta \\in [-\\pi, \\pi] \\\\) such that\n \\\\[ x = r \\cos(\\theta) \\\\]\n and\n \\\\[ y = r \\sin(\\theta) \\\\]\n where \\\\(r = \\sqrt{x^2 + y^2} \\\\).\n\n For example:\n\n >>> x = [1., 1.]\n >>> y = [1., -1.]\n >>> print((tf.math.atan2(y,x) * (180 / np.pi)).numpy())\n [ 45. -45.]\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.", "type": "API"}, {"name": "tf.atanh", "docs": "Computes inverse hyperbolic tangent of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic tangent\n for every element in the tensor. Input range is `[-1,1]` and output range is\n `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the\n input is `1`, output will be `inf`. Values outside the range will have\n `nan` as output.\n\n ```python\n x = tf.constant([-float(\"inf\"), -1, -0.5, 1, 0, 0.5, 10, float(\"inf\")])\n tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic tangent of x element-wise.", "type": "API"}, {"name": "tf.audio", "docs": "Public API for tf.audio namespace.\n", "desc": "Public API for tf.audio namespace.", "type": "API"}, {"name": "tf.audio.decode_wav", "docs": "Decode a 16-bit PCM WAV file to a float tensor.\n\n The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.\n\n When desired_channels is set, if the input contains fewer channels than this\n then the last channel will be duplicated to give the requested number, else if\n the input has more channels than requested then the additional channels will be\n ignored.\n\n If desired_samples is set, then the audio will be cropped or padded with zeroes\n to the requested length.\n\n The first output contains a Tensor with the content of the audio samples. The\n lowest dimension will be the number of channels, and the second will be the\n number of samples. For example, a ten-sample-long stereo WAV file should give an\n output shape of [10, 2].\n\n Args:\n contents: A `Tensor` of type `string`.\n The WAV-encoded audio, usually from a file.\n desired_channels: An optional `int`. Defaults to `-1`.\n Number of sample channels wanted.\n desired_samples: An optional `int`. Defaults to `-1`.\n Length of audio requested.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (audio, sample_rate).\n\n audio: A `Tensor` of type `float32`.\n sample_rate: A `Tensor` of type `int32`.\n ", "desc": "Decode a 16-bit PCM WAV file to a float tensor.", "type": "API"}, {"name": "tf.audio.encode_wav", "docs": "Encode audio data using the WAV file format.\n\n This operation will generate a string suitable to be saved out to create a .wav\n audio file. It will be encoded in the 16-bit PCM format. It takes in float\n values in the range -1.0f to 1.0f, and any outside that value will be clamped to\n that range.\n\n `audio` is a 2-D float Tensor of shape `[length, channels]`.\n `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100).\n\n Args:\n audio: A `Tensor` of type `float32`. 2-D with shape `[length, channels]`.\n sample_rate: A `Tensor` of type `int32`.\n Scalar containing the sample frequency.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode audio data using the WAV file format.", "type": "API"}, {"name": "tf.autodiff", "docs": "Public API for tf.autodiff namespace.\n", "desc": "Public API for tf.autodiff namespace.", "type": "API"}, {"name": "tf.autodiff.ForwardAccumulator", "docs": "Computes Jacobian-vector products (\"JVP\"s) using forward-mode autodiff.\n\n Compare to `tf.GradientTape` which computes vector-Jacobian products (\"VJP\"s)\n using reverse-mode autodiff (backprop). Reverse mode is more attractive when\n computing gradients of a scalar-valued function with respect to many inputs\n (e.g. a neural network with many parameters and a scalar loss). Forward mode\n works best on functions with many outputs and few inputs. Since it does not\n hold on to intermediate activations, it is much more memory efficient than\n backprop where it is applicable.\n\n Consider a simple linear regression:\n\n >>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]])\n >>> targets = tf.constant([[1.], [-1.]])\n >>> dense = tf.keras.layers.Dense(1)\n >>> dense.build([None, 2])\n >>> with tf.autodiff.ForwardAccumulator(\n ... primals=dense.kernel,\n ... tangents=tf.constant([[1.], [0.]])) as acc:\n ... loss = tf.reduce_sum((dense(x) - targets) ** 2.)\n >>> acc.jvp(loss)\n \n\n The example has two variables containing parameters, `dense.kernel` (2\n parameters) and `dense.bias` (1 parameter). Considering the training data `x`\n as a constant, this means the Jacobian matrix for the function mapping from\n parameters to loss has one row and three columns.\n\n With forwardprop, we specify a length-three vector in advance which multiplies\n the Jacobian. The `primals` constructor argument is the parameter (a\n `tf.Tensor` or `tf.Variable`) we're specifying a vector for, and the\n `tangents` argument is the \"vector\" in Jacobian-vector product. If our goal is\n to compute the entire Jacobian matrix, forwardprop computes one column at a\n time while backprop computes one row at a time. Since the Jacobian in the\n linear regression example has only one row, backprop requires fewer\n invocations:\n\n >>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]])\n >>> targets = tf.constant([[1.], [-1.]])\n >>> dense = tf.keras.layers.Dense(1)\n >>> dense.build([None, 2])\n >>> loss_fn = lambda: tf.reduce_sum((dense(x) - targets) ** 2.)\n >>> kernel_fprop = []\n >>> with tf.autodiff.ForwardAccumulator(\n ... dense.kernel, tf.constant([[1.], [0.]])) as acc:\n ... kernel_fprop.append(acc.jvp(loss_fn()))\n >>> with tf.autodiff.ForwardAccumulator(\n ... dense.kernel, tf.constant([[0.], [1.]])) as acc:\n ... kernel_fprop.append(acc.jvp(loss_fn()))\n >>> with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc:\n ... bias_fprop = acc.jvp(loss_fn())\n >>> with tf.GradientTape() as tape:\n ... loss = loss_fn()\n >>> kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias))\n >>> np.testing.assert_allclose(\n ... kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis])\n >>> np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis])\n\n Implicit in the `tape.gradient` call is a length-one vector which\n left-multiplies the Jacobian, a vector-Jacobian product.\n\n `ForwardAccumulator` maintains JVPs corresponding primal tensors it is\n watching, derived from the original `primals` specified in the constructor. As\n soon as a primal tensor is deleted, `ForwardAccumulator` deletes the\n corresponding JVP.\n\n `acc.jvp(x)` retrieves `acc`'s JVP corresponding to the primal tensor `x`. It\n does not perform any computation. `acc.jvp` calls can be repeated as long as\n `acc` is accessible, whether the context manager is active or not. New JVPs\n are only computed while the context manager is active.\n\n Note that `ForwardAccumulator`s are always applied in the order their context\n managers were entered, so inner accumulators will not see JVP computation from\n outer accumulators. Take higher-order JVPs from outer accumulators:\n\n >>> primal = tf.constant(1.1)\n >>> with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer:\n ... with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner:\n ... primal_out = primal ** tf.constant(3.5)\n >>> inner_jvp = inner.jvp(primal_out)\n >>> inner_jvp # 3.5 * 1.1 ** 2.5\n \n >>> outer.jvp(inner_jvp) # 3.5 * 2.5 * 1.1 ** 1.5\n \n\n Reversing the collection in the last line to instead retrieve\n `inner.jvp(outer.jvp(primal_out))` will not work.\n\n Strict nesting also applies to combinations of `ForwardAccumulator` and\n `tf.GradientTape`. More deeply nested `GradientTape` objects will ignore the\n products of outer `ForwardAccumulator` objects. This allows (for example)\n memory-efficient forward-over-backward computation of Hessian-vector products,\n where the inner `GradientTape` would otherwise hold on to all intermediate\n JVPs:\n\n >>> v = tf.Variable([1., 2.])\n >>> with tf.autodiff.ForwardAccumulator(\n ... v,\n ... # The \"vector\" in Hessian-vector product.\n ... tf.constant([1., 0.])) as acc:\n ... with tf.GradientTape() as tape:\n ... y = tf.reduce_sum(v ** 3.)\n ... backward = tape.gradient(y, v)\n >>> backward # gradient from backprop\n \n >>> acc.jvp(backward) # forward-over-backward Hessian-vector product\n \n ", "desc": "Computes Jacobian-vector products (\"JVP\"s) using forward-mode autodiff.", "type": "API"}, {"name": "tf.autodiff.GradientTape", "docs": "Record operations for automatic differentiation.\n\n Operations are recorded if they are executed within this context manager and\n at least one of their inputs is being \"watched\".\n\n Trainable variables (created by `tf.Variable` or `tf.compat.v1.get_variable`,\n where `trainable=True` is default in both cases) are automatically watched.\n Tensors can be manually watched by invoking the `watch` method on this context\n manager.\n\n For example, consider the function `y = x * x`. The gradient at `x = 3.0` can\n be computed as:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = x * x\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n GradientTapes can be nested to compute higher-order derivatives. For example,\n\n >>> x = tf.constant(5.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... with tf.GradientTape() as gg:\n ... gg.watch(x)\n ... y = x * x\n ... dy_dx = gg.gradient(y, x) # dy_dx = 2 * x\n >>> d2y_dx2 = g.gradient(dy_dx, x) # d2y_dx2 = 2\n >>> print(dy_dx)\n tf.Tensor(10.0, shape=(), dtype=float32)\n >>> print(d2y_dx2)\n tf.Tensor(2.0, shape=(), dtype=float32)\n\n By default, the resources held by a GradientTape are released as soon as\n GradientTape.gradient() method is called. To compute multiple gradients over\n the same computation, create a persistent gradient tape. This allows multiple\n calls to the gradient() method as resources are released when the tape object\n is garbage collected. For example:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape(persistent=True) as g:\n ... g.watch(x)\n ... y = x * x\n ... z = y * y\n >>> dz_dx = g.gradient(z, x) # (4*x^3 at x = 3)\n >>> print(dz_dx)\n tf.Tensor(108.0, shape=(), dtype=float32)\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n By default GradientTape will automatically watch any trainable variables that\n are accessed inside the context. If you want fine grained control over which\n variables are watched you can disable automatic tracking by passing\n `watch_accessed_variables=False` to the tape constructor:\n\n >>> x = tf.Variable(2.0)\n >>> w = tf.Variable(5.0)\n >>> with tf.GradientTape(\n ... watch_accessed_variables=False, persistent=True) as tape:\n ... tape.watch(x)\n ... y = x ** 2 # Gradients will be available for `x`.\n ... z = w ** 3 # No gradients will be available as `w` isn't being watched.\n >>> dy_dx = tape.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(4.0, shape=(), dtype=float32)\n >>> # No gradients will be available as `w` isn't being watched.\n >>> dz_dw = tape.gradient(z, w)\n >>> print(dz_dw)\n None\n\n Note that when using models you should ensure that your variables exist when\n using `watch_accessed_variables=False`. Otherwise it's quite easy to make your\n first iteration not have any gradients:\n\n ```python\n a = tf.keras.layers.Dense(32)\n b = tf.keras.layers.Dense(32)\n\n with tf.GradientTape(watch_accessed_variables=False) as tape:\n tape.watch(a.variables) # Since `a.build` has not been called at this point\n # `a.variables` will return an empty list and the\n # tape will not be watching anything.\n result = b(a(inputs))\n tape.gradient(result, a.variables) # The result of this computation will be\n # a list of `None`s since a's variables\n # are not being watched.\n ```\n\n Note that only tensors with real or complex dtypes are differentiable.\n ", "desc": "Record operations for automatic differentiation.", "type": "API"}, {"name": "tf.autograph", "docs": "Conversion of eager-style Python into TensorFlow graph code.\n\nNOTE: In TensorFlow 2.0, AutoGraph is automatically applied when using\n`tf.function`. This module contains lower-level APIs for advanced use.\n\nAutoGraph transforms a subset of Python which operates on TensorFlow objects\ninto equivalent TensorFlow graph code. When executing the graph, it has the same\neffect as if you ran the original code in eager mode.\nPython code which doesn't operate on TensorFlow objects remains functionally\nunchanged, but keep in mind that `tf.function` only executes such code at trace\ntime, and generally will not be consistent with eager execution.\n\nFor more information, see the\n[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md),\nand the [tf.function guide](https://www.tensorflow.org/guide/function#autograph_transformations).\n\n", "desc": "Conversion of eager-style Python into TensorFlow graph code.", "type": "API"}, {"name": "tf.autograph.experimental", "docs": "Public API for tf.autograph.experimental namespace.\n", "desc": "Public API for tf.autograph.experimental namespace.", "type": "API"}, {"name": "tf.autograph.experimental.do_not_convert", "docs": "Decorator that suppresses the conversion of a function.\n\n Args:\n func: function to decorate.\n\n Returns:\n If `func` is not None, returns a `Callable` which is equivalent to\n `func`, but is not converted by AutoGraph.\n If `func` is None, returns a decorator that, when invoked with a\n single `func` argument, returns a `Callable` equivalent to the\n above case.\n ", "desc": "Decorator that suppresses the conversion of a function.", "type": "API"}, {"name": "tf.autograph.experimental.Feature", "docs": "This enumeration represents optional conversion options.\n\n These conversion options are experimental. They are subject to change without\n notice and offer no guarantees.\n\n _Example Usage_\n\n ```python\n optionals= tf.autograph.experimental.Feature.EQUALITY_OPERATORS\n @tf.function(experimental_autograph_options=optionals)\n def f(i):\n if i == 0: # EQUALITY_OPERATORS allows the use of == here.\n tf.print('i is zero')\n ```\n\n Attributes:\n ALL: Enable all features.\n AUTO_CONTROL_DEPS: Insert of control dependencies in the generated code.\n ASSERT_STATEMENTS: Convert Tensor-dependent assert statements to tf.Assert.\n BUILTIN_FUNCTIONS: Convert builtin functions applied to Tensors to\n their TF counterparts.\n EQUALITY_OPERATORS: Whether to convert the comparison operators, like\n equality. This is soon to be deprecated as support is being added to the\n Tensor class.\n LISTS: Convert list idioms, like initializers, slices, append, etc.\n NAME_SCOPES: Insert name scopes that name ops according to context, like the\n function they were defined in.\n ", "desc": "This enumeration represents optional conversion options.", "type": "API"}, {"name": "tf.autograph.experimental.set_loop_options", "docs": "Specifies additional arguments to be passed to the enclosing while_loop.\n\n The parameters apply to and only to the immediately enclosing loop. It only\n has effect if the loop is staged as a TF while_loop; otherwise the parameters\n have no effect.\n\n Usage:\n\n >>> @tf.function(autograph=True)\n ... def f():\n ... n = 0\n ... for i in tf.range(10):\n ... tf.autograph.experimental.set_loop_options(maximum_iterations=3)\n ... n += 1\n ... return n\n\n >>> @tf.function(autograph=True)\n ... def f():\n ... v = tf.constant((0,))\n ... for i in tf.range(3):\n ... tf.autograph.experimental.set_loop_options(\n ... shape_invariants=[(v, tf.TensorShape([None]))]\n ... )\n ... v = tf.concat((v, [i]), 0)\n ... return v\n\n Also see tf.while_loop.\n\n Args:\n parallel_iterations: The maximum number of iterations allowed to run in\n parallel at any given time. Note that this does not guarantee parallel\n execution.\n swap_memory: Whether to store intermediate values needed for\n gradients on the CPU instead of GPU.\n maximum_iterations: Allows limiting the total number of iterations executed\n by the loop.\n shape_invariants: Allows controlling the argument with the same name passed\n to tf.while_loop. Unlike tf.while_loop, this is a list of\n `(tensor, shape)` pairs.\n ", "desc": "Specifies additional arguments to be passed to the enclosing while_loop.", "type": "API"}, {"name": "tf.autograph.set_verbosity", "docs": "Sets the AutoGraph verbosity level.\n\n _Debug logging in AutoGraph_\n\n More verbose logging is useful to enable when filing bug reports or doing\n more in-depth debugging.\n\n There are two means to control the logging verbosity:\n\n * The `set_verbosity` function\n\n * The `AUTOGRAPH_VERBOSITY` environment variable\n\n `set_verbosity` takes precedence over the environment variable.\n\n For example:\n\n ```python\n import os\n import tensorflow as tf\n\n os.environ['AUTOGRAPH_VERBOSITY'] = '5'\n # Verbosity is now 5\n\n tf.autograph.set_verbosity(0)\n # Verbosity is now 0\n\n os.environ['AUTOGRAPH_VERBOSITY'] = '1'\n # No effect, because set_verbosity was already called.\n ```\n\n Logs entries are output to [absl](https://abseil.io)'s\n [default output](https://abseil.io/docs/python/guides/logging),\n with `INFO` level.\n Logs can be mirrored to stdout by using the `alsologtostdout` argument.\n Mirroring is enabled by default when Python runs in interactive mode.\n\n Args:\n level: int, the verbosity level; larger values specify increased verbosity;\n 0 means no logging. When reporting bugs, it is recommended to set this\n value to a larger number, like 10.\n alsologtostdout: bool, whether to also output log messages to `sys.stdout`.\n ", "desc": "Sets the AutoGraph verbosity level.", "type": "API"}, {"name": "tf.autograph.to_code", "docs": "Returns the source code generated by AutoGraph, as a string.\n\n Example usage:\n\n >>> def f(x):\n ... if x < 0:\n ... x = -x\n ... return x\n >>> tf.autograph.to_code(f)\n \"...def tf__f(x):...\"\n\n Also see: `tf.autograph.to_graph`.\n\n Note: If a function has been decorated with `tf.function`, pass its\n underlying Python function, rather than the callable that `tf.function\n creates:\n\n >>> @tf.function\n ... def f(x):\n ... if x < 0:\n ... x = -x\n ... return x\n >>> tf.autograph.to_code(f.python_function)\n \"...def tf__f(x):...\"\n\n Args:\n entity: Python callable or class to convert.\n recursive: Whether to recursively convert any functions that the converted\n function may call.\n experimental_optional_features: `None`, a tuple of, or a single\n `tf.autograph.experimental.Feature` value.\n\n Returns:\n The converted code as string.\n ", "desc": "Returns the source code generated by AutoGraph, as a string.", "type": "API"}, {"name": "tf.autograph.to_graph", "docs": "Converts a Python entity into a TensorFlow graph.\n\n Also see: `tf.autograph.to_code`, `tf.function`.\n\n Unlike `tf.function`, `to_graph` is a low-level transpiler that converts\n Python code to TensorFlow graph code. It does not implement any caching,\n variable management or create any actual ops, and is best used where greater\n control over the generated TensorFlow graph is desired. Another difference\n from `tf.function` is that `to_graph` will not wrap the graph into a\n TensorFlow function or a Python callable. Internally, `tf.function` uses\n `to_graph`.\n\n Example usage:\n\n >>> def f(x):\n ... if x > 0:\n ... y = x * x\n ... else:\n ... y = -x\n ... return y\n ...\n >>> converted_f = to_graph(f)\n >>> x = tf.constant(2)\n >>> converted_f(x) # converted_foo is like a TensorFlow Op.\n \n\n Supported Python entities include:\n * functions\n * classes\n * object methods\n\n Functions are converted into new functions with converted code.\n\n Classes are converted by generating a new class whose methods use converted\n code.\n\n Methods are converted into unbound function that have an additional first\n argument called `self`.\n\n For a tutorial, see the\n [tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function).\n For more detailed information, see the\n [AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md).\n\n Args:\n entity: Python callable or class to convert.\n recursive: Whether to recursively convert any functions that the converted\n function may call.\n experimental_optional_features: `None`, a tuple of, or a single\n `tf.autograph.experimental.Feature` value.\n\n Returns:\n Same as `entity`, the converted Python function or class.\n\n Raises:\n ValueError: If the entity could not be converted.\n ", "desc": "Converts a Python entity into a TensorFlow graph.", "type": "API"}, {"name": "tf.autograph.trace", "docs": "Traces argument information at compilation time.\n\n `trace` is useful when debugging, and it always executes during the tracing\n phase, that is, when the TF graph is constructed.\n\n _Example usage_\n\n ```python\n import tensorflow as tf\n\n for i in tf.range(10):\n tf.autograph.trace(i)\n # Output: \n ```\n\n Args:\n *args: Arguments to print to `sys.stdout`.\n ", "desc": "Traces argument information at compilation time.", "type": "API"}, {"name": "tf.batch_to_space", "docs": "BatchToSpace for N-D tensors of type T.\n\n This operation reshapes the \"batch\" dimension 0 into `M + 1` dimensions of\n shape `block_shape + [batch]`, interleaves these blocks back into the grid\n defined by the spatial dimensions `[1, ..., M]`, to obtain a result with the\n same rank as the input. The spatial dimensions of this intermediate result\n are then optionally cropped according to `crops` to produce the output. This\n is the reverse of SpaceToBatch (see `tf.space_to_batch`).\n\n Args:\n input: A N-D `Tensor` with shape `input_shape = [batch] + spatial_shape +\n remaining_shape`, where `spatial_shape` has M dimensions.\n block_shape: A 1-D `Tensor` with shape [M]. Must be one of the following\n types: `int32`, `int64`. All values must be >= 1. For backwards\n compatibility with TF 1.0, this parameter may be an int, in which case it\n is converted to\n `numpy.array([block_shape, block_shape],\n dtype=numpy.int64)`.\n crops: A 2-D `Tensor` with shape `[M, 2]`. Must be one of the\n following types: `int32`, `int64`. All values must be >= 0.\n `crops[i] = [crop_start, crop_end]` specifies the amount to crop from\n input dimension `i + 1`, which corresponds to spatial dimension `i`.\n It is required that\n `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.\n This operation is equivalent to the following steps:\n 1. Reshape `input` to `reshaped` of shape: [block_shape[0], ...,\n block_shape[M-1], batch / prod(block_shape), input_shape[1], ...,\n input_shape[N-1]]\n 2. Permute dimensions of `reshaped` to produce `permuted` of shape\n [batch / prod(block_shape), input_shape[1], block_shape[0], ...,\n input_shape[M], block_shape[M-1], input_shape[M+1],\n ..., input_shape[N-1]]\n 3. Reshape `permuted` to produce `reshaped_permuted` of shape\n [batch / prod(block_shape), input_shape[1] * block_shape[0], ...,\n input_shape[M] * block_shape[M-1], input_shape[M+1], ...,\n input_shape[N-1]]\n 4. Crop the start and end of dimensions `[1, ..., M]` of\n `reshaped_permuted` according to `crops` to produce the output\n of shape:\n [batch / prod(block_shape), input_shape[1] *\n block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] *\n block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1],\n ..., input_shape[N-1]]\n name: A name for the operation (optional).\n\n Examples:\n\n 1. For the following input of shape `[4, 1, 1, 1]`,\n `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:\n\n ```python\n [[[[1]]],\n [[[2]]],\n [[[3]]],\n [[[4]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n 2. For the following input of shape `[4, 1, 1, 3]`,\n `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:\n\n ```python\n [[[1, 2, 3]],\n [[4, 5, 6]],\n [[7, 8, 9]],\n [[10, 11, 12]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 3]` and value:\n\n ```python\n x = [[[[1, 2, 3], [4, 5, 6 ]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n 3. For the following\n input of shape `[4, 2, 2, 1]`,\n `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:\n\n ```python\n x = [[[[1], [3]], [[ 9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n The output tensor has shape `[1, 4, 4, 1]` and value:\n\n ```python\n x = [[[1], [2], [ 3], [ 4]],\n [[5], [6], [ 7], [ 8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]\n ```\n\n 4. For the following input of shape\n `[8, 1, 3, 1]`,\n `block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`:\n\n ```python\n x = [[[[0], [ 1], [ 3]]],\n [[[0], [ 9], [11]]],\n [[[0], [ 2], [ 4]]],\n [[[0], [10], [12]]],\n [[[0], [ 5], [ 7]]],\n [[[0], [13], [15]]],\n [[[0], [ 6], [ 8]]],\n [[[0], [14], [16]]]]\n ```\n\n The output tensor has shape `[2, 2, 4, 1]` and value:\n\n ```python\n x = [[[[ 1], [ 2], [ 3], [ 4]],\n [[ 5], [ 6], [ 7], [ 8]]],\n [[[ 9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for N-D tensors of type T.", "type": "API"}, {"name": "tf.bitcast", "docs": "Bitcasts a tensor from one type to another without copying data.\n\n Given a tensor `input`, this operation returns a tensor that has the same buffer\n data as `input` with datatype `type`.\n\n If the input datatype `T` is larger than the output datatype `type` then the\n shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].\n\n If `T` is smaller than `type`, the operator requires that the rightmost\n dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from\n [..., sizeof(`type`)/sizeof(`T`)] to [...].\n\n tf.bitcast() and tf.cast() work differently when real dtype is casted as a complex dtype\n (e.g. tf.complex64 or tf.complex128) as tf.cast() make imaginary part 0 while tf.bitcast()\n gives module error.\n For example,\n\n Example 1:\n\n >>> a = [1., 2., 3.]\n >>> equality_bitcast = tf.bitcast(a, tf.complex128)\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]\n >>> equality_cast = tf.cast(a, tf.complex128)\n >>> print(equality_cast)\n tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)\n\n Example 2:\n\n >>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)\n \n\n Example 3:\n\n >>> x = [1., 2., 3.]\n >>> y = [0., 2., 3.]\n >>> equality= tf.equal(x,y)\n >>> equality_cast = tf.cast(equality,tf.float32)\n >>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8)\n >>> print(equality)\n tf.Tensor([False True True], shape=(3,), dtype=bool)\n >>> print(equality_cast)\n tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)\n >>> print(equality_bitcast)\n tf.Tensor(\n [[ 0 0 0 0]\n [ 0 0 128 63]\n [ 0 0 128 63]], shape=(3, 4), dtype=uint8)\n\n *NOTE*: Bitcast is implemented as a low-level cast, so machines with different\n endian orderings will give different results.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.\n type: A `tf.DType` from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `type`.\n ", "desc": "Bitcasts a tensor from one type to another without copying data.", "type": "API"}, {"name": "tf.bitwise", "docs": "Operations for manipulating the binary representations of integers.\n", "desc": "Operations for manipulating the binary representations of integers.", "type": "API"}, {"name": "tf.bitwise.bitwise_and", "docs": "Elementwise computes the bitwise AND of `x` and `y`.\n\n The result will have those bits set, that are set in both `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_and(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise AND of `x` and `y`.", "type": "API"}, {"name": "tf.bitwise.bitwise_or", "docs": "Elementwise computes the bitwise OR of `x` and `y`.\n\n The result will have those bits set, that are set in `x`, `y` or both. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_or(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise OR of `x` and `y`.", "type": "API"}, {"name": "tf.bitwise.bitwise_xor", "docs": "Elementwise computes the bitwise XOR of `x` and `y`.\n\n The result will have those bits set, that are different in `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 4, 5], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_xor(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise XOR of `x` and `y`.", "type": "API"}, {"name": "tf.bitwise.invert", "docs": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.\n\n Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101.\n This operation is performed on each element of the tensor argument `x`.\n\n Example:\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n\n # flip 2 (00000010) to -3 (11111101)\n tf.assert_equal(-3, bitwise_ops.invert(2))\n\n dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,\n dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]\n\n inputs = [0, 5, 3, 14]\n for dtype in dtype_list:\n # Because of issues with negative numbers, let's test this indirectly.\n # 1. invert(a) and a = 0\n # 2. invert(a) or a = invert(0)\n input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)\n not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.bitwise_or(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.invert(\n tf.constant(0, dtype=dtype))]\n\n expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)\n tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)\n\n expected = tf.cast([not_0] * 4, tf.float32)\n tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)\n\n # For unsigned dtypes let's also check the result directly.\n if dtype.is_unsigned:\n inverted = bitwise_ops.invert(input_tensor)\n expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)\n tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.", "type": "API"}, {"name": "tf.bitwise.left_shift", "docs": "Elementwise computes the bitwise left-shift of `x` and `y`.\n\n If `y` is negative, or greater than or equal to the width of `x` in bits the\n result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n left_shift_result = bitwise_ops.left_shift(lhs, rhs)\n\n print(left_shift_result)\n\n # This will print:\n # tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.left_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise left-shift of `x` and `y`.", "type": "API"}, {"name": "tf.bitwise.right_shift", "docs": "Elementwise computes the bitwise right-shift of `x` and `y`.\n\n Performs a logical shift for unsigned integer types, and an arithmetic shift\n for signed integer types.\n\n If `y` is negative, or greater than or equal to than the width of `x` in bits\n the result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n right_shift_result = bitwise_ops.right_shift(lhs, rhs)\n\n print(right_shift_result)\n\n # This will print:\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.right_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise right-shift of `x` and `y`.", "type": "API"}, {"name": "tf.boolean_mask", "docs": "Apply boolean mask to tensor.\n\n Numpy equivalent is `tensor[mask]`.\n\n In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match\n the first K dimensions of `tensor`'s shape. We then have:\n `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`\n where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).\n The `axis` could be used with `mask` to indicate the axis to mask from.\n In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match\n the first `axis + dim(mask)` dimensions of `tensor`'s shape.\n\n See also: `tf.ragged.boolean_mask`, which can be applied to both dense and\n ragged tensors, and can be used if you need to preserve the masked dimensions\n of `tensor` (rather than flattening them, as `tf.boolean_mask` does).\n\n Examples:\n\n >>> tensor = [0, 1, 2, 3] # 1-D example\n >>> mask = np.array([True, False, True, False])\n >>> tf.boolean_mask(tensor, mask)\n \n\n >>> tensor = [[1, 2], [3, 4], [5, 6]] # 2-D example\n >>> mask = np.array([True, False, True])\n >>> tf.boolean_mask(tensor, mask)\n \n\n Args:\n tensor: N-D Tensor.\n mask: K-D boolean Tensor, K <= N and K must be known statically.\n axis: A 0-D int Tensor representing the axis in `tensor` to mask from. By\n default, axis is 0 which will mask from the first dimension. Otherwise K +\n axis <= N.\n name: A name for this operation (optional).\n\n Returns:\n (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding\n to `True` values in `mask`.\n\n Raises:\n ValueError: If shapes do not conform.\n\n Examples:\n\n ```python\n # 2-D example\n tensor = [[1, 2], [3, 4], [5, 6]]\n mask = np.array([True, False, True])\n boolean_mask(tensor, mask) # [[1, 2], [5, 6]]\n ```\n ", "desc": "Apply boolean mask to tensor.", "type": "API"}, {"name": "tf.broadcast_dynamic_shape", "docs": "Computes the shape of a broadcast given symbolic shapes.\n\n When `shape_x` and `shape_y` are Tensors representing shapes (i.e. the result\n of calling tf.shape on another Tensor) this computes a Tensor which is the\n shape of the result of a broadcasting op applied in tensors of shapes\n `shape_x` and `shape_y`.\n\n This is useful when validating the result of a broadcasting operation when the\n tensors do not have statically known shapes.\n\n Example:\n\n >>> shape_x = (1, 2, 3)\n >>> shape_y = (5, 1, 3)\n >>> tf.broadcast_dynamic_shape(shape_x, shape_y)\n \n\n Args:\n shape_x: A rank 1 integer `Tensor`, representing the shape of x.\n shape_y: A rank 1 integer `Tensor`, representing the shape of y.\n\n Returns:\n A rank 1 integer `Tensor` representing the broadcasted shape.\n\n Raises:\n InvalidArgumentError: If the two shapes are incompatible for\n broadcasting.\n ", "desc": "Computes the shape of a broadcast given symbolic shapes.", "type": "API"}, {"name": "tf.broadcast_static_shape", "docs": "Computes the shape of a broadcast given known shapes.\n\n When `shape_x` and `shape_y` are fully known `TensorShape`s this computes a\n `TensorShape` which is the shape of the result of a broadcasting op applied in\n tensors of shapes `shape_x` and `shape_y`.\n\n For example, if shape_x is `TensorShape([1, 2, 3])` and shape_y is\n `TensorShape([5, 1, 3])`, the result is a TensorShape whose value is\n `TensorShape([5, 2, 3])`.\n\n This is useful when validating the result of a broadcasting operation when the\n tensors have statically known shapes.\n\n Example:\n\n >>> shape_x = tf.TensorShape([1, 2, 3])\n >>> shape_y = tf.TensorShape([5, 1 ,3])\n >>> tf.broadcast_static_shape(shape_x, shape_y)\n TensorShape([5, 2, 3])\n\n Args:\n shape_x: A `TensorShape`\n shape_y: A `TensorShape`\n\n Returns:\n A `TensorShape` representing the broadcasted shape.\n\n Raises:\n ValueError: If the two shapes can not be broadcasted.\n ", "desc": "Computes the shape of a broadcast given known shapes.", "type": "API"}, {"name": "tf.broadcast_to", "docs": "Broadcast an array for a compatible shape.\n\n Broadcasting is the process of making arrays to have compatible shapes\n for arithmetic operations. Two shapes are compatible if for each\n dimension pair they are either equal or one of them is one. When trying\n to broadcast a Tensor to a shape, it starts with the trailing dimensions,\n and works its way forward.\n\n For example,\n\n >>> x = tf.constant([1, 2, 3])\n >>> y = tf.broadcast_to(x, [3, 3])\n >>> print(y)\n tf.Tensor(\n [[1 2 3]\n [1 2 3]\n [1 2 3]], shape=(3, 3), dtype=int32)\n\n In the above example, the input Tensor with the shape of `[1, 3]`\n is broadcasted to output Tensor with shape of `[3, 3]`.\n\n When doing broadcasted operations such as multiplying a tensor\n by a scalar, broadcasting (usually) confers some time or space\n benefit, as the broadcasted tensor is never materialized.\n\n However, `broadcast_to` does not carry with it any such benefits.\n The newly-created tensor takes the full memory of the broadcasted\n shape. (In a graph context, `broadcast_to` might be fused to\n subsequent operation and then be optimized away, however.)\n\n Args:\n input: A `Tensor`. A Tensor to broadcast.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 1-D `int` Tensor. The shape of the desired output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Broadcast an array for a compatible shape.", "type": "API"}, {"name": "tf.case", "docs": "Create a case operation.\n\n See also `tf.switch_case`.\n\n The `pred_fn_pairs` parameter is a list of pairs of size N.\n Each pair contains a boolean scalar tensor and a python callable that\n creates the tensors to be returned if the boolean evaluates to True.\n `default` is a callable generating a list of tensors. All the callables\n in `pred_fn_pairs` as well as `default` (if provided) should return the same\n number and types of tensors.\n\n If `exclusive==True`, all predicates are evaluated, and an exception is\n thrown if more than one of the predicates evaluates to `True`.\n If `exclusive==False`, execution stops at the first predicate which\n evaluates to True, and the tensors generated by the corresponding function\n are returned immediately. If none of the predicates evaluate to True, this\n operation returns the tensors generated by `default`.\n\n `tf.case` supports nested structures as implemented in\n `tf.nest`. All of the callables must return the same (possibly nested) value\n structure of lists, tuples, and/or named tuples. Singleton lists and tuples\n form the only exceptions to this: when returned by a callable, they are\n implicitly unpacked to single values. This behavior is disabled by passing\n `strict=True`.\n\n @compatibility(v2)\n `pred_fn_pairs` could be a dictionary in v1. However, tf.Tensor and\n tf.Variable are no longer hashable in v2, so cannot be used as a key for a\n dictionary. Please use a list or a tuple instead.\n @end_compatibility\n\n\n **Example 1:**\n\n Pseudocode:\n\n ```\n if (x < y) return 17;\n else return 23;\n ```\n\n Expressions:\n\n ```python\n f1 = lambda: tf.constant(17)\n f2 = lambda: tf.constant(23)\n r = tf.case([(tf.less(x, y), f1)], default=f2)\n ```\n\n **Example 2:**\n\n Pseudocode:\n\n ```\n if (x < y && x > z) raise OpError(\"Only one predicate may evaluate to True\");\n if (x < y) return 17;\n else if (x > z) return 23;\n else return -1;\n ```\n\n Expressions:\n\n ```python\n def f1(): return tf.constant(17)\n def f2(): return tf.constant(23)\n def f3(): return tf.constant(-1)\n r = tf.case([(tf.less(x, y), f1), (tf.greater(x, z), f2)],\n default=f3, exclusive=True)\n ```\n\n Args:\n pred_fn_pairs: List of pairs of a boolean scalar tensor and a callable which\n returns a list of tensors.\n default: Optional callable that returns a list of tensors.\n exclusive: True iff at most one predicate is allowed to evaluate to `True`.\n strict: A boolean that enables/disables 'strict' mode; see above.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the first pair whose predicate evaluated to True, or\n those returned by `default` if none does.\n\n Raises:\n TypeError: If `pred_fn_pairs` is not a list/tuple.\n TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable.\n ", "desc": "Create a case operation.", "type": "API"}, {"name": "tf.cast", "docs": "Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.cast(x, tf.int32)\n \n\n Notice `tf.cast` has an alias `tf.dtypes.cast`:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.dtypes.cast(x, tf.int32)\n \n\n The operation supports data types (for `x` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`.\n In case of casting from complex types (`complex64`, `complex128`) to real\n types, only the real part of `x` is returned. In case of casting from real\n types to complex types (`complex64`, `complex128`), the imaginary part of the\n returned value is set to `0`. The handling of complex types here matches the\n behavior of numpy.\n\n Note casting nan and inf values to integral types has undefined behavior.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices` of numeric type. It could\n be `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`,\n `int64`, `float16`, `float32`, `float64`, `complex64`, `complex128`,\n `bfloat16`.\n dtype: The destination type. The list of supported dtypes is the same as\n `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` and\n same type as `dtype`.\n\n Raises:\n TypeError: If `x` cannot be cast to the `dtype`.\n ", "desc": "Casts a tensor to a new type.", "type": "API"}, {"name": "tf.clip_by_global_norm", "docs": "Clips values of multiple tensors by the ratio of the sum of their norms.\n\n Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,\n this operation returns a list of clipped tensors `list_clipped`\n and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,\n if you've already computed the global norm for `t_list`, you can specify\n the global norm with `use_norm`.\n\n To perform the clipping, the values `t_list[i]` are set to:\n\n t_list[i] * clip_norm / max(global_norm, clip_norm)\n\n where:\n\n global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))\n\n If `clip_norm > global_norm` then the entries in `t_list` remain as they are,\n otherwise they're all shrunk by the global ratio.\n\n If `global_norm == infinity` then the entries in `t_list` are all set to `NaN`\n to signal that an error occurred.\n\n Any of the entries of `t_list` that are of type `None` are ignored.\n\n This is the correct way to perform gradient clipping (Pascanu et al., 2012).\n\n However, it is slower than `clip_by_norm()` because all the parameters must be\n ready before the clipping operation can be performed.\n\n Args:\n t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.\n clip_norm: A 0-D (scalar) `Tensor` > 0. The clipping ratio.\n use_norm: A 0-D (scalar) `Tensor` of type `float` (optional). The global\n norm to use. If not provided, `global_norm()` is used to compute the norm.\n name: A name for the operation (optional).\n\n Returns:\n list_clipped: A list of `Tensors` of the same type as `list_t`.\n global_norm: A 0-D (scalar) `Tensor` representing the global norm.\n\n Raises:\n TypeError: If `t_list` is not a sequence.\n\n References:\n On the difficulty of training Recurrent Neural Networks:\n [Pascanu et al., 2012](http://proceedings.mlr.press/v28/pascanu13.html)\n ([pdf](http://proceedings.mlr.press/v28/pascanu13.pdf))\n ", "desc": "Clips values of multiple tensors by the ratio of the sum of their norms.", "type": "API"}, {"name": "tf.clip_by_norm", "docs": "Clips tensor values to a maximum L2-norm.\n\n Given a tensor `t`, and a maximum clip value `clip_norm`, this operation\n normalizes `t` so that its L2-norm is less than or equal to `clip_norm`,\n along the dimensions given in `axes`. Specifically, in the default case\n where all dimensions are used for calculation, if the L2-norm of `t` is\n already less than or equal to `clip_norm`, then `t` is not modified. If\n the L2-norm is greater than `clip_norm`, then this operation returns a\n tensor of the same type and shape as `t` with its values set to:\n\n `t * clip_norm / l2norm(t)`\n\n In this case, the L2-norm of the output tensor is `clip_norm`.\n\n As another example, if `t` is a matrix and `axes == [1]`, then each row\n of the output will have L2-norm less than or equal to `clip_norm`. If\n `axes == [0]` instead, each column of the output will be clipped.\n\n Code example:\n\n >>> some_nums = tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32)\n >>> tf.clip_by_norm(some_nums, 2.0).numpy()\n array([[0.26967996, 0.5393599 , 0.80903983, 1.0787199 , 1.3483998 ]],\n dtype=float32)\n\n This operation is typically used to clip gradients before applying them with\n an optimizer. Most gradient data is a collection of different shaped tensors\n for different parts of the model. Thus, this is a common usage:\n\n ```\n # Get your gradients after training\n loss_value, grads = grad(model, features, labels)\n\n # Apply some clipping\n grads = [tf.clip_by_norm(g, norm)\n for g in grads]\n\n # Continue on with training\n optimizer.apply_gradients(grads)\n ```\n\n Args:\n t: A `Tensor` or `IndexedSlices`. This must be a floating point type.\n clip_norm: A 0-D (scalar) `Tensor` > 0. A maximum clipping value, also\n floating point\n axes: A 1-D (vector) `Tensor` of type int32 containing the dimensions\n to use for computing the L2-norm. If `None` (the default), uses all\n dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A clipped `Tensor` or `IndexedSlices`.\n\n Raises:\n ValueError: If the clip_norm tensor is not a 0-D scalar tensor.\n TypeError: If dtype of the input is not a floating point or\n complex type.\n ", "desc": "Clips tensor values to a maximum L2-norm.", "type": "API"}, {"name": "tf.clip_by_value", "docs": "Clips tensor values to a specified min and max.\n\n Given a tensor `t`, this operation returns a tensor of the same type and\n shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.\n Any values less than `clip_value_min` are set to `clip_value_min`. Any values\n greater than `clip_value_max` are set to `clip_value_max`.\n\n Note: `clip_value_min` needs to be smaller or equal to `clip_value_max` for\n correct results.\n\n For example:\n\n Basic usage passes a scalar as the min and max value.\n\n >>> t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])\n >>> t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)\n >>> t2.numpy()\n array([[-1., -1., 0.],\n [ 0., 1., 1.]], dtype=float32)\n\n The min and max can be the same size as `t`, or broadcastable to that size.\n\n >>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])\n >>> clip_min = [[2],[1]]\n >>> t3 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)\n >>> t3.numpy()\n array([[ 2., 2., 10.],\n [ 1., 1., 10.]], dtype=float32)\n\n Broadcasting fails, intentionally, if you would expand the dimensions of `t`\n\n >>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])\n >>> clip_min = [[[2, 1]]] # Has a third axis\n >>> t4 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Incompatible shapes: [2,3] vs. [1,1,2]\n\n It throws a `TypeError` if you try to clip an `int` to a `float` value\n (`tf.cast` the input to `float` first).\n\n >>> t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)\n >>> t5 = tf.clip_by_value(t, clip_value_min=-3.1, clip_value_max=3.1)\n Traceback (most recent call last):\n ...\n TypeError: Cannot convert ...\n\n\n Args:\n t: A `Tensor` or `IndexedSlices`.\n clip_value_min: The minimum value to clip to. A scalar `Tensor` or one that\n is broadcastable to the shape of `t`.\n clip_value_max: The maximum value to clip to. A scalar `Tensor` or one that\n is broadcastable to the shape of `t`.\n name: A name for the operation (optional).\n\n Returns:\n A clipped `Tensor` or `IndexedSlices`.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If the clip tensors would trigger array\n broadcasting that would make the returned tensor larger than the input.\n TypeError: If dtype of the input is `int32` and dtype of\n the `clip_value_min` or `clip_value_max` is `float32`\n ", "desc": "Clips tensor values to a specified min and max.", "type": "API"}, {"name": "tf.compat", "docs": "Compatibility functions.\n\nThe `tf.compat` module contains two sets of compatibility functions.\n\n## Tensorflow 1.x and 2.x APIs\n\nThe `compat.v1` and `compat.v2` submodules provide a complete copy of both the\n`v1` and `v2` APIs for backwards and forwards compatibility across TensorFlow\nversions 1.x and 2.x. See the\n[migration guide](https://www.tensorflow.org/guide/migrate) for details.\n\n## Utilities for writing compatible code\n\nAside from the `compat.v1` and `compat.v2` submodules, `tf.compat` also contains\na set of helper functions for writing code that works in both:\n\n* TensorFlow 1.x and 2.x\n* Python 2 and 3\n\n\n## Type collections\n\nThe compatibility module also provides the following aliases for common\nsets of python types:\n\n* `bytes_or_text_types`\n* `complex_types`\n* `integral_types`\n* `real_types`\n\n", "desc": "Compatibility functions.", "type": "API"}, {"name": "tf.compat.as_bytes", "docs": "Converts `bytearray`, `bytes`, or unicode python input types to `bytes`.\n\n Uses utf-8 encoding for text by default.\n\n Args:\n bytes_or_text: A `bytearray`, `bytes`, `str`, or `unicode` object.\n encoding: A string indicating the charset for encoding unicode.\n\n Returns:\n A `bytes` object.\n\n Raises:\n TypeError: If `bytes_or_text` is not a binary or unicode string.\n ", "desc": "Converts `bytearray`, `bytes`, or unicode python input types to `bytes`.", "type": "API"}, {"name": "tf.compat.as_str", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.as_str_any", "docs": "Converts input to `str` type.\n\n Uses `str(value)`, except for `bytes` typed inputs, which are converted\n using `as_str`.\n\n Args:\n value: A object that can be converted to `str`.\n\n Returns:\n A `str` object.\n ", "desc": "Converts input to `str` type.", "type": "API"}, {"name": "tf.compat.as_text", "docs": "Converts any string-like python input types to unicode.\n\n Returns the input as a unicode string. Uses utf-8 encoding for text\n by default.\n\n Args:\n bytes_or_text: A `bytes`, `str`, or `unicode` object.\n encoding: A string indicating the charset for decoding unicode.\n\n Returns:\n A `unicode` (Python 2) or `str` (Python 3) object.\n\n Raises:\n TypeError: If `bytes_or_text` is not a binary or unicode string.\n ", "desc": "Converts any string-like python input types to unicode.", "type": "API"}, {"name": "tf.compat.dimension_at_index", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n If you want to retrieve the Dimension instance corresponding to a certain\n index in a TensorShape instance, use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n dim = tensor_shape[i]\n\n # Use `dimension_at_index` as direct replacement compatible with both V1 & V2:\n dim = dimension_at_index(tensor_shape, i)\n\n # Another possibility would be this, but WARNING: it only works if the\n # tensor_shape instance has a defined rank.\n dim = tensor_shape.dims[i] # `dims` may be None if the rank is undefined!\n\n # In native V2 code, we recommend instead being more explicit:\n if tensor_shape.rank is None:\n dim = Dimension(None)\n else:\n dim = tensor_shape.dims[i]\n\n # Being more explicit will save you from the following trap (present in V1):\n # you might do in-place modifications to `dim` and expect them to be reflected\n # in `tensor_shape[i]`, but they would not be (as the Dimension object was\n # instantiated on the fly.\n ```\n\n Args:\n shape: A TensorShape instance.\n index: An integer index.\n\n Returns:\n A dimension object.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.dimension_value", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n When accessing the value of a TensorShape dimension,\n use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n value = tensor_shape[i].value\n\n # Use `dimension_value` as direct replacement compatible with both V1 & V2:\n value = dimension_value(tensor_shape[i])\n\n # This would be the V2 equivalent:\n value = tensor_shape[i] # Warning: this will return the dim value in V2!\n ```\n\n Args:\n dimension: Either a `Dimension` instance, an integer, or None.\n\n Returns:\n A plain value, i.e. an integer or None.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.forward_compatibility_horizon", "docs": "Context manager for testing forward compatibility of generated graphs.\n\n See [Version\n compatibility](https://tensorflow.org/guide/version_compat#backward_forward).\n\n To ensure forward compatibility of generated graphs (see `forward_compatible`)\n with older binaries, new features can be gated with:\n\n ```python\n if compat.forward_compatible(year=2018, month=08, date=01):\n generate_graph_with_new_features()\n else:\n generate_graph_so_older_binaries_can_consume_it()\n ```\n\n However, when adding new features, one may want to unittest it before\n the forward compatibility window expires. This context manager enables\n such tests. For example:\n\n ```python\n from tensorflow.python.compat import compat\n\n def testMyNewFeature(self):\n with compat.forward_compatibility_horizon(2018, 08, 02):\n # Test that generate_graph_with_new_features() has an effect\n ```\n\n Args:\n year: A year (e.g., 2018). Must be an `int`.\n month: A month (1 <= month <= 12) in year. Must be an `int`.\n day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an\n `int`.\n\n Yields:\n Nothing.\n ", "desc": "Context manager for testing forward compatibility of generated graphs.", "type": "API"}, {"name": "tf.compat.forward_compatible", "docs": "Return true if the forward compatibility window has expired.\n\n See [Version\n compatibility](https://tensorflow.org/guide/version_compat#backward_forward).\n\n Forward-compatibility refers to scenarios where the producer of a TensorFlow\n model (a GraphDef or SavedModel) is compiled against a version of the\n TensorFlow library newer than what the consumer was compiled against. The\n \"producer\" is typically a Python program that constructs and trains a model\n while the \"consumer\" is typically another program that loads and serves the\n model.\n\n TensorFlow has been supporting a 3 week forward-compatibility window for\n programs compiled from source at HEAD.\n\n For example, consider the case where a new operation `MyNewAwesomeAdd` is\n created with the intent of replacing the implementation of an existing Python\n wrapper - `tf.add`. The Python wrapper implementation should change from\n something like:\n\n ```python\n def add(inputs, name=None):\n return gen_math_ops.add(inputs, name)\n ```\n\n to:\n\n ```python\n from tensorflow.python.compat import compat\n\n def add(inputs, name=None):\n if compat.forward_compatible(year, month, day):\n # Can use the awesome new implementation.\n return gen_math_ops.my_new_awesome_add(inputs, name)\n # To maintain forward compatibility, use the old implementation.\n return gen_math_ops.add(inputs, name)\n ```\n\n Where `year`, `month`, and `day` specify the date beyond which binaries\n that consume a model are expected to have been updated to include the\n new operations. This date is typically at least 3 weeks beyond the date\n the code that adds the new operation is committed.\n\n Args:\n year: A year (e.g., 2018). Must be an `int`.\n month: A month (1 <= month <= 12) in year. Must be an `int`.\n day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an\n `int`.\n\n Returns:\n True if the caller can expect that serialized TensorFlow graphs produced\n can be consumed by programs that are compiled with the TensorFlow library\n source code after (year, month, day).\n ", "desc": "Return true if the forward compatibility window has expired.", "type": "API"}, {"name": "tf.compat.path_to_str", "docs": "Converts input which is a `PathLike` object to `str` type.\n\n Converts from any python constant representation of a `PathLike` object to\n a string. If the input is not a `PathLike` object, simply returns the input.\n\n Args:\n path: An object that can be converted to path representation.\n\n Returns:\n A `str` object.\n\n Usage:\n In case a simplified `str` version of the path is needed from an\n `os.PathLike` object\n\n Examples:\n ```python\n $ tf.compat.path_to_str('C:\\XYZ\\tensorflow\\./.././tensorflow')\n 'C:\\XYZ\\tensorflow\\./.././tensorflow' # Windows OS\n $ tf.compat.path_to_str(Path('C:\\XYZ\\tensorflow\\./.././tensorflow'))\n 'C:\\XYZ\\tensorflow\\..\\tensorflow' # Windows OS\n $ tf.compat.path_to_str(Path('./corpus'))\n 'corpus' # Linux OS\n $ tf.compat.path_to_str('./.././Corpus')\n './.././Corpus' # Linux OS\n $ tf.compat.path_to_str(Path('./.././Corpus'))\n '../Corpus' # Linux OS\n $ tf.compat.path_to_str(Path('./..////../'))\n '../..' # Linux OS\n\n ```\n ", "desc": "Converts input which is a `PathLike` object to `str` type.", "type": "API"}, {"name": "tf.compat.v1", "docs": "Bring in all of the public TensorFlow interface into this module.", "desc": "Bring in all of the public TensorFlow interface into this module.", "type": "API"}, {"name": "tf.compat.v1.abs", "docs": "Computes the absolute value of a tensor.\n\n Given a tensor of integer or floating-point values, this operation returns a\n tensor of the same type, where each element contains the absolute value of the\n corresponding element in the input.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of type\n `float32` or `float64` that is the absolute value of each element in `x`. For\n a complex number \\\\(a + bj\\\\), its absolute value is computed as\n \\\\(\\sqrt{a^2 + b^2}\\\\).\n\n For example:\n\n >>> # real number\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.abs(x)\n \n\n >>> # complex number\n >>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])\n >>> tf.abs(x)\n \n\n Args:\n x: A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`,\n `int32`, `int64`, `complex64` or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`,\n with absolute values. Note, for `complex64` or `complex128` input, the\n returned `Tensor` will be of type `float32` or `float64`, respectively.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)`", "desc": "Computes the absolute value of a tensor.", "type": "API"}, {"name": "tf.compat.v1.accumulate_n", "docs": "Returns the element-wise sum of a list of tensors.\n\n Optionally, pass `shape` and `tensor_dtype` for shape and type checking,\n otherwise, these are inferred.\n\n `accumulate_n` performs the same operation as `tf.math.add_n`.\n\n For example:\n\n ```python\n a = tf.constant([[1, 2], [3, 4]])\n b = tf.constant([[5, 0], [0, 6]])\n tf.math.accumulate_n([a, b, a]) # [[7, 4], [6, 14]]\n\n # Explicitly pass shape and type\n tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)\n # [[7, 4],\n # [6, 14]]\n ```\n\n Args:\n inputs: A list of `Tensor` objects, each with same shape and type.\n shape: Expected shape of elements of `inputs` (optional). Also controls the\n output shape of this op, which may affect type inference in other ops. A\n value of `None` means \"infer the input shape from the shapes in `inputs`\".\n tensor_dtype: Expected data type of `inputs` (optional). A value of `None`\n means \"infer the input dtype from `inputs[0]`\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Returns the element-wise sum of a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.acos", "docs": "Computes acos of x element-wise.\n\n Provided an input tensor, the `tf.math.acos` operation\n returns the inverse cosine of each element of the tensor.\n If `y = tf.math.cos(x)` then, `x = tf.math.acos(y)`.\n\n Input range is `[-1, 1]` and the output has a range of `[0, pi]`.\n\n For example:\n\n >>> x = tf.constant([1.0, -0.5, 3.4, 0.2, 0.0, -2], dtype = tf.float32)\n >>> tf.math.acos(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Computes acos of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.acosh", "docs": "Computes inverse hyperbolic cosine of x element-wise.\n\n Given an input tensor, the function computes inverse hyperbolic cosine of every element.\n Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.\n\n ```python\n x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.add", "docs": "Returns x + y element-wise.\n\n Example usages below.\n\n Add a scalar and a list:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.add(x, y)\n \n\n Note that binary `+` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x + y\n \n\n Add a tensor and a list of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([1, 2, 3, 4, 5])\n >>> tf.add(x, y)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**7 + 1, 2**7 + 2]\n >>> tf.add(x, y)\n \n\n When adding two input values of different shapes, `Add` follows NumPy\n broadcasting rules. The two input array shapes are compared element-wise.\n Starting with the trailing dimensions, the two dimensions either have to be\n equal or one of them needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(1, 2, 1, 3)\n >>> y = np.ones(6).reshape(2, 1, 3, 1)\n >>> tf.add(x, y).shape.as_list()\n [2, 2, 3, 3]\n\n Another example with two arrays of different dimension.\n\n >>> x = np.ones([1, 2, 1, 4])\n >>> y = np.ones([3, 4])\n >>> tf.add(x, y).shape.as_list()\n [1, 2, 3, 4]\n\n The reduction version of this elementwise operation is `tf.math.reduce_sum`\n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: bfloat16, half,\n float32, float64, uint8, int8, int16, int32, int64, complex64, complex128,\n string.\n y: A `tf.Tensor`. Must have the same type as x.\n name: A name for the operation (optional)\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.compat.v1.add_check_numerics_ops", "docs": "Connect a `tf.debugging.check_numerics` to every floating point tensor.\n\n `check_numerics` operations themselves are added for each `half`, `float`,\n or `double` tensor in the current default graph. For all ops in the graph, the\n `check_numerics` op for all of its (`half`, `float`, or `double`) inputs\n is guaranteed to run before the `check_numerics` op on any of its outputs.\n\n Note: This API is not compatible with the use of `tf.cond` or\n `tf.while_loop`, and will raise a `ValueError` if you attempt to call it\n in such a graph.\n\n Returns:\n A `group` op depending on all `check_numerics` ops added.\n\n Raises:\n ValueError: If the graph contains any numeric operations in a control flow\n structure.\n RuntimeError: If called with eager execution enabled.\n\n @compatibility(eager)\n Not compatible with eager execution. To check for `Inf`s and `NaN`s under\n eager execution, call `tf.debugging.enable_check_numerics()` once before\n executing the checked operations.\n @end_compatibility\n ", "desc": "Connect a `tf.debugging.check_numerics` to every floating point tensor.", "type": "API"}, {"name": "tf.compat.v1.add_n", "docs": "Adds all input tensors element-wise.\n\n `tf.math.add_n` performs the same operation as `tf.math.accumulate_n`.\n\n This op does not [broadcast](\n https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)\n its inputs. If you need broadcasting, use `tf.math.add` (or the `+` operator)\n instead.\n\n For example:\n\n >>> a = tf.constant([[3, 5], [4, 8]])\n >>> b = tf.constant([[1, 6], [2, 9]])\n >>> tf.math.add_n([a, b, a])\n \n\n Args:\n inputs: A list of `tf.Tensor` or `tf.IndexedSlices` objects, each with the\n same shape and type. `tf.IndexedSlices` objects will be converted into\n dense tensors prior to adding.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Adds all input tensors element-wise.", "type": "API"}, {"name": "tf.compat.v1.add_to_collection", "docs": "Wrapper for `Graph.add_to_collection()` using the default graph.\n\n See `tf.Graph.add_to_collection`\n for more details.\n\n Args:\n name: The key for the collection. For example, the `GraphKeys` class\n contains many standard names for collections.\n value: The value to add to the collection.\n\n @compatibility(eager)\n Collections are only supported in eager when variables are created inside\n an EagerVariableStore (e.g. as part of a layer or template).\n @end_compatibility\n ", "desc": "Wrapper for `Graph.add_to_collection()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.add_to_collections", "docs": "Wrapper for `Graph.add_to_collections()` using the default graph.\n\n See `tf.Graph.add_to_collections`\n for more details.\n\n Args:\n names: The key for the collections. The `GraphKeys` class contains many\n standard names for collections.\n value: The value to add to the collections.\n\n @compatibility(eager)\n Collections are only supported in eager when variables are created inside\n an EagerVariableStore (e.g. as part of a layer or template).\n @end_compatibility\n ", "desc": "Wrapper for `Graph.add_to_collections()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.AggregationMethod", "docs": "A class listing aggregation methods used to combine gradients.\n\n Computing partial derivatives can require aggregating gradient\n contributions. This class lists the various methods that can\n be used to combine gradients in the graph.\n\n The following aggregation methods are part of the stable API for\n aggregating gradients:\n\n * `ADD_N`: All of the gradient terms are summed as part of one\n operation using the \"AddN\" op (see `tf.add_n`). This\n method has the property that all gradients must be ready and\n buffered separately in memory before any aggregation is performed.\n * `DEFAULT`: The system-chosen default aggregation method.\n\n The following aggregation methods are experimental and may not\n be supported in future releases:\n\n * `EXPERIMENTAL_TREE`: Gradient terms are summed in pairs using\n the \"AddN\" op. This method of summing gradients may reduce\n performance, but it can improve memory utilization because the\n gradients can be released earlier.\n\n ", "desc": "A class listing aggregation methods used to combine gradients.", "type": "API"}, {"name": "tf.compat.v1.all_variables", "docs": "Use `tf.compat.v1.global_variables` instead. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.\nInstructions for updating:\nPlease use tf.global_variables instead.", "desc": "Use `tf.compat.v1.global_variables` instead. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.angle", "docs": "Returns the element-wise argument of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the argument of each element in `input` considered as a complex number.\n\n The elements in `input` are considered to be complex numbers of the form\n \\\\(a + bj\\\\), where *a* is the real part and *b* is the imaginary part.\n If `input` is real then *b* is zero by definition.\n\n The argument returned by this function is of the form \\\\(atan2(b, a)\\\\).\n If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```\n input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)\n tf.math.angle(input).numpy()\n # ==> array([2.0131705, 1.056345 ], dtype=float32)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the element-wise argument of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.app", "docs": "Generic entry point script.\n", "desc": "Generic entry point script.", "type": "API"}, {"name": "tf.compat.v1.app.flags", "docs": "Import router for absl.flags. See https://github.com/abseil/abseil-py.", "desc": "Import router for absl.flags. See https://github.com/abseil/abseil-py.", "type": "API"}, {"name": "tf.compat.v1.app.flags.adopt_module_key_flags", "docs": "Declares that all flags key to a module are key to the current module.\n\n Args:\n module: module, the module object from which all key flags will be declared\n as key flags to the current module.\n flag_values: FlagValues, the FlagValues instance in which the flags will be\n declared as key flags. This should almost never need to be overridden.\n\n Raises:\n Error: Raised when given an argument that is a module name (a string),\n instead of a module object.\n ", "desc": "Declares that all flags key to a module are key to the current module.", "type": "API"}, {"name": "tf.compat.v1.app.flags.ArgumentParser", "docs": "Base class used to parse and convert arguments.\n\n The parse() method checks to make sure that the string argument is a\n legal value and convert it to a native type. If the value cannot be\n converted, it should throw a 'ValueError' exception with a human\n readable explanation of why the value is illegal.\n\n Subclasses should also define a syntactic_help string which may be\n presented to the user to describe the form of the legal values.\n\n Argument parser classes must be stateless, since instances are cached\n and shared between flags. Initializer arguments are allowed, but all\n member variables must be derived from initializer arguments only.\n ", "desc": "Base class used to parse and convert arguments.", "type": "API"}, {"name": "tf.compat.v1.app.flags.ArgumentSerializer", "docs": "Base class for generating string representations of a flag value.", "desc": "Base class for generating string representations of a flag value.", "type": "API"}, {"name": "tf.compat.v1.app.flags.BaseListParser", "docs": "Base class for a parser of lists of strings.\n\n To extend, inherit from this class; from the subclass __init__, call\n\n BaseListParser.__init__(self, token, name)\n\n where token is a character used to tokenize, and name is a description\n of the separator.\n ", "desc": "Base class for a parser of lists of strings.", "type": "API"}, {"name": "tf.compat.v1.app.flags.BooleanFlag", "docs": "Basic boolean flag.\n\n Boolean flags do not take any arguments, and their value is either\n True (1) or False (0). The false value is specified on the command\n line by prepending the word 'no' to either the long or the short flag\n name.\n\n For example, if a Boolean flag was created whose long name was\n 'update' and whose short name was 'x', then this flag could be\n explicitly unset through either --noupdate or --nox.\n ", "desc": "Basic boolean flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.BooleanParser", "docs": "Parser of boolean values.", "desc": "Parser of boolean values.", "type": "API"}, {"name": "tf.compat.v1.app.flags.CantOpenFlagFileError", "docs": "Raised when flagfile fails to open.\n\n E.g. the file doesn't exist, or has wrong permissions.\n ", "desc": "Raised when flagfile fails to open.", "type": "API"}, {"name": "tf.compat.v1.app.flags.CsvListSerializer", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.app.flags.declare_key_flag", "docs": "Declares one flag as key to the current module.\n\n Key flags are flags that are deemed really important for a module.\n They are important when listing help messages; e.g., if the\n --helpshort command-line flag is used, then only the key flags of the\n main module are listed (instead of all flags, as in the case of\n --helpfull).\n\n Sample usage:\n\n flags.declare_key_flag('flag_1')\n\n Args:\n flag_name: str, the name of an already declared flag. (Redeclaring flags as\n key, including flags implicitly key because they were declared in this\n module, is a no-op.)\n flag_values: FlagValues, the FlagValues instance in which the flag will be\n declared as a key flag. This should almost never need to be overridden.\n\n Raises:\n ValueError: Raised if flag_name not defined as a Python flag.\n ", "desc": "Declares one flag as key to the current module.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE", "docs": "Registers a generic Flag object.\n\n NOTE: in the docstrings of all DEFINE* functions, \"registers\" is short\n for \"creates a new flag and registers it\".\n\n Auxiliary function: clients should use the specialized DEFINE_\n function instead.\n\n Args:\n parser: ArgumentParser, used to parse the flag arguments.\n name: str, the flag name.\n default: The default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n serializer: ArgumentSerializer, the flag serializer instance.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a generic Flag object.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_alias", "docs": "Defines an alias flag for an existing one.\n\n Args:\n name: str, the flag name.\n original_name: str, the original flag name.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the module that defines this flag.\n\n Returns:\n a handle to defined flag.\n\n Raises:\n flags.FlagError:\n UnrecognizedFlagError: if the referenced flag doesn't exist.\n DuplicateFlagError: if the alias name has been used by some existing flag.\n ", "desc": "Defines an alias flag for an existing one.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_bool", "docs": "Registers a boolean flag.\n\n Such a boolean flag does not take an argument. If a user wants to\n specify a false value explicitly, the long option beginning with 'no'\n must be used: i.e. --noflag\n\n This flag will have a value of None, True or False. None is possible\n if default=None and the user does not specify the flag on the command\n line.\n\n Args:\n name: str, the flag name.\n default: bool|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a boolean flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_boolean", "docs": "Registers a boolean flag.\n\n Such a boolean flag does not take an argument. If a user wants to\n specify a false value explicitly, the long option beginning with 'no'\n must be used: i.e. --noflag\n\n This flag will have a value of None, True or False. None is possible\n if default=None and the user does not specify the flag on the command\n line.\n\n Args:\n name: str, the flag name.\n default: bool|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a boolean flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_enum", "docs": "Registers a flag whose value can be any string from enum_values.\n\n Instead of a string enum, prefer `DEFINE_enum_class`, which allows\n defining enums from an `enum.Enum` class.\n\n Args:\n name: str, the flag name.\n default: str|None, the default value of the flag.\n enum_values: [str], a non-empty list of strings with the possible values for\n the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be any string from enum_values.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_enum_class", "docs": "Registers a flag whose value can be the name of enum members.\n\n Args:\n name: str, the flag name.\n default: Enum|str|None, the default value of the flag.\n enum_class: class, the Enum class with all the possible values for the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n case_sensitive: bool, whether to map strings to members of the enum_class\n without considering case.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be the name of enum members.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_flag", "docs": "Registers a 'Flag' object with a 'FlagValues' object.\n\n By default, the global FLAGS 'FlagValue' object is used.\n\n Typical users will use one of the more specialized DEFINE_xxx\n functions, such as DEFINE_string or DEFINE_integer. But developers\n who need to create Flag objects themselves should use this function\n to register their flags.\n\n Args:\n flag: Flag, a flag that is key to the module.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a 'Flag' object with a 'FlagValues' object.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_float", "docs": "Registers a flag whose value must be a float.\n\n If lower_bound or upper_bound are set, then this flag must be\n within the given range.\n\n Args:\n name: str, the flag name.\n default: float|str|None, the default value of the flag.\n help: str, the help message.\n lower_bound: float, min value of the flag.\n upper_bound: float, max value of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to DEFINE.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value must be a float.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_integer", "docs": "Registers a flag whose value must be an integer.\n\n If lower_bound, or upper_bound are set, then this flag must be\n within the given range.\n\n Args:\n name: str, the flag name.\n default: int|str|None, the default value of the flag.\n help: str, the help message.\n lower_bound: int, min value of the flag.\n upper_bound: int, max value of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to DEFINE.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value must be an integer.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_list", "docs": "Registers a flag whose value is a comma-separated list of strings.\n\n The flag value is parsed with a CSV parser.\n\n Args:\n name: str, the flag name.\n default: list|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value is a comma-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi", "docs": "Registers a generic MultiFlag that parses its args with a given parser.\n\n Auxiliary function. Normal users should NOT use it directly.\n\n Developers who need to create their own 'Parser' classes for options\n which can appear multiple times can call this module function to\n register their flags.\n\n Args:\n parser: ArgumentParser, used to parse the flag arguments.\n serializer: ArgumentSerializer, the flag serializer instance.\n name: str, the flag name.\n default: Union[Iterable[T], Text, None], the default value of the flag. If\n the value is text, it will be parsed as if it was provided from the\n command line. If the value is a non-string iterable, it will be iterated\n over to create a shallow copy of the values. If it is None, it is left\n as-is.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the Python module declaring this flag. If\n not provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a generic MultiFlag that parses its args with a given parser.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi_enum", "docs": "Registers a flag whose value can be a list strings from enum_values.\n\n Use the flag on the command line multiple times to place multiple\n enum values into the list. The 'default' may be a single string\n (which will be converted into a single-element list) or a list of\n strings.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Text], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n enum_values: [str], a non-empty list of strings with the possible values for\n the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n case_sensitive: Whether or not the enum is to be case-sensitive.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list strings from enum_values.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi_enum_class", "docs": "Registers a flag whose value can be a list of enum members.\n\n Use the flag on the command line multiple times to place multiple\n enum values into the list.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the\n default value of the flag; see `DEFINE_multi`; only differences are\n documented here. If the value is a single Enum, it is treated as a\n single-item list of that Enum value. If it is an iterable, text values\n within the iterable will be converted to the equivalent Enum objects.\n enum_class: class, the Enum class with all the possible values for the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the Python module declaring this flag. If\n not provided, it will be computed using the stack trace of this call.\n case_sensitive: bool, whether to map strings to members of the enum_class\n without considering case.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of enum members.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi_float", "docs": "Registers a flag whose value can be a list of arbitrary floats.\n\n Use the flag on the command line multiple times to place multiple\n float values into the list. The 'default' may be a single float\n (which will be converted into a single-element list) or a list of\n floats.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[float], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n lower_bound: float, min values of the flag.\n upper_bound: float, max values of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of arbitrary floats.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi_integer", "docs": "Registers a flag whose value can be a list of arbitrary integers.\n\n Use the flag on the command line multiple times to place multiple\n integer values into the list. The 'default' may be a single integer\n (which will be converted into a single-element list) or a list of\n integers.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[int], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n lower_bound: int, min values of the flag.\n upper_bound: int, max values of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of arbitrary integers.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_multi_string", "docs": "Registers a flag whose value can be a list of any strings.\n\n Use the flag on the command line multiple times to place multiple\n string values into the list. The 'default' may be a single string\n (which will be converted into a single-element list) or a list of\n strings.\n\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Text], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of any strings.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_spaceseplist", "docs": "Registers a flag whose value is a whitespace-separated list of strings.\n\n Any whitespace can be used as a separator.\n\n Args:\n name: str, the flag name.\n default: list|str|None, the default value of the flag.\n help: str, the help message.\n comma_compat: bool - Whether to support comma as an additional separator. If\n false then only whitespace is supported. This is intended only for\n backwards compatibility with flags that used to be comma-separated.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value is a whitespace-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DEFINE_string", "docs": "Registers a flag whose value can be any string.", "desc": "Registers a flag whose value can be any string.", "type": "API"}, {"name": "tf.compat.v1.app.flags.disclaim_key_flags", "docs": "Declares that the current module will not define any more key flags.\n\n Normally, the module that calls the DEFINE_xxx functions claims the\n flag to be its key flag. This is undesirable for modules that\n define additional DEFINE_yyy functions with its own flag parsers and\n serializers, since that module will accidentally claim flags defined\n by DEFINE_yyy as its key flags. After calling this function, the\n module disclaims flag definitions thereafter, so the key flags will\n be correctly attributed to the caller of DEFINE_yyy.\n\n After calling this function, the module will not be able to define\n any more flags. This function will affect all FlagValues objects.\n ", "desc": "Declares that the current module will not define any more key flags.", "type": "API"}, {"name": "tf.compat.v1.app.flags.doc_to_help", "docs": "Takes a __doc__ string and reformats it as help.", "desc": "Takes a __doc__ string and reformats it as help.", "type": "API"}, {"name": "tf.compat.v1.app.flags.DuplicateFlagError", "docs": "Raised if there is a flag naming conflict.", "desc": "Raised if there is a flag naming conflict.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumClassFlag", "docs": "Basic enum flag; its value is an enum class's member.", "desc": "Basic enum flag; its value is an enum class's member.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumClassListSerializer", "docs": "A serializer for MultiEnumClass flags.\n\n This serializer simply joins the output of `EnumClassSerializer` using a\n provided seperator.\n ", "desc": "A serializer for MultiEnumClass flags.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumClassParser", "docs": "Parser of an Enum class member.", "desc": "Parser of an Enum class member.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumClassSerializer", "docs": "Class for generating string representations of an enum class flag value.", "desc": "Class for generating string representations of an enum class flag value.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumFlag", "docs": "Basic enum flag; its value can be any string from list of enum_values.", "desc": "Basic enum flag; its value can be any string from list of enum_values.", "type": "API"}, {"name": "tf.compat.v1.app.flags.EnumParser", "docs": "Parser of a string enum value (a string value from a given set).", "desc": "Parser of a string enum value (a string value from a given set).", "type": "API"}, {"name": "tf.compat.v1.app.flags.Error", "docs": "The base class for all flags errors.", "desc": "The base class for all flags errors.", "type": "API"}, {"name": "tf.compat.v1.app.flags.Flag", "docs": "Information about a command-line flag.\n\n 'Flag' objects define the following fields:\n .name - the name for this flag;\n .default - the default value for this flag;\n .default_unparsed - the unparsed default value for this flag.\n .default_as_str - default value as repr'd string, e.g., \"'true'\" (or None);\n .value - the most recent parsed value of this flag; set by parse();\n .help - a help string or None if no help is available;\n .short_name - the single letter alias for this flag (or None);\n .boolean - if 'true', this flag does not accept arguments;\n .present - true if this flag was parsed from command line flags;\n .parser - an ArgumentParser object;\n .serializer - an ArgumentSerializer object;\n .allow_override - the flag may be redefined without raising an error, and\n newly defined flag overrides the old one.\n .allow_override_cpp - use the flag from C++ if available; the flag\n definition is replaced by the C++ flag after init;\n .allow_hide_cpp - use the Python flag despite having a C++ flag with\n the same name (ignore the C++ flag);\n .using_default_value - the flag value has not been set by user;\n .allow_overwrite - the flag may be parsed more than once without raising\n an error, the last set value will be used;\n .allow_using_method_names - whether this flag can be defined even if it has\n a name that conflicts with a FlagValues method.\n\n The only public method of a 'Flag' object is parse(), but it is\n typically only called by a 'FlagValues' object. The parse() method is\n a thin wrapper around the 'ArgumentParser' parse() method. The parsed\n value is saved in .value, and the .present attribute is updated. If\n this flag was already present, an Error is raised.\n\n parse() is also called during __init__ to parse the default value and\n initialize the .value attribute. This enables other python modules to\n safely use flags even if the __main__ module neglects to parse the\n command line arguments. The .present attribute is cleared after\n __init__ parsing. If the default value is set to None, then the\n __init__ parsing step is skipped and the .value attribute is\n initialized to None.\n\n Note: The default value is also presented to the user in the help\n string, so it is important that it be a legal value for this flag.\n ", "desc": "Information about a command-line flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.flag_dict_to_args", "docs": "Convert a dict of values into process call parameters.\n\n This method is used to convert a dictionary into a sequence of parameters\n for a binary that parses arguments using this module.\n\n Args:\n flag_map: dict, a mapping where the keys are flag names (strings).\n values are treated according to their type:\n * If value is None, then only the name is emitted.\n * If value is True, then only the name is emitted.\n * If value is False, then only the name prepended with 'no' is emitted.\n * If value is a string then --name=value is emitted.\n * If value is a collection, this will emit --name=value1,value2,value3,\n unless the flag name is in multi_flags, in which case this will emit\n --name=value1 --name=value2 --name=value3.\n * Everything else is converted to string an passed as such.\n multi_flags: set, names (strings) of flags that should be treated as\n multi-flags.\n Yields:\n sequence of string suitable for a subprocess execution.\n ", "desc": "Convert a dict of values into process call parameters.", "type": "API"}, {"name": "tf.compat.v1.app.flags.FlagHolder", "docs": "Holds a defined flag.\n\n This facilitates a cleaner api around global state. Instead of\n\n ```\n flags.DEFINE_integer('foo', ...)\n flags.DEFINE_integer('bar', ...)\n ...\n def method():\n # prints parsed value of 'bar' flag\n print(flags.FLAGS.foo)\n # runtime error due to typo or possibly bad coding style.\n print(flags.FLAGS.baz)\n ```\n\n it encourages code like\n\n ```\n FOO_FLAG = flags.DEFINE_integer('foo', ...)\n BAR_FLAG = flags.DEFINE_integer('bar', ...)\n ...\n def method():\n print(FOO_FLAG.value)\n print(BAR_FLAG.value)\n ```\n\n since the name of the flag appears only once in the source code.\n ", "desc": "Holds a defined flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.FlagNameConflictsWithMethodError", "docs": "Raised when a flag name conflicts with FlagValues methods.", "desc": "Raised when a flag name conflicts with FlagValues methods.", "type": "API"}, {"name": "tf.compat.v1.app.flags.FLAGS", "docs": "Registry of 'Flag' objects.\n\n A 'FlagValues' can then scan command line arguments, passing flag\n arguments through to the 'Flag' objects that it owns. It also\n provides easy access to the flag values. Typically only one\n 'FlagValues' object is needed by an application: flags.FLAGS\n\n This class is heavily overloaded:\n\n 'Flag' objects are registered via __setitem__:\n FLAGS['longname'] = x # register a new flag\n\n The .value attribute of the registered 'Flag' objects can be accessed\n as attributes of this 'FlagValues' object, through __getattr__. Both\n the long and short name of the original 'Flag' objects can be used to\n access its value:\n FLAGS.longname # parsed flag value\n FLAGS.x # parsed flag value (short name)\n\n Command line arguments are scanned and passed to the registered 'Flag'\n objects through the __call__ method. Unparsed arguments, including\n argv[0] (e.g. the program name) are returned.\n argv = FLAGS(sys.argv) # scan command line arguments\n\n The original registered Flag objects can be retrieved through the use\n of the dictionary-like operator, __getitem__:\n x = FLAGS['longname'] # access the registered Flag object\n\n The str() operator of a 'FlagValues' object provides help for all of\n the registered 'Flag' objects.\n ", "desc": "Registry of 'Flag' objects.", "type": "API"}, {"name": "tf.compat.v1.app.flags.FlagValues", "docs": "Registry of 'Flag' objects.\n\n A 'FlagValues' can then scan command line arguments, passing flag\n arguments through to the 'Flag' objects that it owns. It also\n provides easy access to the flag values. Typically only one\n 'FlagValues' object is needed by an application: flags.FLAGS\n\n This class is heavily overloaded:\n\n 'Flag' objects are registered via __setitem__:\n FLAGS['longname'] = x # register a new flag\n\n The .value attribute of the registered 'Flag' objects can be accessed\n as attributes of this 'FlagValues' object, through __getattr__. Both\n the long and short name of the original 'Flag' objects can be used to\n access its value:\n FLAGS.longname # parsed flag value\n FLAGS.x # parsed flag value (short name)\n\n Command line arguments are scanned and passed to the registered 'Flag'\n objects through the __call__ method. Unparsed arguments, including\n argv[0] (e.g. the program name) are returned.\n argv = FLAGS(sys.argv) # scan command line arguments\n\n The original registered Flag objects can be retrieved through the use\n of the dictionary-like operator, __getitem__:\n x = FLAGS['longname'] # access the registered Flag object\n\n The str() operator of a 'FlagValues' object provides help for all of\n the registered 'Flag' objects.\n ", "desc": "Registry of 'Flag' objects.", "type": "API"}, {"name": "tf.compat.v1.app.flags.FloatParser", "docs": "Parser of floating point values.\n\n Parsed value may be bounded to a given upper and lower bound.\n ", "desc": "Parser of floating point values.", "type": "API"}, {"name": "tf.compat.v1.app.flags.get_help_width", "docs": "Returns the integer width of help lines that is used in TextWrap.", "desc": "Returns the integer width of help lines that is used in TextWrap.", "type": "API"}, {"name": "tf.compat.v1.app.flags.IllegalFlagValueError", "docs": "Raised when the flag command line argument is illegal.", "desc": "Raised when the flag command line argument is illegal.", "type": "API"}, {"name": "tf.compat.v1.app.flags.IntegerParser", "docs": "Parser of an integer value.\n\n Parsed value may be bounded to a given upper and lower bound.\n ", "desc": "Parser of an integer value.", "type": "API"}, {"name": "tf.compat.v1.app.flags.ListParser", "docs": "Parser for a comma-separated list of strings.", "desc": "Parser for a comma-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.app.flags.ListSerializer", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.app.flags.mark_bool_flags_as_mutual_exclusive", "docs": "Ensures that only one flag among flag_names is True.\n\n Args:\n flag_names: [str], names of the flags.\n required: bool. If true, exactly one flag must be True. Otherwise, at most\n one flag can be True, and it is valid for all flags to be False.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n ", "desc": "Ensures that only one flag among flag_names is True.", "type": "API"}, {"name": "tf.compat.v1.app.flags.mark_flag_as_required", "docs": "Ensures that flag is not None during program execution.\n\n Registers a flag validator, which will follow usual validator rules.\n Important note: validator will pass for any non-None value, such as False,\n 0 (zero), '' (empty string) and so on.\n\n If your module might be imported by others, and you only wish to make the flag\n required when the module is directly executed, call this method like this:\n\n if __name__ == '__main__':\n flags.mark_flag_as_required('your_flag_name')\n app.run()\n\n Args:\n flag_name: str, name of the flag\n flag_values: flags.FlagValues, optional FlagValues instance where the flag\n is defined.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "Ensures that flag is not None during program execution.", "type": "API"}, {"name": "tf.compat.v1.app.flags.mark_flags_as_mutual_exclusive", "docs": "Ensures that only one flag among flag_names is not None.\n\n Important note: This validator checks if flag values are None, and it does not\n distinguish between default and explicit values. Therefore, this validator\n does not make sense when applied to flags with default values other than None,\n including other false values (e.g. False, 0, '', []). That includes multi\n flags with a default value of [] instead of None.\n\n Args:\n flag_names: [str], names of the flags.\n required: bool. If true, exactly one of the flags must have a value other\n than None. Otherwise, at most one of the flags can have a value other\n than None, and it is valid for all of the flags to be None.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n ", "desc": "Ensures that only one flag among flag_names is not None.", "type": "API"}, {"name": "tf.compat.v1.app.flags.mark_flags_as_required", "docs": "Ensures that flags are not None during program execution.\n\n If your module might be imported by others, and you only wish to make the flag\n required when the module is directly executed, call this method like this:\n\n if __name__ == '__main__':\n flags.mark_flags_as_required(['flag1', 'flag2', 'flag3'])\n app.run()\n\n Args:\n flag_names: Sequence[str], names of the flags.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n Raises:\n AttributeError: If any of flag name has not already been defined as a flag.\n ", "desc": "Ensures that flags are not None during program execution.", "type": "API"}, {"name": "tf.compat.v1.app.flags.multi_flags_validator", "docs": "A function decorator for defining a multi-flag validator.\n\n Registers the decorated function as a validator for flag_names, e.g.\n\n @flags.multi_flags_validator(['foo', 'bar'])\n def _CheckFooBar(flags_dict):\n ...\n\n See register_multi_flags_validator() for the specification of checker\n function.\n\n Args:\n flag_names: [str], a list of the flag names to be checked.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n\n Returns:\n A function decorator that registers its function argument as a validator.\n\n Raises:\n AttributeError: Raised when a flag is not registered as a valid flag name.\n ", "desc": "A function decorator for defining a multi-flag validator.", "type": "API"}, {"name": "tf.compat.v1.app.flags.MultiEnumClassFlag", "docs": "A multi_enum_class flag.\n\n See the __doc__ for MultiFlag for most behaviors of this class. In addition,\n this class knows how to handle enum.Enum instances as values for this flag\n type.\n ", "desc": "A multi_enum_class flag.", "type": "API"}, {"name": "tf.compat.v1.app.flags.MultiFlag", "docs": "A flag that can appear multiple time on the command-line.\n\n The value of such a flag is a list that contains the individual values\n from all the appearances of that flag on the command-line.\n\n See the __doc__ for Flag for most behavior of this class. Only\n differences in behavior are described here:\n\n * The default value may be either a single value or an iterable of values.\n A single value is transformed into a single-item list of that value.\n\n * The value of the flag is always a list, even if the option was\n only supplied once, and even if the default value is a single\n value\n ", "desc": "A flag that can appear multiple time on the command-line.", "type": "API"}, {"name": "tf.compat.v1.app.flags.register_multi_flags_validator", "docs": "Adds a constraint to multiple flags.\n\n The constraint is validated when flags are initially parsed, and after each\n change of the corresponding flag's value.\n\n Args:\n flag_names: [str], a list of the flag names to be checked.\n multi_flags_checker: callable, a function to validate the flag.\n input - dict, with keys() being flag_names, and value for each key\n being the value of the corresponding flag (string, boolean, etc).\n output - bool, True if validator constraint is satisfied.\n If constraint is not satisfied, it should either return False or\n raise flags.ValidationError.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n\n Raises:\n AttributeError: Raised when a flag is not registered as a valid flag name.\n ", "desc": "Adds a constraint to multiple flags.", "type": "API"}, {"name": "tf.compat.v1.app.flags.register_validator", "docs": "Adds a constraint, which will be enforced during program execution.\n\n The constraint is validated when flags are initially parsed, and after each\n change of the corresponding flag's value.\n Args:\n flag_name: str, name of the flag to be checked.\n checker: callable, a function to validate the flag.\n input - A single positional argument: The value of the corresponding\n flag (string, boolean, etc. This value will be passed to checker\n by the library).\n output - bool, True if validator constraint is satisfied.\n If constraint is not satisfied, it should either return False or\n raise flags.ValidationError(desired_error_message).\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "Adds a constraint, which will be enforced during program execution.", "type": "API"}, {"name": "tf.compat.v1.app.flags.text_wrap", "docs": "Wraps a given text to a maximum line length and returns it.\n\n It turns lines that only contain whitespace into empty lines, keeps new lines,\n and expands tabs using 4 spaces.\n\n Args:\n text: str, text to wrap.\n length: int, maximum length of a line, includes indentation.\n If this is None then use get_help_width()\n indent: str, indent for all but first line.\n firstline_indent: str, indent for first line; if None, fall back to indent.\n\n Returns:\n str, the wrapped text.\n\n Raises:\n ValueError: Raised if indent or firstline_indent not shorter than length.\n ", "desc": "Wraps a given text to a maximum line length and returns it.", "type": "API"}, {"name": "tf.compat.v1.app.flags.tf_decorator", "docs": "Base TFDecorator class and utility functions for working with decorators.\n\nThere are two ways to create decorators that TensorFlow can introspect into.\nThis is important for documentation generation purposes, so that function\nsignatures aren't obscured by the (*args, **kwds) signature that decorators\noften provide.\n\n1. Call `tf_decorator.make_decorator` on your wrapper function. If your\ndecorator is stateless, or can capture all of the variables it needs to work\nwith through lexical closure, this is the simplest option. Create your wrapper\nfunction as usual, but instead of returning it, return\n`tf_decorator.make_decorator(target, your_wrapper)`. This will attach some\ndecorator introspection metadata onto your wrapper and return it.\n\nExample:\n\n def print_hello_before_calling(target):\n def wrapper(*args, **kwargs):\n print('hello')\n return target(*args, **kwargs)\n return tf_decorator.make_decorator(target, wrapper)\n\n2. Derive from TFDecorator. If your decorator needs to be stateful, you can\nimplement it in terms of a TFDecorator. Store whatever state you need in your\nderived class, and implement the `__call__` method to do your work before\ncalling into your target. You can retrieve the target via\n`super(MyDecoratorClass, self).decorated_target`, and call it with whatever\nparameters it needs.\n\nExample:\n\n class CallCounter(tf_decorator.TFDecorator):\n def __init__(self, target):\n super(CallCounter, self).__init__('count_calls', target)\n self.call_count = 0\n\n def __call__(self, *args, **kwargs):\n self.call_count += 1\n return super(CallCounter, self).decorated_target(*args, **kwargs)\n\n def count_calls(target):\n return CallCounter(target)\n", "desc": "Base TFDecorator class and utility functions for working with decorators.", "type": "API"}, {"name": "tf.compat.v1.app.flags.tf_decorator.make_decorator", "docs": "Make a decorator from a wrapper and a target.\n\n Args:\n target: The final callable to be wrapped.\n decorator_func: The wrapper function.\n decorator_name: The name of the decorator. If `None`, the name of the\n function calling make_decorator.\n decorator_doc: Documentation specific to this application of\n `decorator_func` to `target`.\n decorator_argspec: The new callable signature of this decorator.\n\n Returns:\n The `decorator_func` argument with new metadata attached.\n ", "desc": "Make a decorator from a wrapper and a target.", "type": "API"}, {"name": "tf.compat.v1.app.flags.tf_decorator.rewrap", "docs": "Injects a new target into a function built by make_decorator.\n\n This function allows replacing a function wrapped by `decorator_func`,\n assuming the decorator that wraps the function is written as described below.\n\n The decorator function must use `.__wrapped__` instead of the\n wrapped function that is normally used:\n\n Example:\n\n # Instead of this:\n def simple_parametrized_wrapper(*args, **kwds):\n return wrapped_fn(*args, **kwds)\n\n tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn)\n\n # Write this:\n def simple_parametrized_wrapper(*args, **kwds):\n return simple_parametrized_wrapper.__wrapped__(*args, **kwds)\n\n tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn)\n\n Note that this process modifies decorator_func.\n\n Args:\n decorator_func: Callable returned by `wrap`.\n previous_target: Callable that needs to be replaced.\n new_target: Callable to replace previous_target with.\n\n Returns:\n The updated decorator. If decorator_func is not a tf_decorator, new_target\n is returned.\n ", "desc": "Injects a new target into a function built by make_decorator.", "type": "API"}, {"name": "tf.compat.v1.app.flags.tf_decorator.TFDecorator", "docs": "Base class for all TensorFlow decorators.\n\n TFDecorator captures and exposes the wrapped target, and provides details\n about the current decorator.\n ", "desc": "Base class for all TensorFlow decorators.", "type": "API"}, {"name": "tf.compat.v1.app.flags.tf_decorator.unwrap", "docs": "Unwraps an object into a list of TFDecorators and a final target.\n\n Args:\n maybe_tf_decorator: Any callable object.\n\n Returns:\n A tuple whose first element is an list of TFDecorator-derived objects that\n were applied to the final callable target, and whose second element is the\n final undecorated callable target. If the `maybe_tf_decorator` parameter is\n not decorated by any TFDecorators, the first tuple element will be an empty\n list. The `TFDecorator` list is ordered from outermost to innermost\n decorators.\n ", "desc": "Unwraps an object into a list of TFDecorators and a final target.", "type": "API"}, {"name": "tf.compat.v1.app.flags.UnparsedFlagAccessError", "docs": "Raised when accessing the flag value from unparsed FlagValues.", "desc": "Raised when accessing the flag value from unparsed FlagValues.", "type": "API"}, {"name": "tf.compat.v1.app.flags.UnrecognizedFlagError", "docs": "Raised when a flag is unrecognized.\n\n Attributes:\n flagname: str, the name of the unrecognized flag.\n flagvalue: The value of the flag, empty if the flag is not defined.\n ", "desc": "Raised when a flag is unrecognized.", "type": "API"}, {"name": "tf.compat.v1.app.flags.ValidationError", "docs": "Raised when flag validator constraint is not satisfied.", "desc": "Raised when flag validator constraint is not satisfied.", "type": "API"}, {"name": "tf.compat.v1.app.flags.validator", "docs": "A function decorator for defining a flag validator.\n\n Registers the decorated function as a validator for flag_name, e.g.\n\n @flags.validator('foo')\n def _CheckFoo(foo):\n ...\n\n See register_validator() for the specification of checker function.\n\n Args:\n flag_name: str, name of the flag to be checked.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n Returns:\n A function decorator that registers its function argument as a validator.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "A function decorator for defining a flag validator.", "type": "API"}, {"name": "tf.compat.v1.app.flags.WhitespaceSeparatedListParser", "docs": "Parser for a whitespace-separated list of strings.", "desc": "Parser for a whitespace-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.app.run", "docs": "Runs the program with an optional 'main' function and 'argv' list.", "desc": "Runs the program with an optional 'main' function and 'argv' list.", "type": "API"}, {"name": "tf.compat.v1.arg_max", "docs": "Returns the index with the largest value across dimensions of a tensor.\n\n Note that in case of ties the identity of the return value is not guaranteed.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmax(input = a)\n c = tf.keras.backend.eval(b)\n # c = 4\n # here a[4] = 166.32 which is the largest element of a across axis 0\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n dimension: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n int16, int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which dimension of the input Tensor to reduce across. For vectors,\n use dimension = 0.\n output_type: An optional `tf.DType` from: `tf.int16, tf.uint16, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the largest value across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.arg_min", "docs": "Returns the index with the smallest value across dimensions of a tensor.\n\n Note that in case of ties the identity of the return value is not guaranteed.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n dimension: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which dimension of the input Tensor to reduce across. For vectors,\n use dimension = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the smallest value across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.argmax", "docs": "Returns the index with the largest value across axes of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nNote that in case of ties the identity of the return value is not guaranteed.\n\nUsage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmax(input = a)\n c = tf.keras.backend.eval(b)\n # c = 4\n # here a[4] = 166.32 which is the largest element of a across axis 0\n ```\n\nArgs:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n axis: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n int16, int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int16, tf.uint16, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` of type `output_type`.", "desc": "Returns the index with the largest value across axes of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.argmin", "docs": "Returns the index with the smallest value across axes of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nNote that in case of ties the identity of the return value is not guaranteed.\n\nUsage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n\nArgs:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` of type `output_type`.", "desc": "Returns the index with the smallest value across axes of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.argsort", "docs": "Returns the indices of a tensor that give its sorted order along an axis.\n\n >>> values = [1, 10, 26.9, 2.8, 166.32, 62.3]\n >>> sort_order = tf.argsort(values)\n >>> sort_order.numpy()\n array([0, 3, 1, 2, 5, 4], dtype=int32)\n\n For a 1D tensor:\n\n >>> sorted = tf.gather(values, sort_order)\n >>> assert tf.reduce_all(sorted == tf.sort(values))\n\n For higher dimensions, the output has the same shape as\n `values`, but along the given axis, values represent the index of the sorted\n element in that slice of the tensor at the given position.\n\n >>> mat = [[30,20,10],\n ... [20,10,30],\n ... [10,30,20]]\n >>> indices = tf.argsort(mat)\n >>> indices.numpy()\n array([[2, 1, 0],\n [1, 0, 2],\n [0, 2, 1]], dtype=int32)\n\n If `axis=-1` these indices can be used to apply a sort using `tf.gather`:\n\n >>> tf.gather(mat, indices, batch_dims=-1).numpy()\n array([[10, 20, 30],\n [10, 20, 30],\n [10, 20, 30]], dtype=int32)\n\n See also:\n\n * `tf.sort`: Sort along an axis.\n * `tf.math.top_k`: A partial sort that returns a fixed number of top values\n and corresponding indices.\n\n Args:\n values: 1-D or higher **numeric** `Tensor`.\n axis: The axis along which to sort. The default is -1, which sorts the last\n axis.\n direction: The direction in which to sort the values (`'ASCENDING'` or\n `'DESCENDING'`).\n stable: If True, equal elements in the original tensor will not be\n re-ordered in the returned order. Unstable sort is not yet implemented,\n but will eventually be the default for performance reasons. If you require\n a stable order, pass `stable=True` for forwards compatibility.\n name: Optional name for the operation.\n\n Returns:\n An int32 `Tensor` with the same shape as `values`. The indices that would\n sort each slice of the given `values` along the given `axis`.\n\n Raises:\n ValueError: If axis is not a constant scalar, or the direction is invalid.\n tf.errors.InvalidArgumentError: If the `values.dtype` is not a `float` or\n `int` type.\n ", "desc": "Returns the indices of a tensor that give its sorted order along an axis.", "type": "API"}, {"name": "tf.compat.v1.as_dtype", "docs": "Converts the given `type_value` to a `DType`.\n\n Note: `DType` values are interned. When passed a new `DType` object,\n `as_dtype` always returns the interned value.\n\n Args:\n type_value: A value that can be converted to a `tf.DType` object. This may\n currently be a `tf.DType` object, a [`DataType`\n enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),\n a string type name, or a [`numpy.dtype`](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html).\n\n Returns:\n A `DType` corresponding to `type_value`.\n\n Raises:\n TypeError: If `type_value` cannot be converted to a `DType`.\n ", "desc": "Converts the given `type_value` to a `DType`.", "type": "API"}, {"name": "tf.compat.v1.as_string", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.compat.v1.asin", "docs": "Computes the trignometric inverse sine of x element-wise.\n\n The `tf.math.asin` operation returns the inverse of `tf.math.sin`, such that\n if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`.\n\n **Note**: The output of `tf.math.asin` will lie within the invertible range\n of sine, i.e [-pi/2, pi/2].\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.sin(x) # [0.8659266, 0.7068252]\n\n tf.math.asin(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.asinh", "docs": "Computes inverse hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic sine\n for every element in the tensor. Both input and output has a range of\n `[-inf, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.Assert", "docs": "Asserts that the given condition is true.\n\nIf `condition` evaluates to false, print the list of tensors in `data`.\n`summarize` determines how many entries of the tensors to print.\n\nArgs:\n condition: The condition to evaluate.\n data: The tensors to print out when condition is false.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional).\n\nReturns:\n assert_op: An `Operation` that, when executed, raises a\n `tf.errors.InvalidArgumentError` if `condition` is not true.\n @compatibility(eager)\n returns None\n @end_compatibility\n\nRaises:\n @compatibility(TF1)\n When in TF V1 mode (that is, outside `tf.function`) Assert needs a control\n dependency on the output to ensure the assertion executes:\n\n```python\n# Ensure maximum element of x is smaller or equal to 1\nassert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])\nwith tf.control_dependencies([assert_op]):\n ... code using x ...\n```\n\n @end_compatibility\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Asserts that the given condition is true.", "type": "API"}, {"name": "tf.compat.v1.assert_equal", "docs": "\n Assert the condition `x == y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] == y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x == y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x == y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_equal(a, b,\n ... message='\"a == b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 2]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 2], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_equal(a, b, message=\n ... '\"a == b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_greater", "docs": "\n Assert the condition `x > y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] > y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_greater\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_greater` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_greater` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_greater(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_greater(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_greater(a, b,\n ... message='\"a > b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[0, 1]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([0, 1], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_greater(a, b, message=\n ... '\"a > b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_greater_equal", "docs": "\n Assert the condition `x >= y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] >= y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_greater_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x >= y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x >= y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_greater_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_greater_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_greater_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_greater_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_greater_equal(a, b,\n ... message='\"a >= b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 0]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 0], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_greater_equal(a, b, message=\n ... '\"a >= b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_integer", "docs": "Assert that `x` is of integer dtype.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: `Tensor` whose basetype is integer and is not quantized.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_integer\".\n\n Raises:\n TypeError: If `x.dtype` is anything other than non-quantized integer.\n\n Returns:\n A `no_op` that does nothing. Type can be determined statically.\n ", "desc": "Assert that `x` is of integer dtype.", "type": "API"}, {"name": "tf.compat.v1.assert_less", "docs": "\n Assert the condition `x < y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] < y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_less\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_less` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_less` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_less(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_less(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_less(a, b,\n ... message='\"a < b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[2, 3]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([2, 3], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_less(a, b, message=\n ... '\"a < b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_less_equal", "docs": "\n Assert the condition `x <= y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] <= y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_less_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x <= y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x <= y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_less_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_less_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_less_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_less_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_less_equal(a, b,\n ... message='\"a <= b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 3]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 3], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_less_equal(a, b, message=\n ... '\"a <= b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_near", "docs": "Assert the condition `x` and `y` are close element-wise.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have\n\n ```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```.\n\n If both `x` and `y` are empty, this is trivially satisfied.\n\n The default `atol` and `rtol` is `10 * eps`, where `eps` is the smallest\n representable positive number such that `1 + eps != 1`. This is about\n `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`.\n See `numpy.finfo`.\n\n Args:\n x: Float or complex `Tensor`.\n y: Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`.\n rtol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The relative tolerance. Default is `10 * eps`.\n atol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The absolute tolerance. Default is `10 * eps`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_near\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.\n\n @compatibility(numpy)\n Similar to `numpy.testing.assert_allclose`, except tolerance depends on data\n type. This is due to the fact that `TensorFlow` is often used with `32bit`,\n `64bit`, and even `16bit` data.\n @end_compatibility\n ", "desc": "Assert the condition `x` and `y` are close element-wise.", "type": "API"}, {"name": "tf.compat.v1.assert_negative", "docs": "\n Assert the condition `x < 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_negative(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_negative\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_non_negative", "docs": "\n Assert the condition `x >= 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_non_negative(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_non_negative\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x >= 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x >= 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_non_positive", "docs": "\n Assert the condition `x <= 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_non_positive(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_non_positive\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x <= 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x <= 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_none_equal", "docs": "\n Assert the condition `x != y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] != y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_none_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x != y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x != y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_none_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_none_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_none_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_none_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_none_equal(a, b,\n ... message='\"a != b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[2, 1]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([2, 1], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_none_equal(a, b, message=\n ... '\"a != b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_positive", "docs": "\n Assert the condition `x > 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_positive(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_positive\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.assert_proper_iterable", "docs": "Static assert that values is a \"proper\" iterable.\n\n `Ops` that expect iterables of `Tensor` can call this to validate input.\n Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves.\n\n Args:\n values: Object to be checked.\n\n Raises:\n TypeError: If `values` is not iterable or is one of\n `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.\n ", "desc": "Static assert that values is a \"proper\" iterable.", "type": "API"}, {"name": "tf.compat.v1.assert_rank", "docs": "Assert `x` has rank equal to `rank`.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar integer `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and the shape of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_rank\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank.\n ", "desc": "Assert `x` has rank equal to `rank`.", "type": "API"}, {"name": "tf.compat.v1.assert_rank_at_least", "docs": "Assert `x` has rank equal to `rank` or higher.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_at_least\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank or higher.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank.\n ", "desc": "Assert `x` has rank equal to `rank` or higher.", "type": "API"}, {"name": "tf.compat.v1.assert_rank_in", "docs": "Assert `x` has rank in `ranks`.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n ranks: Iterable of scalar `Tensor` objects.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_in\".\n\n Returns:\n Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`.\n If static checks determine `x` has matching rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has mismatched rank.\n ", "desc": "Assert `x` has rank in `ranks`.", "type": "API"}, {"name": "tf.compat.v1.assert_same_float_dtype", "docs": "Validate and return float type based on `tensors` and `dtype`.\n\n For ops such as matrix multiplication, inputs and weights must be of the\n same float type. This function validates that all `tensors` are the same type,\n validates that type is `dtype` (if supplied), and returns the type. Type must\n be a floating point type. If neither `tensors` nor `dtype` is supplied,\n the function will return `dtypes.float32`.\n\n Args:\n tensors: Tensors of input values. Can include `None` elements, which will be\n ignored.\n dtype: Expected type.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if neither `tensors` nor `dtype` is supplied, or result is not\n float, or the common type of the inputs is not a floating point type.\n ", "desc": "Validate and return float type based on `tensors` and `dtype`.", "type": "API"}, {"name": "tf.compat.v1.assert_scalar", "docs": "Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).\n\n This function raises `ValueError` unless it can be certain that the given\n `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is\n unknown.\n\n Args:\n tensor: A `Tensor`.\n name: A name for this operation. Defaults to \"assert_scalar\"\n message: A string to prefix to the default message.\n\n Returns:\n The input tensor (potentially converted to a `Tensor`).\n\n Raises:\n ValueError: If the tensor is not scalar (rank 0), or if its shape is\n unknown.\n ", "desc": "Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).", "type": "API"}, {"name": "tf.compat.v1.assert_type", "docs": "Statically asserts that the given `Tensor` is of the specified type.\n\n Args:\n tensor: A `Tensor` or `SparseTensor`.\n tf_type: A tensorflow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,\n etc).\n message: A string to prefix to the default message.\n name: A name to give this `Op`. Defaults to \"assert_type\"\n\n Raises:\n TypeError: If the tensors data type doesn't match `tf_type`.\n\n Returns:\n A `no_op` that does nothing. Type can be determined statically.\n ", "desc": "Statically asserts that the given `Tensor` is of the specified type.", "type": "API"}, {"name": "tf.compat.v1.assert_variables_initialized", "docs": "Returns an Op to check if variables are initialized.\n\nNOTE: This function is obsolete and will be removed in 6 months. Please\nchange your implementation to use `report_uninitialized_variables()`.\n\nWhen run, the returned Op will raise the exception `FailedPreconditionError`\nif any of the variables has not yet been initialized.\n\nNote: This function is implemented by trying to fetch the values of the\nvariables. If one of the variables is not initialized a message may be\nlogged by the C++ runtime. This is expected.\n\nArgs:\n var_list: List of `Variable` objects to check. Defaults to the value of\n `global_variables().`\n\nReturns:\n An Op, or None if there are no variables.\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Returns an Op to check if variables are initialized.", "type": "API"}, {"name": "tf.compat.v1.assign", "docs": "Update `ref` by assigning `value` to it.\n\n This operation outputs a Tensor that holds the new value of `ref` after\n the value has been assigned. This makes it easier to chain operations that\n need to use the reset value.\n\n Args:\n ref: A mutable `Tensor`. Should be from a `Variable` node. May be\n uninitialized.\n value: A `Tensor`. Must have the same shape and dtype as `ref`. The value to\n be assigned to the variable.\n validate_shape: An optional `bool`. Defaults to `True`. If true, the\n operation will validate that the shape of 'value' matches the shape of the\n Tensor being assigned to. If false, 'ref' will take on the shape of\n 'value'.\n use_locking: An optional `bool`. Defaults to `True`. If True, the assignment\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that will hold the new value of `ref` after\n the assignment has completed.\n\n @compatibility(TF2)\n `tf.compat.v1.assign` is mostly compatible with eager\n execution and `tf.function`. However, argument 'validate_shape' will be\n ignored. To avoid shape validation, set 'shape' to tf.TensorShape(None) when\n constructing the variable:\n\n >>> import tensorflow as tf\n >>> a = tf.Variable([1], shape=tf.TensorShape(None))\n >>> tf.compat.v1.assign(a, [2,3])\n\n To switch to the native TF2 style, one could use method 'assign' of\n `tf.Variable`:\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `ref` | `self` | In `assign()` method |\n | `value` | `value` | In `assign()` method |\n | `validate_shape` | Not supported | Specify `shape` in the |\n : : : constructor to replicate :\n : : : behavior :\n | `use_locking` | `use_locking` | In `assign()` method |\n | `name` | `name` | In `assign()` method |\n | - | `read_value` | Set to True to replicate |\n : : : behavior (True is default) :\n @end_compatibility\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> with tf.Graph().as_default():\n ... with tf.compat.v1.Session() as sess:\n ... a = tf.compat.v1.Variable(0, dtype=tf.int64)\n ... sess.run(a.initializer)\n ... update_op = tf.compat.v1.assign(a, 2)\n ... res_a = sess.run(update_op)\n ... res_a\n 2\n\n After:\n\n >>> b = tf.Variable(0, dtype=tf.int64)\n >>> res_b = b.assign(2)\n >>> res_b.numpy()\n 2\n ", "desc": "Update `ref` by assigning `value` to it.", "type": "API"}, {"name": "tf.compat.v1.assign_add", "docs": "Update `ref` by adding `value` to it.\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n Unlike `tf.math.add`, this op does not broadcast. `ref` and `value` must have\n the same shape.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,\n `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be\n from a `Variable` node.\n value: A `Tensor`. Must have the same shape and dtype as `ref`. The value to\n be added to the variable.\n use_locking: An optional `bool`. Defaults to `False`. If True, the addition\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n Same as `ref`. Returned as a convenience for operations that want\n to use the new value after the variable has been updated.\n\n @compatibility(TF2)\n `tf.compat.v1.assign_add` is mostly compatible with eager\n execution and `tf.function`.\n\n To switch to the native TF2 style, one could use method 'assign_add' of\n `tf.Variable`:\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `ref` | `self` | In `assign_add()` method |\n | `value` | `value` | In `assign_add()` method |\n | `use_locking` | `use_locking` | In `assign_add()` method |\n | `name` | `name` | In `assign_add()` method |\n | - | `read_value` | Set to True to replicate |\n : : : behavior (True is default) :\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> with tf.Graph().as_default():\n ... with tf.compat.v1.Session() as sess:\n ... a = tf.compat.v1.Variable(0, dtype=tf.int64)\n ... sess.run(a.initializer)\n ... update_op = tf.compat.v1.assign_add(a, 1)\n ... res_a = sess.run(update_op)\n ... res_a\n 1\n\n After:\n\n >>> b = tf.Variable(0, dtype=tf.int64)\n >>> res_b = b.assign_add(1)\n >>> res_b.numpy()\n 1\n\n @end_compatibility\n ", "desc": "Update `ref` by adding `value` to it.", "type": "API"}, {"name": "tf.compat.v1.assign_sub", "docs": "Update `ref` by subtracting `value` from it.\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n Unlike `tf.math.subtract`, this op does not broadcast. `ref` and `value`\n must have the same shape.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,\n `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be\n from a `Variable` node.\n value: A `Tensor`. Must have the same shape and dtype as `ref`. The value to\n be subtracted to the variable.\n use_locking: An optional `bool`. Defaults to `False`. If True, the\n subtraction will be protected by a lock; otherwise the behavior is\n undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n Same as `ref`. Returned as a convenience for operations that want\n to use the new value after the variable has been updated.\n\n @compatibility(TF2)\n `tf.compat.v1.assign_sub` is mostly compatible with eager\n execution and `tf.function`.\n\n To switch to the native TF2 style, one could use method 'assign_sub' of\n `tf.Variable`:\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `ref` | `self` | In `assign_sub()` method |\n | `value` | `value` | In `assign_sub()` method |\n | `use_locking` | `use_locking` | In `assign_sub()` method |\n | `name` | `name` | In `assign_sub()` method |\n | - | `read_value` | Set to True to replicate |\n : : : behavior (True is default) :\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> with tf.Graph().as_default():\n ... with tf.compat.v1.Session() as sess:\n ... a = tf.compat.v1.Variable(1, dtype=tf.int64)\n ... sess.run(a.initializer)\n ... update_op = tf.compat.v1.assign_sub(a, 1)\n ... res_a = sess.run(update_op)\n ... res_a\n 0\n\n After:\n\n >>> b = tf.Variable(1, dtype=tf.int64)\n >>> res_b = b.assign_sub(1)\n >>> res_b.numpy()\n 0\n\n @end_compatibility\n ", "desc": "Update `ref` by subtracting `value` from it.", "type": "API"}, {"name": "tf.compat.v1.atan", "docs": "Computes the trignometric inverse tangent of x element-wise.\n\n The `tf.math.atan` operation returns the inverse of `tf.math.tan`, such that\n if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`.\n\n **Note**: The output of `tf.math.atan` will lie within the invertible range\n of tan, i.e (-pi/2, pi/2).\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.tan(x) # [1.731261, 0.99920404]\n\n tf.math.atan(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse tangent of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.atan2", "docs": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.\n\n This is the angle \\\\( \\theta \\in [-\\pi, \\pi] \\\\) such that\n \\\\[ x = r \\cos(\\theta) \\\\]\n and\n \\\\[ y = r \\sin(\\theta) \\\\]\n where \\\\(r = \\sqrt{x^2 + y^2} \\\\).\n\n For example:\n\n >>> x = [1., 1.]\n >>> y = [1., -1.]\n >>> print((tf.math.atan2(y,x) * (180 / np.pi)).numpy())\n [ 45. -45.]\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.", "type": "API"}, {"name": "tf.compat.v1.atanh", "docs": "Computes inverse hyperbolic tangent of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic tangent\n for every element in the tensor. Input range is `[-1,1]` and output range is\n `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the\n input is `1`, output will be `inf`. Values outside the range will have\n `nan` as output.\n\n ```python\n x = tf.constant([-float(\"inf\"), -1, -0.5, 1, 0, 0.5, 10, float(\"inf\")])\n tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic tangent of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.AttrValue", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.AttrValue.ListValue", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.audio", "docs": "Public API for tf.audio namespace.\n", "desc": "Public API for tf.audio namespace.", "type": "API"}, {"name": "tf.compat.v1.audio.decode_wav", "docs": "Decode a 16-bit PCM WAV file to a float tensor.\n\n The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.\n\n When desired_channels is set, if the input contains fewer channels than this\n then the last channel will be duplicated to give the requested number, else if\n the input has more channels than requested then the additional channels will be\n ignored.\n\n If desired_samples is set, then the audio will be cropped or padded with zeroes\n to the requested length.\n\n The first output contains a Tensor with the content of the audio samples. The\n lowest dimension will be the number of channels, and the second will be the\n number of samples. For example, a ten-sample-long stereo WAV file should give an\n output shape of [10, 2].\n\n Args:\n contents: A `Tensor` of type `string`.\n The WAV-encoded audio, usually from a file.\n desired_channels: An optional `int`. Defaults to `-1`.\n Number of sample channels wanted.\n desired_samples: An optional `int`. Defaults to `-1`.\n Length of audio requested.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (audio, sample_rate).\n\n audio: A `Tensor` of type `float32`.\n sample_rate: A `Tensor` of type `int32`.\n ", "desc": "Decode a 16-bit PCM WAV file to a float tensor.", "type": "API"}, {"name": "tf.compat.v1.audio.encode_wav", "docs": "Encode audio data using the WAV file format.\n\n This operation will generate a string suitable to be saved out to create a .wav\n audio file. It will be encoded in the 16-bit PCM format. It takes in float\n values in the range -1.0f to 1.0f, and any outside that value will be clamped to\n that range.\n\n `audio` is a 2-D float Tensor of shape `[length, channels]`.\n `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100).\n\n Args:\n audio: A `Tensor` of type `float32`. 2-D with shape `[length, channels]`.\n sample_rate: A `Tensor` of type `int32`.\n Scalar containing the sample frequency.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode audio data using the WAV file format.", "type": "API"}, {"name": "tf.compat.v1.autograph", "docs": "Conversion of eager-style Python into TensorFlow graph code.\n\nNOTE: In TensorFlow 2.0, AutoGraph is automatically applied when using\n`tf.function`. This module contains lower-level APIs for advanced use.\n\nAutoGraph transforms a subset of Python which operates on TensorFlow objects\ninto equivalent TensorFlow graph code. When executing the graph, it has the same\neffect as if you ran the original code in eager mode.\nPython code which doesn't operate on TensorFlow objects remains functionally\nunchanged, but keep in mind that `tf.function` only executes such code at trace\ntime, and generally will not be consistent with eager execution.\n\nFor more information, see the\n[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md),\nand the [tf.function guide](https://www.tensorflow.org/guide/function#autograph_transformations).\n\n", "desc": "Conversion of eager-style Python into TensorFlow graph code.", "type": "API"}, {"name": "tf.compat.v1.autograph.experimental", "docs": "Public API for tf.autograph.experimental namespace.\n", "desc": "Public API for tf.autograph.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.autograph.experimental.do_not_convert", "docs": "Decorator that suppresses the conversion of a function.\n\n Args:\n func: function to decorate.\n\n Returns:\n If `func` is not None, returns a `Callable` which is equivalent to\n `func`, but is not converted by AutoGraph.\n If `func` is None, returns a decorator that, when invoked with a\n single `func` argument, returns a `Callable` equivalent to the\n above case.\n ", "desc": "Decorator that suppresses the conversion of a function.", "type": "API"}, {"name": "tf.compat.v1.autograph.experimental.Feature", "docs": "This enumeration represents optional conversion options.\n\n These conversion options are experimental. They are subject to change without\n notice and offer no guarantees.\n\n _Example Usage_\n\n ```python\n optionals= tf.autograph.experimental.Feature.EQUALITY_OPERATORS\n @tf.function(experimental_autograph_options=optionals)\n def f(i):\n if i == 0: # EQUALITY_OPERATORS allows the use of == here.\n tf.print('i is zero')\n ```\n\n Attributes:\n ALL: Enable all features.\n AUTO_CONTROL_DEPS: Insert of control dependencies in the generated code.\n ASSERT_STATEMENTS: Convert Tensor-dependent assert statements to tf.Assert.\n BUILTIN_FUNCTIONS: Convert builtin functions applied to Tensors to\n their TF counterparts.\n EQUALITY_OPERATORS: Whether to convert the comparison operators, like\n equality. This is soon to be deprecated as support is being added to the\n Tensor class.\n LISTS: Convert list idioms, like initializers, slices, append, etc.\n NAME_SCOPES: Insert name scopes that name ops according to context, like the\n function they were defined in.\n ", "desc": "This enumeration represents optional conversion options.", "type": "API"}, {"name": "tf.compat.v1.autograph.experimental.set_loop_options", "docs": "Specifies additional arguments to be passed to the enclosing while_loop.\n\n The parameters apply to and only to the immediately enclosing loop. It only\n has effect if the loop is staged as a TF while_loop; otherwise the parameters\n have no effect.\n\n Usage:\n\n >>> @tf.function(autograph=True)\n ... def f():\n ... n = 0\n ... for i in tf.range(10):\n ... tf.autograph.experimental.set_loop_options(maximum_iterations=3)\n ... n += 1\n ... return n\n\n >>> @tf.function(autograph=True)\n ... def f():\n ... v = tf.constant((0,))\n ... for i in tf.range(3):\n ... tf.autograph.experimental.set_loop_options(\n ... shape_invariants=[(v, tf.TensorShape([None]))]\n ... )\n ... v = tf.concat((v, [i]), 0)\n ... return v\n\n Also see tf.while_loop.\n\n Args:\n parallel_iterations: The maximum number of iterations allowed to run in\n parallel at any given time. Note that this does not guarantee parallel\n execution.\n swap_memory: Whether to store intermediate values needed for\n gradients on the CPU instead of GPU.\n maximum_iterations: Allows limiting the total number of iterations executed\n by the loop.\n shape_invariants: Allows controlling the argument with the same name passed\n to tf.while_loop. Unlike tf.while_loop, this is a list of\n `(tensor, shape)` pairs.\n ", "desc": "Specifies additional arguments to be passed to the enclosing while_loop.", "type": "API"}, {"name": "tf.compat.v1.autograph.set_verbosity", "docs": "Sets the AutoGraph verbosity level.\n\n _Debug logging in AutoGraph_\n\n More verbose logging is useful to enable when filing bug reports or doing\n more in-depth debugging.\n\n There are two means to control the logging verbosity:\n\n * The `set_verbosity` function\n\n * The `AUTOGRAPH_VERBOSITY` environment variable\n\n `set_verbosity` takes precedence over the environment variable.\n\n For example:\n\n ```python\n import os\n import tensorflow as tf\n\n os.environ['AUTOGRAPH_VERBOSITY'] = '5'\n # Verbosity is now 5\n\n tf.autograph.set_verbosity(0)\n # Verbosity is now 0\n\n os.environ['AUTOGRAPH_VERBOSITY'] = '1'\n # No effect, because set_verbosity was already called.\n ```\n\n Logs entries are output to [absl](https://abseil.io)'s\n [default output](https://abseil.io/docs/python/guides/logging),\n with `INFO` level.\n Logs can be mirrored to stdout by using the `alsologtostdout` argument.\n Mirroring is enabled by default when Python runs in interactive mode.\n\n Args:\n level: int, the verbosity level; larger values specify increased verbosity;\n 0 means no logging. When reporting bugs, it is recommended to set this\n value to a larger number, like 10.\n alsologtostdout: bool, whether to also output log messages to `sys.stdout`.\n ", "desc": "Sets the AutoGraph verbosity level.", "type": "API"}, {"name": "tf.compat.v1.autograph.to_code", "docs": "Returns the source code generated by AutoGraph, as a string.\n\n Example usage:\n\n >>> def f(x):\n ... if x < 0:\n ... x = -x\n ... return x\n >>> tf.autograph.to_code(f)\n \"...def tf__f(x):...\"\n\n Also see: `tf.autograph.to_graph`.\n\n Note: If a function has been decorated with `tf.function`, pass its\n underlying Python function, rather than the callable that `tf.function\n creates:\n\n >>> @tf.function\n ... def f(x):\n ... if x < 0:\n ... x = -x\n ... return x\n >>> tf.autograph.to_code(f.python_function)\n \"...def tf__f(x):...\"\n\n Args:\n entity: Python callable or class.\n recursive: Whether to recursively convert any functions that the converted\n function may call.\n arg_values: Deprecated.\n arg_types: Deprecated.\n indentation: Deprecated.\n experimental_optional_features: `None`, a tuple of, or a single\n `tf.autograph.experimental.Feature` value.\n\n Returns:\n The converted code as string.\n ", "desc": "Returns the source code generated by AutoGraph, as a string.", "type": "API"}, {"name": "tf.compat.v1.autograph.to_graph", "docs": "Converts a Python entity into a TensorFlow graph.\n\n Also see: `tf.autograph.to_code`, `tf.function`.\n\n Unlike `tf.function`, `to_graph` is a low-level transpiler that converts\n Python code to TensorFlow graph code. It does not implement any caching,\n variable management or create any actual ops, and is best used where greater\n control over the generated TensorFlow graph is desired. Another difference\n from `tf.function` is that `to_graph` will not wrap the graph into a\n TensorFlow function or a Python callable. Internally, `tf.function` uses\n `to_graph`.\n\n _Example Usage_\n\n ```python\n def foo(x):\n if x > 0:\n y = x * x\n else:\n y = -x\n return y\n\n converted_foo = to_graph(foo)\n\n x = tf.constant(1)\n y = converted_foo(x) # converted_foo is a TensorFlow Op-like.\n assert is_tensor(y)\n ```\n\n Supported Python entities include:\n * functions\n * classes\n * object methods\n\n Functions are converted into new functions with converted code.\n\n Classes are converted by generating a new class whose methods use converted\n code.\n\n Methods are converted into unbound function that have an additional first\n argument called `self`.\n\n Args:\n entity: Python callable or class to convert.\n recursive: Whether to recursively convert any functions that the converted\n function may call.\n arg_values: Deprecated.\n arg_types: Deprecated.\n experimental_optional_features: `None`, a tuple of, or a single\n `tf.autograph.experimental.Feature` value.\n\n Returns:\n Same as `entity`, the converted Python function or class.\n\n Raises:\n ValueError: If the entity could not be converted.\n ", "desc": "Converts a Python entity into a TensorFlow graph.", "type": "API"}, {"name": "tf.compat.v1.autograph.trace", "docs": "Traces argument information at compilation time.\n\n `trace` is useful when debugging, and it always executes during the tracing\n phase, that is, when the TF graph is constructed.\n\n _Example usage_\n\n ```python\n import tensorflow as tf\n\n for i in tf.range(10):\n tf.autograph.trace(i)\n # Output: \n ```\n\n Args:\n *args: Arguments to print to `sys.stdout`.\n ", "desc": "Traces argument information at compilation time.", "type": "API"}, {"name": "tf.compat.v1.batch_gather", "docs": "Gather slices from params according to indices with leading batch dims. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25.\nInstructions for updating:\n`tf.batch_gather` is deprecated, please use `tf.gather` with `batch_dims=tf.rank(indices) - 1` instead.", "desc": "Gather slices from params according to indices with leading batch dims. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.batch_scatter_update", "docs": "Generalization of `tf.compat.v1.scatter_update` to axis different than 0. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29.\nInstructions for updating:\nUse the batch_scatter_update method of Variable instead.\n\nAnalogous to `batch_gather`. This assumes that `ref`, `indices` and `updates`\nhave a series of leading dimensions that are the same for all of them, and the\nupdates are performed on the last dimension of indices. In other words, the\ndimensions should be the following:\n\n`num_prefix_dims = indices.ndims - 1`\n`batch_dim = num_prefix_dims + 1`\n`updates.shape = indices.shape + var.shape[batch_dim:]`\n\nwhere\n\n`updates.shape[:num_prefix_dims]`\n`== indices.shape[:num_prefix_dims]`\n`== var.shape[:num_prefix_dims]`\n\nAnd the operation performed can be expressed as:\n\n`var[i_1, ..., i_n, indices[i_1, ..., i_n, j]] = updates[i_1, ..., i_n, j]`\n\nWhen indices is a 1D tensor, this operation is equivalent to\n`tf.compat.v1.scatter_update`.\n\nTo avoid this operation there would be 2 alternatives:\n1) Reshaping the variable by merging the first `ndims` dimensions. However,\n this is not possible because `tf.reshape` returns a Tensor, which we\n cannot use `tf.compat.v1.scatter_update` on.\n2) Looping over the first `ndims` of the variable and using\n `tf.compat.v1.scatter_update` on the subtensors that result of slicing the\n first\n dimension. This is a valid option for `ndims = 1`, but less efficient than\n this implementation.\n\nSee also `tf.compat.v1.scatter_update` and `tf.compat.v1.scatter_nd_update`.\n\nArgs:\n ref: `Variable` to scatter onto.\n indices: Tensor containing indices as described above.\n updates: Tensor of updates to apply to `ref`.\n use_locking: Boolean indicating whether to lock the writing operation.\n name: Optional scope name string.\n\nReturns:\n Ref to `variable` after it has been modified.\n\nRaises:\n ValueError: If the initial `ndims` of `ref`, `indices`, and `updates` are\n not the same.", "desc": "Generalization of `tf.compat.v1.scatter_update` to axis different than 0. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.batch_to_space", "docs": "BatchToSpace for 4-D tensors of type T.\n\n This is a legacy version of the more general BatchToSpaceND.\n\n Rearranges (permutes) data from batch into blocks of spatial data, followed by\n cropping. This is the reverse transformation of SpaceToBatch. More specifically,\n this op outputs a copy of the input tensor where values from the `batch`\n dimension are moved in spatial blocks to the `height` and `width` dimensions,\n followed by cropping along the `height` and `width` dimensions.\n\n Args:\n input: A `Tensor`. 4-D tensor with shape\n `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size,\n depth]`. Note that the batch size of the input tensor must be divisible by\n `block_size * block_size`.\n crops: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies\n how many elements to crop from the intermediate result across the spatial\n dimensions as follows:\n\n crops = [[crop_top, crop_bottom], [crop_left, crop_right]]\n block_size: An `int` that is `>= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for 4-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.batch_to_space_nd", "docs": "BatchToSpace for N-D tensors of type T.\n\n This operation reshapes the \"batch\" dimension 0 into `M + 1` dimensions of shape\n `block_shape + [batch]`, interleaves these blocks back into the grid defined by\n the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as\n the input. The spatial dimensions of this intermediate result are then\n optionally cropped according to `crops` to produce the output. This is the\n reverse of SpaceToBatch. See below for a precise description.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has M dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n crops: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input\n dimension `i + 1`, which corresponds to spatial dimension `i`. It is\n required that\n `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.\n\n This operation is equivalent to the following steps:\n\n 1. Reshape `input` to `reshaped` of shape:\n [block_shape[0], ..., block_shape[M-1],\n batch / prod(block_shape),\n input_shape[1], ..., input_shape[N-1]]\n\n 2. Permute dimensions of `reshaped` to produce `permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1], block_shape[0],\n ...,\n input_shape[M], block_shape[M-1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n 3. Reshape `permuted` to produce `reshaped_permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0],\n ...,\n input_shape[M] * block_shape[M-1],\n\n input_shape[M+1],\n ...,\n input_shape[N-1]]\n\n 4. Crop the start and end of dimensions `[1, ..., M]` of\n `reshaped_permuted` according to `crops` to produce the output of shape:\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],\n ...,\n input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n Some examples:\n\n (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 3]` and value:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n The output tensor has shape `[1, 4, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [2, 0]]`:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n The output tensor has shape `[2, 2, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for N-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.betainc", "docs": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).\n\n The regularized incomplete beta integral is defined as:\n\n\n \\\\(I_x(a, b) = \\frac{B(x; a, b)}{B(a, b)}\\\\)\n\n where\n\n\n \\\\(B(x; a, b) = \\int_0^x t^{a-1} (1 - t)^{b-1} dt\\\\)\n\n\n is the incomplete beta function and \\\\(B(a, b)\\\\) is the *complete*\n beta function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n b: A `Tensor`. Must have the same type as `a`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).", "type": "API"}, {"name": "tf.compat.v1.bincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector with length\n `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise.\n If `weights` are non-None, then index `i` of the output stores the sum of the\n value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Args:\n arr: An int32 tensor of non-negative values.\n weights: If non-None, must be the same shape as arr. For each value in\n `arr`, the bin will be incremented by the corresponding weight instead of\n 1.\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `arr` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n dtype: If `weights` is None, determines the type of the output bins.\n\n Returns:\n A vector with the same dtype as `weights` or the given `dtype`. The bin\n values.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.compat.v1.bitcast", "docs": "Bitcasts a tensor from one type to another without copying data.\n\n Given a tensor `input`, this operation returns a tensor that has the same buffer\n data as `input` with datatype `type`.\n\n If the input datatype `T` is larger than the output datatype `type` then the\n shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].\n\n If `T` is smaller than `type`, the operator requires that the rightmost\n dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from\n [..., sizeof(`type`)/sizeof(`T`)] to [...].\n\n tf.bitcast() and tf.cast() work differently when real dtype is casted as a complex dtype\n (e.g. tf.complex64 or tf.complex128) as tf.cast() make imaginary part 0 while tf.bitcast()\n gives module error.\n For example,\n\n Example 1:\n\n >>> a = [1., 2., 3.]\n >>> equality_bitcast = tf.bitcast(a, tf.complex128)\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]\n >>> equality_cast = tf.cast(a, tf.complex128)\n >>> print(equality_cast)\n tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)\n\n Example 2:\n\n >>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)\n \n\n Example 3:\n\n >>> x = [1., 2., 3.]\n >>> y = [0., 2., 3.]\n >>> equality= tf.equal(x,y)\n >>> equality_cast = tf.cast(equality,tf.float32)\n >>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8)\n >>> print(equality)\n tf.Tensor([False True True], shape=(3,), dtype=bool)\n >>> print(equality_cast)\n tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)\n >>> print(equality_bitcast)\n tf.Tensor(\n [[ 0 0 0 0]\n [ 0 0 128 63]\n [ 0 0 128 63]], shape=(3, 4), dtype=uint8)\n\n *NOTE*: Bitcast is implemented as a low-level cast, so machines with different\n endian orderings will give different results.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.\n type: A `tf.DType` from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `type`.\n ", "desc": "Bitcasts a tensor from one type to another without copying data.", "type": "API"}, {"name": "tf.compat.v1.bitwise", "docs": "Operations for manipulating the binary representations of integers.\n", "desc": "Operations for manipulating the binary representations of integers.", "type": "API"}, {"name": "tf.compat.v1.bitwise.bitwise_and", "docs": "Elementwise computes the bitwise AND of `x` and `y`.\n\n The result will have those bits set, that are set in both `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_and(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise AND of `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.bitwise.bitwise_or", "docs": "Elementwise computes the bitwise OR of `x` and `y`.\n\n The result will have those bits set, that are set in `x`, `y` or both. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_or(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise OR of `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.bitwise.bitwise_xor", "docs": "Elementwise computes the bitwise XOR of `x` and `y`.\n\n The result will have those bits set, that are different in `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 4, 5], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_xor(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise XOR of `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.bitwise.invert", "docs": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.\n\n Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101.\n This operation is performed on each element of the tensor argument `x`.\n\n Example:\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n\n # flip 2 (00000010) to -3 (11111101)\n tf.assert_equal(-3, bitwise_ops.invert(2))\n\n dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,\n dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]\n\n inputs = [0, 5, 3, 14]\n for dtype in dtype_list:\n # Because of issues with negative numbers, let's test this indirectly.\n # 1. invert(a) and a = 0\n # 2. invert(a) or a = invert(0)\n input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)\n not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.bitwise_or(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.invert(\n tf.constant(0, dtype=dtype))]\n\n expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)\n tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)\n\n expected = tf.cast([not_0] * 4, tf.float32)\n tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)\n\n # For unsigned dtypes let's also check the result directly.\n if dtype.is_unsigned:\n inverted = bitwise_ops.invert(input_tensor)\n expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)\n tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.", "type": "API"}, {"name": "tf.compat.v1.bitwise.left_shift", "docs": "Elementwise computes the bitwise left-shift of `x` and `y`.\n\n If `y` is negative, or greater than or equal to the width of `x` in bits the\n result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n left_shift_result = bitwise_ops.left_shift(lhs, rhs)\n\n print(left_shift_result)\n\n # This will print:\n # tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.left_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise left-shift of `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.bitwise.right_shift", "docs": "Elementwise computes the bitwise right-shift of `x` and `y`.\n\n Performs a logical shift for unsigned integer types, and an arithmetic shift\n for signed integer types.\n\n If `y` is negative, or greater than or equal to than the width of `x` in bits\n the result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n right_shift_result = bitwise_ops.right_shift(lhs, rhs)\n\n print(right_shift_result)\n\n # This will print:\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.right_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise right-shift of `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.boolean_mask", "docs": "Apply boolean mask to tensor.\n\n Numpy equivalent is `tensor[mask]`.\n\n In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match\n the first K dimensions of `tensor`'s shape. We then have:\n `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`\n where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).\n The `axis` could be used with `mask` to indicate the axis to mask from.\n In that case, `axis + dim(mask) <= dim(tensor)` and `mask`'s shape must match\n the first `axis + dim(mask)` dimensions of `tensor`'s shape.\n\n See also: `tf.ragged.boolean_mask`, which can be applied to both dense and\n ragged tensors, and can be used if you need to preserve the masked dimensions\n of `tensor` (rather than flattening them, as `tf.boolean_mask` does).\n\n Examples:\n\n ```python\n # 1-D example\n tensor = [0, 1, 2, 3]\n mask = np.array([True, False, True, False])\n tf.boolean_mask(tensor, mask) # [0, 2]\n\n # 2-D example\n tensor = [[1, 2], [3, 4], [5, 6]]\n mask = np.array([True, False, True])\n tf.boolean_mask(tensor, mask) # [[1, 2], [5, 6]]\n ```\n\n Args:\n tensor: N-D Tensor.\n mask: K-D boolean Tensor, K <= N and K must be known statically.\n name: A name for this operation (optional).\n axis: A 0-D int Tensor representing the axis in `tensor` to mask from. By\n default, axis is 0 which will mask from the first dimension. Otherwise K +\n axis <= N.\n\n Returns:\n (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding\n to `True` values in `mask`.\n\n Raises:\n ValueError: If shapes do not conform.\n ", "desc": "Apply boolean mask to tensor.", "type": "API"}, {"name": "tf.compat.v1.broadcast_dynamic_shape", "docs": "Computes the shape of a broadcast given symbolic shapes.\n\n When `shape_x` and `shape_y` are Tensors representing shapes (i.e. the result\n of calling tf.shape on another Tensor) this computes a Tensor which is the\n shape of the result of a broadcasting op applied in tensors of shapes\n `shape_x` and `shape_y`.\n\n This is useful when validating the result of a broadcasting operation when the\n tensors do not have statically known shapes.\n\n Example:\n\n >>> shape_x = (1, 2, 3)\n >>> shape_y = (5, 1, 3)\n >>> tf.broadcast_dynamic_shape(shape_x, shape_y)\n \n\n Args:\n shape_x: A rank 1 integer `Tensor`, representing the shape of x.\n shape_y: A rank 1 integer `Tensor`, representing the shape of y.\n\n Returns:\n A rank 1 integer `Tensor` representing the broadcasted shape.\n\n Raises:\n InvalidArgumentError: If the two shapes are incompatible for\n broadcasting.\n ", "desc": "Computes the shape of a broadcast given symbolic shapes.", "type": "API"}, {"name": "tf.compat.v1.broadcast_static_shape", "docs": "Computes the shape of a broadcast given known shapes.\n\n When `shape_x` and `shape_y` are fully known `TensorShape`s this computes a\n `TensorShape` which is the shape of the result of a broadcasting op applied in\n tensors of shapes `shape_x` and `shape_y`.\n\n For example, if shape_x is `TensorShape([1, 2, 3])` and shape_y is\n `TensorShape([5, 1, 3])`, the result is a TensorShape whose value is\n `TensorShape([5, 2, 3])`.\n\n This is useful when validating the result of a broadcasting operation when the\n tensors have statically known shapes.\n\n Example:\n\n >>> shape_x = tf.TensorShape([1, 2, 3])\n >>> shape_y = tf.TensorShape([5, 1 ,3])\n >>> tf.broadcast_static_shape(shape_x, shape_y)\n TensorShape([5, 2, 3])\n\n Args:\n shape_x: A `TensorShape`\n shape_y: A `TensorShape`\n\n Returns:\n A `TensorShape` representing the broadcasted shape.\n\n Raises:\n ValueError: If the two shapes can not be broadcasted.\n ", "desc": "Computes the shape of a broadcast given known shapes.", "type": "API"}, {"name": "tf.compat.v1.broadcast_to", "docs": "Broadcast an array for a compatible shape.\n\n Broadcasting is the process of making arrays to have compatible shapes\n for arithmetic operations. Two shapes are compatible if for each\n dimension pair they are either equal or one of them is one. When trying\n to broadcast a Tensor to a shape, it starts with the trailing dimensions,\n and works its way forward.\n\n For example,\n\n >>> x = tf.constant([1, 2, 3])\n >>> y = tf.broadcast_to(x, [3, 3])\n >>> print(y)\n tf.Tensor(\n [[1 2 3]\n [1 2 3]\n [1 2 3]], shape=(3, 3), dtype=int32)\n\n In the above example, the input Tensor with the shape of `[1, 3]`\n is broadcasted to output Tensor with shape of `[3, 3]`.\n\n When doing broadcasted operations such as multiplying a tensor\n by a scalar, broadcasting (usually) confers some time or space\n benefit, as the broadcasted tensor is never materialized.\n\n However, `broadcast_to` does not carry with it any such benefits.\n The newly-created tensor takes the full memory of the broadcasted\n shape. (In a graph context, `broadcast_to` might be fused to\n subsequent operation and then be optimized away, however.)\n\n Args:\n input: A `Tensor`. A Tensor to broadcast.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 1-D `int` Tensor. The shape of the desired output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Broadcast an array for a compatible shape.", "type": "API"}, {"name": "tf.compat.v1.case", "docs": "Create a case operation.\n\n See also `tf.switch_case`.\n\n The `pred_fn_pairs` parameter is a dict or list of pairs of size N.\n Each pair contains a boolean scalar tensor and a python callable that\n creates the tensors to be returned if the boolean evaluates to True.\n `default` is a callable generating a list of tensors. All the callables\n in `pred_fn_pairs` as well as `default` (if provided) should return the same\n number and types of tensors.\n\n If `exclusive==True`, all predicates are evaluated, and an exception is\n thrown if more than one of the predicates evaluates to `True`.\n If `exclusive==False`, execution stops at the first predicate which\n evaluates to True, and the tensors generated by the corresponding function\n are returned immediately. If none of the predicates evaluate to True, this\n operation returns the tensors generated by `default`.\n\n `tf.case` supports nested structures as implemented in\n `tf.nest`. All of the callables must return the same (possibly nested) value\n structure of lists, tuples, and/or named tuples. Singleton lists and tuples\n form the only exceptions to this: when returned by a callable, they are\n implicitly unpacked to single values. This behavior is disabled by passing\n `strict=True`.\n\n If an unordered dictionary is used for `pred_fn_pairs`, the order of the\n conditional tests is not guaranteed. However, the order is guaranteed to be\n deterministic, so that variables created in conditional branches are created\n in fixed order across runs.\n\n @compatibility(eager)\n Unordered dictionaries are not supported in eager mode when `exclusive=False`.\n Use a list of tuples instead.\n @end_compatibility\n\n\n **Example 1:**\n\n Pseudocode:\n\n ```\n if (x < y) return 17;\n else return 23;\n ```\n\n Expressions:\n\n ```python\n f1 = lambda: tf.constant(17)\n f2 = lambda: tf.constant(23)\n r = tf.case([(tf.less(x, y), f1)], default=f2)\n ```\n\n **Example 2:**\n\n Pseudocode:\n\n ```\n if (x < y && x > z) raise OpError(\"Only one predicate may evaluate to True\");\n if (x < y) return 17;\n else if (x > z) return 23;\n else return -1;\n ```\n\n Expressions:\n\n ```python\n def f1(): return tf.constant(17)\n def f2(): return tf.constant(23)\n def f3(): return tf.constant(-1)\n r = tf.case({tf.less(x, y): f1, tf.greater(x, z): f2},\n default=f3, exclusive=True)\n ```\n\n Args:\n pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a\n callable which returns a list of tensors.\n default: Optional callable that returns a list of tensors.\n exclusive: True iff at most one predicate is allowed to evaluate to `True`.\n strict: A boolean that enables/disables 'strict' mode; see above.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the first pair whose predicate evaluated to True, or\n those returned by `default` if none does.\n\n Raises:\n TypeError: If `pred_fn_pairs` is not a list/dictionary.\n TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable.\n ", "desc": "Create a case operation.", "type": "API"}, {"name": "tf.compat.v1.cast", "docs": "Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.cast(x, tf.int32)\n \n\n Notice `tf.cast` has an alias `tf.dtypes.cast`:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.dtypes.cast(x, tf.int32)\n \n\n The operation supports data types (for `x` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`.\n In case of casting from complex types (`complex64`, `complex128`) to real\n types, only the real part of `x` is returned. In case of casting from real\n types to complex types (`complex64`, `complex128`), the imaginary part of the\n returned value is set to `0`. The handling of complex types here matches the\n behavior of numpy.\n\n Note casting nan and inf values to integral types has undefined behavior.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices` of numeric type. It could\n be `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`,\n `int64`, `float16`, `float32`, `float64`, `complex64`, `complex128`,\n `bfloat16`.\n dtype: The destination type. The list of supported dtypes is the same as\n `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` and\n same type as `dtype`.\n\n Raises:\n TypeError: If `x` cannot be cast to the `dtype`.\n ", "desc": "Casts a tensor to a new type.", "type": "API"}, {"name": "tf.compat.v1.ceil", "docs": "Return the ceiling of the input, element-wise.\n\n For example:\n\n >>> tf.math.ceil([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`. `int32`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.ceil\n @end_compatibility\n ", "desc": "Return the ceiling of the input, element-wise.", "type": "API"}, {"name": "tf.compat.v1.check_numerics", "docs": "Checks a tensor for NaN and Inf values.\n\n When run, reports an `InvalidArgument` error if `tensor` has any values\n that are not a number (NaN) or infinity (Inf). Otherwise, returns the input\n tensor.\n\n Example usage:\n\n ``` python\n a = tf.Variable(1.0)\n tf.debugging.check_numerics(a, message='')\n\n b = tf.Variable(np.nan)\n try:\n tf.debugging.check_numerics(b, message='Checking b')\n except Exception as e:\n assert \"Checking b : Tensor had NaN values\" in e.message\n\n c = tf.Variable(np.inf)\n try:\n tf.debugging.check_numerics(c, message='Checking c')\n except Exception as e:\n assert \"Checking c : Tensor had Inf values\" in e.message\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n message: A `string`. Prefix of the error message.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Checks a tensor for NaN and Inf values.", "type": "API"}, {"name": "tf.compat.v1.cholesky", "docs": "Computes the Cholesky decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be symmetric and positive definite. Only the lower-triangular\n part of the input will be used for this operation. The upper-triangular part\n will not be read.\n\n The output is a tensor of the same shape as the input\n containing the Cholesky decompositions for all input submatrices `[..., :, :]`.\n\n **Note**: The gradient computation on GPU is faster for large matrices but\n not for large batch dimensions when the submatrices are small. In this\n case it might be faster to use the CPU.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the Cholesky decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.cholesky_solve", "docs": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.\n\n Specifically, returns `X` from `A X = RHS`, where `A = L L^T`, `L` is the\n `chol` arg and `RHS` is the `rhs` arg.\n\n ```python\n # Solve 10 separate 2x2 linear systems:\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 1\n chol = tf.linalg.cholesky(A) # shape 10 x 2 x 2\n X = tf.linalg.cholesky_solve(chol, RHS) # shape 10 x 2 x 1\n # tf.matmul(A, X) ~ RHS\n X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]\n\n # Solve five linear systems (K = 5) for every member of the length 10 batch.\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 5\n ...\n X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]\n ```\n\n Args:\n chol: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.\n Cholesky factorization of `A`, e.g. `chol = tf.linalg.cholesky(A)`.\n For that reason, only the lower triangular parts (including the diagonal)\n of the last two dimensions of `chol` are used. The strictly upper part is\n assumed to be zero and not accessed.\n rhs: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.\n name: A name to give this `Op`. Defaults to `cholesky_solve`.\n\n Returns:\n Solution to `A x = rhs`, shape `[..., M, K]`.\n ", "desc": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.", "type": "API"}, {"name": "tf.compat.v1.clip_by_average_norm", "docs": "Clips tensor values to a maximum average L2-norm. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nclip_by_average_norm is deprecated in TensorFlow 2.0. Please use clip_by_norm(t, clip_norm * tf.cast(tf.size(t), tf.float32), name) instead.\n\nGiven a tensor `t`, and a maximum clip value `clip_norm`, this operation\nnormalizes `t` so that its average L2-norm is less than or equal to\n`clip_norm`. Specifically, if the average L2-norm is already less than or\nequal to `clip_norm`, then `t` is not modified. If the average L2-norm is\ngreater than `clip_norm`, then this operation returns a tensor of the same\ntype and shape as `t` with its values set to:\n\n`t * clip_norm / l2norm_avg(t)`\n\nIn this case, the average L2-norm of the output tensor is `clip_norm`.\n\nThis operation is typically used to clip gradients before applying them with\nan optimizer.\n\nArgs:\n t: A `Tensor`.\n clip_norm: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.\n name: A name for the operation (optional).\n\nReturns:\n A clipped `Tensor`.", "desc": "Clips tensor values to a maximum average L2-norm. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.clip_by_global_norm", "docs": "Clips values of multiple tensors by the ratio of the sum of their norms.\n\n Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,\n this operation returns a list of clipped tensors `list_clipped`\n and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,\n if you've already computed the global norm for `t_list`, you can specify\n the global norm with `use_norm`.\n\n To perform the clipping, the values `t_list[i]` are set to:\n\n t_list[i] * clip_norm / max(global_norm, clip_norm)\n\n where:\n\n global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))\n\n If `clip_norm > global_norm` then the entries in `t_list` remain as they are,\n otherwise they're all shrunk by the global ratio.\n\n If `global_norm == infinity` then the entries in `t_list` are all set to `NaN`\n to signal that an error occurred.\n\n Any of the entries of `t_list` that are of type `None` are ignored.\n\n This is the correct way to perform gradient clipping (Pascanu et al., 2012).\n\n However, it is slower than `clip_by_norm()` because all the parameters must be\n ready before the clipping operation can be performed.\n\n Args:\n t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.\n clip_norm: A 0-D (scalar) `Tensor` > 0. The clipping ratio.\n use_norm: A 0-D (scalar) `Tensor` of type `float` (optional). The global\n norm to use. If not provided, `global_norm()` is used to compute the norm.\n name: A name for the operation (optional).\n\n Returns:\n list_clipped: A list of `Tensors` of the same type as `list_t`.\n global_norm: A 0-D (scalar) `Tensor` representing the global norm.\n\n Raises:\n TypeError: If `t_list` is not a sequence.\n\n References:\n On the difficulty of training Recurrent Neural Networks:\n [Pascanu et al., 2012](http://proceedings.mlr.press/v28/pascanu13.html)\n ([pdf](http://proceedings.mlr.press/v28/pascanu13.pdf))\n ", "desc": "Clips values of multiple tensors by the ratio of the sum of their norms.", "type": "API"}, {"name": "tf.compat.v1.clip_by_norm", "docs": "Clips tensor values to a maximum L2-norm.\n\n Given a tensor `t`, and a maximum clip value `clip_norm`, this operation\n normalizes `t` so that its L2-norm is less than or equal to `clip_norm`,\n along the dimensions given in `axes`. Specifically, in the default case\n where all dimensions are used for calculation, if the L2-norm of `t` is\n already less than or equal to `clip_norm`, then `t` is not modified. If\n the L2-norm is greater than `clip_norm`, then this operation returns a\n tensor of the same type and shape as `t` with its values set to:\n\n `t * clip_norm / l2norm(t)`\n\n In this case, the L2-norm of the output tensor is `clip_norm`.\n\n As another example, if `t` is a matrix and `axes == [1]`, then each row\n of the output will have L2-norm less than or equal to `clip_norm`. If\n `axes == [0]` instead, each column of the output will be clipped.\n\n Code example:\n\n >>> some_nums = tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32)\n >>> tf.clip_by_norm(some_nums, 2.0).numpy()\n array([[0.26967996, 0.5393599 , 0.80903983, 1.0787199 , 1.3483998 ]],\n dtype=float32)\n\n This operation is typically used to clip gradients before applying them with\n an optimizer. Most gradient data is a collection of different shaped tensors\n for different parts of the model. Thus, this is a common usage:\n\n ```\n # Get your gradients after training\n loss_value, grads = grad(model, features, labels)\n\n # Apply some clipping\n grads = [tf.clip_by_norm(g, norm)\n for g in grads]\n\n # Continue on with training\n optimizer.apply_gradients(grads)\n ```\n\n Args:\n t: A `Tensor` or `IndexedSlices`. This must be a floating point type.\n clip_norm: A 0-D (scalar) `Tensor` > 0. A maximum clipping value, also\n floating point\n axes: A 1-D (vector) `Tensor` of type int32 containing the dimensions\n to use for computing the L2-norm. If `None` (the default), uses all\n dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A clipped `Tensor` or `IndexedSlices`.\n\n Raises:\n ValueError: If the clip_norm tensor is not a 0-D scalar tensor.\n TypeError: If dtype of the input is not a floating point or\n complex type.\n ", "desc": "Clips tensor values to a maximum L2-norm.", "type": "API"}, {"name": "tf.compat.v1.clip_by_value", "docs": "Clips tensor values to a specified min and max.\n\n Given a tensor `t`, this operation returns a tensor of the same type and\n shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.\n Any values less than `clip_value_min` are set to `clip_value_min`. Any values\n greater than `clip_value_max` are set to `clip_value_max`.\n\n Note: `clip_value_min` needs to be smaller or equal to `clip_value_max` for\n correct results.\n\n For example:\n\n Basic usage passes a scalar as the min and max value.\n\n >>> t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])\n >>> t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)\n >>> t2.numpy()\n array([[-1., -1., 0.],\n [ 0., 1., 1.]], dtype=float32)\n\n The min and max can be the same size as `t`, or broadcastable to that size.\n\n >>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])\n >>> clip_min = [[2],[1]]\n >>> t3 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)\n >>> t3.numpy()\n array([[ 2., 2., 10.],\n [ 1., 1., 10.]], dtype=float32)\n\n Broadcasting fails, intentionally, if you would expand the dimensions of `t`\n\n >>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])\n >>> clip_min = [[[2, 1]]] # Has a third axis\n >>> t4 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Incompatible shapes: [2,3] vs. [1,1,2]\n\n It throws a `TypeError` if you try to clip an `int` to a `float` value\n (`tf.cast` the input to `float` first).\n\n >>> t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)\n >>> t5 = tf.clip_by_value(t, clip_value_min=-3.1, clip_value_max=3.1)\n Traceback (most recent call last):\n ...\n TypeError: Cannot convert ...\n\n\n Args:\n t: A `Tensor` or `IndexedSlices`.\n clip_value_min: The minimum value to clip to. A scalar `Tensor` or one that\n is broadcastable to the shape of `t`.\n clip_value_max: The maximum value to clip to. A scalar `Tensor` or one that\n is broadcastable to the shape of `t`.\n name: A name for the operation (optional).\n\n Returns:\n A clipped `Tensor` or `IndexedSlices`.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If the clip tensors would trigger array\n broadcasting that would make the returned tensor larger than the input.\n TypeError: If dtype of the input is `int32` and dtype of\n the `clip_value_min` or `clip_value_max` is `float32`\n ", "desc": "Clips tensor values to a specified min and max.", "type": "API"}, {"name": "tf.compat.v1.colocate_with", "docs": "DEPRECATED FUNCTION\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.", "desc": "DEPRECATED FUNCTION", "type": "API"}, {"name": "tf.compat.v1.compat", "docs": "Compatibility functions.\n\nThe `tf.compat` module contains two sets of compatibility functions.\n\n## Tensorflow 1.x and 2.x APIs\n\nThe `compat.v1` and `compat.v2` submodules provide a complete copy of both the\n`v1` and `v2` APIs for backwards and forwards compatibility across TensorFlow\nversions 1.x and 2.x. See the\n[migration guide](https://www.tensorflow.org/guide/migrate) for details.\n\n## Utilities for writing compatible code\n\nAside from the `compat.v1` and `compat.v2` submodules, `tf.compat` also contains\na set of helper functions for writing code that works in both:\n\n* TensorFlow 1.x and 2.x\n* Python 2 and 3\n\n\n## Type collections\n\nThe compatibility module also provides the following aliases for common\nsets of python types:\n\n* `bytes_or_text_types`\n* `complex_types`\n* `integral_types`\n* `real_types`\n\n", "desc": "Compatibility functions.", "type": "API"}, {"name": "tf.compat.v1.compat.as_bytes", "docs": "Converts `bytearray`, `bytes`, or unicode python input types to `bytes`.\n\n Uses utf-8 encoding for text by default.\n\n Args:\n bytes_or_text: A `bytearray`, `bytes`, `str`, or `unicode` object.\n encoding: A string indicating the charset for encoding unicode.\n\n Returns:\n A `bytes` object.\n\n Raises:\n TypeError: If `bytes_or_text` is not a binary or unicode string.\n ", "desc": "Converts `bytearray`, `bytes`, or unicode python input types to `bytes`.", "type": "API"}, {"name": "tf.compat.v1.compat.as_str", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.compat.as_str_any", "docs": "Converts input to `str` type.\n\n Uses `str(value)`, except for `bytes` typed inputs, which are converted\n using `as_str`.\n\n Args:\n value: A object that can be converted to `str`.\n\n Returns:\n A `str` object.\n ", "desc": "Converts input to `str` type.", "type": "API"}, {"name": "tf.compat.v1.compat.as_text", "docs": "Converts any string-like python input types to unicode.\n\n Returns the input as a unicode string. Uses utf-8 encoding for text\n by default.\n\n Args:\n bytes_or_text: A `bytes`, `str`, or `unicode` object.\n encoding: A string indicating the charset for decoding unicode.\n\n Returns:\n A `unicode` (Python 2) or `str` (Python 3) object.\n\n Raises:\n TypeError: If `bytes_or_text` is not a binary or unicode string.\n ", "desc": "Converts any string-like python input types to unicode.", "type": "API"}, {"name": "tf.compat.v1.compat.dimension_at_index", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n If you want to retrieve the Dimension instance corresponding to a certain\n index in a TensorShape instance, use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n dim = tensor_shape[i]\n\n # Use `dimension_at_index` as direct replacement compatible with both V1 & V2:\n dim = dimension_at_index(tensor_shape, i)\n\n # Another possibility would be this, but WARNING: it only works if the\n # tensor_shape instance has a defined rank.\n dim = tensor_shape.dims[i] # `dims` may be None if the rank is undefined!\n\n # In native V2 code, we recommend instead being more explicit:\n if tensor_shape.rank is None:\n dim = Dimension(None)\n else:\n dim = tensor_shape.dims[i]\n\n # Being more explicit will save you from the following trap (present in V1):\n # you might do in-place modifications to `dim` and expect them to be reflected\n # in `tensor_shape[i]`, but they would not be (as the Dimension object was\n # instantiated on the fly.\n ```\n\n Args:\n shape: A TensorShape instance.\n index: An integer index.\n\n Returns:\n A dimension object.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.v1.compat.dimension_value", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n When accessing the value of a TensorShape dimension,\n use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n value = tensor_shape[i].value\n\n # Use `dimension_value` as direct replacement compatible with both V1 & V2:\n value = dimension_value(tensor_shape[i])\n\n # This would be the V2 equivalent:\n value = tensor_shape[i] # Warning: this will return the dim value in V2!\n ```\n\n Args:\n dimension: Either a `Dimension` instance, an integer, or None.\n\n Returns:\n A plain value, i.e. an integer or None.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.v1.compat.forward_compatibility_horizon", "docs": "Context manager for testing forward compatibility of generated graphs.\n\n See [Version\n compatibility](https://tensorflow.org/guide/version_compat#backward_forward).\n\n To ensure forward compatibility of generated graphs (see `forward_compatible`)\n with older binaries, new features can be gated with:\n\n ```python\n if compat.forward_compatible(year=2018, month=08, date=01):\n generate_graph_with_new_features()\n else:\n generate_graph_so_older_binaries_can_consume_it()\n ```\n\n However, when adding new features, one may want to unittest it before\n the forward compatibility window expires. This context manager enables\n such tests. For example:\n\n ```python\n from tensorflow.python.compat import compat\n\n def testMyNewFeature(self):\n with compat.forward_compatibility_horizon(2018, 08, 02):\n # Test that generate_graph_with_new_features() has an effect\n ```\n\n Args:\n year: A year (e.g., 2018). Must be an `int`.\n month: A month (1 <= month <= 12) in year. Must be an `int`.\n day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an\n `int`.\n\n Yields:\n Nothing.\n ", "desc": "Context manager for testing forward compatibility of generated graphs.", "type": "API"}, {"name": "tf.compat.v1.compat.forward_compatible", "docs": "Return true if the forward compatibility window has expired.\n\n See [Version\n compatibility](https://tensorflow.org/guide/version_compat#backward_forward).\n\n Forward-compatibility refers to scenarios where the producer of a TensorFlow\n model (a GraphDef or SavedModel) is compiled against a version of the\n TensorFlow library newer than what the consumer was compiled against. The\n \"producer\" is typically a Python program that constructs and trains a model\n while the \"consumer\" is typically another program that loads and serves the\n model.\n\n TensorFlow has been supporting a 3 week forward-compatibility window for\n programs compiled from source at HEAD.\n\n For example, consider the case where a new operation `MyNewAwesomeAdd` is\n created with the intent of replacing the implementation of an existing Python\n wrapper - `tf.add`. The Python wrapper implementation should change from\n something like:\n\n ```python\n def add(inputs, name=None):\n return gen_math_ops.add(inputs, name)\n ```\n\n to:\n\n ```python\n from tensorflow.python.compat import compat\n\n def add(inputs, name=None):\n if compat.forward_compatible(year, month, day):\n # Can use the awesome new implementation.\n return gen_math_ops.my_new_awesome_add(inputs, name)\n # To maintain forward compatibility, use the old implementation.\n return gen_math_ops.add(inputs, name)\n ```\n\n Where `year`, `month`, and `day` specify the date beyond which binaries\n that consume a model are expected to have been updated to include the\n new operations. This date is typically at least 3 weeks beyond the date\n the code that adds the new operation is committed.\n\n Args:\n year: A year (e.g., 2018). Must be an `int`.\n month: A month (1 <= month <= 12) in year. Must be an `int`.\n day: A day (1 <= day <= 31, or 30, or 29, or 28) in month. Must be an\n `int`.\n\n Returns:\n True if the caller can expect that serialized TensorFlow graphs produced\n can be consumed by programs that are compiled with the TensorFlow library\n source code after (year, month, day).\n ", "desc": "Return true if the forward compatibility window has expired.", "type": "API"}, {"name": "tf.compat.v1.compat.path_to_str", "docs": "Converts input which is a `PathLike` object to `str` type.\n\n Converts from any python constant representation of a `PathLike` object to\n a string. If the input is not a `PathLike` object, simply returns the input.\n\n Args:\n path: An object that can be converted to path representation.\n\n Returns:\n A `str` object.\n\n Usage:\n In case a simplified `str` version of the path is needed from an\n `os.PathLike` object\n\n Examples:\n ```python\n $ tf.compat.path_to_str('C:\\XYZ\\tensorflow\\./.././tensorflow')\n 'C:\\XYZ\\tensorflow\\./.././tensorflow' # Windows OS\n $ tf.compat.path_to_str(Path('C:\\XYZ\\tensorflow\\./.././tensorflow'))\n 'C:\\XYZ\\tensorflow\\..\\tensorflow' # Windows OS\n $ tf.compat.path_to_str(Path('./corpus'))\n 'corpus' # Linux OS\n $ tf.compat.path_to_str('./.././Corpus')\n './.././Corpus' # Linux OS\n $ tf.compat.path_to_str(Path('./.././Corpus'))\n '../Corpus' # Linux OS\n $ tf.compat.path_to_str(Path('./..////../'))\n '../..' # Linux OS\n\n ```\n ", "desc": "Converts input which is a `PathLike` object to `str` type.", "type": "API"}, {"name": "tf.compat.v1.complex", "docs": "Converts two real numbers to a complex number.\n\n Given a tensor `real` representing the real part of a complex number, and a\n tensor `imag` representing the imaginary part of a complex number, this\n operation returns complex numbers elementwise of the form \\\\(a + bj\\\\), where\n *a* represents the `real` part and *b* represents the `imag` part.\n\n The input tensors `real` and `imag` must have the same shape.\n\n For example:\n\n ```python\n real = tf.constant([2.25, 3.25])\n imag = tf.constant([4.75, 5.75])\n tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]]\n ```\n\n Args:\n real: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n imag: A `Tensor`. Must have the same type as `real`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64` or `complex128`.\n\n Raises:\n TypeError: Real and imag must be correct types\n ", "desc": "Converts two real numbers to a complex number.", "type": "API"}, {"name": "tf.compat.v1.concat", "docs": "Concatenates tensors along one dimension.\n\n See also `tf.tile`, `tf.stack`, `tf.repeat`.\n\n Concatenates the list of tensors `values` along dimension `axis`. If\n `values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, the concatenated\n result has shape\n\n [D0, D1, ... Raxis, ...Dn]\n\n where\n\n Raxis = sum(Daxis(i))\n\n That is, the data from the input tensors is joined along the `axis`\n dimension.\n\n The number of dimensions of the input tensors must match, and all dimensions\n except `axis` must be equal.\n\n For example:\n\n >>> t1 = [[1, 2, 3], [4, 5, 6]]\n >>> t2 = [[7, 8, 9], [10, 11, 12]]\n >>> tf.concat([t1, t2], 0)\n \n\n >>> tf.concat([t1, t2], 1)\n \n\n As in Python, the `axis` could also be negative numbers. Negative `axis`\n are interpreted as counting from the end of the rank, i.e.,\n `axis + rank(values)`-th dimension.\n\n For example:\n\n >>> t1 = [[[1, 2], [2, 3]], [[4, 4], [5, 3]]]\n >>> t2 = [[[7, 4], [8, 4]], [[2, 10], [15, 11]]]\n >>> tf.concat([t1, t2], -1)\n \n\n Note: If you are concatenating along a new axis consider using stack.\n E.g.\n\n ```python\n tf.concat([tf.expand_dims(t, axis) for t in tensors], axis)\n ```\n\n can be rewritten as\n\n ```python\n tf.stack(tensors, axis=axis)\n ```\n\n Args:\n values: A list of `Tensor` objects or a single `Tensor`.\n axis: 0-D `int32` `Tensor`. Dimension along which to concatenate. Must be\n in the range `[-rank(values), rank(values))`. As in Python, indexing for\n axis is 0-based. Positive axis in the rage of `[0, rank(values))` refers\n to `axis`-th dimension. And negative axis refers to `axis +\n rank(values)`-th dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` resulting from concatenation of the input tensors.\n ", "desc": "Concatenates tensors along one dimension.", "type": "API"}, {"name": "tf.compat.v1.cond", "docs": "Return `true_fn()` if the predicate `pred` is true else `false_fn()`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(fn1, fn2)`. They will be removed in a future version.\nInstructions for updating:\nfn1/fn2 are deprecated in favor of the true_fn/false_fn arguments.\n\n`true_fn` and `false_fn` both return lists of output tensors. `true_fn` and\n`false_fn` must have the same non-zero number and type of outputs.\n\n**WARNING**: Any Tensors or Operations created outside of `true_fn` and\n`false_fn` will be executed regardless of which branch is selected at runtime.\n\nAlthough this behavior is consistent with the dataflow model of TensorFlow,\nit has frequently surprised users who expected a lazier semantics.\nConsider the following simple program:\n\n```python\nz = tf.multiply(a, b)\nresult = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))\n```\n\nIf `x < y`, the `tf.add` operation will be executed and `tf.square`\noperation will not be executed. Since `z` is needed for at least one\nbranch of the `cond`, the `tf.multiply` operation is always executed,\nunconditionally.\n\nNote that `cond` calls `true_fn` and `false_fn` *exactly once* (inside the\ncall to `cond`, and not at all during `Session.run()`). `cond`\nstitches together the graph fragments created during the `true_fn` and\n`false_fn` calls with some additional graph nodes to ensure that the right\nbranch gets executed depending on the value of `pred`.\n\n`tf.cond` supports nested structures as implemented in\n`tensorflow.python.util.nest`. Both `true_fn` and `false_fn` must return the\nsame (possibly nested) value structure of lists, tuples, and/or named tuples.\nSingleton lists and tuples form the only exceptions to this: when returned by\n`true_fn` and/or `false_fn`, they are implicitly unpacked to single values.\nThis behavior is disabled by passing `strict=True`.\n\nArgs:\n pred: A scalar determining whether to return the result of `true_fn` or\n `false_fn`.\n true_fn: The callable to be performed if pred is true.\n false_fn: The callable to be performed if pred is false.\n strict: A boolean that enables/disables 'strict' mode; see above.\n name: Optional name prefix for the returned tensors.\n\nReturns:\n Tensors returned by the call to either `true_fn` or `false_fn`. If the\n callables return a singleton list, the element is extracted from the list.\n\nRaises:\n TypeError: if `true_fn` or `false_fn` is not callable.\n ValueError: if `true_fn` and `false_fn` do not return the same number of\n tensors, or return tensors of different types.\n\nExample:\n\n```python\nx = tf.constant(2)\ny = tf.constant(5)\ndef f1(): return tf.multiply(x, 17)\ndef f2(): return tf.add(y, 23)\nr = tf.cond(tf.less(x, y), f1, f2)\n# r is set to f1().\n# Operations in f2 (e.g., tf.add) are not executed.\n```", "desc": "Return `true_fn()` if the predicate `pred` is true else `false_fn()`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.ConditionalAccumulator", "docs": "A conditional accumulator for aggregating gradients.\n\n Up-to-date gradients (i.e., time step at which gradient was computed is\n equal to the accumulator's time step) are added to the accumulator.\n\n Extraction of the average gradient is blocked until the required number of\n gradients has been accumulated.\n ", "desc": "A conditional accumulator for aggregating gradients.", "type": "API"}, {"name": "tf.compat.v1.ConditionalAccumulatorBase", "docs": "A conditional accumulator for aggregating gradients.\n\n Up-to-date gradients (i.e., time step at which gradient was computed is\n equal to the accumulator's time step) are added to the accumulator.\n\n Extraction of the average gradient is blocked until the required number of\n gradients has been accumulated.\n ", "desc": "A conditional accumulator for aggregating gradients.", "type": "API"}, {"name": "tf.compat.v1.config", "docs": "Public API for tf.config namespace.\n", "desc": "Public API for tf.config namespace.", "type": "API"}, {"name": "tf.compat.v1.config.experimental", "docs": "Public API for tf.config.experimental namespace.\n", "desc": "Public API for tf.config.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.ClusterDeviceFilters", "docs": "Represent a collection of device filters for the remote workers in cluster.\n\n NOTE: this is an experimental API and subject to changes.\n\n Set device filters for selective jobs and tasks. For each remote worker, the\n device filters are a list of strings. When any filters are present, the remote\n worker will ignore all devices which do not match any of its filters. Each\n filter can be partially specified, e.g. \"/job:ps\", \"/job:worker/replica:3\",\n etc. Note that a device is always visible to the worker it is located on.\n\n For example, to set the device filters for a parameter server cluster:\n\n ```python\n cdf = tf.config.experimental.ClusterDeviceFilters()\n for i in range(num_workers):\n cdf.set_device_filters('worker', i, ['/job:ps'])\n for i in range(num_ps):\n cdf.set_device_filters('ps', i, ['/job:worker'])\n\n tf.config.experimental_connect_to_cluster(cluster_def,\n cluster_device_filters=cdf)\n ```\n\n The device filters can be partically specified. For remote tasks that do not\n have device filters specified, all devices will be visible to them.\n ", "desc": "Represent a collection of device filters for the remote workers in cluster.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.disable_mlir_bridge", "docs": "Disables experimental MLIR-Based TensorFlow Compiler Bridge.", "desc": "Disables experimental MLIR-Based TensorFlow Compiler Bridge.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.disable_mlir_graph_optimization", "docs": "Disables experimental MLIR-Based TensorFlow Compiler Optimizations.", "desc": "Disables experimental MLIR-Based TensorFlow Compiler Optimizations.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.enable_mlir_bridge", "docs": "Enables experimental MLIR-Based TensorFlow Compiler Bridge.\n\n DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.\n\n NOTE: MLIR-Based TensorFlow Compiler is under active development and has\n missing features, please refrain from using. This API exists for development\n and testing only.\n\n TensorFlow Compiler Bridge (TF Bridge) is responsible for translating parts\n of TensorFlow graph into a form that can be accepted as an input by a backend\n compiler such as XLA.\n ", "desc": "Enables experimental MLIR-Based TensorFlow Compiler Bridge.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.enable_mlir_graph_optimization", "docs": "Enables experimental MLIR-Based TensorFlow Compiler Optimizations.\n\n DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.\n\n NOTE: MLIR-Based TensorFlow Compiler is under active development and has\n missing features, please refrain from using. This API exists for development\n and testing only.\n\n TensorFlow Compiler Optimizations are responsible general graph level\n optimizations that in the current stack mostly done by Grappler graph\n optimizers.\n ", "desc": "Enables experimental MLIR-Based TensorFlow Compiler Optimizations.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.enable_tensor_float_32_execution", "docs": "Enable or disable the use of TensorFloat-32 on supported hardware.\n\n [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format),\n or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32\n execution causes certain float32 ops, such as matrix multiplications and\n convolutions, to run much faster on Ampere GPUs but with reduced precision.\n This reduced precision should not impact convergence of deep learning models\n in practice.\n\n TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on\n Ampere GPUs, so all other hardware will use the full float32 precision\n regardless of whether TensorFloat-32 is enabled or not. If you want to use the\n full float32 precision on Ampere, you can disable TensorFloat-32 execution\n with this function. For example:\n\n ```python\n x = tf.fill((2, 2), 1.0001)\n y = tf.fill((2, 2), 1.)\n # TensorFloat-32 is enabled, so matmul is run with reduced precision\n print(tf.linalg.matmul(x, y)) # [[2., 2.], [2., 2.]]\n tf.config.experimental.enable_tensor_float_32_execution(False)\n # Matmul is run with full precision\n print(tf.linalg.matmul(x, y)) # [[2.0002, 2.0002], [2.0002, 2.0002]]\n ```\n\n To check whether TensorFloat-32 execution is currently enabled, use\n `tf.config.experimental.tensor_float_32_execution_enabled`.\n\n If TensorFloat-32 is enabled, float32 inputs of supported ops, such as\n `tf.linalg.matmul`, will be rounded from 23 bits of precision to 10 bits of\n precision in most cases. This allows the ops to execute much faster by\n utilizing the GPU's tensor cores. TensorFloat-32 has the same dynamic range as\n float32, meaning it is no more likely to underflow or overflow than float32.\n Ops still use float32 accumulation when TensorFloat-32 is enabled. Enabling or\n disabling TensorFloat-32 only affects Ampere GPUs and subsequent GPUs that\n support TensorFloat-32.\n\n Note TensorFloat-32 is not always used in supported ops, as only inputs of\n certain shapes are supported. Support for more input shapes and more ops may\n be added in the future. As a result, precision of float32 ops may decrease in\n minor versions of TensorFlow.\n\n TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32\n is used in fewer cases for complex64 as it is for float32.\n\n Args:\n enabled: Bool indicating whether to enable TensorFloat-32 execution.\n ", "desc": "Enable or disable the use of TensorFloat-32 on supported hardware.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_device_details", "docs": "Returns details about a physical devices.\n\n This API takes in a `tf.config.PhysicalDevice` returned by\n `tf.config.list_physical_devices`. It returns a dict with string keys\n containing various details about the device. Each key is only supported by a\n subset of devices, so you should not assume the returned dict will have any\n particular key.\n\n >>> gpu_devices = tf.config.list_physical_devices('GPU')\n >>> if gpu_devices:\n ... details = tf.config.experimental.get_device_details(gpu_devices[0])\n ... details.get('device_name', 'Unknown GPU')\n\n Currently, details are only returned for GPUs. This function returns an\n empty dict if passed a non-GPU device.\n\n The returned dict may have the following keys:\n * `'device_name'`: A human-readable name of the device as a string, e.g.\n \"Titan V\". Unlike `tf.config.PhysicalDevice.name`, this will be the same for\n multiple devices if each device is the same model. Currently only available\n for GPUs.\n * `'compute_capability'`: The\n [compute capability](https://developer.nvidia.com/cuda-gpus) of the device\n as a tuple of two ints, in the form `(major_version, minor_version)`. Only\n available for NVIDIA GPUs\n\n Note: This is similar to `tf.sysconfig.get_build_info` in that both functions\n can return information relating to GPUs. However, this function returns\n run-time information about a specific device (such as a GPU's compute\n capability), while `tf.sysconfig.get_build_info` returns compile-time\n information about how TensorFlow was built (such as what version of CUDA\n TensorFlow was built for).\n\n Args:\n device: A `tf.config.PhysicalDevice` returned by\n `tf.config.list_physical_devices` or `tf.config.get_visible_devices`.\n\n Returns:\n A dict with string keys.\n ", "desc": "Returns details about a physical devices.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_device_policy", "docs": "Gets the current device policy.\n\n The device policy controls how operations requiring inputs on a specific\n device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).\n\n This function only gets the device policy for the current thread. Any\n subsequently started thread will again use the default policy.\n\n Returns:\n Current thread device policy\n ", "desc": "Gets the current device policy.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_memory_growth", "docs": "Get if memory growth is enabled for a `PhysicalDevice`.\n\n If memory growth is enabled for a `PhysicalDevice`, the runtime initialization\n will not allocate all memory on the device.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.experimental.set_memory_growth(physical_devices[0], True)\n ... assert tf.config.experimental.get_memory_growth(physical_devices[0])\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n A boolean indicating the memory growth setting for the `PhysicalDevice`.\n\n Raises:\n ValueError: Invalid `PhysicalDevice` specified.\n ", "desc": "Get if memory growth is enabled for a `PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_memory_info", "docs": "Get memory info for the chosen device, as a dict.\n\n This function returns a dict containing information about the device's memory\n usage. For example:\n\n >>> if tf.config.list_physical_devices('GPU'):\n ... # Returns a dict in the form {'current': ,\n ... # 'peak': }\n ... tf.config.experimental.get_memory_info('GPU:0')\n\n Currently returns the following keys:\n - `'current'`: The current memory used by the device, in bytes.\n - `'peak'`: The peak memory used by the device across the run of the\n program, in bytes. Can be reset with\n `tf.config.experimental.reset_memory_stats`.\n\n More keys may be added in the future, including device-specific keys.\n\n Currently only supports GPU and TPU. If called on a CPU device, an exception\n will be raised.\n\n For GPUs, TensorFlow will allocate all the memory by default, unless changed\n with `tf.config.experimental.set_memory_growth`. The dict specifies only the\n current and peak memory that TensorFlow is actually using, not the memory that\n TensorFlow has allocated on the GPU.\n\n Args:\n device: Device string to get the memory information for, e.g. `\"GPU:0\"`,\n `\"TPU:0\"`. See https://www.tensorflow.org/api_docs/python/tf/device for\n specifying device strings.\n\n Returns:\n A dict with keys `'current'` and `'peak'`, specifying the current and peak\n memory usage respectively.\n\n Raises:\n ValueError: No device found with the device name, like '\"nonexistent\"'.\n ValueError: Invalid device name, like '\"GPU\"', '\"CPU:GPU\"', '\"CPU:\"'.\n ValueError: Multiple devices matched with the device name.\n ValueError: Memory statistics not tracked, like '\"CPU:0\"'.\n ", "desc": "Get memory info for the chosen device, as a dict.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_memory_usage", "docs": "Get the current memory usage, in bytes, for the chosen device. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.config.experimental.get_memory_info(device)['current'] instead.\n\nThis function is deprecated in favor of\n`tf.config.experimental.get_memory_info`. Calling this function is equivalent\nto calling `tf.config.experimental.get_memory_info()['current']`.\n\nSee https://www.tensorflow.org/api_docs/python/tf/device for specifying device\nstrings.\n\nFor example:\n\n>>> gpu_devices = tf.config.list_physical_devices('GPU')\n>>> if gpu_devices:\n... tf.config.experimental.get_memory_usage('GPU:0')\n\nDoes not work for CPU.\n\nFor GPUs, TensorFlow will allocate all the memory by default, unless changed\nwith `tf.config.experimental.set_memory_growth`. This function only returns\nthe memory that TensorFlow is actually using, not the memory that TensorFlow\nhas allocated on the GPU.\n\nArgs:\n device: Device string to get the bytes in use for, e.g. `\"GPU:0\"`\n\nReturns:\n Total memory usage in bytes.\n\nRaises:\n ValueError: Non-existent or CPU device specified.", "desc": "Get the current memory usage, in bytes, for the chosen device. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_synchronous_execution", "docs": "Gets whether operations are executed synchronously or asynchronously.\n\n TensorFlow can execute operations synchronously or asynchronously. If\n asynchronous execution is enabled, operations may return \"non-ready\" handles.\n\n Returns:\n Current thread execution mode\n ", "desc": "Gets whether operations are executed synchronously or asynchronously.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_virtual_device_configuration", "docs": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.\n\n Returns the list of `tf.config.LogicalDeviceConfiguration`\n objects previously configured by a call to\n `tf.config.set_logical_device_configuration`.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n >>> try:\n ... assert configs is None\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n ... assert len(configs) == 2\n ... except:\n ... # Cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n List of `tf.config.LogicalDeviceConfiguration` objects or\n `None` if no virtual device configuration has been set for this physical\n device.\n ", "desc": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.get_visible_devices", "docs": "Get the list of visible physical devices.\n\n Returns the list of `PhysicalDevice`s currently marked as visible to the\n runtime. A visible device will have at least one `LogicalDevice` associated\n with it once the runtime is initialized.\n\n The following example verifies all visible GPUs have been disabled:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable all GPUS\n ... tf.config.set_visible_devices([], 'GPU')\n ... visible_devices = tf.config.get_visible_devices()\n ... for device in visible_devices:\n ... assert device.device_type != 'GPU'\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of visible `PhysicalDevice`s\n ", "desc": "Get the list of visible physical devices.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.list_logical_devices", "docs": "Return a list of logical devices created by runtime.\n\n Logical devices may correspond to physical devices or remote devices in the\n cluster. Operations and tensors may be placed on these devices by using the\n `name` of the `tf.config.LogicalDevice`.\n\n Calling `tf.config.list_logical_devices` triggers the runtime to configure any\n `tf.config.PhysicalDevice` visible to the runtime, thereby preventing\n further configuration. To avoid runtime initialization, call\n `tf.config.list_physical_devices` instead.\n\n For example:\n\n >>> logical_devices = tf.config.list_logical_devices('GPU')\n >>> if len(logical_devices) > 0:\n ... # Allocate on GPU:0\n ... with tf.device(logical_devices[0].name):\n ... one = tf.constant(1)\n ... # Allocate on GPU:1\n ... with tf.device(logical_devices[1].name):\n ... two = tf.constant(2)\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of initialized `LogicalDevice`s\n ", "desc": "Return a list of logical devices created by runtime.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.list_physical_devices", "docs": "Return a list of physical devices visible to the host runtime.\n\n Physical devices are hardware devices present on the host machine. By default\n all discovered CPU and GPU devices are considered visible.\n\n This API allows querying the physical hardware resources prior to runtime\n initialization. Thus, giving an opportunity to call any additional\n configuration APIs. This is in contrast to `tf.config.list_logical_devices`,\n which triggers runtime initialization in order to list the configured devices.\n\n The following example lists the number of visible GPUs on the host.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> print(\"Num GPUs:\", len(physical_devices))\n Num GPUs: ...\n\n However, the number of GPUs available to the runtime may change during runtime\n initialization due to marking certain devices as not visible or configuring\n multiple logical devices.\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of discovered `tf.config.PhysicalDevice` objects\n ", "desc": "Return a list of physical devices visible to the host runtime.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.set_device_policy", "docs": "Sets the current thread device policy.\n\n The device policy controls how operations requiring inputs on a specific\n device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).\n\n When using the default, an appropriate policy will be picked automatically.\n The default policy may change over time.\n\n This function only sets the device policy for the current thread. Any\n subsequently started thread will again use the default policy.\n\n Args:\n device_policy: A device policy.\n Valid values:\n - None: Switch to a system default.\n - 'warn': Copies the tensors which are not on the right device and logs a\n warning.\n - 'explicit': Raises an error if the placement is not as required.\n - 'silent': Silently copies the tensors. Note that this may hide\n performance problems as there is no notification provided when\n operations are blocked on the tensor being copied between devices.\n - 'silent_for_int32': silently copies `int32` tensors, raising errors on\n the other ones.\n\n Raises:\n ValueError: If an invalid `device_policy` is passed.\n ", "desc": "Sets the current thread device policy.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.set_memory_growth", "docs": "Set if memory growth should be enabled for a `PhysicalDevice`.\n\n If memory growth is enabled for a `PhysicalDevice`, the runtime initialization\n will not allocate all memory on the device. Memory growth cannot be configured\n on a `PhysicalDevice` with virtual devices configured.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.experimental.set_memory_growth(physical_devices[0], True)\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to configure\n enable: (Boolean) Whether to enable or disable memory growth\n\n Raises:\n ValueError: Invalid `PhysicalDevice` specified.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set if memory growth should be enabled for a `PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.set_synchronous_execution", "docs": "Specifies whether operations are executed synchronously or asynchronously.\n\n TensorFlow can execute operations synchronously or asynchronously. If\n asynchronous execution is enabled, operations may return \"non-ready\" handles.\n\n When `enable` is set to None, an appropriate value will be picked\n automatically. The value picked may change between TensorFlow releases.\n\n Args:\n enable: Whether operations should be dispatched synchronously.\n Valid values:\n - None: sets the system default.\n - True: executes each operation synchronously.\n - False: executes each operation asynchronously.\n ", "desc": "Specifies whether operations are executed synchronously or asynchronously.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.set_virtual_device_configuration", "docs": "Set the logical device configuration for a `tf.config.PhysicalDevice`.\n\n A visible `tf.config.PhysicalDevice` will by default have a single\n `tf.config.LogicalDevice` associated with it once the runtime is initialized.\n Specifying a list of `tf.config.LogicalDeviceConfiguration` objects allows\n multiple devices to be created on the same `tf.config.PhysicalDevice`.\n\n Logical device configurations can be modified by calling this function as\n long as the runtime is uninitialized. After the runtime is initialized\n calling this function raises a RuntimeError.\n\n The following example splits the CPU into 2 logical devices:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> # Specify 2 virtual CPUs. Note currently memory limit is not supported.\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... logical_devices = tf.config.list_logical_devices('CPU')\n ... assert len(logical_devices) == 2\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... except:\n ... # Cannot modify logical devices once initialized.\n ... pass\n\n The following example splits the GPU into 2 logical devices with 100 MB each:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=100),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=100)])\n ...\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... assert len(logical_devices) == len(physical_devices) + 1\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=10),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=10)])\n ... except:\n ... # Invalid device or cannot modify logical devices once initialized.\n ... pass\n\n Args:\n device: The `PhysicalDevice` to configure.\n logical_devices: (optional) List of `tf.config.LogicalDeviceConfiguration`\n objects to allocate for the specified `PhysicalDevice`. If None, the\n default configuration will be used.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the logical device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.set_visible_devices", "docs": "Set the list of visible devices.\n\n Specifies which `PhysicalDevice` objects are visible to the runtime.\n TensorFlow will only allocate memory and place operations on visible\n physical devices, as otherwise no `LogicalDevice` will be created on them.\n By default all discovered devices are marked as visible.\n\n The following example demonstrates disabling the first GPU on the machine.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable first GPU\n ... tf.config.set_visible_devices(physical_devices[1:], 'GPU')\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... # Logical device was not created for first GPU\n ... assert len(logical_devices) == len(physical_devices) - 1\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n devices: List of `PhysicalDevice`s to make visible\n device_type: (optional) Only configure devices matching this device type.\n For example \"CPU\" or \"GPU\". Other devices will be left unaltered.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the list of visible devices.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.tensor_float_32_execution_enabled", "docs": "Returns whether TensorFloat-32 is enabled.\n\n By default, TensorFloat-32 is enabled, but this can be changed with\n `tf.config.experimental.enable_tensor_float_32_execution`.\n\n Returns:\n True if TensorFloat-32 is enabled (the default) and False otherwise\n ", "desc": "Returns whether TensorFloat-32 is enabled.", "type": "API"}, {"name": "tf.compat.v1.config.experimental.VirtualDeviceConfiguration", "docs": "Configuration class for a logical devices.\n\n The class specifies the parameters to configure a `tf.config.PhysicalDevice`\n as it is initialized to a `tf.config.LogicalDevice` during runtime\n initialization. Not all fields are valid for all device types.\n\n See `tf.config.get_logical_device_configuration` and\n `tf.config.set_logical_device_configuration` for usage examples.\n\n Fields:\n memory_limit: (optional) Maximum memory (in MB) to allocate on the virtual\n device. Currently only supported for GPUs.\n experimental_priority: (optional) Priority to assign to a virtual device.\n Lower values have higher priorities and 0 is the default.\n Within a physical GPU, the GPU scheduler will prioritize ops on virtual\n devices with higher priority. Currently only supported for Nvidia GPUs.\n ", "desc": "Configuration class for a logical devices.", "type": "API"}, {"name": "tf.compat.v1.config.experimental_connect_to_cluster", "docs": "Connects to the given cluster.\n\n Will make devices on the cluster available to use. Note that calling this more\n than once will work, but will invalidate any tensor handles on the old remote\n devices.\n\n If the given local job name is not present in the cluster specification, it\n will be automatically added, using an unused port on the localhost.\n\n Device filters can be specified to isolate groups of remote tasks to avoid\n undesired accesses between workers. Workers accessing resources or launching\n ops / functions on filtered remote devices will result in errors (unknown\n devices). For any remote task, if no device filter is present, all cluster\n devices will be visible; if any device filter is specified, it can only\n see devices matching at least one filter. Devices on the task itself are\n always visible. Device filters can be particially specified.\n\n For example, for a cluster set up for parameter server training, the following\n device filters might be specified:\n\n ```python\n cdf = tf.config.experimental.ClusterDeviceFilters()\n # For any worker, only the devices on PS nodes and itself are visible\n for i in range(num_workers):\n cdf.set_device_filters('worker', i, ['/job:ps'])\n # Similarly for any ps, only the devices on workers and itself are visible\n for i in range(num_ps):\n cdf.set_device_filters('ps', i, ['/job:worker'])\n\n tf.config.experimental_connect_to_cluster(cluster_def,\n cluster_device_filters=cdf)\n ```\n\n Args:\n cluster_spec_or_resolver: A `ClusterSpec` or `ClusterResolver` describing\n the cluster.\n job_name: The name of the local job.\n task_index: The local task index.\n protocol: The communication protocol, such as `\"grpc\"`. If unspecified, will\n use the default from `python/platform/remote_utils.py`.\n make_master_device_default: If True and a cluster resolver is passed, will\n automatically enter the master task device scope, which indicates the\n master becomes the default device to run ops. It won't do anything if\n a cluster spec is passed. Will throw an error if the caller is currently\n already in some device scope.\n cluster_device_filters: an instance of\n `tf.train.experimental/ClusterDeviceFilters` that specify device filters\n to the remote tasks in cluster.\n ", "desc": "Connects to the given cluster.", "type": "API"}, {"name": "tf.compat.v1.config.experimental_connect_to_host", "docs": "Connects to a single machine to enable remote execution on it.\n\n Will make devices on the remote host available to use. Note that calling this\n more than once will work, but will invalidate any tensor handles on the old\n remote devices.\n\n Using the default job_name of worker, you can schedule ops to run remotely as\n follows:\n ```python\n # When eager execution is enabled, connect to the remote host.\n tf.config.experimental_connect_to_host(\"exampleaddr.com:9876\")\n\n with ops.device(\"job:worker/replica:0/task:1/device:CPU:0\"):\n # The following tensors should be resident on the remote device, and the op\n # will also execute remotely.\n x1 = array_ops.ones([2, 2])\n x2 = array_ops.ones([2, 2])\n y = math_ops.matmul(x1, x2)\n ```\n\n Args:\n remote_host: a single or a list the remote server addr in host-port format.\n job_name: The job name under which the new server will be accessible.\n\n Raises:\n ValueError: if remote_host is None.\n ", "desc": "Connects to a single machine to enable remote execution on it.", "type": "API"}, {"name": "tf.compat.v1.config.experimental_functions_run_eagerly", "docs": "Returns the value of the `experimental_run_functions_eagerly` setting. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.config.functions_run_eagerly instead of the experimental version.", "desc": "Returns the value of the `experimental_run_functions_eagerly` setting. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.config.experimental_run_functions_eagerly", "docs": "Enables / disables eager execution of `tf.function`s. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.config.run_functions_eagerly` instead of the experimental version.\n\nCalling `tf.config.experimental_run_functions_eagerly(True)` will make all\ninvocations of `tf.function` run eagerly instead of running as a traced graph\nfunction.\n\nSee `tf.config.run_functions_eagerly` for an example.\n\nNote: This flag has no effect on functions passed into tf.data transformations\nas arguments. tf.data functions are never executed eagerly and are always\nexecuted as a compiled Tensorflow Graph.\n\nArgs:\n run_eagerly: Boolean. Whether to run functions eagerly.", "desc": "Enables / disables eager execution of `tf.function`s. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.config.functions_run_eagerly", "docs": "Returns the value of the `run_functions_eagerly` setting.", "desc": "Returns the value of the `run_functions_eagerly` setting.", "type": "API"}, {"name": "tf.compat.v1.config.get_logical_device_configuration", "docs": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.\n\n Returns the list of `tf.config.LogicalDeviceConfiguration`\n objects previously configured by a call to\n `tf.config.set_logical_device_configuration`.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n >>> try:\n ... assert configs is None\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n ... assert len(configs) == 2\n ... except:\n ... # Cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n List of `tf.config.LogicalDeviceConfiguration` objects or\n `None` if no virtual device configuration has been set for this physical\n device.\n ", "desc": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.get_soft_device_placement", "docs": "Return status of soft device placement flag.\n\n If enabled, an op will be placed on CPU if any of the following are true\n 1. there's no GPU implementation for the OP\n 2. no GPU devices are known or registered\n 3. need to co-locate with reftype input(s) which are from CPU\n\n If disabled, the placement is strict and CPU fallback is not allowed.\n An error is raised when an Op cannot be placed onto its intended device.\n\n Returns:\n A boolean indicating if soft placement is enabled.\n ", "desc": "Return status of soft device placement flag.", "type": "API"}, {"name": "tf.compat.v1.config.get_visible_devices", "docs": "Get the list of visible physical devices.\n\n Returns the list of `PhysicalDevice`s currently marked as visible to the\n runtime. A visible device will have at least one `LogicalDevice` associated\n with it once the runtime is initialized.\n\n The following example verifies all visible GPUs have been disabled:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable all GPUS\n ... tf.config.set_visible_devices([], 'GPU')\n ... visible_devices = tf.config.get_visible_devices()\n ... for device in visible_devices:\n ... assert device.device_type != 'GPU'\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of visible `PhysicalDevice`s\n ", "desc": "Get the list of visible physical devices.", "type": "API"}, {"name": "tf.compat.v1.config.list_logical_devices", "docs": "Return a list of logical devices created by runtime.\n\n Logical devices may correspond to physical devices or remote devices in the\n cluster. Operations and tensors may be placed on these devices by using the\n `name` of the `tf.config.LogicalDevice`.\n\n Calling `tf.config.list_logical_devices` triggers the runtime to configure any\n `tf.config.PhysicalDevice` visible to the runtime, thereby preventing\n further configuration. To avoid runtime initialization, call\n `tf.config.list_physical_devices` instead.\n\n For example:\n\n >>> logical_devices = tf.config.list_logical_devices('GPU')\n >>> if len(logical_devices) > 0:\n ... # Allocate on GPU:0\n ... with tf.device(logical_devices[0].name):\n ... one = tf.constant(1)\n ... # Allocate on GPU:1\n ... with tf.device(logical_devices[1].name):\n ... two = tf.constant(2)\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of initialized `LogicalDevice`s\n ", "desc": "Return a list of logical devices created by runtime.", "type": "API"}, {"name": "tf.compat.v1.config.list_physical_devices", "docs": "Return a list of physical devices visible to the host runtime.\n\n Physical devices are hardware devices present on the host machine. By default\n all discovered CPU and GPU devices are considered visible.\n\n This API allows querying the physical hardware resources prior to runtime\n initialization. Thus, giving an opportunity to call any additional\n configuration APIs. This is in contrast to `tf.config.list_logical_devices`,\n which triggers runtime initialization in order to list the configured devices.\n\n The following example lists the number of visible GPUs on the host.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> print(\"Num GPUs:\", len(physical_devices))\n Num GPUs: ...\n\n However, the number of GPUs available to the runtime may change during runtime\n initialization due to marking certain devices as not visible or configuring\n multiple logical devices.\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of discovered `tf.config.PhysicalDevice` objects\n ", "desc": "Return a list of physical devices visible to the host runtime.", "type": "API"}, {"name": "tf.compat.v1.config.LogicalDevice", "docs": "Abstraction for a logical device initialized by the runtime.\n\n A `tf.config.LogicalDevice` corresponds to an initialized logical device on a\n `tf.config.PhysicalDevice` or a remote device visible to the cluster. Tensors\n and operations can be placed on a specific logical device by calling\n `tf.device` with a specified `tf.config.LogicalDevice`.\n\n Fields:\n name: The fully qualified name of the device. Can be used for Op or function\n placement.\n device_type: String declaring the type of device such as \"CPU\" or \"GPU\".\n ", "desc": "Abstraction for a logical device initialized by the runtime.", "type": "API"}, {"name": "tf.compat.v1.config.LogicalDeviceConfiguration", "docs": "Configuration class for a logical devices.\n\n The class specifies the parameters to configure a `tf.config.PhysicalDevice`\n as it is initialized to a `tf.config.LogicalDevice` during runtime\n initialization. Not all fields are valid for all device types.\n\n See `tf.config.get_logical_device_configuration` and\n `tf.config.set_logical_device_configuration` for usage examples.\n\n Fields:\n memory_limit: (optional) Maximum memory (in MB) to allocate on the virtual\n device. Currently only supported for GPUs.\n experimental_priority: (optional) Priority to assign to a virtual device.\n Lower values have higher priorities and 0 is the default.\n Within a physical GPU, the GPU scheduler will prioritize ops on virtual\n devices with higher priority. Currently only supported for Nvidia GPUs.\n ", "desc": "Configuration class for a logical devices.", "type": "API"}, {"name": "tf.compat.v1.config.optimizer", "docs": "Public API for tf.config.optimizer namespace.\n", "desc": "Public API for tf.config.optimizer namespace.", "type": "API"}, {"name": "tf.compat.v1.config.optimizer.get_experimental_options", "docs": "Get experimental optimizer options.\n\n Refer to tf.config.optimizer.set_experimental_options for a list of current\n options.\n\n Note that optimizations are only applied in graph mode, (within tf.function).\n In addition, as these are experimental options, the list is subject to change.\n\n Returns:\n Dictionary of configured experimental optimizer options\n ", "desc": "Get experimental optimizer options.", "type": "API"}, {"name": "tf.compat.v1.config.optimizer.get_jit", "docs": "Returns JIT compilation configuration for code inside `tf.function`.\n\n Possible return values:\n -`\"autoclustering\"` if\n [autoclustering](https://www.tensorflow.org/xla#auto-clustering) is enabled\n - `\"\"` when no default compilation is applied.\n ", "desc": "Returns JIT compilation configuration for code inside `tf.function`.", "type": "API"}, {"name": "tf.compat.v1.config.optimizer.set_experimental_options", "docs": "Set experimental optimizer options.\n\n Note that optimizations are only applied in graph mode, (within tf.function).\n In addition, as these are experimental options, the list is subject to change.\n\n Args:\n options: Dictionary of experimental optimizer options to configure.\n Valid keys:\n - layout_optimizer: Optimize tensor layouts e.g. This will try to use NCHW\n layout on GPU which is faster.\n - constant_folding: Fold constants Statically infer the value of tensors\n when possible, and materialize the result using constants.\n - shape_optimization: Simplify computations made on shapes.\n - remapping: Remap subgraphs onto more efficient implementations.\n - arithmetic_optimization: Simplify arithmetic ops with common\n sub-expression elimination and arithmetic simplification.\n - dependency_optimization: Control dependency optimizations. Remove\n redundant control dependencies, which may enable other optimization.\n This optimizer is also essential for pruning Identity and NoOp nodes.\n - loop_optimization: Loop optimizations.\n - function_optimization: Function optimizations and inlining.\n - debug_stripper: Strips debug-related nodes from the graph.\n - disable_model_pruning: Disable removal of unnecessary ops from the graph\n - scoped_allocator_optimization: Try to allocate some independent Op\n outputs contiguously in order to merge or eliminate downstream Ops.\n - pin_to_host_optimization: Force small ops onto the CPU.\n - implementation_selector: Enable the swap of kernel implementations based\n on the device placement.\n - auto_mixed_precision: Change certain float32 ops to float16 on Volta\n GPUs and above. Without the use of loss scaling, this can cause\n numerical underflow (see\n `keras.mixed_precision.experimental.LossScaleOptimizer`).\n - disable_meta_optimizer: Disable the entire meta optimizer.\n - min_graph_nodes: The minimum number of nodes in a graph to optimizer.\n For smaller graphs, optimization is skipped.\n ", "desc": "Set experimental optimizer options.", "type": "API"}, {"name": "tf.compat.v1.config.optimizer.set_jit", "docs": "Configure JIT compilation. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(jit_config=True)`. They will be removed in a future version.\nInstructions for updating:\n`True` setting is deprecated, use `autoclustering` instead.\n\nNote: compilation is only applied to code that is compiled into a\ngraph (in TF2 that's only a code inside `tf.function`).\n\nArgs:\n enabled: JIT compilation configuration.\n Possible values:\n - `\"autoclustering\"` (`True` is a deprecated alias): perform\n [autoclustering](https://www.tensorflow.org/xla#auto-clustering)\n (automatically identify and compile clusters of nodes) on all graphs\n using\n [XLA](https://www.tensorflow.org/xla).\n - `False`: do not automatically compile any graphs.", "desc": "Configure JIT compilation. (deprecated argument values)", "type": "API"}, {"name": "tf.compat.v1.config.PhysicalDevice", "docs": "Abstraction for a locally visible physical device.\n\n TensorFlow can utilize various devices such as the CPU or multiple GPUs\n for computation. Before initializing a local device for use, the user can\n customize certain properties of the device such as it's visibility or memory\n configuration.\n\n Once a visible `tf.config.PhysicalDevice` is initialized one or more\n `tf.config.LogicalDevice` objects are created. Use\n `tf.config.set_visible_devices` to configure the visibility of a physical\n device and `tf.config.set_logical_device_configuration` to configure multiple\n `tf.config.LogicalDevice` objects for a `tf.config.PhysicalDevice`. This is\n useful when separation between models is needed or to simulate a multi-device\n environment.\n\n Fields:\n name: Unique identifier for device.\n device_type: String declaring the type of device such as \"CPU\" or \"GPU\".\n ", "desc": "Abstraction for a locally visible physical device.", "type": "API"}, {"name": "tf.compat.v1.config.run_functions_eagerly", "docs": "Enables / disables eager execution of `tf.function`s.\n\n Calling `tf.config.run_functions_eagerly(True)` will make all\n invocations of `tf.function` run eagerly instead of running as a traced graph\n function.\n\n This can be useful for debugging.\n\n >>> def my_func(a):\n ... print(\"Python side effect\")\n ... return a + a\n >>> a_fn = tf.function(my_func)\n\n >>> # A side effect the first time the function is traced\n >>> a_fn(tf.constant(1))\n Python side effect\n \n\n >>> # No further side effect, as the traced function is called\n >>> a_fn(tf.constant(2))\n \n\n >>> # Now, switch to eager running\n >>> tf.config.run_functions_eagerly(True)\n >>> # Side effect, as the function is called directly\n >>> a_fn(tf.constant(2))\n Python side effect\n \n\n >>> # Turn this back off\n >>> tf.config.run_functions_eagerly(False)\n\n Note: This flag has no effect on functions passed into tf.data transformations\n as arguments. tf.data functions are never executed eagerly and are always\n executed as a compiled Tensorflow Graph.\n\n Args:\n run_eagerly: Boolean. Whether to run functions eagerly.\n ", "desc": "Enables / disables eager execution of `tf.function`s.", "type": "API"}, {"name": "tf.compat.v1.config.set_logical_device_configuration", "docs": "Set the logical device configuration for a `tf.config.PhysicalDevice`.\n\n A visible `tf.config.PhysicalDevice` will by default have a single\n `tf.config.LogicalDevice` associated with it once the runtime is initialized.\n Specifying a list of `tf.config.LogicalDeviceConfiguration` objects allows\n multiple devices to be created on the same `tf.config.PhysicalDevice`.\n\n Logical device configurations can be modified by calling this function as\n long as the runtime is uninitialized. After the runtime is initialized\n calling this function raises a RuntimeError.\n\n The following example splits the CPU into 2 logical devices:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> # Specify 2 virtual CPUs. Note currently memory limit is not supported.\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... logical_devices = tf.config.list_logical_devices('CPU')\n ... assert len(logical_devices) == 2\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... except:\n ... # Cannot modify logical devices once initialized.\n ... pass\n\n The following example splits the GPU into 2 logical devices with 100 MB each:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=100),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=100)])\n ...\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... assert len(logical_devices) == len(physical_devices) + 1\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=10),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=10)])\n ... except:\n ... # Invalid device or cannot modify logical devices once initialized.\n ... pass\n\n Args:\n device: The `PhysicalDevice` to configure.\n logical_devices: (optional) List of `tf.config.LogicalDeviceConfiguration`\n objects to allocate for the specified `PhysicalDevice`. If None, the\n default configuration will be used.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the logical device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.compat.v1.config.set_soft_device_placement", "docs": "Enable or disable soft device placement.\n\n If enabled, an op will be placed on CPU if any of the following are true\n 1. there's no GPU implementation for the OP\n 2. no GPU devices are known or registered\n 3. need to co-locate with reftype input(s) which are from CPU\n\n Note: by default soft device placement is enabled when running in eager mode\n (for convenience) and disabled in graph mode (for performance).\n\n Args:\n enabled: A boolean indicating whether to enable soft placement.\n ", "desc": "Enable or disable soft device placement.", "type": "API"}, {"name": "tf.compat.v1.config.set_visible_devices", "docs": "Set the list of visible devices.\n\n Specifies which `PhysicalDevice` objects are visible to the runtime.\n TensorFlow will only allocate memory and place operations on visible\n physical devices, as otherwise no `LogicalDevice` will be created on them.\n By default all discovered devices are marked as visible.\n\n The following example demonstrates disabling the first GPU on the machine.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable first GPU\n ... tf.config.set_visible_devices(physical_devices[1:], 'GPU')\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... # Logical device was not created for first GPU\n ... assert len(logical_devices) == len(physical_devices) - 1\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n devices: List of `PhysicalDevice`s to make visible\n device_type: (optional) Only configure devices matching this device type.\n For example \"CPU\" or \"GPU\". Other devices will be left unaltered.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the list of visible devices.", "type": "API"}, {"name": "tf.compat.v1.config.threading", "docs": "Public API for tf.config.threading namespace.\n", "desc": "Public API for tf.config.threading namespace.", "type": "API"}, {"name": "tf.compat.v1.config.threading.get_inter_op_parallelism_threads", "docs": "Get number of threads used for parallelism between independent operations.\n\n Determines the number of threads used by independent non-blocking operations.\n 0 means the system picks an appropriate number.\n\n Returns:\n Number of parallel threads\n ", "desc": "Get number of threads used for parallelism between independent operations.", "type": "API"}, {"name": "tf.compat.v1.config.threading.get_intra_op_parallelism_threads", "docs": "Get number of threads used within an individual op for parallelism.\n\n Certain operations like matrix multiplication and reductions can utilize\n parallel threads for speed ups. A value of 0 means the system picks an\n appropriate number.\n\n Returns:\n Number of parallel threads\n ", "desc": "Get number of threads used within an individual op for parallelism.", "type": "API"}, {"name": "tf.compat.v1.config.threading.set_inter_op_parallelism_threads", "docs": "Set number of threads used for parallelism between independent operations.\n\n Determines the number of threads used by independent non-blocking operations.\n 0 means the system picks an appropriate number.\n\n Args:\n num_threads: Number of parallel threads\n ", "desc": "Set number of threads used for parallelism between independent operations.", "type": "API"}, {"name": "tf.compat.v1.config.threading.set_intra_op_parallelism_threads", "docs": "Set number of threads used within an individual op for parallelism.\n\n Certain operations like matrix multiplication and reductions can utilize\n parallel threads for speed ups. A value of 0 means the system picks an\n appropriate number.\n\n Args:\n num_threads: Number of parallel threads\n ", "desc": "Set number of threads used within an individual op for parallelism.", "type": "API"}, {"name": "tf.compat.v1.ConfigProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.ConfigProto.DeviceCountEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.ConfigProto.Experimental", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.confusion_matrix", "docs": "Computes the confusion matrix from predictions and labels.\n\n The matrix columns represent the prediction labels and the rows represent the\n real labels. The confusion matrix is always a 2-D array of shape `[n, n]`,\n where `n` is the number of valid labels for a given classification task. Both\n prediction and labels must be 1-D arrays of the same shape in order for this\n function to work.\n\n If `num_classes` is `None`, then `num_classes` will be set to one plus the\n maximum value in either predictions or labels. Class labels are expected to\n start at 0. For example, if `num_classes` is 3, then the possible labels\n would be `[0, 1, 2]`.\n\n If `weights` is not `None`, then each prediction contributes its\n corresponding weight to the total value of the confusion matrix cell.\n\n For example:\n\n ```python\n tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>\n [[0 0 0 0 0]\n [0 0 1 0 0]\n [0 0 1 0 0]\n [0 0 0 0 0]\n [0 0 0 0 1]]\n ```\n\n Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`,\n resulting in a 5x5 confusion matrix.\n\n Args:\n labels: 1-D `Tensor` of real labels for the classification task.\n predictions: 1-D `Tensor` of predictions for a given classification.\n num_classes: The possible number of labels the classification task can have.\n If this value is not provided, it will be calculated using both\n predictions and labels array.\n dtype: Data type of the confusion matrix.\n name: Scope name.\n weights: An optional `Tensor` whose shape matches `predictions`.\n\n Returns:\n A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion\n matrix, where `n` is the number of possible labels in the classification\n task.\n\n Raises:\n ValueError: If both predictions and labels are not 1-D vectors and have\n mismatched shapes, or if `weights` is not `None` and its shape doesn't\n match `predictions`.\n ", "desc": "Computes the confusion matrix from predictions and labels.", "type": "API"}, {"name": "tf.compat.v1.conj", "docs": "Returns the complex conjugate of a complex number.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of\n complex numbers that are the complex conjugate of each element in `x`. The\n complex numbers in `x` must be of the form \\\\(a + bj\\\\), where `a` is the\n real part and `b` is the imaginary part.\n\n The complex conjugate returned by this operation is of the form \\\\(a - bj\\\\).\n\n For example:\n\n >>> x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n >>> tf.math.conj(x)\n \n\n If `x` is real, it is returned unchanged.\n\n For example:\n\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.math.conj(x)\n \n\n Args:\n x: `Tensor` to conjugate. Must have numeric or variant type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that is the conjugate of `x` (with the same type).\n\n Raises:\n TypeError: If `x` is not a numeric tensor.\n\n @compatibility(numpy)\n Equivalent to numpy.conj.\n @end_compatibility\n ", "desc": "Returns the complex conjugate of a complex number.", "type": "API"}, {"name": "tf.compat.v1.constant", "docs": "Creates a constant tensor.\n\n The resulting tensor is populated with values of type `dtype`, as\n specified by arguments `value` and (optionally) `shape` (see examples\n below).\n\n The argument `value` can be a constant value, or a list of values of type\n `dtype`. If `value` is a list, then the length of the list must be less\n than or equal to the number of elements implied by the `shape` argument (if\n specified). In the case where the list length is less than the number of\n elements specified by `shape`, the last element in the list will be used\n to fill the remaining entries.\n\n The argument `shape` is optional. If present, it specifies the dimensions of\n the resulting tensor. If not present, the shape of `value` is used.\n\n If the argument `dtype` is not specified, then the type is inferred from\n the type of `value`.\n\n For example:\n\n ```python\n # Constant 1-D Tensor populated with value list.\n tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]\n\n # Constant 2-D tensor populated with scalar value -1.\n tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]\n [-1. -1. -1.]]\n ```\n\n `tf.constant` differs from `tf.fill` in a few ways:\n\n * `tf.constant` supports arbitrary constants, not just uniform scalar\n Tensors like `tf.fill`.\n * `tf.constant` creates a `Const` node in the computation graph with the\n exact value at graph construction time. On the other hand, `tf.fill`\n creates an Op in the graph that is expanded at runtime.\n * Because `tf.constant` only embeds constant values in the graph, it does\n not support dynamic shapes based on other runtime Tensors, whereas\n `tf.fill` does.\n\n Args:\n value: A constant value (or list) of output type `dtype`.\n\n dtype: The type of the elements of the resulting tensor.\n\n shape: Optional dimensions of resulting tensor.\n\n name: Optional name for the tensor.\n\n verify_shape: Boolean that enables verification of a shape of values.\n\n Returns:\n A Constant Tensor.\n\n Raises:\n TypeError: if shape is incorrectly specified or unsupported.\n ", "desc": "Creates a constant tensor.", "type": "API"}, {"name": "tf.compat.v1.constant_initializer", "docs": "Initializer that generates tensors with constant values.\n\n The resulting tensor is populated with values of type `dtype`, as\n specified by arguments `value` following the desired `shape` of the\n new tensor (see examples below).\n\n The argument `value` can be a constant value, or a list of values of type\n `dtype`. If `value` is a list, then the length of the list must be less\n than or equal to the number of elements implied by the desired shape of the\n tensor. In the case where the total number of elements in `value` is less\n than the number of elements required by the tensor shape, the last element\n in `value` will be used to fill the remaining entries. If the total number of\n elements in `value` is greater than the number of elements required by the\n tensor shape, the initializer will raise a `ValueError`.\n\n Args:\n value: A Python scalar, list or tuple of values, or a N-dimensional numpy\n array. All elements of the initialized variable will be set to the\n corresponding value in the `value` argument.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n verify_shape: Boolean that enables verification of the shape of `value`. If\n `True`, the initializer will throw an error if the shape of `value` is not\n compatible with the shape of the initialized tensor.\n\n Raises:\n TypeError: If the input `value` is not one of the expected types.\n\n Examples:\n The following example can be rewritten using a numpy.ndarray instead\n of the `value` list, even reshaped, as shown in the two commented lines\n below the `value` list initialization.\n\n >>> value = [0, 1, 2, 3, 4, 5, 6, 7]\n >>> init = tf.compat.v1.constant_initializer(value)\n >>> # fitting shape\n >>> with tf.compat.v1.Session():\n ... x = tf.compat.v1.get_variable('x', shape=[2, 4], initializer=init)\n ... x.initializer.run()\n ... print(x.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]]\n >>> # Larger shape\n >>> with tf.compat.v1.Session():\n ... y = tf.compat.v1.get_variable('y', shape=[3, 4], initializer=init)\n ... y.initializer.run()\n ... print(y.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]\n [7. 7. 7. 7.]]\n >>> # Smaller shape\n >>> with tf.compat.v1.Session():\n ... z = tf.compat.v1.get_variable('z', shape=[2, 3], initializer=init)\n Traceback (most recent call last):\n ...\n ValueError: Too many elements provided. Needed at most 6, but received 8\n >>> # Shape verification\n >>> init_verify = tf.compat.v1.constant_initializer(value, verify_shape=True)\n >>> with tf.compat.v1.Session():\n ... u = tf.compat.v1.get_variable('u', shape=[3, 4],\n ... initializer=init_verify)\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (3, 4), got (8,).\n\n @compatibility(TF2)\n Although it is a legacy API endpoint, `tf.compat.v1.constant_initializer`\n is compatible with eager execution and `tf.function`.\n\n To migrate to a non-legacy TF2 API, please use `tf.constant_initializer`\n instead. The `dtype`\n argument in `tf.compat.v1.constant_initializer.__init__()` does not exist in\n `tf.constant_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n In the `compat.v1` symbol, if `verify_shape` is set to `True`, an exception\n is raised when initializing a variable with a different shape from\n `value`. If set to `False`, `value` is reshaped to initialize the variable\n if necessary. An exception would only be raised when the number of\n elements are different.\n\n The `verify_shape` argument is not supported in TF2. Using\n `tf.constant_initializer` is equivalent to setting `verify_shape` to `False`.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.compat.v1.constant_initializer(\n value=value,\n dtype=tf.float32,\n verify_shape=False)\n variable = tf.Variable(initializer(shape=[2, 4]))\n ```\n\n After:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.constant_initializer(value=value)\n tf.Variable(initializer(shape=[2, 4], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :--------------- | :-------------------------- |\n | `value` | `value` | In constructor |\n | `dtype` | `dtype` | In `__call__()` method |\n | `verify_shape` | Not Supported | Equivalent to set to `False`|\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=True)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (2, 2), got (4,).\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=False)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n After:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.constant_initializer(value=value)\n >>> tf.Variable(initializer(shape=[2, 2], dtype=tf.float32)).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.compat.v1.container", "docs": "Wrapper for `Graph.container()` using the default graph.\n\n Args:\n container_name: The container string to use in the context.\n\n Returns:\n A context manager that specifies the default container to use for newly\n created stateful ops.\n ", "desc": "Wrapper for `Graph.container()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.control_dependencies", "docs": "Wrapper for `Graph.control_dependencies()` using the default graph.\n\n See `tf.Graph.control_dependencies` for more details.\n\n Note: *In TensorFlow 2 with eager and/or Autograph, you should not require\n this method, as ops execute in the expected order thanks to automatic control\n dependencies.* Only use `tf.control_dependencies` when working with v1\n `tf.Graph` code.\n\n When eager execution is enabled, any callable object in the `control_inputs`\n list will be called.\n\n Args:\n control_inputs: A list of `Operation` or `Tensor` objects which must be\n executed or computed before running the operations defined in the context.\n Can also be `None` to clear the control dependencies. If eager execution\n is enabled, any callable object in the `control_inputs` list will be\n called.\n\n Returns:\n A context manager that specifies control dependencies for all\n operations constructed within the context.\n ", "desc": "Wrapper for `Graph.control_dependencies()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.control_flow_v2_enabled", "docs": "Returns `True` if v2 control flow is enabled.\n\n Note: v2 control flow is always enabled inside of tf.function.\n ", "desc": "Returns `True` if v2 control flow is enabled.", "type": "API"}, {"name": "tf.compat.v1.convert_to_tensor", "docs": "Converts the given `value` to a `Tensor`.\n\n This function converts Python objects of various types to `Tensor`\n objects. It accepts `Tensor` objects, numpy arrays, Python lists,\n and Python scalars. For example:\n\n ```python\n import numpy as np\n\n def my_func(arg):\n arg = tf.convert_to_tensor(arg, dtype=tf.float32)\n return tf.matmul(arg, arg) + arg\n\n # The following calls are equivalent.\n value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))\n value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])\n value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))\n ```\n\n This function can be useful when composing a new operation in Python\n (such as `my_func` in the example above). All standard Python op\n constructors apply this function to each of their Tensor-valued\n inputs, which allows those ops to accept numpy arrays, Python lists,\n and scalars in addition to `Tensor` objects.\n\n Note: This function diverges from default Numpy behavior for `float` and\n `string` types when `None` is present in a Python list or scalar. Rather\n than silently converting `None` values, an error will be thrown.\n\n Args:\n value: An object whose type has a registered `Tensor` conversion function.\n dtype: Optional element type for the returned tensor. If missing, the type\n is inferred from the type of `value`.\n name: Optional name to use if a new `Tensor` is created.\n preferred_dtype: Optional element type for the returned tensor, used when\n dtype is None. In some cases, a caller may not have a dtype in mind when\n converting to a tensor, so preferred_dtype can be used as a soft\n preference. If the conversion to `preferred_dtype` is not possible, this\n argument has no effect.\n dtype_hint: same meaning as preferred_dtype, and overrides it.\n\n Returns:\n A `Tensor` based on `value`.\n\n Raises:\n TypeError: If no conversion function is registered for `value` to `dtype`.\n RuntimeError: If a registered conversion function returns an invalid value.\n ValueError: If the `value` is a tensor not of given `dtype` in graph mode.\n ", "desc": "Converts the given `value` to a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.convert_to_tensor_or_indexed_slices", "docs": "Converts the given object to a `Tensor` or an `IndexedSlices`.\n\n If `value` is an `IndexedSlices` or `SparseTensor` it is returned\n unmodified. Otherwise, it is converted to a `Tensor` using\n `convert_to_tensor()`.\n\n Args:\n value: An `IndexedSlices`, `SparseTensor`, or an object that can be consumed\n by `convert_to_tensor()`.\n dtype: (Optional.) The required `DType` of the returned `Tensor` or\n `IndexedSlices`.\n name: (Optional.) A name to use if a new `Tensor` is created.\n\n Returns:\n A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`.\n\n Raises:\n ValueError: If `dtype` does not match the element type of `value`.\n ", "desc": "Converts the given object to a `Tensor` or an `IndexedSlices`.", "type": "API"}, {"name": "tf.compat.v1.convert_to_tensor_or_sparse_tensor", "docs": "Converts value to a `SparseTensor` or `Tensor`.\n\n Args:\n value: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a\n registered `Tensor` conversion function.\n dtype: Optional element type for the returned tensor. If missing, the type\n is inferred from the type of `value`.\n name: Optional name to use if a new `Tensor` is created.\n\n Returns:\n A `SparseTensor` or `Tensor` based on `value`.\n\n Raises:\n RuntimeError: If result type is incompatible with `dtype`.\n ", "desc": "Converts value to a `SparseTensor` or `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.cos", "docs": "Computes cos of x element-wise.\n\n Given an input tensor, this function computes cosine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes cos of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.cosh", "docs": "Computes hyperbolic cosine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic cosine of every\n element in the tensor. Input range is `[-inf, inf]` and output range\n is `[1, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.count_nonzero", "docs": "Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_indices)`. They will be removed in a future version.\nInstructions for updating:\nreduction_indices is deprecated, use axis instead\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nentry in `axis`. If `keepdims` is true, the reduced dimensions\nare retained with length 1.\n\nIf `axis` has no entries, all dimensions are reduced, and a\ntensor with a single element is returned.\n\n**NOTE** Floating point comparison to zero is done by exact floating point\nequality check. Small values are **not** rounded to zero for purposes of\nthe nonzero check.\n\nFor example:\n\n```python\nx = tf.constant([[0, 1, 0], [1, 1, 0]])\ntf.math.count_nonzero(x) # 3\ntf.math.count_nonzero(x, 0) # [1, 2, 0]\ntf.math.count_nonzero(x, 1) # [1, 2]\ntf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]]\ntf.math.count_nonzero(x, [0, 1]) # 3\n```\n\n**NOTE** Strings are compared against zero-length empty string `\"\"`. Any\nstring with a size greater than zero is already considered as nonzero.\n\nFor example:\n```python\nx = tf.constant([\"\", \"a\", \" \", \"b\", \"\"])\ntf.math.count_nonzero(x) # 3, with \"a\", \" \", and \"b\" as nonzero strings.\n```\n\nArgs:\n input_tensor: The tensor to reduce. Should be of numeric type, `bool`, or\n `string`.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n dtype: The output dtype; defaults to `tf.int64`.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n input: Overrides input_tensor. For compatibility.\n\nReturns:\n The reduced tensor (number of nonzero values).", "desc": "Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.count_up_to", "docs": "Increments 'ref' until it reaches 'limit'. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPrefer Dataset.range instead.\n\nArgs:\n ref: A Variable. Must be one of the following types: `int32`, `int64`.\n Should be from a scalar `Variable` node.\n limit: An `int`.\n If incrementing ref would bring it above limit, instead generates an\n 'OutOfRange' error.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor`. Has the same type as `ref`.\n A copy of the input before increment. If nothing else modifies the\n input, the values produced will all be distinct.", "desc": "Increments 'ref' until it reaches 'limit'. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.create_partitioned_variables", "docs": "Create a list of partitioned variables according to the given `slicing`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.get_variable` with a partitioner set.\n\nCurrently only one dimension of the full variable can be sliced, and the\nfull variable can be reconstructed by the concatenation of the returned\nlist along that dimension.\n\nArgs:\n shape: List of integers. The shape of the full variable.\n slicing: List of integers. How to partition the variable.\n Must be of the same length as `shape`. Each value\n indicate how many slices to create in the corresponding\n dimension. Presently only one of the values can be more than 1;\n that is, the variable can only be sliced along one dimension.\n\n For convenience, The requested number of partitions does not have to\n divide the corresponding dimension evenly. If it does not, the\n shapes of the partitions are incremented by 1 starting from partition\n 0 until all slack is absorbed. The adjustment rules may change in the\n future, but as you can save/restore these variables with different\n slicing specifications this should not be a problem.\n initializer: A `Tensor` of shape `shape` or a variable initializer\n function. If a function, it will be called once for each slice,\n passing the shape and data type of the slice as parameters. The\n function must return a tensor with the same shape as the slice.\n dtype: Type of the variables. Ignored if `initializer` is a `Tensor`.\n trainable: If True also add all the variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES`.\n collections: List of graph collections keys to add the variables to.\n Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.\n name: Optional name for the full variable. Defaults to\n `\"PartitionedVariable\"` and gets uniquified automatically.\n reuse: Boolean or `None`; if `True` and name is set, it would reuse\n previously created variables. if `False` it will create new variables.\n if `None`, it would inherit the parent scope reuse.\n\nReturns:\n A list of Variables corresponding to the slicing.\n\nRaises:\n ValueError: If any of the arguments is malformed.", "desc": "Create a list of partitioned variables according to the given `slicing`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.CriticalSection", "docs": "Critical section.\n\n A `CriticalSection` object is a resource in the graph which executes subgraphs\n in **serial** order. A common example of a subgraph one may wish to run\n exclusively is the one given by the following function:\n\n ```python\n v = resource_variable_ops.ResourceVariable(0.0, name=\"v\")\n\n def count():\n value = v.read_value()\n with tf.control_dependencies([value]):\n with tf.control_dependencies([v.assign_add(1)]):\n return tf.identity(value)\n ```\n\n Here, a snapshot of `v` is captured in `value`; and then `v` is updated.\n The snapshot value is returned.\n\n If multiple workers or threads all execute `count` in parallel, there is no\n guarantee that access to the variable `v` is atomic at any point within\n any thread's calculation of `count`. In fact, even implementing an atomic\n counter that guarantees that the user will see each value `0, 1, ...,` is\n currently impossible.\n\n The solution is to ensure any access to the underlying resource `v` is\n only processed through a critical section:\n\n ```python\n cs = CriticalSection()\n f1 = cs.execute(count)\n f2 = cs.execute(count)\n output = f1 + f2\n session.run(output)\n ```\n The functions `f1` and `f2` will be executed serially, and updates to `v`\n will be atomic.\n\n **NOTES**\n\n All resource objects, including the critical section and any captured\n variables of functions executed on that critical section, will be\n colocated to the same device (host and cpu/gpu).\n\n When using multiple critical sections on the same resources, there is no\n guarantee of exclusive access to those resources. This behavior is disallowed\n by default (but see the kwarg `exclusive_resource_access`).\n\n For example, running the same function in two separate critical sections\n will not ensure serial execution:\n\n ```python\n v = tf.compat.v1.get_variable(\"v\", initializer=0.0, use_resource=True)\n def accumulate(up):\n x = v.read_value()\n with tf.control_dependencies([x]):\n with tf.control_dependencies([v.assign_add(up)]):\n return tf.identity(x)\n ex1 = CriticalSection().execute(\n accumulate, 1.0, exclusive_resource_access=False)\n ex2 = CriticalSection().execute(\n accumulate, 1.0, exclusive_resource_access=False)\n bad_sum = ex1 + ex2\n sess.run(v.initializer)\n sess.run(bad_sum) # May return 0.0\n ```\n ", "desc": "Critical section.", "type": "API"}, {"name": "tf.compat.v1.cross", "docs": "Compute the pairwise cross product.\n\n `a` and `b` must be the same shape; they can either be simple 3-element vectors,\n or any shape where the innermost dimension is 3. In the latter case, each pair\n of corresponding 3-element vectors is cross-multiplied independently.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n A tensor containing 3-element vectors.\n b: A `Tensor`. Must have the same type as `a`.\n Another tensor, of same type and shape as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the pairwise cross product.", "type": "API"}, {"name": "tf.compat.v1.cumprod", "docs": "Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the\n first element of the input is identical to the first element of the output:\n\n ```python\n tf.math.cumprod([a, b, c]) # [a, a * b, a * b * c]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumprod is\n performed\n instead:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True) # [1, a, a * b]\n ```\n\n By setting the `reverse` kwarg to `True`, the cumprod is performed in the\n opposite direction:\n\n ```python\n tf.math.cumprod([a, b, c], reverse=True) # [a * b * c, b * c, c]\n ```\n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True, reverse=True) # [b * c, c, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumprod.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative product of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.compat.v1.cumsum", "docs": "Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:\n For example:\n\n >>> # tf.cumsum([a, b, c]) # [a, a + b, a + b + c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x)\n \n\n >>> # using varying `axis` values\n >>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]])\n >>> tf.cumsum(y, axis=0)\n \n >>> tf.cumsum(y, axis=1)\n \n\n By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed\n instead:\n\n >>> # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True)\n \n\n By setting the `reverse` kwarg to `True`, the cumsum is performed in the\n opposite direction:\n\n >>> # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, reverse=True)\n \n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n >>> # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True, reverse=True)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumsum.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative sum of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.compat.v1.custom_gradient", "docs": "Decorator to define a function with a custom gradient.\n\n This decorator allows fine grained control over the gradients of a sequence\n for operations. This may be useful for multiple reasons, including providing\n a more efficient or numerically stable gradient for a sequence of operations.\n\n For example, consider the following function that commonly occurs in the\n computation of cross entropy and log likelihoods:\n\n ```python\n def log1pexp(x):\n return tf.math.log(1 + tf.exp(x))\n ```\n\n Due to numerical instability, the gradient of this function evaluated at x=100\n is NaN. For example:\n\n ```python\n x = tf.constant(100.)\n y = log1pexp(x)\n dy_dx = tf.gradients(y, x) # Will be NaN when evaluated.\n ```\n\n The gradient expression can be analytically simplified to provide numerical\n stability:\n\n ```python\n @tf.custom_gradient\n def log1pexp(x):\n e = tf.exp(x)\n def grad(upstream):\n return upstream * (1 - 1 / (1 + e))\n return tf.math.log(1 + e), grad\n ```\n\n With this definition, the gradient `dy_dx` at `x = 100` will be correctly\n evaluated as 1.0.\n\n The variable `upstream` is defined as the upstream gradient. i.e. the gradient\n from all the layers or functions originating from this layer. The above\n example has no upstream functions, therefore `upstream = dy/dy = 1.0`.\n\n Assume that `x_i` is `log1pexp` in the forward pass `x_1 = x_1(x_0)`,\n `x_2 = x_2(x_1)`, ..., `x_i = x_i(x_i-1)`, ..., `x_n = x_n(x_n-1)`. By\n chain rule we know that `dx_n/dx_0 = dx_n/dx_n-1 * dx_n-1/dx_n-2 * ... *\n dx_i/dx_i-1 * ... * dx_1/dx_0`.\n\n In this case the gradient of our current function defined as\n `dx_i/dx_i-1 = (1 - 1 / (1 + e))`. The upstream gradient `upstream` would be\n `dx_n/dx_n-1 * dx_n-1/dx_n-2 * ... * dx_i+1/dx_i`. The upstream gradient\n multiplied by the current gradient is then passed downstream.\n\n In case the function takes multiple variables as input, the `grad`\n function must also return the same number of variables.\n We take the function `z = x * y` as an example.\n\n >>> @tf.custom_gradient\n ... def bar(x, y):\n ... def grad(upstream):\n ... dz_dx = y\n ... dz_dy = x\n ... return upstream * dz_dx, upstream * dz_dy\n ... z = x * y\n ... return z, grad\n >>> x = tf.constant(2.0, dtype=tf.float32)\n >>> y = tf.constant(3.0, dtype=tf.float32)\n >>> with tf.GradientTape(persistent=True) as tape:\n ... tape.watch(x)\n ... tape.watch(y)\n ... z = bar(x, y)\n >>> z\n \n >>> tape.gradient(z, x)\n \n >>> tape.gradient(z, y)\n \n\n Nesting custom gradients can lead to unintuitive results. The default\n behavior does not correspond to n-th order derivatives. For example\n\n ```python\n @tf.custom_gradient\n def op(x):\n y = op1(x)\n @tf.custom_gradient\n def grad_fn(dy):\n gdy = op2(x, y, dy)\n def grad_grad_fn(ddy): # Not the 2nd order gradient of op w.r.t. x.\n return op3(x, y, dy, ddy)\n return gdy, grad_grad_fn\n return y, grad_fn\n ```\n\n The function `grad_grad_fn` will be calculating the first order gradient\n of `grad_fn` with respect to `dy`, which is used to generate forward-mode\n gradient graphs from backward-mode gradient graphs, but is not the same as\n the second order gradient of `op` with respect to `x`.\n\n Instead, wrap nested `@tf.custom_gradients` in another function:\n\n ```python\n @tf.custom_gradient\n def op_with_fused_backprop(x):\n y, x_grad = fused_op(x)\n def first_order_gradient(dy):\n @tf.custom_gradient\n def first_order_custom(unused_x):\n def second_order_and_transpose(ddy):\n return second_order_for_x(...), gradient_wrt_dy(...)\n return x_grad, second_order_and_transpose\n return dy * first_order_custom(x)\n return y, first_order_gradient\n ```\n\n Additional arguments to the inner `@tf.custom_gradient`-decorated function\n control the expected return values of the innermost function.\n\n The examples above illustrate how to specify custom gradients for functions\n which do not read from variables. The following example uses variables, which\n require special handling because they are effectively inputs of the forward\n function.\n\n >>> weights = tf.Variable(tf.ones([2])) # Trainable variable weights\n >>> @tf.custom_gradient\n ... def linear_poly(x):\n ... # Creating polynomial\n ... poly = weights[1] * x + weights[0]\n ...\n ... def grad_fn(dpoly, variables):\n ... # dy/dx = weights[1] and we need to left multiply dpoly\n ... grad_xs = dpoly * weights[1] # Scalar gradient\n ...\n ... grad_vars = [] # To store gradients of passed variables\n ... assert variables is not None\n ... assert len(variables) == 1\n ... assert variables[0] is weights\n ... # Manually computing dy/dweights\n ... dy_dw = dpoly * tf.stack([x ** 1, x ** 0])\n ... grad_vars.append(\n ... tf.reduce_sum(tf.reshape(dy_dw, [2, -1]), axis=1)\n ... )\n ... return grad_xs, grad_vars\n ... return poly, grad_fn\n >>> x = tf.constant([1., 2., 3.])\n >>> with tf.GradientTape(persistent=True) as tape:\n ... tape.watch(x)\n ... poly = linear_poly(x)\n >>> poly # poly = x + 1\n \n >>> tape.gradient(poly, x) # conventional scalar gradient dy/dx\n \n >>> tape.gradient(poly, weights)\n \n\n Above example illustrates usage of trainable variable `weights`.\n In the example, the inner `grad_fn` accepts an extra `variables` input\n parameter and also returns an extra `grad_vars` output. That extra argument\n is passed if the forward function reads any variables. You need to\n compute the gradient w.r.t. each of those `variables` and output it as a list\n of `grad_vars`. Note here that default value of `variables` is set to `None`\n when no variables are used in the forward function.\n\n It should be noted `tf.GradientTape` is still watching the forward pass of a\n `tf.custom_gradient`, and will use the ops it watches. As a consequence,\n calling `tf.function` while the tape is still watching leads\n to a gradient graph being built. If an op is used in `tf.function` without\n registered gradient, a `LookupError` will be raised.\n\n Users can insert `tf.stop_gradient` to customize this behavior. This\n is demonstrated in the example below. `tf.random.shuffle` does not have a\n registered gradient. As a result `tf.stop_gradient` is used to avoid the\n `LookupError`.\n\n ```python\n x = tf.constant([0.3, 0.5], dtype=tf.float32)\n\n @tf.custom_gradient\n def test_func_with_stop_grad(x):\n @tf.function\n def _inner_func():\n # Avoid exception during the forward pass\n return tf.stop_gradient(tf.random.shuffle(x))\n # return tf.random.shuffle(x) # This will raise\n\n res = _inner_func()\n def grad(upstream):\n return upstream # Arbitrarily defined custom gradient\n return res, grad\n\n with tf.GradientTape() as g:\n g.watch(x)\n res = test_func_with_stop_grad(x)\n\n g.gradient(res, x)\n ```\n\n See also `tf.RegisterGradient` which registers a gradient function for a\n primitive TensorFlow operation. `tf.custom_gradient` on the other hand allows\n for fine grained control over the gradient computation of a sequence of\n operations.\n\n Note that if the decorated function uses `Variable`s, the enclosing variable\n scope must be using `ResourceVariable`s.\n\n Args:\n f: function `f(*x)` that returns a tuple `(y, grad_fn)` where:\n - `x` is a sequence of (nested structures of) `Tensor` inputs to the\n function.\n - `y` is a (nested structure of) `Tensor` outputs of applying TensorFlow\n operations in `f` to `x`.\n - `grad_fn` is a function with the signature `g(*grad_ys)` which returns\n a list of `Tensor`s the same size as (flattened) `x` - the derivatives\n of `Tensor`s in `y` with respect to the `Tensor`s in `x`. `grad_ys` is\n a sequence of `Tensor`s the same size as (flattened) `y` holding the\n initial value gradients for each `Tensor` in `y`.\n\n In a pure mathematical sense, a vector-argument vector-valued function\n `f`'s derivatives should be its Jacobian matrix `J`. Here we are\n expressing the Jacobian `J` as a function `grad_fn` which defines how\n `J` will transform a vector `grad_ys` when left-multiplied with it\n (`grad_ys * J`, the vector-Jacobian product, or VJP). This functional\n representation of a matrix is convenient to use for chain-rule\n calculation (in e.g. the back-propagation algorithm).\n\n If `f` uses `Variable`s (that are not part of the\n inputs), i.e. through `get_variable`, then `grad_fn` should have\n signature `g(*grad_ys, variables=None)`, where `variables` is a list of\n the `Variable`s, and return a 2-tuple `(grad_xs, grad_vars)`, where\n `grad_xs` is the same as above, and `grad_vars` is a `list`\n with the derivatives of `Tensor`s in `y` with respect to the variables\n (that is, grad_vars has one Tensor per variable in variables).\n\n Returns:\n A function `h(x)` which returns the same value as `f(x)[0]` and whose\n gradient (as calculated by `tf.gradients`) is determined by `f(x)[1]`.\n ", "desc": "Decorator to define a function with a custom gradient.", "type": "API"}, {"name": "tf.compat.v1.data", "docs": "`tf.data.Dataset` API for input pipelines.\n\nSee [Importing Data](https://tensorflow.org/guide/data) for an overview.\n\n", "desc": "`tf.data.Dataset` API for input pipelines.", "type": "API"}, {"name": "tf.compat.v1.data.Dataset", "docs": "Represents a potentially large set of elements.\n\n A `Dataset` can be used to represent an input pipeline as a\n collection of elements and a \"logical plan\" of transformations that act on\n those elements.\n ", "desc": "Represents a potentially large set of elements.", "type": "API"}, {"name": "tf.compat.v1.data.DatasetSpec", "docs": "Type specification for `tf.data.Dataset`.\n\n See `tf.TypeSpec` for more information about TensorFlow type specifications.\n\n >>> dataset = tf.data.Dataset.range(3)\n >>> tf.data.DatasetSpec.from_value(dataset)\n DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([]))\n ", "desc": "Type specification for `tf.data.Dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental", "docs": "Experimental API for building input pipelines.\n\nThis module contains experimental `Dataset` sources and transformations that can\nbe used in conjunction with the `tf.data.Dataset` API. Note that the\n`tf.data.experimental` API is not subject to the same backwards compatibility\nguarantees as `tf.data`, but we will provide deprecation advice in advance of\nremoving existing functionality.\n\nSee [Importing Data](https://tensorflow.org/guide/datasets) for an overview.\n\n@@AutoShardPolicy\n@@AutotuneAlgorithm\n@@AutotuneOptions\n@@CheckpointInputPipelineHook\n@@Counter\n@@CsvDataset\n@@DatasetInitializer\n@@DatasetStructure\n@@DistributeOptions\n@@ExternalStatePolicy\n@@OptimizationOptions\n@@Optional\n@@OptionalStructure\n@@RaggedTensorStructure\n@@RandomDataset\n@@Reducer\n@@SparseTensorStructure\n@@SqlDataset\n@@Structure\n@@TFRecordWriter\n@@TensorArrayStructure\n@@TensorStructure\n@@ThreadingOptions\n\n@@assert_cardinality\n@@bucket_by_sequence_length\n@@cardinality\n@@choose_from_datasets\n@@copy_to_device\n@@dense_to_ragged_batch\n@@dense_to_sparse_batch\n@@distribute\n@@enable_debug_mode\n@@enumerate_dataset\n@@from_variant\n@@get_next_as_optional\n@@get_single_element\n@@get_structure\n@@group_by_reducer\n@@group_by_window\n@@ignore_errors\n@@index_table_from_dataset\n@@load\n@@make_batched_features_dataset\n@@make_csv_dataset\n@@make_saveable_from_iterator\n@@map_and_batch\n@@map_and_batch_with_legacy_function\n@@parallel_interleave\n@@parse_example_dataset\n@@prefetch_to_device\n@@rejection_resample\n@@sample_from_datasets\n@@save\n@@scan\n@@shuffle_and_repeat\n@@snapshot\n@@table_from_dataset\n@@take_while\n@@to_variant\n@@unbatch\n@@unique\n\n@@AUTOTUNE\n@@INFINITE_CARDINALITY\n@@SHARD_HINT\n@@UNKNOWN_CARDINALITY\n\n", "desc": "Experimental API for building input pipelines.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.assert_cardinality", "docs": "Asserts the cardinality of the input dataset.\n\n NOTE: The following assumes that \"examples.tfrecord\" contains 42 records.\n\n >>> dataset = tf.data.TFRecordDataset(\"examples.tfrecord\")\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())\n True\n >>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42))\n >>> print(tf.data.experimental.cardinality(dataset).numpy())\n 42\n\n Args:\n expected_cardinality: The expected cardinality of the input dataset.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\n Raises:\n FailedPreconditionError: The assertion is checked at runtime (when iterating\n the dataset) and an error is raised if the actual and expected cardinality\n differ.\n ", "desc": "Asserts the cardinality of the input dataset.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.AutoShardPolicy", "docs": "Represents the type of auto-sharding to use.\n\n OFF: No sharding will be performed.\n\n AUTO: Attempts FILE-based sharding, falling back to DATA-based sharding.\n\n FILE: Shards by input files (i.e. each worker will get a set of files to\n process). When this option is selected, make sure that there is at least as\n many files as workers. If there are fewer input files than workers, a runtime\n error will be raised.\n\n DATA: Shards by elements produced by the dataset. Each worker will process the\n whole dataset and discard the portion that is not for itself. Note that for\n this mode to correctly partitions the dataset elements, the dataset needs to\n produce elements in a deterministic order.\n\n HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a\n placeholder to replace with `shard(num_workers, worker_index)`.\n ", "desc": "Represents the type of auto-sharding to use.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.bucket_by_sequence_length", "docs": "A transformation that buckets elements in a `Dataset` by length. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.bucket_by_sequence_length(...)`.\n\nElements of the `Dataset` are grouped together by length and then are padded\nand batched.\n\nThis is useful for sequence tasks in which the elements have variable length.\nGrouping together elements that have similar lengths reduces the total\nfraction of padding in a batch which increases training step efficiency.\n\nBelow is an example to bucketize the input data to the 3 buckets\n\"[0, 3), [3, 5), [5, inf)\" based on sequence length, with batch size 2.\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int64, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[3, 5],\n... bucket_batch_sizes=[2, 2, 2]))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[1 2 3 4]\n [5 6 7 0]]\n[[ 7 8 9 10 11 0]\n [13 14 15 16 19 20]]\n[[ 0 0]\n [21 22]]\n\nThere is also a possibility to pad the dataset till the bucket boundary.\nYou can also provide which value to be used while padding the data.\nBelow example uses `-1` as padding and it also shows the input data\nbeing bucketizied to two buckets \"[0,3], [4,6]\".\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int32, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[4, 7],\n... bucket_batch_sizes=[2, 2, 2],\n... pad_to_bucket_boundary=True,\n... padding_values=-1))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[ 0 -1 -1]\n [ 5 6 7]]\n[[ 1 2 3 4 -1 -1]\n [ 7 8 9 10 11 -1]]\n[[21 22 -1]]\n[[13 14 15 16 19 20]]\n\nWhen using `pad_to_bucket_boundary` option, it can be seen that it is\nnot always possible to maintain the bucket batch size.\nYou can drop the batches that do not maintain the bucket batch size by\nusing the option `drop_remainder`. Using the same input data as in the\nabove example you get the following result.\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int32, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[4, 7],\n... bucket_batch_sizes=[2, 2, 2],\n... pad_to_bucket_boundary=True,\n... padding_values=-1,\n... drop_remainder=True))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[ 0 -1 -1]\n [ 5 6 7]]\n[[ 1 2 3 4 -1 -1]\n [ 7 8 9 10 11 -1]]\n\nArgs:\n element_length_func: function from element in `Dataset` to `tf.int32`,\n determines the length of the element, which will determine the bucket it\n goes into.\n bucket_boundaries: `list`, upper length boundaries of the buckets.\n bucket_batch_sizes: `list`, batch size per bucket. Length should be\n `len(bucket_boundaries) + 1`.\n padded_shapes: Nested structure of `tf.TensorShape` to pass to\n `tf.data.Dataset.padded_batch`. If not provided, will use\n `dataset.output_shapes`, which will result in variable length dimensions\n being padded out to the maximum length in each batch.\n padding_values: Values to pad with, passed to\n `tf.data.Dataset.padded_batch`. Defaults to padding with 0.\n pad_to_bucket_boundary: bool, if `False`, will pad dimensions with unknown\n size to maximum length in batch. If `True`, will pad dimensions with\n unknown size to bucket boundary minus 1 (i.e., the maximum length in each\n bucket), and caller must ensure that the source `Dataset` does not contain\n any elements with length longer than `max(bucket_boundaries)`.\n no_padding: `bool`, indicates whether to pad the batch features (features\n need to be either of type `tf.sparse.SparseTensor` or of same shape).\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in the case it has fewer than\n `batch_size` elements; the default behavior is not to drop the smaller\n batch.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`.", "desc": "A transformation that buckets elements in a `Dataset` by length. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.cardinality", "docs": "Returns the cardinality of `dataset`, if known.\n\n The operation returns the cardinality of `dataset`. The operation may return\n `tf.data.experimental.INFINITE_CARDINALITY` if `dataset` contains an infinite\n number of elements or `tf.data.experimental.UNKNOWN_CARDINALITY` if the\n analysis fails to determine the number of elements in `dataset` (e.g. when the\n dataset source is a file).\n\n >>> dataset = tf.data.Dataset.range(42)\n >>> print(tf.data.experimental.cardinality(dataset).numpy())\n 42\n >>> dataset = dataset.repeat()\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy())\n True\n >>> dataset = dataset.filter(lambda x: True)\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())\n True\n\n Args:\n dataset: A `tf.data.Dataset` for which to determine cardinality.\n\n Returns:\n A scalar `tf.int64` `Tensor` representing the cardinality of `dataset`. If\n the cardinality is infinite or unknown, the operation returns the named\n constant `INFINITE_CARDINALITY` and `UNKNOWN_CARDINALITY` respectively.\n ", "desc": "Returns the cardinality of `dataset`, if known.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.CheckpointInputPipelineHook", "docs": "Checkpoints input pipeline state every N steps or seconds.\n\n This hook saves the state of the iterators in the `Graph` so that when\n training is resumed the input pipeline continues from where it left off.\n This could potentially avoid overfitting in certain pipelines where the\n number of training steps per eval are small compared to the dataset\n size or if the training pipeline is pre-empted.\n\n Differences from `CheckpointSaverHook`:\n 1. Saves only the input pipelines in the \"iterators\" collection and not the\n global variables or other saveable objects.\n 2. Does not write the `GraphDef` and `MetaGraphDef` to the summary.\n\n Example of checkpointing the training pipeline:\n\n ```python\n est = tf.estimator.Estimator(model_fn)\n while True:\n est.train(\n train_input_fn,\n hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)],\n steps=train_steps_per_eval)\n # Note: We do not pass the hook here.\n metrics = est.evaluate(eval_input_fn)\n if should_stop_the_training(metrics):\n break\n ```\n\n This hook should be used if the input pipeline state needs to be saved\n separate from the model checkpoint. Doing so may be useful for a few reasons:\n 1. The input pipeline checkpoint may be large, if there are large shuffle\n or prefetch buffers for instance, and may bloat the checkpoint size.\n 2. If the input pipeline is shared between training and validation, restoring\n the checkpoint during validation may override the validation input\n pipeline.\n\n For saving the input pipeline checkpoint alongside the model weights use\n `tf.data.experimental.make_saveable_from_iterator` directly to create a\n `SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however,\n that you will need to be careful not to restore the training iterator during\n eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS\n collector when building the eval graph.\n ", "desc": "Checkpoints input pipeline state every N steps or seconds.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.choose_from_datasets", "docs": "Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.choose_from_datasets(...)` instead. Note that, unlike the experimental endpoint, the non-experimental endpoint sets `stop_on_empty_dataset=True` by default. You should set this argument explicitly in case you would like to match the behavior of the experimental endpoint.\n\nFor example, given the following datasets:\n\n```python\ndatasets = [tf.data.Dataset.from_tensors(\"foo\").repeat(),\n tf.data.Dataset.from_tensors(\"bar\").repeat(),\n tf.data.Dataset.from_tensors(\"baz\").repeat()]\n\n# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.\nchoice_dataset = tf.data.Dataset.range(3).repeat(3)\n\nresult = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)\n```\n\nThe elements of `result` will be:\n\n```\n\"foo\", \"bar\", \"baz\", \"foo\", \"bar\", \"baz\", \"foo\", \"bar\", \"baz\"\n```\n\nArgs:\n datasets: A non-empty list of `tf.data.Dataset` objects with compatible\n structure.\n choice_dataset: A `tf.data.Dataset` of scalar `tf.int64` tensors between `0`\n and `len(datasets) - 1`.\n stop_on_empty_dataset: If `True`, selection stops if it encounters an empty\n dataset. If `False`, it skips empty datasets. It is recommended to set it\n to `True`. Otherwise, the selected elements start off as the user intends,\n but may change as input datasets become empty. This can be difficult to\n detect since the dataset starts off looking correct. Default to `False`\n for backward compatibility.\n\nReturns:\n A dataset that interleaves elements from `datasets` according to the values\n of `choice_dataset`.\n\nRaises:\n TypeError: If `datasets` or `choice_dataset` has the wrong type.\n ValueError: If `datasets` is empty.", "desc": "Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.copy_to_device", "docs": "A transformation that copies dataset elements to the given `target_device`.\n\n Args:\n target_device: The name of a device to which elements will be copied.\n source_device: The original device on which `input_dataset` will be placed.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that copies dataset elements to the given `target_device`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.Counter", "docs": "Creates a `Dataset` that counts from `start` in steps of size `step`.\n\n Unlike `tf.data.Dataset.range` which will stop at some ending number,\n `Counter` will produce elements indefinitely.\n\n >>> dataset = tf.data.experimental.Counter().take(5)\n >>> list(dataset.as_numpy_iterator())\n [0, 1, 2, 3, 4]\n >>> dataset.element_spec\n TensorSpec(shape=(), dtype=tf.int64, name=None)\n >>> dataset = tf.data.experimental.Counter(dtype=tf.int32)\n >>> dataset.element_spec\n TensorSpec(shape=(), dtype=tf.int32, name=None)\n >>> dataset = tf.data.experimental.Counter(start=2).take(5)\n >>> list(dataset.as_numpy_iterator())\n [2, 3, 4, 5, 6]\n >>> dataset = tf.data.experimental.Counter(start=2, step=5).take(5)\n >>> list(dataset.as_numpy_iterator())\n [2, 7, 12, 17, 22]\n >>> dataset = tf.data.experimental.Counter(start=10, step=-1).take(5)\n >>> list(dataset.as_numpy_iterator())\n [10, 9, 8, 7, 6]\n\n Args:\n start: (Optional.) The starting value for the counter. Defaults to 0.\n step: (Optional.) The step size for the counter. Defaults to 1.\n dtype: (Optional.) The data type for counter elements. Defaults to\n `tf.int64`.\n\n Returns:\n A `Dataset` of scalar `dtype` elements.\n ", "desc": "Creates a `Dataset` that counts from `start` in steps of size `step`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.CsvDataset", "docs": "A Dataset comprising lines from one or more CSV files.", "desc": "A Dataset comprising lines from one or more CSV files.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.DatasetStructure", "docs": "Type specification for `tf.data.Dataset`.\n\n See `tf.TypeSpec` for more information about TensorFlow type specifications.\n\n >>> dataset = tf.data.Dataset.range(3)\n >>> tf.data.DatasetSpec.from_value(dataset)\n DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([]))\n ", "desc": "Type specification for `tf.data.Dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.dense_to_ragged_batch", "docs": "A transformation that batches ragged elements into `tf.RaggedTensor`s.\n\n This transformation combines multiple consecutive elements of the input\n dataset into a single element.\n\n Like `tf.data.Dataset.batch`, the components of the resulting element will\n have an additional outer dimension, which will be `batch_size` (or\n `N % batch_size` for the last element if `batch_size` does not divide the\n number of input elements `N` evenly and `drop_remainder` is `False`). If\n your program depends on the batches having the same outer dimension, you\n should set the `drop_remainder` argument to `True` to prevent the smaller\n batch from being produced.\n\n Unlike `tf.data.Dataset.batch`, the input elements to be batched may have\n different shapes:\n\n * If an input element is a `tf.Tensor` whose static `tf.TensorShape` is\n fully defined, then it is batched as normal.\n * If an input element is a `tf.Tensor` whose static `tf.TensorShape` contains\n one or more axes with unknown size (i.e., `shape[i]=None`), then the output\n will contain a `tf.RaggedTensor` that is ragged up to any of such\n dimensions.\n * If an input element is a `tf.RaggedTensor` or any other type, then it is\n batched as normal.\n\n Example:\n\n >>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6))\n >>> dataset = dataset.map(lambda x: tf.range(x))\n >>> dataset.element_spec.shape\n TensorShape([None])\n >>> dataset = dataset.apply(\n ... tf.data.experimental.dense_to_ragged_batch(batch_size=2))\n >>> for batch in dataset:\n ... print(batch)\n \n \n \n\n Args:\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in the case it has fewer than\n `batch_size` elements; the default behavior is not to drop the smaller\n batch.\n row_splits_dtype: The dtype that should be used for the `row_splits` of any\n new ragged tensors. Existing `tf.RaggedTensor` elements do not have their\n row_splits dtype changed.\n\n Returns:\n Dataset: A `Dataset`.\n ", "desc": "A transformation that batches ragged elements into `tf.RaggedTensor`s.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.dense_to_sparse_batch", "docs": "A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.\n\n Like `Dataset.padded_batch()`, this transformation combines multiple\n consecutive elements of the dataset, which might have different\n shapes, into a single element. The resulting element has three\n components (`indices`, `values`, and `dense_shape`), which\n comprise a `tf.sparse.SparseTensor` that represents the same data. The\n `row_shape` represents the dense shape of each row in the\n resulting `tf.sparse.SparseTensor`, to which the effective batch size is\n prepended. For example:\n\n ```python\n # NOTE: The following examples use `{ ... }` to represent the\n # contents of a dataset.\n a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }\n\n a.apply(tf.data.experimental.dense_to_sparse_batch(\n batch_size=2, row_shape=[6])) ==\n {\n ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices\n ['a', 'b', 'c', 'a', 'b'], # values\n [2, 6]), # dense_shape\n ([[0, 0], [0, 1], [0, 2], [0, 3]],\n ['a', 'b', 'c', 'd'],\n [1, 6])\n }\n ```\n\n Args:\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like object\n representing the equivalent dense shape of a row in the resulting\n `tf.sparse.SparseTensor`. Each element of this dataset must have the same\n rank as `row_shape`, and must have size less than or equal to `row_shape`\n in each dimension.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.DistributeOptions", "docs": "Represents options for distributed data processing.\n\n You can set the distribution options of a dataset through the\n `experimental_distribute` property of `tf.data.Options`; the property is\n an instance of `tf.data.experimental.DistributeOptions`.\n\n ```python\n options = tf.data.Options()\n options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for distributed data processing.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.enable_debug_mode", "docs": "Enables debug mode for tf.data.\n\n Example usage with pdb module:\n ```\n import tensorflow as tf\n import pdb\n\n tf.data.experimental.enable_debug_mode()\n\n def func(x):\n # Python 3.7 and older requires `pdb.Pdb(nosigint=True).set_trace()`\n pdb.set_trace()\n x = x + 1\n return x\n\n dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n dataset = dataset.map(func)\n\n for item in dataset:\n print(item)\n ```\n\n The effect of debug mode is two-fold:\n\n 1) Any transformations that would introduce asynchrony, parallelism, or\n non-determinism to the input pipeline execution will be forced to execute\n synchronously, sequentially, and deterministically.\n\n 2) Any user-defined functions passed into tf.data transformations such as\n `map` will be wrapped in `tf.py_function` so that their body is executed\n \"eagerly\" as a Python function as opposed to a traced TensorFlow graph, which\n is the default behavior. Note that even when debug mode is enabled, the\n user-defined function is still traced to infer the shape and type of its\n outputs; as a consequence, any `print` statements or breakpoints will be\n triggered once during the tracing before the actual execution of the input\n pipeline.\n\n NOTE: As the debug mode setting affects the construction of the tf.data input\n pipeline, it should be enabled before any tf.data definitions.\n\n Raises:\n ValueError: When invoked from graph mode.\n ", "desc": "Enables debug mode for tf.data.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.enumerate_dataset", "docs": "A transformation that enumerates the elements of a dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.enumerate()`.\n\nIt is similar to python's `enumerate`.\nFor example:\n\n```python\n# NOTE: The following examples use `{ ... }` to represent the\n# contents of a dataset.\na = { 1, 2, 3 }\nb = { (7, 8), (9, 10) }\n\n# The nested structure of the `datasets` argument determines the\n# structure of elements in the resulting dataset.\na.apply(tf.data.experimental.enumerate_dataset(start=5))\n=> { (5, 1), (6, 2), (7, 3) }\nb.apply(tf.data.experimental.enumerate_dataset())\n=> { (0, (7, 8)), (1, (9, 10)) }\n```\n\nArgs:\n start: A `tf.int64` scalar `tf.Tensor`, representing the start value for\n enumeration.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that enumerates the elements of a dataset. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.ExternalStatePolicy", "docs": "Represents how to handle external state during serialization.\n\n See the `tf.data.Options.experimental_external_state_policy` documentation\n for more information.\n ", "desc": "Represents how to handle external state during serialization.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.from_variant", "docs": "Constructs a dataset from the given variant and (nested) structure.\n\n Args:\n variant: A scalar `tf.variant` tensor representing a dataset.\n structure: A (nested) structure of `tf.TypeSpec` objects representing the\n structure of each element in the dataset.\n\n Returns:\n A `tf.data.Dataset` instance.\n ", "desc": "Constructs a dataset from the given variant and (nested) structure.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.get_next_as_optional", "docs": "Returns a `tf.experimental.Optional` with the next element of the iterator. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Iterator.get_next_as_optional()` instead.\n\nIf the iterator has reached the end of the sequence, the returned\n`tf.experimental.Optional` will have no value.\n\nArgs:\n iterator: A `tf.data.Iterator`.\n\nReturns:\n A `tf.experimental.Optional` object which either contains the next element\n of the iterator (if it exists) or no value.", "desc": "Returns a `tf.experimental.Optional` with the next element of the iterator. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.get_single_element", "docs": "Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.get_single_element()`.\n\nThe function enables you to use a `tf.data.Dataset` in a stateless\n\"tensor-in tensor-out\" expression, without creating an iterator.\nThis facilitates the ease of data transformation on tensors using the\noptimized `tf.data.Dataset` abstraction on top of them.\n\nFor example, lets consider a `preprocessing_fn` which would take as an\ninput the raw features and returns the processed feature along with\nit's label.\n\n```python\ndef preprocessing_fn(raw_feature):\n # ... the raw_feature is preprocessed as per the use-case\n return feature\n\nraw_features = ... # input batch of BATCH_SIZE elements.\ndataset = (tf.data.Dataset.from_tensor_slices(raw_features)\n .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n .batch(BATCH_SIZE))\n\nprocessed_features = tf.data.experimental.get_single_element(dataset)\n```\n\nIn the above example, the `raw_features` tensor of length=BATCH_SIZE\nwas converted to a `tf.data.Dataset`. Next, each of the `raw_feature` was\nmapped using the `preprocessing_fn` and the processed features were\ngrouped into a single batch. The final `dataset` contains only one element\nwhich is a batch of all the processed features.\n\nNOTE: The `dataset` should contain only one element.\n\nNow, instead of creating an iterator for the `dataset` and retrieving the\nbatch of features, the `tf.data.experimental.get_single_element()` function\nis used to skip the iterator creation process and directly output the batch\nof features.\n\nThis can be particularly useful when your tensor transformations are\nexpressed as `tf.data.Dataset` operations, and you want to use those\ntransformations while serving your model.\n\n# Keras\n\n```python\n\nmodel = ... # A pre-built or custom model\n\nclass PreprocessingModel(tf.keras.Model):\n def __init__(self, model):\n super().__init__(self)\n self.model = model\n\n @tf.function(input_signature=[...])\n def serving_fn(self, data):\n ds = tf.data.Dataset.from_tensor_slices(data)\n ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n ds = ds.batch(batch_size=BATCH_SIZE)\n return tf.argmax(\n self.model(tf.data.experimental.get_single_element(ds)),\n axis=-1\n )\n\npreprocessing_model = PreprocessingModel(model)\nyour_exported_model_dir = ... # save the model to this path.\ntf.saved_model.save(preprocessing_model, your_exported_model_dir,\n signatures={'serving_default': preprocessing_model.serving_fn})\n```\n\n# Estimator\n\nIn the case of estimators, you need to generally define a `serving_input_fn`\nwhich would require the features to be processed by the model while\ninferencing.\n\n```python\ndef serving_input_fn():\n\n raw_feature_spec = ... # Spec for the raw_features\n input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n raw_feature_spec, default_batch_size=None)\n )\n serving_input_receiver = input_fn()\n raw_features = serving_input_receiver.features\n\n def preprocessing_fn(raw_feature):\n # ... the raw_feature is preprocessed as per the use-case\n return feature\n\n dataset = (tf.data.Dataset.from_tensor_slices(raw_features)\n .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n .batch(BATCH_SIZE))\n\n processed_features = tf.data.experimental.get_single_element(dataset)\n\n # Please note that the value of `BATCH_SIZE` should be equal to\n # the size of the leading dimension of `raw_features`. This ensures\n # that `dataset` has only element, which is a pre-requisite for\n # using `tf.data.experimental.get_single_element(dataset)`.\n\n return tf.estimator.export.ServingInputReceiver(\n processed_features, serving_input_receiver.receiver_tensors)\n\nestimator = ... # A pre-built or custom estimator\nestimator.export_saved_model(your_exported_model_dir, serving_input_fn)\n```\n\nArgs:\n dataset: A `tf.data.Dataset` object containing a single element.\n\nReturns:\n A nested structure of `tf.Tensor` objects, corresponding to the single\n element of `dataset`.\n\nRaises:\n TypeError: if `dataset` is not a `tf.data.Dataset` object.\n InvalidArgumentError: (at runtime) if `dataset` does not contain exactly\n one element.", "desc": "Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.get_structure", "docs": "Returns the type signature for elements of the input dataset / iterator.\n\n Args:\n dataset_or_iterator: A `tf.data.Dataset` or an `tf.data.Iterator`.\n\n Returns:\n A (nested) structure of `tf.TypeSpec` objects matching the structure of an\n element of `dataset_or_iterator` and specifying the type of individual\n components.\n\n Raises:\n TypeError: If input is not a `tf.data.Dataset` or an `tf.data.Iterator`\n object.\n ", "desc": "Returns the type signature for elements of the input dataset / iterator.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.group_by_reducer", "docs": "A transformation that groups elements and performs a reduction.\n\n This transformation maps element of a dataset to a key using `key_func` and\n groups the elements by key. The `reducer` is used to process each group; its\n `init_func` is used to initialize state for each group when it is created, the\n `reduce_func` is used to update the state every time an element is mapped to\n the matching group, and the `finalize_func` is used to map the final state to\n an output value.\n\n Args:\n key_func: A function mapping a nested structure of tensors\n (having shapes and types defined by `self.output_shapes` and\n `self.output_types`) to a scalar `tf.int64` tensor.\n reducer: An instance of `Reducer`, which captures the reduction logic using\n the `init_func`, `reduce_func`, and `finalize_func` functions.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that groups elements and performs a reduction.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.group_by_window", "docs": "A transformation that groups windows of elements by key and reduces them. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.group_by_window(...)`.\n\nThis transformation maps each consecutive element in a dataset to a key\nusing `key_func` and groups the elements by key. It then applies\n`reduce_func` to at most `window_size_func(key)` elements matching the same\nkey. All except the final window for each key will contain\n`window_size_func(key)` elements; the final window may be smaller.\n\nYou may provide either a constant `window_size` or a window size determined by\nthe key through `window_size_func`.\n\nArgs:\n key_func: A function mapping a nested structure of tensors\n (having shapes and types defined by `self.output_shapes` and\n `self.output_types`) to a scalar `tf.int64` tensor.\n reduce_func: A function mapping a key and a dataset of up to `window_size`\n consecutive elements matching that key to another dataset.\n window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements matching the same key to combine in a single\n batch, which will be passed to `reduce_func`. Mutually exclusive with\n `window_size_func`.\n window_size_func: A function mapping a key to a `tf.int64` scalar\n `tf.Tensor`, representing the number of consecutive elements matching\n the same key to combine in a single batch, which will be passed to\n `reduce_func`. Mutually exclusive with `window_size`.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: if neither or both of {`window_size`, `window_size_func`} are\n passed.", "desc": "A transformation that groups windows of elements by key and reduces them. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.ignore_errors", "docs": "Creates a `Dataset` from another `Dataset` and silently ignores any errors.\n\n Use this transformation to produce a dataset that contains the same elements\n as the input, but silently drops any elements that caused an error. For\n example:\n\n ```python\n dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])\n\n # Computing `tf.debugging.check_numerics(1. / 0.)` will raise an\n InvalidArgumentError.\n dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, \"error\"))\n\n # Using `ignore_errors()` will drop the element that causes an error.\n dataset =\n dataset.apply(tf.data.experimental.ignore_errors()) # ==> {1., 0.5, 0.2}\n ```\n Args:\n log_warning: (Optional.) A 'tf.bool' scalar indicating whether ignored\n errors should be logged to stderr. Defaults to 'False'.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "Creates a `Dataset` from another `Dataset` and silently ignores any errors.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.make_batched_features_dataset", "docs": "Returns a `Dataset` of feature dictionaries from `Example` protos.\n\n If label_key argument is provided, returns a `Dataset` of tuple\n comprising of feature dictionaries and label.\n\n Example:\n\n ```\n serialized_examples = [\n features {\n feature { key: \"age\" value { int64_list { value: [ 0 ] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n feature { key: \"kws\" value { bytes_list { value: [ \"code\", \"art\" ] } } }\n },\n features {\n feature { key: \"age\" value { int64_list { value: [] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n feature { key: \"kws\" value { bytes_list { value: [ \"sports\" ] } } }\n }\n ]\n ```\n\n We can use arguments:\n\n ```\n features: {\n \"age\": FixedLenFeature([], dtype=tf.int64, default_value=-1),\n \"gender\": FixedLenFeature([], dtype=tf.string),\n \"kws\": VarLenFeature(dtype=tf.string),\n }\n ```\n\n And the expected output is:\n\n ```python\n {\n \"age\": [[0], [-1]],\n \"gender\": [[\"f\"], [\"f\"]],\n \"kws\": SparseTensor(\n indices=[[0, 0], [0, 1], [1, 0]],\n values=[\"code\", \"art\", \"sports\"]\n dense_shape=[2, 2]),\n }\n ```\n\n Args:\n file_pattern: List of files or patterns of file paths containing\n `Example` records. See `tf.io.gfile.glob` for pattern rules.\n batch_size: An int representing the number of records to combine\n in a single batch.\n features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` values. See `tf.io.parse_example`.\n reader: A function or class that can be\n called with a `filenames` tensor and (optional) `reader_args` and returns\n a `Dataset` of `Example` tensors. Defaults to `tf.data.TFRecordDataset`.\n label_key: (Optional) A string corresponding to the key labels are stored in\n `tf.Examples`. If provided, it must be one of the `features` key,\n otherwise results in `ValueError`.\n reader_args: Additional arguments to pass to the reader class.\n num_epochs: Integer specifying the number of times to read through the\n dataset. If None, cycles through the dataset forever. Defaults to `None`.\n shuffle: A boolean, indicates whether the input should be shuffled. Defaults\n to `True`.\n shuffle_buffer_size: Buffer size of the ShuffleDataset. A large capacity\n ensures better shuffling but would increase memory usage and startup time.\n shuffle_seed: Randomization seed to use for shuffling.\n prefetch_buffer_size: Number of feature batches to prefetch in order to\n improve performance. Recommended value is the number of batches consumed\n per training step. Defaults to auto-tune.\n reader_num_threads: Number of threads used to read `Example` records. If >1,\n the results will be interleaved. Defaults to `1`.\n parser_num_threads: Number of threads to use for parsing `Example` tensors\n into a dictionary of `Feature` tensors. Defaults to `2`.\n sloppy_ordering: If `True`, reading performance will be improved at\n the cost of non-deterministic ordering. If `False`, the order of elements\n produced is deterministic prior to shuffling (elements are still\n randomized if `shuffle=True`. Note that if the seed is set, then order\n of elements after shuffling is deterministic). Defaults to `False`.\n drop_final_batch: If `True`, and the batch size does not evenly divide the\n input dataset size, the final smaller batch will be dropped. Defaults to\n `False`.\n\n Returns:\n A dataset of `dict` elements, (or a tuple of `dict` elements and label).\n Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects.\n\n Raises:\n TypeError: If `reader` is of the wrong type.\n ValueError: If `label_key` is not one of the `features` keys.\n ", "desc": "Returns a `Dataset` of feature dictionaries from `Example` protos.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.make_csv_dataset", "docs": "Reads CSV files into a dataset.\n\n Reads CSV files into a dataset, where each element of the dataset is a\n (features, labels) tuple that corresponds to a batch of CSV rows. The features\n dictionary maps feature column names to `Tensor`s containing the corresponding\n feature data, and labels is a `Tensor` containing the batch's label data.\n\n By default, the first rows of the CSV files are expected to be headers listing\n the column names. If the first rows are not headers, set `header=False` and\n provide the column names with the `column_names` argument.\n\n By default, the dataset is repeated indefinitely, reshuffling the order each\n time. This behavior can be modified by setting the `num_epochs` and `shuffle`\n arguments.\n\n For example, suppose you have a CSV file containing\n\n | Feature_A | Feature_B |\n | --------- | --------- |\n | 1 | \"a\" |\n | 2 | \"b\" |\n | 3 | \"c\" |\n | 4 | \"d\" |\n\n ```\n # No label column specified\n dataset = tf.data.experimental.make_csv_dataset(filename, batch_size=2)\n iterator = dataset.as_numpy_iterator()\n print(dict(next(iterator)))\n # prints a dictionary of batched features:\n # OrderedDict([('Feature_A', array([1, 4], dtype=int32)),\n # ('Feature_B', array([b'a', b'd'], dtype=object))])\n ```\n\n ```\n # Set Feature_B as label column\n dataset = tf.data.experimental.make_csv_dataset(\n filename, batch_size=2, label_name=\"Feature_B\")\n iterator = dataset.as_numpy_iterator()\n print(next(iterator))\n # prints (features, labels) tuple:\n # (OrderedDict([('Feature_A', array([1, 2], dtype=int32))]),\n # array([b'a', b'b'], dtype=object))\n ```\n\n See the\n [Load CSV data guide](https://www.tensorflow.org/tutorials/load_data/csv) for\n more examples of using `make_csv_dataset` to read CSV data.\n\n Args:\n file_pattern: List of files or patterns of file paths containing CSV\n records. See `tf.io.gfile.glob` for pattern rules.\n batch_size: An int representing the number of records to combine\n in a single batch.\n column_names: An optional list of strings that corresponds to the CSV\n columns, in order. One per column of the input record. If this is not\n provided, infers the column names from the first row of the records.\n These names will be the keys of the features dict of each dataset element.\n column_defaults: A optional list of default values for the CSV fields. One\n item per selected column of the input record. Each item in the list is\n either a valid CSV dtype (float32, float64, int32, int64, or string), or a\n `Tensor` with one of the aforementioned types. The tensor can either be\n a scalar default value (if the column is optional), or an empty tensor (if\n the column is required). If a dtype is provided instead of a tensor, the\n column is also treated as required. If this list is not provided, tries\n to infer types based on reading the first num_rows_for_inference rows of\n files specified, and assumes all columns are optional, defaulting to `0`\n for numeric values and `\"\"` for string values. If both this and\n `select_columns` are specified, these must have the same lengths, and\n `column_defaults` is assumed to be sorted in order of increasing column\n index.\n label_name: A optional string corresponding to the label column. If\n provided, the data for this column is returned as a separate `Tensor` from\n the features dictionary, so that the dataset complies with the format\n expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input\n function.\n select_columns: An optional list of integer indices or string column\n names, that specifies a subset of columns of CSV data to select. If\n column names are provided, these must correspond to names provided in\n `column_names` or inferred from the file header lines. When this argument\n is specified, only a subset of CSV columns will be parsed and returned,\n corresponding to the columns specified. Using this results in faster\n parsing and lower memory usage. If both this and `column_defaults` are\n specified, these must have the same lengths, and `column_defaults` is\n assumed to be sorted in order of increasing column index.\n field_delim: An optional `string`. Defaults to `\",\"`. Char delimiter to\n separate fields in a record.\n use_quote_delim: An optional bool. Defaults to `True`. If false, treats\n double quotation marks as regular characters inside of the string fields.\n na_value: Additional string to recognize as NA/NaN.\n header: A bool that indicates whether the first rows of provided CSV files\n correspond to header lines with column names, and should not be included\n in the data.\n num_epochs: An int specifying the number of times this dataset is repeated.\n If None, cycles through the dataset forever.\n shuffle: A bool that indicates whether the input should be shuffled.\n shuffle_buffer_size: Buffer size to use for shuffling. A large buffer size\n ensures better shuffling, but increases memory usage and startup time.\n shuffle_seed: Randomization seed to use for shuffling.\n prefetch_buffer_size: An int specifying the number of feature\n batches to prefetch for performance improvement. Recommended value is the\n number of batches consumed per training step. Defaults to auto-tune.\n num_parallel_reads: Number of threads used to read CSV records from files.\n If >1, the results will be interleaved. Defaults to `1`.\n sloppy: If `True`, reading performance will be improved at\n the cost of non-deterministic ordering. If `False`, the order of elements\n produced is deterministic prior to shuffling (elements are still\n randomized if `shuffle=True`. Note that if the seed is set, then order\n of elements after shuffling is deterministic). Defaults to `False`.\n num_rows_for_inference: Number of rows of a file to use for type inference\n if record_defaults is not provided. If None, reads all the rows of all\n the files. Defaults to 100.\n compression_type: (Optional.) A `tf.string` scalar evaluating to one of\n `\"\"` (no compression), `\"ZLIB\"`, or `\"GZIP\"`. Defaults to no compression.\n ignore_errors: (Optional.) If `True`, ignores errors with CSV file parsing,\n such as malformed data or empty lines, and moves on to the next valid\n CSV record. Otherwise, the dataset raises an error and stops processing\n when encountering any invalid records. Defaults to `False`.\n\n Returns:\n A dataset, where each element is a (features, labels) tuple that corresponds\n to a batch of `batch_size` CSV rows. The features dictionary maps feature\n column names to `Tensor`s containing the corresponding column data, and\n labels is a `Tensor` containing the column data for the label column\n specified by `label_name`.\n\n Raises:\n ValueError: If any of the arguments is malformed.\n ", "desc": "Reads CSV files into a dataset.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.make_saveable_from_iterator", "docs": "Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n`make_saveable_from_iterator` is intended for use in TF1 with `tf.compat.v1.Saver`. In TF2, use `tf.train.Checkpoint` instead.\n\nArgs:\n iterator: Iterator.\n external_state_policy: A string that identifies how to handle input\n pipelines that depend on external state. Possible values are\n 'ignore': The external state is silently ignored.\n 'warn': The external state is ignored, logging a warning.\n 'fail': The operation fails upon encountering external state.\n By default we set it to 'fail'.\n\nReturns:\n A SaveableObject for saving/restoring iterator state using Saver.\n\nRaises:\n ValueError: If iterator does not support checkpointing.\n ValueError: If `external_state_policy` is not one of 'warn', 'ignore' or\n 'fail'.\n\nFor example:\n\n```python\nwith tf.Graph().as_default():\n ds = tf.data.Dataset.range(10)\n iterator = ds.make_initializable_iterator()\n # Build the iterator SaveableObject.\n saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)\n # Add the SaveableObject to the SAVEABLE_OBJECTS collection so\n # it can be automatically saved using Saver.\n tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj)\n saver = tf.compat.v1.train.Saver()\n\n while continue_training:\n ... Perform training ...\n if should_save_checkpoint:\n saver.save()\n```\n\nNote: When restoring the iterator, the existing iterator state is completely\ndiscarded. This means that any changes you may have made to the Dataset\ngraph will be discarded as well! This includes the new Dataset graph\nthat you may have built during validation. So, while running validation,\nmake sure to run the initializer for the validation input pipeline after\nrestoring the checkpoint.\n\nNote: Not all iterators support checkpointing yet. Attempting to save the\nstate of an unsupported iterator will throw an error.", "desc": "Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.map_and_batch", "docs": "Fused implementation of `map` and `batch`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.map(map_func, num_parallel_calls)` followed by `tf.data.Dataset.batch(batch_size, drop_remainder)`. Static tf.data optimizations will take care of using the fused implementation.\n\nMaps `map_func` across `batch_size` consecutive elements of this dataset\nand then combines them into a batch. Functionally, it is equivalent to `map`\nfollowed by `batch`. This API is temporary and deprecated since input pipeline\noptimization now fuses consecutive `map` and `batch` operations automatically.\n\nArgs:\n map_func: A function mapping a nested structure of tensors to another\n nested structure of tensors.\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,\n representing the number of batches to create in parallel. On one hand,\n higher values can help mitigate the effect of stragglers. On the other\n hand, higher values can increase contention if CPU is scarce.\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in case its size is smaller than\n desired; the default behavior is not to drop the smaller batch.\n num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,\n representing the number of elements to process in parallel. If not\n specified, `batch_size * num_parallel_batches` elements will be processed\n in parallel. If the value `tf.data.AUTOTUNE` is used, then\n the number of parallel calls is set dynamically based on available CPU.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: If both `num_parallel_batches` and `num_parallel_calls` are\n specified.", "desc": "Fused implementation of `map` and `batch`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.map_and_batch_with_legacy_function", "docs": "Fused implementation of `map` and `batch`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.experimental.map_and_batch()\n\nNOTE: This is an escape hatch for existing uses of `map_and_batch` that do not\nwork with V2 functions. New uses are strongly discouraged and existing uses\nshould migrate to `map_and_batch` as this method will not be removed in V2.\n\nArgs:\n map_func: A function mapping a nested structure of tensors to another\n nested structure of tensors.\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,\n representing the number of batches to create in parallel. On one hand,\n higher values can help mitigate the effect of stragglers. On the other\n hand, higher values can increase contention if CPU is scarce.\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in case its size is smaller than\n desired; the default behavior is not to drop the smaller batch.\n num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,\n representing the number of elements to process in parallel. If not\n specified, `batch_size * num_parallel_batches` elements will be processed\n in parallel. If the value `tf.data.AUTOTUNE` is used, then\n the number of parallel calls is set dynamically based on available CPU.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: If both `num_parallel_batches` and `num_parallel_calls` are\n specified.", "desc": "Fused implementation of `map` and `batch`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.OptimizationOptions", "docs": "Represents options for dataset optimizations.\n\n You can set the optimization options of a dataset through the\n `experimental_optimization` property of `tf.data.Options`; the property is\n an instance of `tf.data.experimental.OptimizationOptions`.\n\n ```python\n options = tf.data.Options()\n options.experimental_optimization.noop_elimination = True\n options.experimental_optimization.apply_default_optimizations = False\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for dataset optimizations.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.Optional", "docs": "Represents a value that may or may not be present.\n\n A `tf.experimental.Optional` can represent the result of an operation that may\n fail as a value, rather than raising an exception and halting execution. For\n example, `tf.data.Iterator.get_next_as_optional()` returns a\n `tf.experimental.Optional` that either contains the next element of an\n iterator if one exists, or an \"empty\" value that indicates the end of the\n sequence has been reached.\n\n `tf.experimental.Optional` can only be used with values that are convertible\n to `tf.Tensor` or `tf.CompositeTensor`.\n\n One can create a `tf.experimental.Optional` from a value using the\n `from_value()` method:\n\n >>> optional = tf.experimental.Optional.from_value(42)\n >>> print(optional.has_value())\n tf.Tensor(True, shape=(), dtype=bool)\n >>> print(optional.get_value())\n tf.Tensor(42, shape=(), dtype=int32)\n\n or without a value using the `empty()` method:\n\n >>> optional = tf.experimental.Optional.empty(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))\n >>> print(optional.has_value())\n tf.Tensor(False, shape=(), dtype=bool)\n ", "desc": "Represents a value that may or may not be present.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.OptionalStructure", "docs": "Type specification for `tf.experimental.Optional`.\n\n For instance, `tf.OptionalSpec` can be used to define a tf.function that takes\n `tf.experimental.Optional` as an input argument:\n\n >>> @tf.function(input_signature=[tf.OptionalSpec(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])\n ... def maybe_square(optional):\n ... if optional.has_value():\n ... x = optional.get_value()\n ... return x * x\n ... return -1\n >>> optional = tf.experimental.Optional.from_value(5)\n >>> print(maybe_square(optional))\n tf.Tensor(25, shape=(), dtype=int32)\n\n Attributes:\n element_spec: A (nested) structure of `TypeSpec` objects that represents the\n type specification of the optional element.\n ", "desc": "Type specification for `tf.experimental.Optional`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.parallel_interleave", "docs": "A parallel version of the `Dataset.interleave()` transformation. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.\n\n`parallel_interleave()` maps `map_func` across its input to produce nested\ndatasets, and outputs their elements interleaved. Unlike\n`tf.data.Dataset.interleave`, it gets elements from `cycle_length` nested\ndatasets in parallel, which increases the throughput, especially in the\npresence of stragglers. Furthermore, the `sloppy` argument can be used to\nimprove performance, by relaxing the requirement that the outputs are produced\nin a deterministic order, and allowing the implementation to skip over nested\ndatasets whose elements are not readily available when requested.\n\nExample usage:\n\n```python\n# Preprocess 4 files concurrently.\nfilenames = tf.data.Dataset.list_files(\"/path/to/data/train*.tfrecords\")\ndataset = filenames.apply(\n tf.data.experimental.parallel_interleave(\n lambda filename: tf.data.TFRecordDataset(filename),\n cycle_length=4))\n```\n\nWARNING: If `sloppy` is `True`, the order of produced elements is not\ndeterministic.\n\nArgs:\n map_func: A function mapping a nested structure of tensors to a `Dataset`.\n cycle_length: The number of input `Dataset`s to interleave from in parallel.\n block_length: The number of consecutive elements to pull from an input\n `Dataset` before advancing to the next input `Dataset`.\n sloppy: A boolean controlling whether determinism should be traded for\n performance by allowing elements to be produced out of order. If `sloppy`\n is `None`, the `tf.data.Options.deterministic` dataset option (`True` by\n default) is used to decide whether to enforce a deterministic order.\n buffer_output_elements: The number of elements each iterator being\n interleaved should buffer (similar to the `.prefetch()` transformation for\n each interleaved iterator).\n prefetch_input_elements: The number of input elements to transform to\n iterators before they are needed for interleaving.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A parallel version of the `Dataset.interleave()` transformation. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.parse_example_dataset", "docs": "A transformation that parses `Example` protos into a `dict` of tensors.\n\n Parses a number of serialized `Example` protos given in `serialized`. We refer\n to `serialized` as a batch with `batch_size` many entries of individual\n `Example` protos.\n\n This op parses serialized examples into a dictionary mapping keys to `Tensor`,\n `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to\n `VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature`\n objects. Each `VarLenFeature` and `SparseFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenFeature` is mapped to a `Tensor`. See `tf.io.parse_example` for more\n details about feature dictionaries.\n\n Args:\n features: A `dict` mapping feature keys to `FixedLenFeature`,\n `VarLenFeature`, `RaggedFeature`, and `SparseFeature` values.\n num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,\n representing the number of parsing processes to call in parallel.\n deterministic: (Optional.) A boolean controlling whether determinism\n should be traded for performance by allowing elements to be produced out\n of order if some parsing calls complete faster than others. If\n `deterministic` is `None`, the\n `tf.data.Options.deterministic` dataset option (`True` by default) is used\n to decide whether to produce elements deterministically.\n\n Returns:\n A dataset transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\n Raises:\n ValueError: if features argument is None.\n ", "desc": "A transformation that parses `Example` protos into a `dict` of tensors.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.prefetch_to_device", "docs": "A transformation that prefetches dataset values to the given `device`.\n\n NOTE: Although the transformation creates a `tf.data.Dataset`, the\n transformation must be the final `Dataset` in the input pipeline.\n\n For example,\n >>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n >>> dataset = dataset.apply(tf.data.experimental.prefetch_to_device(\"/cpu:0\"))\n >>> for element in dataset:\n ... print(f'Tensor {element} is on device {element.device}')\n Tensor 1 is on device /job:localhost/replica:0/task:0/device:CPU:0\n Tensor 2 is on device /job:localhost/replica:0/task:0/device:CPU:0\n Tensor 3 is on device /job:localhost/replica:0/task:0/device:CPU:0\n\n Args:\n device: A string. The name of a device to which elements will be prefetched.\n buffer_size: (Optional.) The number of elements to buffer on `device`.\n Defaults to an automatically chosen value.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that prefetches dataset values to the given `device`.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.RaggedTensorStructure", "docs": "DEPRECATED FUNCTION\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.RaggedTensorSpec` instead.", "desc": "DEPRECATED FUNCTION", "type": "API"}, {"name": "tf.compat.v1.data.experimental.RandomDataset", "docs": "A `Dataset` of pseudorandom values. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.random(...)`.", "desc": "A `Dataset` of pseudorandom values. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.Reducer", "docs": "A reducer is used for reducing a set of elements.\n\n A reducer is represented as a tuple of the three functions:\n - init_func - to define initial value: key => initial state\n - reducer_func - operation to perform on values with same key: (old state, input) => new state\n - finalize_func - value to return in the end: state => result\n \n For example,\n \n ```\n def init_func(_):\n return (0.0, 0.0)\n\n def reduce_func(state, value):\n return (state[0] + value['features'], state[1] + 1)\n\n def finalize_func(s, n):\n return s / n\n\n reducer = tf.data.experimental.Reducer(init_func, reduce_func, finalize_func)\n ```\n ", "desc": "A reducer is used for reducing a set of elements.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.rejection_resample", "docs": "A transformation that resamples a dataset to achieve a target distribution. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.rejection_resample(...)`.\n\n**NOTE** Resampling is performed via rejection sampling; some fraction\nof the input values will be dropped.\n\nArgs:\n class_func: A function mapping an element of the input dataset to a scalar\n `tf.int32` tensor. Values should be in `[0, num_classes)`.\n target_dist: A floating point type tensor, shaped `[num_classes]`.\n initial_dist: (Optional.) A floating point type tensor, shaped\n `[num_classes]`. If not provided, the true class distribution is\n estimated live in a streaming fashion.\n seed: (Optional.) Python integer seed for the resampler.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that resamples a dataset to achieve a target distribution. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.sample_from_datasets", "docs": "Samples elements at random from the datasets in `datasets`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.sample_from_datasets(...)`.\n\nCreates a dataset by interleaving elements of `datasets` with `weight[i]`\nprobability of picking an element from dataset `i`. Sampling is done without\nreplacement. For example, suppose we have 2 datasets:\n\n```python\ndataset1 = tf.data.Dataset.range(0, 3)\ndataset2 = tf.data.Dataset.range(100, 103)\n```\n\nSuppose also that we sample from these 2 datasets with the following weights:\n\n```python\nsample_dataset = tf.data.Dataset.sample_from_datasets(\n [dataset1, dataset2], weights=[0.5, 0.5])\n```\n\nOne possible outcome of elements in sample_dataset is:\n\n```\nprint(list(sample_dataset.as_numpy_iterator()))\n# [100, 0, 1, 101, 2, 102]\n```\n\nArgs:\n datasets: A non-empty list of `tf.data.Dataset` objects with compatible\n structure.\n weights: (Optional.) A list or Tensor of `len(datasets)` floating-point\n values where `weights[i]` represents the probability to sample from\n `datasets[i]`, or a `tf.data.Dataset` object where each element is such a\n list. Defaults to a uniform distribution across `datasets`.\n seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random\n seed that will be used to create the distribution. See\n `tf.random.set_seed` for behavior.\n stop_on_empty_dataset: If `True`, sampling stops if it encounters an empty\n dataset. If `False`, it skips empty datasets. It is recommended to set it\n to `True`. Otherwise, the distribution of samples starts off as the user\n intends, but may change as input datasets become empty. This can be\n difficult to detect since the dataset starts off looking correct. Default\n to `False` for backward compatibility.\n\nReturns:\n A dataset that interleaves elements from `datasets` at random, according to\n `weights` if provided, otherwise with uniform probability.\n\nRaises:\n TypeError: If the `datasets` or `weights` arguments have the wrong type.\n ValueError:\n - If `datasets` is empty, or\n - If `weights` is specified and does not match the length of `datasets`.", "desc": "Samples elements at random from the datasets in `datasets`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.scan", "docs": "A transformation that scans a function across an input dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.scan(...) instead\n\nThis transformation is a stateful relative of `tf.data.Dataset.map`.\nIn addition to mapping `scan_func` across the elements of the input dataset,\n`scan()` accumulates one or more state tensors, whose initial values are\n`initial_state`.\n\nArgs:\n initial_state: A nested structure of tensors, representing the initial state\n of the accumulator.\n scan_func: A function that maps `(old_state, input_element)` to\n `(new_state, output_element)`. It must take two arguments and return a\n pair of nested structures of tensors. The `new_state` must match the\n structure of `initial_state`.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that scans a function across an input dataset. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service", "docs": "API for using the tf.data service.\n\nThis module contains:\n\n1. tf.data server implementations for running the tf.data service.\n2. APIs for registering datasets with the tf.data service and reading from\n the registered datasets.\n\nThe tf.data service provides the following benefits:\n\n- Horizontal scaling of tf.data input pipeline processing to solve input\n bottlenecks.\n- Data coordination for distributed training. Coordinated reads\n enable all replicas to train on similar-length examples across each global\n training step, improving step times in synchronous training.\n- Dynamic balancing of data across training replicas.\n\n>>> dispatcher = tf.data.experimental.service.DispatchServer()\n>>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n>>> worker = tf.data.experimental.service.WorkerServer(\n... tf.data.experimental.service.WorkerConfig(\n... dispatcher_address=dispatcher_address))\n>>> dataset = tf.data.Dataset.range(10)\n>>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n... processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n... service=dispatcher.target))\n>>> print(list(dataset.as_numpy_iterator()))\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n## Setup\n\nThis section goes over how to set up the tf.data service.\n\n### Run tf.data servers\n\nThe tf.data service consists of one dispatch server and `n` worker servers.\ntf.data servers should be brought up alongside your training jobs, then brought\ndown when the jobs are finished.\nUse `tf.data.experimental.service.DispatchServer` to start a dispatch server,\nand `tf.data.experimental.service.WorkerServer` to start worker servers. Servers\ncan be run in the same process for testing purposes, or scaled up on separate\nmachines.\n\nSee https://github.com/tensorflow/ecosystem/tree/master/data_service for an\nexample of using Google Kubernetes Engine (GKE) to manage the tf.data service.\nNote that the server implementation in\n[tf_std_data_server.py](https://github.com/tensorflow/ecosystem/blob/master/data_service/tf_std_data_server.py)\nis not GKE-specific, and can be used to run the tf.data service in other\ncontexts.\n\n### Custom ops\n\nIf your dataset uses custom ops, these ops need to be made available to tf.data\nservers by calling\n[load_op_library](https://www.tensorflow.org/api_docs/python/tf/load_op_library)\nfrom the dispatcher and worker processes at startup.\n\n## Usage\n\nUsers interact with tf.data service by programmatically registering their\ndatasets with tf.data service, then creating datasets that read from the\nregistered datasets. The\n[register_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/register_dataset)\nfunction registers a dataset, then the\n[from_dataset_id](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/from_dataset_id)\nfunction creates a new dataset which reads from the registered dataset.\nThe\n[distribute](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/distribute)\nfunction wraps `register_dataset` and `from_dataset_id` into a single convenient\ntransformation which registers its input dataset and then reads from it.\n`distribute` enables tf.data service to be used with a one-line code change.\nHowever, it assumes that the dataset is created and consumed by the same entity\nand this assumption might not always be valid or desirable. In particular, in\ncertain scenarios, such as distributed training, it might be desirable to\ndecouple the creation and consumption of the dataset (via `register_dataset`\nand `from_dataset_id` respectively) to avoid having to create the dataset on\neach of the training workers.\n\n### Example\n\n#### `distribute`\n\nTo use the `distribute` transformation, apply the transformation after the\nprefix of your input pipeline that you would like to be executed using tf.data\nservice (typically at the end).\n\n```\ndataset = ... # Define your dataset here.\n# Move dataset processing from the local machine to the tf.data service\ndataset = dataset.apply(\n tf.data.experimental.service.distribute(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n job_name=\"shared_job\"))\n# Any transformations added after `distribute` will be run on the local machine.\ndataset = dataset.prefetch(1)\n```\n\nThe above code will create a tf.data service \"job\", which iterates through the\ndataset to generate data. To share the data from a job across multiple clients\n(e.g. when using TPUStrategy or MultiWorkerMirroredStrategy), set a common\n`job_name` across all clients.\n\n#### `register_dataset` and `from_dataset_id`\n\n`register_dataset` registers a dataset with the tf.data service, returning a\ndataset id for the registered dataset. `from_dataset_id` creates a dataset that\nreads from the registered dataset. These APIs can be used to reduce dataset\nbuilding time for distributed training. Instead of building the dataset on all\ntraining workers, we can build the dataset just once and then register the\ndataset using `register_dataset`. Then all workers can call `from_dataset_id`\nwithout needing to build the dataset themselves.\n\n```\ndataset = ... # Define your dataset here.\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n# Use `from_dataset_id` to create per-worker datasets.\nper_worker_datasets = {}\nfor worker in workers:\n per_worker_datasets[worker] = tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n```\n\n### Processing Modes\n\n`processing_mode` specifies how to shard a dataset among tf.data service\nworkers. tf.data service supports `OFF`, `DYNAMIC`, `FILE`, `DATA`,\n`FILE_OR_DATA`, `HINT` sharding policies.\n\nOFF: No sharding will be performed. The entire input dataset will be processed\nindependently by each of the tf.data service workers. For this reason, it is\nimportant to shuffle data (e.g. filenames) non-deterministically, so that each\nworker will process the elements of the dataset in a different order. This mode\ncan be used to distribute datasets that aren't splittable.\n\nIf a worker is added or restarted during ShardingPolicy.OFF processing, the\nworker will instantiate a new copy of the dataset and begin producing data from\nthe beginning.\n\n#### Dynamic Sharding\n\nDYNAMIC: In this mode, tf.data service divides the dataset into two components:\na source component that generates \"splits\" such as filenames, and a processing\ncomponent that takes splits and outputs dataset elements. The source component\nis executed in a centralized fashion by the tf.data service dispatcher, which\ngenerates different splits of input data. The processing component is executed\nin a parallel fashion by the tf.data service workers, each operating on a\ndifferent set of input data splits.\n\nFor example, consider the following dataset:\n\n```\ndataset = tf.data.Dataset.from_tensor_slices(filenames)\ndataset = dataset.interleave(TFRecordDataset)\ndataset = dataset.map(preprocess_fn)\ndataset = dataset.batch(batch_size)\ndataset = dataset.apply(\n tf.data.experimental.service.distribute(\n processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,\n ...))\n```\n\nThe `from_tensor_slices` will be run on the dispatcher, while the `interleave`,\n`map`, and `batch` will be run on tf.data service workers. The workers will pull\nfilenames from the dispatcher for processing. To process a dataset with\ndynamic sharding, the dataset must have a splittable source, and all of\nits transformations must be compatible with splitting. While most sources and\ntransformations support splitting, there are exceptions, such as custom datasets\nwhich may not implement the splitting API. Please file a Github issue if you\nwould like to use distributed epoch processing for a currently unsupported\ndataset source or transformation.\n\nIf no workers are restarted during training, dynamic sharding mode will visit\nevery example exactly once. If workers are restarted during training, the splits\nthey were processing will not be fully visited. The dispatcher maintains a\ncursor through the dataset's splits. Assuming fault tolerance is enabled (See\n\"Fault Tolerance\" below), the dispatcher will store cursor state in write-ahead\nlogs so that the cursor can be restored in case the dispatcher is restarted\nmid-training. This provides an at-most-once visitation guarantee in the presence\nof server restarts.\n\n#### Static Sharding\n\nThe following are static sharding policies. The semantics are similar to\n`tf.data.experimental.AutoShardPolicy`. These policies require:\n\n * The tf.data service cluster is configured with a fixed list of workers\n in DispatcherConfig.\n * Each client only reads from the local tf.data service worker.\n\nIf a worker is restarted while performing static sharding, the worker will\nbegin processing its shard again from the beginning.\n\nFILE: Shards by input files (i.e. each worker will get a fixed set of files to\nprocess). When this option is selected, make sure that there is at least as\nmany files as workers. If there are fewer input files than workers, a runtime\nerror will be raised.\n\nDATA: Shards by elements produced by the dataset. Each worker will process the\nwhole dataset and discard the portion that is not for itself. Note that for\nthis mode to correctly partition the dataset elements, the dataset needs to\nproduce elements in a deterministic order.\n\nFILE_OR_DATA: Attempts FILE-based sharding, falling back to DATA-based\nsharding on failure.\n\nHINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a\nplaceholder to replace with `shard(num_workers, worker_index)`.\n\nFor backwards compatibility, `processing_mode` may also be set to the strings\n`\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively equivalent\nto `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n\n### Coordinated Data Read\n\nBy default, when multiple consumers read from the same job, they receive data on\na first-come first-served basis. In some use cases, it is advantageous to\ncoordinate the consumers. At each step, consumers read data from the same\nworker.\n\nFor example, the tf.data service can be used to coordinate example sizes across\na cluster during synchronous training, so that during each step all replicas\ntrain on similar-sized elements. To achieve this, define a dataset which\ngenerates rounds of `num_consumers` consecutive similar-sized batches, then\nenable coordinated reads by setting `consumer_index` and `num_consumers`.\n\nNOTE: To keep consumers in sync, coordinated reads require that the dataset have\ninfinite cardinality. You can get this by adding `.repeat()` at the end of the\ndataset definition.\n\n### Jobs\n\nA tf.data service \"job\" refers to the process of reading from a dataset managed\nby the tf.data service, using one or more data consumers. Jobs are created when\niterating over datasets that read from tf.data service. The data produced by a\njob is determined by (1) dataset associated with the job and (2) the job's\nprocessing mode. For example, if a job is created for the dataset\n`Dataset.range(5)`, and the processing mode is `ShardingPolicy.OFF`, each\ntf.data worker will produce the elements `{0, 1, 2, 3, 4}` for the job,\nresulting in the\njob producing `5 * num_workers` elements. If the processing mode is\n`ShardingPolicy.DYNAMIC`, the job will only produce `5` elements.\n\nOne or more consumers can consume data from a job. By default, jobs are\n\"anonymous\", meaning that only the consumer which created the job can read from\nit. To share the output of a job across multiple consumers, you can set a common\n`job_name`.\n\n### Fault Tolerance\n\nBy default, the tf.data dispatch server stores its state in-memory, making it a\nsingle point of failure during training. To avoid this, pass\n`fault_tolerant_mode=True` when creating your `DispatchServer`. Dispatcher\nfault tolerance requires `work_dir` to be configured and accessible from the\ndispatcher both before and after restart (e.g. a GCS path). With fault tolerant\nmode enabled, the dispatcher will journal its state to the work directory so\nthat no state is lost when the dispatcher is restarted.\n\nWorkerServers may be freely restarted, added, or removed during training. At\nstartup, workers will register with the dispatcher and begin processing all\noutstanding jobs from the beginning.\n\n### Usage with tf.distribute\n\ntf.distribute is the TensorFlow API for distributed training. There are\nseveral ways to use tf.data with tf.distribute:\n`strategy.experimental_distribute_dataset`,\n`strategy.distribute_datasets_from_function`, and (for PSStrategy)\n`coordinator.create_per_worker_dataset`. The following sections give code\nexamples for each.\n\nIn general we recommend using\n`tf.data.experimental.service.{register_dataset,from_dataset_id}` over\n`tf.data.experimental.service.distribute` for two reasons:\n\n- The dataset only needs to be constructed and optimized once, instead of once\n per worker. This can significantly reduce startup time, because the current\n `experimental_distribute_dataset` and `distribute_datasets_from_function`\n implementations create and optimize worker datasets sequentially.\n- If a dataset depends on lookup tables or variables that are only present on\n one host, the dataset needs to be registered from that host. Typically this\n only happens when resources are placed on the chief or worker 0. Registering\n the dataset from the chief will avoid issues with depending on remote\n resources.\n\n#### strategy.experimental_distribute_dataset\n\nNothing special is required when using\n`strategy.experimental_distribute_dataset`, just apply `register_dataset` and\n`from_dataset_id` as above, making sure to specify a `job_name` so that all\nworkers consume from the same tf.data service job.\n\n```\ndataset = ... # Define your dataset here.\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\ndataset = tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = strategy.experimental_distribute_dataset(dataset)\n```\n\n#### strategy.distribute_datasets_from_function\n\nFirst, make sure the dataset produced by the `dataset_fn` does not depend on the\n`input_context` for the training worker on which it is run. Instead of each\nworker building its own (sharded) dataset, one worker should register an\nunsharded dataset, and the remaining workers should consume data from that\ndataset.\n\n```\ndataset = dataset_fn()\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n\ndef new_dataset_fn(input_context):\n del input_context\n return tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = strategy.distribute_datasets_from_function(new_dataset_fn)\n```\n\n#### coordinator.create_per_worker_dataset\n\n`create_per_worker_dataset` works the same as\n`distribute_datasets_from_function`.\n\n```\ndataset = dataset_fn()\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n\ndef new_dataset_fn(input_context):\n del input_context\n return tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = coordinator.create_per_worker_dataset(new_dataset_fn)\n```\n\n## Limitations\n\n- Python-based data processing: Datasets which use Python-based data processing\n (e.g. `tf.py_function`, `tf.numpy_function`, or\n `tf.data.Dataset.from_generator`) are currently not supported.\n- Non-Serializable Resources: Datasets may only depend on TF resources that\n support serialization. Serialization is currently supported for lookup\n tables and variables. If your dataset depends on a TF resource that cannot be\n serialized, please file a Github issue.\n- Remote Resources: If a dataset depends on a resource, the dataset must be\n registered from the same process that created the resource (e.g. the \"chief\"\n job of ParameterServerStrategy).\n\n", "desc": "API for using the tf.data service.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service.DispatcherConfig", "docs": "Configuration class for tf.data service dispatchers.\n\n Fields:\n port: Specifies the port to bind to. A value of 0 indicates that the server\n may bind to any available port.\n protocol: The protocol to use for communicating with the tf.data service,\n e.g. \"grpc\".\n work_dir: A directory to store dispatcher state in. This\n argument is required for the dispatcher to be able to recover from\n restarts.\n fault_tolerant_mode: Whether the dispatcher should write its state to a\n journal so that it can recover from restarts. Dispatcher state, including\n registered datasets and created jobs, is synchronously written to the\n journal before responding to RPCs. If `True`, `work_dir` must also be\n specified.\n worker_addresses: If the job uses auto-sharding, it needs to specify a fixed\n list of worker addresses that will register with the dispatcher. The\n worker addresses should be in the format `\"host\"` or `\"host:port\"`, where\n `\"port\"` is an integer, named port, or `%port%` to match any port.\n job_gc_check_interval_ms: How often the dispatcher should scan through to\n delete old and unused jobs, in milliseconds. If not set, the runtime will\n select a reasonable default. A higher value will reduce load on the\n dispatcher, while a lower value will reduce the time it takes for the\n dispatcher to garbage collect expired jobs.\n job_gc_timeout_ms: How long a job needs to be unused before it becomes a\n candidate for garbage collection, in milliseconds. A value of -1 indicates\n that jobs should never be garbage collected. If not set, the runtime will\n select a reasonable default. A higher value will cause jobs to stay around\n longer with no consumers. This is useful if there is a large gap in\n time between when consumers read from the job. A lower value will reduce\n the time it takes to reclaim the resources from expired jobs.\n ", "desc": "Configuration class for tf.data service dispatchers.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service.distribute", "docs": "A transformation that moves dataset processing to the tf.data service.\n\n When you iterate over a dataset containing the `distribute` transformation,\n the tf.data service creates a \"job\" which produces data for the dataset\n iteration.\n\n The tf.data service uses a cluster of workers to prepare data for training\n your model.\n The `processing_mode` argument to `tf.data.experimental.service.distribute`\n describes how to leverage multiple workers to process the input dataset.\n Currently, there are two processing modes to choose from: \"distributed_epoch\"\n and \"parallel_epochs\".\n\n \"distributed_epoch\" means that the dataset will be split across all tf.data\n service workers.\n The dispatcher produces \"splits\" for the dataset and sends them to workers for\n further processing. For example, if a dataset begins with a list of filenames,\n the dispatcher will iterate through the filenames and send the filenames to\n tf.data workers, which will perform the rest of the dataset transformations on\n those files. \"distributed_epoch\" is useful when your model needs to see each\n element of the dataset exactly once, or if it needs to see the data in a\n generally-sequential order. \"distributed_epoch\" only works for datasets with\n splittable sources, such as `Dataset.from_tensor_slices`,\n `Dataset.list_files`, or `Dataset.range`.\n\n \"parallel_epochs\" means that the entire input dataset will be processed\n independently by each of the tf.data service workers.\n For this reason, it is important to shuffle data (e.g. filenames)\n non-deterministically, so that each worker will process the elements of the\n dataset in a different order. \"parallel_epochs\" can be used to distribute\n datasets that aren't splittable.\n\n With two workers, \"parallel_epochs\" will produce every element of the dataset\n twice:\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> # Start two workers\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"parallel_epochs\", service=dispatcher.target))\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]\n\n \"distributed_epoch\", on the other hand, will still produce each element once:\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"distributed_epoch\", service=dispatcher.target))\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n When using `apply(tf.data.experimental.service.distribute(...))`, the dataset\n before the `apply` transformation executes within the tf.data service, while\n the operations after `apply` happen within the local process.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(5)\n >>> dataset = dataset.map(lambda x: x*x)\n >>> dataset = dataset.apply(\n ... tf.data.experimental.service.distribute(\"parallel_epochs\",\n ... dispatcher.target))\n >>> dataset = dataset.map(lambda x: x+1)\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [1, 1, 2, 2, 5, 5, 10, 10, 17, 17]\n\n In the above example, the dataset operations (before applying the `distribute`\n function on the elements) will be executed on the tf.data workers,\n and the elements are provided over RPC. The remaining transformations\n (after the call to `distribute`) will be executed locally. The dispatcher\n and the workers will bind to usused free ports (which are chosen at random),\n in order to communicate with each other. However, to bind them to specific\n ports, the `port` parameter can be passed.\n\n The `job_name` argument allows jobs to be shared across multiple\n datasets. Instead of each dataset creating its own job, all\n datasets with the same `job_name` will consume from the same job. A new job\n will be created for each iteration of the dataset (with each repetition of\n `Dataset.repeat` counting as a new iteration). Suppose the `DispatchServer`\n is serving on `localhost:5000` and two training workers (in either a single\n client or multi-client setup) iterate over the below dataset, and there is a\n single tf.data worker:\n\n ```\n range5_dataset = tf.data.Dataset.range(5)\n dataset = range5_dataset.apply(tf.data.experimental.service.distribute(\n \"parallel_epochs\", \"localhost:5000\", job_name=\"my_job_name\"))\n for iteration in range(3):\n print(list(dataset))\n ```\n\n The elements of each job will be split between the two processes, with\n elements being consumed by the processes on a first-come first-served basis.\n One possible result is that process 1 prints\n\n ```\n [0, 2, 4]\n [0, 1, 3]\n [1]\n ```\n\n and process 2 prints\n\n ```\n [1, 3]\n [2, 4]\n [0, 2, 3, 4]\n ```\n\n Job names must not be re-used across different training jobs within the\n lifetime of the tf.data service. In general, the tf.data service is expected\n to live for the duration of a single training job.\n To use the tf.data service with multiple training jobs, make sure to use\n different job names to avoid conflicts. For example, suppose a training job\n calls `distribute` with `job_name=\"job\"` and reads until end of input. If\n another independent job connects to the same tf.data service and tries to read\n from `job_name=\"job\"`, it will immediately receive end of input, without\n getting any data.\n\n **Coordinated data read**\n\n By default, when multiple consumers read from the same job, they receive data\n on a first-come first-served basis. In some use cases, it is advantageous to\n coordinate the consumers. At each step, consumers read data from the same\n worker.\n\n For example, the tf.data service can be used to coordinate example sizes\n across a cluster during synchronous training, so that during each step all\n replicas train on similar-sized elements. To achieve this, define a dataset\n which generates rounds of `num_consumers` consecutive similar-sized batches,\n then enable coordinated reads by setting `consumer_index` and `num_consumers`.\n\n NOTE: To keep consumers in sync, round robin data consumption requires that\n the dataset have infinite cardinality. You can get this by adding `.repeat()`\n at the end of the dataset definition.\n\n **Keras and Distribution Strategies**\n\n The dataset produced by the `distribute` transformation can be passed to\n Keras' `Model.fit` or Distribution Strategy's\n `tf.distribute.Strategy.experimental_distribute_dataset` like any other\n `tf.data.Dataset`. We recommend setting a `job_name` on the call to\n `distribute` so that if there are multiple workers, they read data from the\n same job. Note that the autosharding normally performed by\n `experimental_distribute_dataset` will be disabled when setting a `job_name`,\n since sharing the job already results in splitting data across the workers.\n When using a shared job, data will be dynamically balanced across workers, so\n that they reach end of input about the same time. This results in better\n worker utilization than with autosharding, where each worker processes an\n independent set of files, and some workers may run out of data earlier than\n others.\n\n Args:\n processing_mode: A `tf.data.experimental.service.ShardingPolicy` specifying\n how to shard the dataset among tf.data workers. See\n `tf.data.experimental.service.ShardingPolicy` for details. For backwards\n compatibility, `processing_mode` may also be set to the strings\n `\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively\n equivalent to `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n job_name: (Optional.) The name of the job. If provided, it must be a\n non-empty string. This argument makes it possible for multiple datasets to\n share the same job. The default behavior is that the dataset creates\n anonymous, exclusively owned jobs.\n consumer_index: (Optional.) The index of the consumer in the range from `0`\n to `num_consumers`. Must be specified alongside `num_consumers`. When\n specified, consumers will read from the job in a strict round-robin order,\n instead of the default first-come-first-served order.\n num_consumers: (Optional.) The number of consumers which will consume from\n the job. Must be specified alongside `consumer_index`. When specified,\n consumers will read from the job in a strict round-robin order, instead of\n the default first-come-first-served order. When `num_consumers` is\n specified, the dataset must have infinite cardinality to prevent a\n producer from running out of data early and causing consumers to go out of\n sync.\n max_outstanding_requests: (Optional.) A limit on how many elements may be\n requested at the same time. You can use this option to control the amount\n of memory used, since `distribute` won't use more than `element_size` *\n `max_outstanding_requests` of memory.\n data_transfer_protocol: (Optional.) The protocol to use for transferring\n data with the tf.data service. By default, data is transferred using gRPC.\n compression: How to compress the dataset's elements before transferring them\n over the network. \"AUTO\" leaves the decision of how to compress up to the\n tf.data service runtime. `None` indicates not to compress.\n target_workers: (Optional.) Which workers to read from. If `\"AUTO\"`, tf.data\n runtime decides which workers to read from. If `\"ANY\"`, reads from any\n tf.data service workers. If `\"LOCAL\"`, only reads from local in-processs\n tf.data service workers. `\"AUTO\"` works well for most cases, while users\n can specify other targets. For example, `\"LOCAL\"` helps avoid RPCs and\n data copy if every TF worker colocates with a tf.data service worker.\n Consumers of a shared job must use the same `target_workers`. Defaults to\n `\"AUTO\"`.\n\n Returns:\n Dataset: A `Dataset` of the elements produced by the data service.\n ", "desc": "A transformation that moves dataset processing to the tf.data service.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service.from_dataset_id", "docs": "Creates a dataset which reads data from the tf.data service.\n\n This is useful when the dataset is registered by one process, then used in\n another process. When the same process is both registering and reading from\n the dataset, it is simpler to use `tf.data.experimental.service.distribute`\n instead.\n\n Before using `from_dataset_id`, the dataset must have been registered with the\n tf.data service using `tf.data.experimental.service.register_dataset`.\n `register_dataset` returns a dataset id for the registered dataset. That is\n the `dataset_id` which should be passed to `from_dataset_id`.\n\n The `element_spec` argument indicates the `tf.TypeSpec`s for the elements\n produced by the dataset. Currently `element_spec` must be explicitly\n specified, and match the dataset registered under `dataset_id`. `element_spec`\n defaults to `None` so that in the future we can support automatically\n discovering the `element_spec` by querying the tf.data service.\n\n `tf.data.experimental.service.distribute` is a convenience method which\n combines `register_dataset` and `from_dataset_id` into a dataset\n transformation.\n See the documentation for `tf.data.experimental.service.distribute` for more\n detail about how `from_dataset_id` works.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset_id = tf.data.experimental.service.register_dataset(\n ... dispatcher.target, dataset)\n >>> dataset = tf.data.experimental.service.from_dataset_id(\n ... processing_mode=\"parallel_epochs\",\n ... service=dispatcher.target,\n ... dataset_id=dataset_id,\n ... element_spec=dataset.element_spec)\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n Args:\n processing_mode: A `tf.data.experimental.service.ShardingPolicy` specifying\n how to shard the dataset among tf.data workers. See\n `tf.data.experimental.service.ShardingPolicy` for details. For backwards\n compatibility, `processing_mode` may also be set to the strings\n `\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively\n equivalent to `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n dataset_id: The id of the dataset to read from. This id is returned by\n `register_dataset` when the dataset is registered with the tf.data\n service.\n element_spec: A nested structure of `tf.TypeSpec`s representing the type of\n elements produced by the dataset. This argument is only required inside a\n tf.function. Use `tf.data.Dataset.element_spec` to get the element spec\n for a given dataset.\n job_name: (Optional.) The name of the job. If provided, it must be a\n non-empty string. This argument makes it possible for multiple datasets to\n share the same job. The default behavior is that the dataset creates\n anonymous, exclusively owned jobs.\n consumer_index: (Optional.) The index of the consumer in the range from `0`\n to `num_consumers`. Must be specified alongside `num_consumers`. When\n specified, consumers will read from the job in a strict round-robin order,\n instead of the default first-come-first-served order.\n num_consumers: (Optional.) The number of consumers which will consume from\n the job. Must be specified alongside `consumer_index`. When specified,\n consumers will read from the job in a strict round-robin order, instead of\n the default first-come-first-served order. When `num_consumers` is\n specified, the dataset must have infinite cardinality to prevent a\n producer from running out of data early and causing consumers to go out of\n sync.\n max_outstanding_requests: (Optional.) A limit on how many elements may be\n requested at the same time. You can use this option to control the amount\n of memory used, since `distribute` won't use more than `element_size` *\n `max_outstanding_requests` of memory.\n data_transfer_protocol: (Optional.) The protocol to use for transferring\n data with the tf.data service. By default, data is transferred using gRPC.\n target_workers: (Optional.) Which workers to read from. If `\"AUTO\"`, tf.data\n runtime decides which workers to read from. If `\"ANY\"`, reads from any\n tf.data service workers. If `\"LOCAL\"`, only reads from local in-processs\n tf.data service workers. `\"AUTO\"` works well for most cases, while users\n can specify other targets. For example, `\"LOCAL\"` helps avoid RPCs and\n data copy if every TF worker colocates with a tf.data service worker.\n Consumers of a shared job must use the same `target_workers`. Defaults to\n `\"AUTO\"`.\n\n Returns:\n A `tf.data.Dataset` which reads from the tf.data service.\n ", "desc": "Creates a dataset which reads data from the tf.data service.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service.register_dataset", "docs": "Registers a dataset with the tf.data service.\n\n `register_dataset` registers a dataset with the tf.data service so that\n datasets can be created later with\n `tf.data.experimental.service.from_dataset_id`. This is useful when the\n dataset\n is registered by one process, then used in another process. When the same\n process is both registering and reading from the dataset, it is simpler to use\n `tf.data.experimental.service.distribute` instead.\n\n If the dataset is already registered with the tf.data service,\n `register_dataset` returns the already-registered dataset's id.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset_id = tf.data.experimental.service.register_dataset(\n ... dispatcher.target, dataset)\n >>> dataset = tf.data.experimental.service.from_dataset_id(\n ... processing_mode=\"parallel_epochs\",\n ... service=dispatcher.target,\n ... dataset_id=dataset_id,\n ... element_spec=dataset.element_spec)\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n Args:\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n dataset: A `tf.data.Dataset` to register with the tf.data service.\n compression: (Optional.) How to compress the dataset's elements before\n transferring them over the network. \"AUTO\" leaves the decision of how to\n compress up to the tf.data service runtime. `None` indicates not to\n compress.\n\n Returns:\n A scalar int64 tensor of the registered dataset's id.\n ", "desc": "Registers a dataset with the tf.data service.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.service.WorkerConfig", "docs": "Configuration class for tf.data service dispatchers.\n\n Fields:\n dispatcher_address: Specifies the address of the dispatcher.\n worker_address: Specifies the address of the worker server. This address is\n passed to the dispatcher so that the dispatcher can tell clients how to\n connect to this worker.\n port: Specifies the port to bind to. A value of 0 indicates that the worker\n can bind to any available port.\n protocol: (Optional.) Specifies the protocol to be used by the server, e.g.\n \"grpc\".\n heartbeat_interval_ms: How often the worker should heartbeat to the\n dispatcher, in milliseconds. If not set, the runtime will select a\n reasonable default. A higher value will reduce the load on the dispatcher,\n while a lower value will reduce the time it takes to reclaim resources\n from finished jobs.\n dispatcher_timeout_ms: How long, in milliseconds, to retry requests to the\n dispatcher before giving up and reporting an error. Defaults to 1 hour.\n ", "desc": "Configuration class for tf.data service dispatchers.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.shuffle_and_repeat", "docs": "Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.\n\n>>> d = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n>>> d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))\n>>> [elem.numpy() for elem in d] # doctest: +SKIP\n[2, 3, 1, 1, 3, 2]\n\n```python\ndataset.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))\n```\n\nproduces the same output as\n\n```python\ndataset.shuffle(\n buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)\n```\n\nIn each repetition, this dataset fills a buffer with `buffer_size` elements,\nthen randomly samples elements from this buffer, replacing the selected\nelements with new elements. For perfect shuffling, set the buffer size equal\nto the full size of the dataset.\n\nFor instance, if your dataset contains 10,000 elements but `buffer_size` is\nset to 1,000, then `shuffle` will initially select a random element from\nonly the first 1,000 elements in the buffer. Once an element is selected,\nits space in the buffer is replaced by the next (i.e. 1,001-st) element,\nmaintaining the 1,000 element buffer.\n\nArgs:\n buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the maximum\n number elements that will be buffered when prefetching.\n count: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number\n of times the dataset should be repeated. The default behavior (if `count`\n is `None` or `-1`) is for the dataset be repeated indefinitely.\n seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random\n seed that will be used to create the distribution. See\n `tf.random.set_seed` for behavior.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.snapshot", "docs": "API to persist the output of the input dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.snapshot(...)`.\n\nThe snapshot API allows users to transparently persist the output of their\npreprocessing pipeline to disk, and materialize the pre-processed data on a\ndifferent training run.\n\nThis API enables repeated preprocessing steps to be consolidated, and allows\nre-use of already processed data, trading off disk storage and network\nbandwidth for freeing up more valuable CPU resources and accelerator compute\ntime.\n\nhttps://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md\nhas detailed design documentation of this feature.\n\nUsers can specify various options to control the behavior of snapshot,\nincluding how snapshots are read from and written to by passing in\nuser-defined functions to the `reader_func` and `shard_func` parameters.\n\n`shard_func` is a user specified function that maps input elements to snapshot\nshards.\n\nUsers may want to specify this function to control how snapshot files should\nbe written to disk. Below is an example of how a potential shard_func could\nbe written.\n\n```python\ndataset = ...\ndataset = dataset.enumerate()\ndataset = dataset.apply(tf.data.experimental.snapshot(\"/path/to/snapshot/dir\",\n shard_func=lambda x, y: x % NUM_SHARDS, ...))\ndataset = dataset.map(lambda x, y: y)\n```\n\n`reader_func` is a user specified function that accepts a single argument:\n(1) a Dataset of Datasets, each representing a \"split\" of elements of the\noriginal dataset. The cardinality of the input dataset matches the\nnumber of the shards specified in the `shard_func` (see above). The function\nshould return a Dataset of elements of the original dataset.\n\nUsers may want specify this function to control how snapshot files should be\nread from disk, including the amount of shuffling and parallelism.\n\nHere is an example of a standard reader function a user can define. This\nfunction enables both dataset shuffling and parallel reading of datasets:\n\n```python\ndef user_reader_func(datasets):\n # shuffle the datasets splits\n datasets = datasets.shuffle(NUM_CORES)\n # read datasets in parallel and interleave their elements\n return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)\n\ndataset = dataset.apply(tf.data.experimental.snapshot(\"/path/to/snapshot/dir\",\n reader_func=user_reader_func))\n```\n\nBy default, snapshot parallelizes reads by the number of cores available on\nthe system, but will not attempt to shuffle the data.\n\nArgs:\n path: Required. A directory to use for storing / loading the snapshot to /\n from.\n compression: Optional. The type of compression to apply to the snapshot\n written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None.\n Defaults to AUTO, which attempts to pick an appropriate compression\n algorithm for the dataset.\n reader_func: Optional. A function to control how to read data from snapshot\n shards.\n shard_func: Optional. A function to control how to shard data when writing a\n snapshot.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "API to persist the output of the input dataset. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.SparseTensorStructure", "docs": "DEPRECATED FUNCTION\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.SparseTensorSpec` instead.", "desc": "DEPRECATED FUNCTION", "type": "API"}, {"name": "tf.compat.v1.data.experimental.SqlDataset", "docs": "A `Dataset` consisting of the results from a SQL query.", "desc": "A `Dataset` consisting of the results from a SQL query.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.Structure", "docs": "Specifies a TensorFlow value type.\n\n A `tf.TypeSpec` provides metadata describing an object accepted or returned\n by TensorFlow APIs. Concrete subclasses, such as `tf.TensorSpec` and\n `tf.RaggedTensorSpec`, are used to describe different value types.\n\n For example, `tf.function`'s `input_signature` argument accepts a list\n (or nested structure) of `TypeSpec`s.\n\n Creating new subclasses of `TypeSpec` (outside of TensorFlow core) is not\n currently supported. In particular, we may make breaking changes to the\n private methods and properties defined by this base class.\n\n Example:\n\n >>> spec = tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)\n >>> @tf.function(input_signature=[spec])\n ... def double(x):\n ... return x * 2\n >>> print(double(tf.ragged.constant([[1, 2], [3]])))\n \n ", "desc": "Specifies a TensorFlow value type.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.take_while", "docs": "A transformation that stops dataset iteration based on a `predicate`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.take_while(...)\n\nArgs:\n predicate: A function that maps a nested structure of tensors (having shapes\n and types defined by `self.output_shapes` and `self.output_types`) to a\n scalar `tf.bool` tensor.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that stops dataset iteration based on a `predicate`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.TensorArrayStructure", "docs": "DEPRECATED FUNCTION\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.TensorArraySpec` instead.", "desc": "DEPRECATED FUNCTION", "type": "API"}, {"name": "tf.compat.v1.data.experimental.TensorStructure", "docs": "DEPRECATED FUNCTION\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.TensorSpec` instead.", "desc": "DEPRECATED FUNCTION", "type": "API"}, {"name": "tf.compat.v1.data.experimental.TFRecordWriter", "docs": "Writes a dataset to a TFRecord file. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo write TFRecords to disk, use `tf.io.TFRecordWriter`. To save and load the contents of a dataset, use `tf.data.experimental.save` and `tf.data.experimental.load`\n\nThe elements of the dataset must be scalar strings. To serialize dataset\nelements as strings, you can use the `tf.io.serialize_tensor` function.\n\n```python\ndataset = tf.data.Dataset.range(3)\ndataset = dataset.map(tf.io.serialize_tensor)\nwriter = tf.data.experimental.TFRecordWriter(\"/path/to/file.tfrecord\")\nwriter.write(dataset)\n```\n\nTo read back the elements, use `TFRecordDataset`.\n\n```python\ndataset = tf.data.TFRecordDataset(\"/path/to/file.tfrecord\")\ndataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64))\n```\n\nTo shard a `dataset` across multiple TFRecord files:\n\n```python\ndataset = ... # dataset to be written\n\ndef reduce_func(key, dataset):\n filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)])\n writer = tf.data.experimental.TFRecordWriter(filename)\n writer.write(dataset.map(lambda _, x: x))\n return tf.data.Dataset.from_tensors(filename)\n\ndataset = dataset.enumerate()\ndataset = dataset.apply(tf.data.experimental.group_by_window(\n lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max\n))\n\n# Iterate through the dataset to trigger data writing.\nfor _ in dataset:\n pass\n```", "desc": "Writes a dataset to a TFRecord file. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.ThreadingOptions", "docs": "Represents options for dataset threading.\n\n You can set the threading options of a dataset through the\n `experimental_threading` property of `tf.data.Options`; the property is\n an instance of `tf.data.ThreadingOptions`.\n\n ```python\n options = tf.data.Options()\n options.threading.private_threadpool_size = 10\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for dataset threading.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.to_variant", "docs": "Returns a variant representing the given dataset.\n\n Args:\n dataset: A `tf.data.Dataset`.\n\n Returns:\n A scalar `tf.variant` tensor representing the given dataset.\n ", "desc": "Returns a variant representing the given dataset.", "type": "API"}, {"name": "tf.compat.v1.data.experimental.unbatch", "docs": "Splits elements of a dataset into multiple elements on the batch dimension. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.unbatch()`.\n\nFor example, if elements of the dataset are shaped `[B, a0, a1, ...]`,\nwhere `B` may vary for each input element, then for each element in the\ndataset, the unbatched dataset will contain `B` consecutive elements\nof shape `[a0, a1, ...]`.\n\n```python\n# NOTE: The following example uses `{ ... }` to represent the contents\n# of a dataset.\na = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }\n\na.unbatch() == {\n 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'}\n```\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Splits elements of a dataset into multiple elements on the batch dimension. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.experimental.unique", "docs": "Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.unique(...)\n\nUse this transformation to produce a dataset that contains one instance of\neach unique element in the input. For example:\n\n```python\ndataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])\n\n# Using `unique()` will drop the duplicate elements.\ndataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 }\n```\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.data.FixedLengthRecordDataset", "docs": "A `Dataset` of fixed-length records from one or more binary files.", "desc": "A `Dataset` of fixed-length records from one or more binary files.", "type": "API"}, {"name": "tf.compat.v1.data.get_output_classes", "docs": "Returns the output classes for elements of the input dataset / iterator.\n\n Args:\n dataset_or_iterator: A `tf.data.Dataset` or `tf.data.Iterator`.\n\n Returns:\n A (nested) structure of Python `type` objects matching the structure of the\n dataset / iterator elements and specifying the class of the individual\n components.\n\n @compatibility(TF2)\n This is a legacy API for inspecting the type signature of dataset elements. In\n TF 2, you should use the `tf.data.Dataset.element_spec` attribute instead.\n @end_compatibility\n ", "desc": "Returns the output classes for elements of the input dataset / iterator.", "type": "API"}, {"name": "tf.compat.v1.data.get_output_shapes", "docs": "Returns the output shapes for elements of the input dataset / iterator.\n\n Args:\n dataset_or_iterator: A `tf.data.Dataset` or `tf.data.Iterator`.\n\n Returns:\n A (nested) structure of `tf.TensorShape` objects matching the structure of\n the dataset / iterator elements and specifying the shape of the individual\n components.\n\n @compatibility(TF2)\n This is a legacy API for inspecting the type signature of dataset elements. In\n TF 2, you should use the `tf.data.Dataset.element_spec` attribute instead.\n @end_compatibility\n ", "desc": "Returns the output shapes for elements of the input dataset / iterator.", "type": "API"}, {"name": "tf.compat.v1.data.get_output_types", "docs": "Returns the output shapes for elements of the input dataset / iterator.\n\n Args:\n dataset_or_iterator: A `tf.data.Dataset` or `tf.data.Iterator`.\n\n Returns:\n A (nested) structure of `tf.DType` objects matching the structure of\n dataset / iterator elements and specifying the shape of the individual\n components.\n\n @compatibility(TF2)\n This is a legacy API for inspecting the type signature of dataset elements. In\n TF 2, you should use the `tf.data.Dataset.element_spec` attribute instead.\n @end_compatibility\n ", "desc": "Returns the output shapes for elements of the input dataset / iterator.", "type": "API"}, {"name": "tf.compat.v1.data.Iterator", "docs": "Represents the state of iterating through a `Dataset`.", "desc": "Represents the state of iterating through a `Dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.make_initializable_iterator", "docs": "Creates an iterator for elements of `dataset`.\n\n Note: The returned iterator will be in an uninitialized state,\n and you must run the `iterator.initializer` operation before using it:\n\n ```python\n dataset = ...\n iterator = tf.compat.v1.data.make_initializable_iterator(dataset)\n # ...\n sess.run(iterator.initializer)\n ```\n\n Args:\n dataset: A `tf.data.Dataset`.\n shared_name: (Optional.) If non-empty, the returned iterator will be shared\n under the given name across multiple sessions that share the same devices\n (e.g. when using a remote server).\n\n Returns:\n A `tf.data.Iterator` for elements of `dataset`.\n\n Raises:\n RuntimeError: If eager execution is enabled.\n\n @compatibility(TF2)\n This is a legacy API for consuming dataset elements and should only be used\n during transition from TF 1 to TF 2. Note that using this API should be\n a transient state of your code base as there are in general no guarantees\n about the interoperability of TF 1 and TF 2 code.\n\n In TF 2 datasets are Python iterables which means you can consume their\n elements using `for elem in dataset: ...` or by explicitly creating iterator\n via `iterator = iter(dataset)` and fetching its elements via\n `values = next(iterator)`.\n @end_compatibility\n ", "desc": "Creates an iterator for elements of `dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.make_one_shot_iterator", "docs": "Creates an iterator for elements of `dataset`.\n\n Note: The returned iterator will be initialized automatically.\n A \"one-shot\" iterator does not support re-initialization.\n\n Args:\n dataset: A `tf.data.Dataset`.\n\n Returns:\n A `tf.data.Iterator` for elements of `dataset`.\n\n @compatibility(TF2)\n This is a legacy API for consuming dataset elements and should only be used\n during transition from TF 1 to TF 2. Note that using this API should be\n a transient state of your code base as there are in general no guarantees\n about the interoperability of TF 1 and TF 2 code.\n\n In TF 2 datasets are Python iterables which means you can consume their\n elements using `for elem in dataset: ...` or by explicitly creating iterator\n via `iterator = iter(dataset)` and fetching its elements via\n `values = next(iterator)`.\n @end_compatibility\n ", "desc": "Creates an iterator for elements of `dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.Options", "docs": "Represents options for `tf.data.Dataset`.\n\n A `tf.data.Options` object can be, for instance, used to control which static\n optimizations to apply to the input pipeline graph or whether to use\n performance modeling to dynamically tune the parallelism of operations such as\n `tf.data.Dataset.map` or `tf.data.Dataset.interleave`.\n\n The options are set for the entire dataset and are carried over to datasets\n created through tf.data transformations.\n\n The options can be set by constructing an `Options` object and using the\n `tf.data.Dataset.with_options(options)` transformation, which returns a\n dataset with the options set.\n\n >>> dataset = tf.data.Dataset.range(42)\n >>> options = tf.data.Options()\n >>> options.deterministic = False\n >>> dataset = dataset.with_options(options)\n >>> print(dataset.options().deterministic)\n False\n\n Note: A known limitation of the `tf.data.Options` implementation is that the\n options are not preserved across tf.function boundaries. In particular, to\n set options for a dataset that is iterated within a tf.function, the options\n need to be set within the same tf.function.\n ", "desc": "Represents options for `tf.data.Dataset`.", "type": "API"}, {"name": "tf.compat.v1.data.TextLineDataset", "docs": "A `Dataset` comprising lines from one or more text files.", "desc": "A `Dataset` comprising lines from one or more text files.", "type": "API"}, {"name": "tf.compat.v1.data.TFRecordDataset", "docs": "A `Dataset` comprising records from one or more TFRecord files.", "desc": "A `Dataset` comprising records from one or more TFRecord files.", "type": "API"}, {"name": "tf.compat.v1.debugging", "docs": "Public API for tf.debugging namespace.\n", "desc": "Public API for tf.debugging namespace.", "type": "API"}, {"name": "tf.compat.v1.debugging.Assert", "docs": "Asserts that the given condition is true.\n\nIf `condition` evaluates to false, print the list of tensors in `data`.\n`summarize` determines how many entries of the tensors to print.\n\nArgs:\n condition: The condition to evaluate.\n data: The tensors to print out when condition is false.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional).\n\nReturns:\n assert_op: An `Operation` that, when executed, raises a\n `tf.errors.InvalidArgumentError` if `condition` is not true.\n @compatibility(eager)\n returns None\n @end_compatibility\n\nRaises:\n @compatibility(TF1)\n When in TF V1 mode (that is, outside `tf.function`) Assert needs a control\n dependency on the output to ensure the assertion executes:\n\n```python\n# Ensure maximum element of x is smaller or equal to 1\nassert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])\nwith tf.control_dependencies([assert_op]):\n ... code using x ...\n```\n\n @end_compatibility\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Asserts that the given condition is true.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_all_finite", "docs": "Assert that the tensor does not contain any NaN's or Inf's.\n\n Args:\n t: Tensor to check.\n msg: Message to log on failure.\n name: A name for this operation (optional).\n x: Alias for t.\n message: Alias for msg.\n\n Returns:\n Same tensor as `t`.\n ", "desc": "Assert that the tensor does not contain any NaN's or Inf's.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_equal", "docs": "\n Assert the condition `x == y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] == y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x == y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x == y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_equal(a, b,\n ... message='\"a == b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 2]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 2], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_equal(a, b, message=\n ... '\"a == b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_greater", "docs": "\n Assert the condition `x > y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] > y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_greater\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_greater` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_greater` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_greater(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_greater(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_greater(a, b,\n ... message='\"a > b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[0, 1]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([0, 1], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_greater(a, b, message=\n ... '\"a > b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_greater_equal", "docs": "\n Assert the condition `x >= y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] >= y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_greater_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x >= y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x >= y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_greater_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_greater_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_greater_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_greater_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_greater_equal(a, b,\n ... message='\"a >= b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 0]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 0], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_greater_equal(a, b, message=\n ... '\"a >= b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_integer", "docs": "Assert that `x` is of integer dtype.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: `Tensor` whose basetype is integer and is not quantized.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_integer\".\n\n Raises:\n TypeError: If `x.dtype` is anything other than non-quantized integer.\n\n Returns:\n A `no_op` that does nothing. Type can be determined statically.\n ", "desc": "Assert that `x` is of integer dtype.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_less", "docs": "\n Assert the condition `x < y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] < y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_less\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_less` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_less` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_less(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_less(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_less(a, b,\n ... message='\"a < b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[2, 3]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([2, 3], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_less(a, b, message=\n ... '\"a < b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_less_equal", "docs": "\n Assert the condition `x <= y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] <= y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_less_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x <= y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x <= y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_less_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_less_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_less_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_less_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_less_equal(a, b,\n ... message='\"a <= b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[1, 3]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([1, 3], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_less_equal(a, b, message=\n ... '\"a <= b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_near", "docs": "Assert the condition `x` and `y` are close element-wise.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have\n\n ```tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i])```.\n\n If both `x` and `y` are empty, this is trivially satisfied.\n\n The default `atol` and `rtol` is `10 * eps`, where `eps` is the smallest\n representable positive number such that `1 + eps != 1`. This is about\n `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`.\n See `numpy.finfo`.\n\n Args:\n x: Float or complex `Tensor`.\n y: Float or complex `Tensor`, same `dtype` as, and broadcastable to, `x`.\n rtol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The relative tolerance. Default is `10 * eps`.\n atol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The absolute tolerance. Default is `10 * eps`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_near\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.\n\n @compatibility(numpy)\n Similar to `numpy.testing.assert_allclose`, except tolerance depends on data\n type. This is due to the fact that `TensorFlow` is often used with `32bit`,\n `64bit`, and even `16bit` data.\n @end_compatibility\n ", "desc": "Assert the condition `x` and `y` are close element-wise.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_negative", "docs": "\n Assert the condition `x < 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_negative(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_negative\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_non_negative", "docs": "\n Assert the condition `x >= 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_non_negative(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_non_negative\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x >= 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x >= 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_non_positive", "docs": "\n Assert the condition `x <= 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_non_positive(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_non_positive\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x <= 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x <= 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_none_equal", "docs": "\n Assert the condition `x != y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] != y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_none_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x != y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x != y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.assert_none_equal` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.assert_none_equal` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n tf.compat.v1.assert_none_equal(\n x=x, y=y, data=data, summarize=summarize,\n message=message, name=name)\n ```\n\n After:\n\n ```python\n tf.debugging.assert_none_equal(\n x=x, y=y, message=message,\n summarize=summarize, name=name)\n ```\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... a = tf.compat.v1.placeholder(tf.float32, [2])\n ... b = tf.compat.v1.placeholder(tf.float32, [2])\n ... result = tf.compat.v1.assert_none_equal(a, b,\n ... message='\"a != b\" does not hold for the given inputs')\n ... with tf.compat.v1.control_dependencies([result]):\n ... sum_node = a + b\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> val = sess.run(sum_node, feed_dict={a: [1, 2], b:[2, 1]})\n\n\n TF2:\n\n >>> a = tf.Variable([1, 2], dtype=tf.float32)\n >>> b = tf.Variable([2, 1], dtype=tf.float32)\n >>> assert_op = tf.debugging.assert_none_equal(a, b, message=\n ... '\"a != b\" does not hold for the given inputs')\n >>> # When working with tf.control_dependencies\n >>> with tf.control_dependencies([assert_op]):\n ... val = a + b\n\n @end_compatibility\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_positive", "docs": "\n Assert the condition `x > 0` holds element-wise.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.debugging.assert_positive(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`.\n If `x` is empty this is trivially satisfied.\n\n Args:\n x: Numeric `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_positive\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > 0` is False.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > 0` is False. The check can be performed immediately during\n eager execution or if `x` is statically known.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_proper_iterable", "docs": "Static assert that values is a \"proper\" iterable.\n\n `Ops` that expect iterables of `Tensor` can call this to validate input.\n Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves.\n\n Args:\n values: Object to be checked.\n\n Raises:\n TypeError: If `values` is not iterable or is one of\n `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.\n ", "desc": "Static assert that values is a \"proper\" iterable.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_rank", "docs": "Assert `x` has rank equal to `rank`.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar integer `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and the shape of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_rank\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank.\n ", "desc": "Assert `x` has rank equal to `rank`.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_rank_at_least", "docs": "Assert `x` has rank equal to `rank` or higher.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_at_least\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank or higher.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank.\n ", "desc": "Assert `x` has rank equal to `rank` or higher.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_rank_in", "docs": "Assert `x` has rank in `ranks`.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n ranks: Iterable of scalar `Tensor` objects.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_in\".\n\n Returns:\n Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`.\n If static checks determine `x` has matching rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has mismatched rank.\n ", "desc": "Assert `x` has rank in `ranks`.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_same_float_dtype", "docs": "Validate and return float type based on `tensors` and `dtype`.\n\n For ops such as matrix multiplication, inputs and weights must be of the\n same float type. This function validates that all `tensors` are the same type,\n validates that type is `dtype` (if supplied), and returns the type. Type must\n be a floating point type. If neither `tensors` nor `dtype` is supplied,\n the function will return `dtypes.float32`.\n\n Args:\n tensors: Tensors of input values. Can include `None` elements, which will be\n ignored.\n dtype: Expected type.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if neither `tensors` nor `dtype` is supplied, or result is not\n float, or the common type of the inputs is not a floating point type.\n ", "desc": "Validate and return float type based on `tensors` and `dtype`.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_scalar", "docs": "Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).\n\n This function raises `ValueError` unless it can be certain that the given\n `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is\n unknown.\n\n Args:\n tensor: A `Tensor`.\n name: A name for this operation. Defaults to \"assert_scalar\"\n message: A string to prefix to the default message.\n\n Returns:\n The input tensor (potentially converted to a `Tensor`).\n\n Raises:\n ValueError: If the tensor is not scalar (rank 0), or if its shape is\n unknown.\n ", "desc": "Asserts that the given `tensor` is a scalar (i.e. zero-dimensional).", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_shapes", "docs": "Assert tensor shapes and dimension size relationships between tensors.\n\n This Op checks that a collection of tensors shape relationships\n satisfies given constraints.\n\n Example:\n\n >>> n = 10\n >>> q = 3\n >>> d = 7\n >>> x = tf.zeros([n,q])\n >>> y = tf.ones([n,d])\n >>> param = tf.Variable([1.0, 2.0, 3.0])\n >>> scalar = 1.0\n >>> tf.debugging.assert_shapes([\n ... (x, ('N', 'Q')),\n ... (y, ('N', 'D')),\n ... (param, ('Q',)),\n ... (scalar, ()),\n ... ])\n\n >>> tf.debugging.assert_shapes([\n ... (x, ('N', 'D')),\n ... (y, ('N', 'D'))\n ... ])\n Traceback (most recent call last):\n ...\n ValueError: ...\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.assert_shapes(shapes)]):\n output = tf.matmul(x, y, transpose_a=True)\n ```\n\n If `x`, `y`, `param` or `scalar` does not have a shape that satisfies\n all specified constraints, `message`, as well as the first `summarize` entries\n of the first encountered violating tensor are printed, and\n `InvalidArgumentError` is raised.\n\n Size entries in the specified shapes are checked against other entries by\n their __hash__, except:\n - a size entry is interpreted as an explicit size if it can be parsed as an\n integer primitive.\n - a size entry is interpreted as *any* size if it is None or '.'.\n\n If the first entry of a shape is `...` (type `Ellipsis`) or '*' that indicates\n a variable number of outer dimensions of unspecified size, i.e. the constraint\n applies to the inner-most dimensions only.\n\n Scalar tensors and specified shapes of length zero (excluding the 'inner-most'\n prefix) are both treated as having a single dimension of size one.\n\n Args:\n shapes: A list of (`Tensor`, `shape`) tuples, wherein `shape` is the\n expected shape of `Tensor`. See the example code above. The `shape` must\n be an iterable. Each element of the iterable can be either a concrete\n integer value or a string that abstractly represents the dimension.\n For example,\n - `('N', 'Q')` specifies a 2D shape wherein the first and second\n dimensions of shape may or may not be equal.\n - `('N', 'N', 'Q')` specifies a 3D shape wherein the first and second\n dimensions are equal.\n - `(1, 'N')` specifies a 2D shape wherein the first dimension is\n exactly 1 and the second dimension can be any value.\n Note that the abstract dimension letters take effect across different\n tuple elements of the list. For example,\n `tf.debugging.assert_shapes([(x, ('N', 'A')), (y, ('N', 'B'))]` asserts\n that both `x` and `y` are rank-2 tensors and their first dimensions are\n equal (`N`).\n `shape` can also be a `tf.TensorShape`.\n data: The tensors to print out if the condition is False. Defaults to error\n message and first few entries of the violating tensor.\n summarize: Print this many entries of the tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_shapes\".\n\n Returns:\n Op raising `InvalidArgumentError` unless all shape constraints are\n satisfied.\n If static checks determine all constraints are satisfied, a `no_op` is\n returned.\n\n Raises:\n ValueError: If static checks determine any shape constraint is violated.\n ", "desc": "Assert tensor shapes and dimension size relationships between tensors.", "type": "API"}, {"name": "tf.compat.v1.debugging.assert_type", "docs": "Statically asserts that the given `Tensor` is of the specified type.\n\n Args:\n tensor: A `Tensor` or `SparseTensor`.\n tf_type: A tensorflow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,\n etc).\n message: A string to prefix to the default message.\n name: A name to give this `Op`. Defaults to \"assert_type\"\n\n Raises:\n TypeError: If the tensors data type doesn't match `tf_type`.\n\n Returns:\n A `no_op` that does nothing. Type can be determined statically.\n ", "desc": "Statically asserts that the given `Tensor` is of the specified type.", "type": "API"}, {"name": "tf.compat.v1.debugging.check_numerics", "docs": "Checks a tensor for NaN and Inf values.\n\n When run, reports an `InvalidArgument` error if `tensor` has any values\n that are not a number (NaN) or infinity (Inf). Otherwise, returns the input\n tensor.\n\n Example usage:\n\n ``` python\n a = tf.Variable(1.0)\n tf.debugging.check_numerics(a, message='')\n\n b = tf.Variable(np.nan)\n try:\n tf.debugging.check_numerics(b, message='Checking b')\n except Exception as e:\n assert \"Checking b : Tensor had NaN values\" in e.message\n\n c = tf.Variable(np.inf)\n try:\n tf.debugging.check_numerics(c, message='Checking c')\n except Exception as e:\n assert \"Checking c : Tensor had Inf values\" in e.message\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n message: A `string`. Prefix of the error message.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Checks a tensor for NaN and Inf values.", "type": "API"}, {"name": "tf.compat.v1.debugging.disable_check_numerics", "docs": "Disable the eager/graph unified numerics checking mechanism.\n\n This method can be used after a call to `tf.debugging.enable_check_numerics()`\n to disable the numerics-checking mechanism that catches infinity and NaN\n values output by ops executed eagerly or in tf.function-compiled graphs.\n\n This method is idempotent. Calling it multiple times has the same effect\n as calling it once.\n\n This method takes effect only on the thread in which it is called.\n ", "desc": "Disable the eager/graph unified numerics checking mechanism.", "type": "API"}, {"name": "tf.compat.v1.debugging.enable_check_numerics", "docs": "Enable tensor numerics checking in an eager/graph unified fashion.\n\n The numerics checking mechanism will cause any TensorFlow eager execution or\n graph execution to error out as soon as an op's output tensor contains\n infinity or NaN.\n\n This method is idempotent. Calling it multiple times has the same effect\n as calling it once.\n\n This method takes effect only on the thread in which it is called.\n\n When a op's float-type output tensor contains any Infinity or NaN, an\n `tf.errors.InvalidArgumentError` will be thrown, with an error message that\n reveals the following information:\n - The type of the op that generated the tensor with bad numerics.\n - Data type (dtype) of the tensor.\n - Shape of the tensor (to the extent known at the time of eager execution\n or graph construction).\n - Name of the containing graph (if available).\n - (Graph mode only): The stack trace of the intra-graph op's creation,\n with a stack-height limit and a path-length limit for visual clarity.\n The stack frames that belong to the user's code (as opposed to\n tensorflow's internal code) are highlighted with a text arrow (\"->\").\n - (Eager mode only): How many of the offending tensor's elements are\n `Infinity` and `NaN`, respectively.\n\n Once enabled, the check-numerics mechanism can be disabled by using\n `tf.debugging.disable_check_numerics()`.\n\n Example usage:\n\n 1. Catching infinity during the execution of a `tf.function` graph:\n\n ```py\n import tensorflow as tf\n\n tf.debugging.enable_check_numerics()\n\n @tf.function\n def square_log_x_plus_1(x):\n v = tf.math.log(x + 1)\n return tf.math.square(v)\n\n x = -1.0\n\n # When the following line runs, a function graph will be compiled\n # from the Python function `square_log_x_plus_1()`. Due to the\n # `enable_check_numerics()` call above, the graph will contain\n # numerics checking ops that will run during the function graph's\n # execution. The function call generates an -infinity when the Log\n # (logarithm) op operates on the output tensor of the Add op.\n # The program errors out at this line, printing an error message.\n y = square_log_x_plus_1(x)\n z = -y\n ```\n\n 2. Catching NaN during eager execution:\n\n ```py\n import numpy as np\n import tensorflow as tf\n\n tf.debugging.enable_check_numerics()\n\n x = np.array([[0.0, -1.0], [4.0, 3.0]])\n\n # The following line executes the Sqrt op eagerly. Due to the negative\n # element in the input array, a NaN is generated. Due to the\n # `enable_check_numerics()` call above, the program errors immediately\n # at this line, printing an error message.\n y = tf.math.sqrt(x)\n z = tf.matmul(y, y)\n ```\n\n NOTE: If your code is running on TPUs, be sure to call\n `tf.config.set_soft_device_placement(True)` before calling\n `tf.debugging.enable_check_numerics()` as this API uses automatic outside\n compilation on TPUs. For example:\n\n ```py\n tf.config.set_soft_device_placement(True)\n tf.debugging.enable_check_numerics()\n\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n strategy = tf.distribute.TPUStrategy(resolver)\n with strategy.scope():\n # ...\n ```\n\n Args:\n stack_height_limit: Limit to the height of the printed stack trace.\n Applicable only to ops in `tf.function`s (graphs).\n path_length_limit: Limit to the file path included in the printed stack\n trace. Applicable only to ops in `tf.function`s (graphs).\n ", "desc": "Enable tensor numerics checking in an eager/graph unified fashion.", "type": "API"}, {"name": "tf.compat.v1.debugging.experimental", "docs": "Public API for tf.debugging.experimental namespace.\n", "desc": "Public API for tf.debugging.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.debugging.experimental.disable_dump_debug_info", "docs": "Disable the currently-enabled debugging dumping.\n\n If the `enable_dump_debug_info()` method under the same Python namespace\n has been invoked before, calling this method disables it. If no call to\n `enable_dump_debug_info()` has been made, calling this method is a no-op.\n Calling this method more than once is idempotent.\n ", "desc": "Disable the currently-enabled debugging dumping.", "type": "API"}, {"name": "tf.compat.v1.debugging.experimental.enable_dump_debug_info", "docs": "Enable dumping debugging information from a TensorFlow program.\n\n The debugging information is dumped to a directory on the file system\n specified as `dump_root`.\n\n The dumped debugging information can be ingested by debugger UIs.\n\n The files in the dump directory contain the following information:\n - TensorFlow Function construction (e.g., compilation of Python functions\n decorated with @tf.function), the op types, names (if available), context,\n the input and output tensors, and the associated stack traces.\n - Execution of TensorFlow operations (ops) and Functions and their stack\n traces, op types, names (if available) and contexts. In addition,\n depending on the value of the `tensor_debug_mode` argument (see Args\n section below), the value(s) of the output tensors or more concise\n summaries of the tensor values will be dumped.\n - A snapshot of Python source files involved in the execution of the\n TensorFlow program.\n\n Once enabled, the dumping can be disabled with the corresponding\n `disable_dump_debug_info()` method under the same Python namespace.\n Calling this method more than once with the same `dump_root` is idempotent.\n Calling this method more than once with different `tensor_debug_mode`s\n leads to a `ValueError`.\n Calling this method more than once with different `circular_buffer_size`s\n leads to a `ValueError`.\n Calling this method with a different `dump_root` abolishes the\n previously-enabled `dump_root`.\n\n Usage example:\n\n ```py\n tf.debugging.experimental.enable_dump_debug_info('/tmp/my-tfdbg-dumps')\n\n # Code to build, train and run your TensorFlow model...\n ```\n\n NOTE: If your code is running on TPUs, be sure to call\n `tf.config.set_soft_device_placement(True)` before calling\n `tf.debugging.experimental.enable_dump_debug_info()` as this API uses\n automatic outside compilation on TPUs. For example:\n\n ```py\n tf.config.set_soft_device_placement(True)\n tf.debugging.experimental.enable_dump_debug_info(\n logdir, tensor_debug_mode=\"FULL_HEALTH\")\n\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n strategy = tf.distribute.TPUStrategy(resolver)\n with strategy.scope():\n # ...\n ```\n\n Args:\n dump_root: The directory path where the dumping information will be written.\n tensor_debug_mode: Debug mode for tensor values, as a string.\n The currently supported options are:\n - \"NO_TENSOR\": (Default) Only traces the output tensors of all executed\n ops (including those executed eagerly at the Python level or as a part\n of a TensorFlow graph) and functions, while not extracting any\n information from the values of the tensors.\n - \"CURT_HEALTH\": For each floating-dtype tensor (e.g., tensors of dtypes\n such as `float32`, `float64` and `bfloat16`), extracts a binary bit\n indicating whether it contains any -infinity, +infinity or NaN.\n - \"CONCISE_HEALTH\": For each floating-dtype tensor, extract total\n element count, and counts of -infinity, +infinity and NaN elements.\n - \"FULL_HEALTH\": For each floating-dtype tensor, extracts the dtype,\n rank (number of dimensions), total element count, and counts of\n -infinity, +infinity and NaN elements.\n - \"SHAPE\": For each tensor (regardless of dtype), extracts its dtype,\n rank, total element count and shape.\n circular_buffer_size: Size of the circular buffers for execution events.\n These circular buffers are designed to reduce the overhead of debugging\n dumping. They hold the most recent debug events concerning eager execution\n of ops and `tf.function`s and traces of tensor values computed inside\n `tf.function`s. They are written to the file system only when the proper\n flushing method is called (see description of return values below).\n Expected to be an integer. If <= 0, the circular-buffer behavior will be\n disabled, i.e., the execution debug events will be written to the file\n writers in the same way as non-execution events such as op creations and\n source-file snapshots.\n op_regex: Dump data from only the tensors from op types that matches to the\n regular expression (through Python's `re.match()`).\n \"Op type\" refers to the names of the TensorFlow operations (e.g.,\n \"MatMul\", \"LogSoftmax\"), which may repeat in a TensorFlow\n function. It does *not* refer to the names of nodes (e.g.,\n \"dense/MatMul\", \"dense_1/MatMul_1\") which are unique within a function.\n - Example 1: Dump tensor data from only MatMul and Relu ops\n `op_regex=\"^(MatMul|Relu)$\"`.\n - Example 2: Dump tensors from all ops *except* Relu:\n `op_regex=\"(?!^Relu$)\"`.\n This filter operates in a logical AND relation with `tensor_dtypes`.\n tensor_dtypes: Dump data from only the tensors of which the specified\n dtypes. This optional argument can be in any of the following format:\n - a list or tuple of `DType` objects or strings that can be converted\n to `DType` objects via `tf.as_dtype()`. Examples:\n - `tensor_dtype=[tf.float32, tf.float64]`,\n - `tensor_dtype=[\"float32\", \"float64\"]`,\n - `tensor_dtypes=(tf.int32, tf.bool)`,\n - `tensor_dtypes=(\"int32\", \"bool\")`\n - a callable that takes a single `DType` argument and returns a Python\n `boolean` indicating whether the dtype is to be included in the data\n dumping. Examples:\n - `tensor_dtype=lambda dtype: dtype.is_integer`.\n This filter operates in a logical AND relation with `op_regex`.\n Returns:\n A DebugEventsWriter instance used by the dumping callback. The caller\n may use its flushing methods, including `FlushNonExecutionFiles()` and\n `FlushExecutionFiles()`.\n ", "desc": "Enable dumping debugging information from a TensorFlow program.", "type": "API"}, {"name": "tf.compat.v1.debugging.get_log_device_placement", "docs": "Get if device placements are logged.\n\n Returns:\n If device placements are logged.\n ", "desc": "Get if device placements are logged.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_finite", "docs": "Returns which elements of x are finite.\n\n @compatibility(numpy)\n Equivalent to np.isfinite\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])\n tf.math.is_finite(x) ==> [True, True, True, False, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are finite.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_inf", "docs": "Returns which elements of x are Inf.\n\n @compatibility(numpy)\n Equivalent to np.isinf\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.inf, 6.8, np.inf])\n tf.math.is_inf(x) ==> [False, True, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are Inf.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_nan", "docs": "Returns which elements of x are NaN.\n\n @compatibility(numpy)\n Equivalent to np.isnan\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])\n tf.math.is_nan(x) ==> [False, True, False, True, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are NaN.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_non_decreasing", "docs": "Returns `True` if `x` is non-decreasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.\n If `x` has less than two elements, it is trivially non-decreasing.\n\n See also: `is_strictly_increasing`\n\n >>> x1 = tf.constant([1.0, 1.0, 3.0])\n >>> tf.math.is_non_decreasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_non_decreasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional). Defaults to \"is_non_decreasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is non-decreasing.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_numeric_tensor", "docs": "Returns `True` if the elements of `tensor` are numbers.\n\n Specifically, returns `True` if the dtype of `tensor` is one of the following:\n\n * `tf.float16`\n * `tf.float32`\n * `tf.float64`\n * `tf.int8`\n * `tf.int16`\n * `tf.int32`\n * `tf.int64`\n * `tf.uint8`\n * `tf.uint16`\n * `tf.uint32`\n * `tf.uint64`\n * `tf.qint8`\n * `tf.qint16`\n * `tf.qint32`\n * `tf.quint8`\n * `tf.quint16`\n * `tf.complex64`\n * `tf.complex128`\n * `tf.bfloat16`\n\n Returns `False` if `tensor` is of a non-numeric type or if `tensor` is not\n a `tf.Tensor` object.\n ", "desc": "Returns `True` if the elements of `tensor` are numbers.", "type": "API"}, {"name": "tf.compat.v1.debugging.is_strictly_increasing", "docs": "Returns `True` if `x` is strictly increasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.\n If `x` has less than two elements, it is trivially strictly increasing.\n\n See also: `is_non_decreasing`\n\n >>> x1 = tf.constant([1.0, 2.0, 3.0])\n >>> tf.math.is_strictly_increasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_strictly_increasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional).\n Defaults to \"is_strictly_increasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is strictly increasing.", "type": "API"}, {"name": "tf.compat.v1.debugging.set_log_device_placement", "docs": "Turns logging for device placement decisions on or off.\n\n Operations execute on a particular device, producing and consuming tensors on\n that device. This may change the performance of the operation or require\n TensorFlow to copy data to or from an accelerator, so knowing where operations\n execute is useful for debugging performance issues.\n\n For more advanced profiling, use the [TensorFlow\n profiler](https://www.tensorflow.org/guide/profiler).\n\n Device placement for operations is typically controlled by a `tf.device`\n scope, but there are exceptions, for example operations on a `tf.Variable`\n which follow the initial placement of the variable. Turning off soft device\n placement (with `tf.config.set_soft_device_placement`) provides more explicit\n control.\n\n >>> tf.debugging.set_log_device_placement(True)\n >>> tf.ones([])\n >>> # [...] op Fill in device /job:localhost/replica:0/task:0/device:GPU:0\n >>> with tf.device(\"CPU\"):\n ... tf.ones([])\n >>> # [...] op Fill in device /job:localhost/replica:0/task:0/device:CPU:0\n >>> tf.debugging.set_log_device_placement(False)\n\n Turning on `tf.debugging.set_log_device_placement` also logs the placement of\n ops inside `tf.function` when the function is called.\n\n Args:\n enabled: Whether to enabled device placement logging.\n ", "desc": "Turns logging for device placement decisions on or off.", "type": "API"}, {"name": "tf.compat.v1.decode_base64", "docs": "Decode web-safe base64-encoded strings.\n\n Input may or may not have padding at the end. See\n [EncodeBase64](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64)\n for padding. Web-safe means that input must use - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Base64 strings to decode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decode web-safe base64-encoded strings.", "type": "API"}, {"name": "tf.compat.v1.decode_compressed", "docs": "Decompress strings.\n\n This op decompresses each element of the `bytes` input `Tensor`, which\n is assumed to be compressed using the given `compression_type`.\n\n The `output` is a string `Tensor` of the same shape as `bytes`,\n each element containing the decompressed data from the corresponding\n element in `bytes`.\n\n Args:\n bytes: A `Tensor` of type `string`.\n A Tensor of string which is compressed.\n compression_type: An optional `string`. Defaults to `\"\"`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decompress strings.", "type": "API"}, {"name": "tf.compat.v1.decode_csv", "docs": "Convert CSV records to tensors. Each column maps to one tensor.\n\n RFC 4180 format is expected for the CSV records.\n (https://tools.ietf.org/html/rfc4180)\n Note that we allow leading and trailing spaces with int or float field.\n\n Args:\n records: A `Tensor` of type `string`.\n Each string is a record/row in the csv and all records should have\n the same format.\n record_defaults: A list of `Tensor` objects with specific types.\n Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`.\n One tensor per column of the input record, with either a\n scalar default value for that column or an empty vector if the column is\n required.\n field_delim: An optional `string`. Defaults to `\",\"`.\n char delimiter to separate fields in a record.\n use_quote_delim: An optional `bool`. Defaults to `True`.\n If false, treats double quotation marks as regular\n characters inside of the string fields (ignoring RFC 4180, Section 2,\n Bullet 5).\n name: A name for the operation (optional).\n na_value: Additional string to recognize as NA/NaN.\n select_cols: Optional sorted list of column indices to select. If specified,\n only this subset of columns will be parsed and returned.\n\n Returns:\n A list of `Tensor` objects. Has the same type as `record_defaults`.\n Each tensor will have the same shape as records.\n\n Raises:\n ValueError: If any of the arguments is malformed.\n ", "desc": "Convert CSV records to tensors. Each column maps to one tensor.", "type": "API"}, {"name": "tf.compat.v1.decode_json_example", "docs": "Convert JSON-encoded Example records to binary protocol buffer strings.\n\n Note: This is **not** a general purpose JSON parsing op.\n\n This op converts JSON-serialized `tf.train.Example` (maybe created with\n `json_format.MessageToJson`, following the\n [standard JSON mapping](\n https://developers.google.com/protocol-buffers/docs/proto3#json))\n to a binary-serialized `tf.train.Example` (equivalent to\n `Example.SerializeToString()`) suitable for conversion to tensors with\n `tf.io.parse_example`.\n\n Here is a `tf.train.Example` proto:\n\n >>> example = tf.train.Example(\n ... features=tf.train.Features(\n ... feature={\n ... \"a\": tf.train.Feature(\n ... int64_list=tf.train.Int64List(\n ... value=[1, 1, 3]))}))\n\n Here it is converted to JSON:\n\n >>> from google.protobuf import json_format\n >>> example_json = json_format.MessageToJson(example)\n >>> print(example_json)\n {\n \"features\": {\n \"feature\": {\n \"a\": {\n \"int64List\": {\n \"value\": [\n \"1\",\n \"1\",\n \"3\"\n ]\n }\n }\n }\n }\n }\n\n This op converts the above json string to a binary proto:\n\n >>> example_binary = tf.io.decode_json_example(example_json)\n >>> example_binary.numpy()\n b'\\n\\x0f\\n\\r\\n\\x01a\\x12\\x08\\x1a\\x06\\x08\\x01\\x08\\x01\\x08\\x03'\n\n The OP works on string tensors of andy shape:\n\n >>> tf.io.decode_json_example([\n ... [example_json, example_json],\n ... [example_json, example_json]]).shape.as_list()\n [2, 2]\n\n This resulting binary-string is equivalent to `Example.SerializeToString()`,\n and can be converted to Tensors using `tf.io.parse_example` and related\n functions:\n\n >>> tf.io.parse_example(\n ... serialized=[example_binary.numpy(),\n ... example.SerializeToString()],\n ... features = {'a': tf.io.FixedLenFeature(shape=[3], dtype=tf.int64)})\n {'a': }\n\n Args:\n json_examples: A string tensor containing json-serialized `tf.Example`\n protos.\n name: A name for the op.\n\n Returns:\n A string Tensor containing the binary-serialized `tf.Example` protos.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If the JSON could not be converted to a\n `tf.Example`\n ", "desc": "Convert JSON-encoded Example records to binary protocol buffer strings.", "type": "API"}, {"name": "tf.compat.v1.decode_raw", "docs": "Convert raw byte strings into tensors. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(bytes)`. They will be removed in a future version.\nInstructions for updating:\nbytes is deprecated, use input_bytes instead\n\nArgs:\n input_bytes:\n Each element of the input Tensor is converted to an array of bytes.\n out_type:\n `DType` of the output. Acceptable types are `half`, `float`, `double`,\n `int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`.\n little_endian:\n Whether the `input_bytes` data is in little-endian format. Data will be\n converted into host byte order if necessary.\n name: A name for the operation (optional).\n bytes: Deprecated parameter. Use `input_bytes` instead.\n\nReturns:\n A `Tensor` object storing the decoded bytes.", "desc": "Convert raw byte strings into tensors. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.delete_session_tensor", "docs": "Delete the tensor for the given tensor handle.\n\n This is EXPERIMENTAL and subject to change.\n\n Delete the tensor of a given tensor handle. The tensor is produced\n in a previous run() and stored in the state of the session.\n\n Args:\n handle: The string representation of a persistent tensor handle.\n name: Optional name prefix for the return tensor.\n\n Returns:\n A pair of graph elements. The first is a placeholder for feeding a\n tensor handle and the second is a deletion operation.\n ", "desc": "Delete the tensor for the given tensor handle.", "type": "API"}, {"name": "tf.compat.v1.depth_to_space", "docs": "DepthToSpace for tensors of type T.\n\n Rearranges data from depth into blocks of spatial data.\n This is the reverse transformation of SpaceToDepth. More specifically,\n this op outputs a copy of the input tensor where values from the `depth`\n dimension are moved in spatial blocks to the `height` and `width` dimensions.\n The attr `block_size` indicates the input block size and how the data is moved.\n\n * Chunks of data of size `block_size * block_size` from depth are rearranged\n into non-overlapping blocks of size `block_size x block_size`\n * The width the output tensor is `input_depth * block_size`, whereas the\n height is `input_height * block_size`.\n * The Y, X coordinates within each block of the output image are determined\n by the high order component of the input channel index.\n * The depth of the input tensor must be divisible by\n `block_size * block_size`.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates\n within the input image, bX, bY means coordinates\n within the output block, oC means output channels).\n The output would be the input transposed to the following layout:\n n,iY,bY,iX,bX,oC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 1, 1, 4]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1, 2, 3, 4]]]]\n\n ```\n\n This operation will output a tensor of shape `[1, 2, 2, 1]`:\n\n ```\n [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`,\n the corresponding output will have 2x2 elements and will have a depth of\n 1 channel (1 = `4 / (block_size * block_size)`).\n The output element shape is `[2, 2, 1]`.\n\n For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.\n\n ```\n x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n This operation, for block size of 2, will return the following tensor of shape\n `[1, 2, 2, 3]`\n\n ```\n [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n\n ```\n\n Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 4 4 1]`:\n\n ```\n x = [[[ [1], [2], [5], [6]],\n [ [3], [4], [7], [8]],\n [ [9], [10], [13], [14]],\n [ [11], [12], [15], [16]]]]\n\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`.\n The size of the spatial block, same as in Space2Depth.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "DepthToSpace for tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.dequantize", "docs": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.\n\n [min_range, max_range] are scalar floats that specify the range for\n the output. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n if T == qint8: in[i] += (range(T) + 1)/ 2.0\n out[i] = min_range + (in[i]* (max_range - min_range) / range(T))\n ```\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n If the input comes from a QuantizedRelu6, the output type is\n quint8 (range of 0-255) but the possible range of QuantizedRelu6 is\n 0-6. The min_range and max_range values are therefore 0.0 and 6.0.\n Dequantize on quint8 will take each value, cast to float, and multiply\n by 6 / 255.\n Note that if quantizedtype is qint8, the operation will additionally add\n each value by 128 prior to casting.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```c++\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = range / num_discrete_values\n const double offset_input = static_cast(input) - lowest_quantized;\n result = range_min + ((input - numeric_limits::min()) * range_scale)\n ```\n\n If the mode is `SCALED`, dequantization is performed by multiplying each\n input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).\n\n The scaling_factor is determined from `min_range`, `max_range`, and\n `narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}`\n and `QuantizeV2`, using the following algorithm:\n\n ```c++\n\n const int min_expected_T = std::numeric_limits::min() +\n (narrow_range ? 1 : 0);\n const int max_expected_T = std::numeric_limits::max();\n const float max_expected_T = std::numeric_limits::max();\n\n const float scale_factor =\n (std::numeric_limits::min() == 0) ? (max_range / max_expected_T)\n : std::max(min_range / min_expected_T,\n max_range / max_expected_T);\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_range: A `Tensor` of type `float32`.\n The minimum scalar value possibly produced for the input.\n max_range: A `Tensor` of type `float32`.\n The maximum scalar value possibly produced for the input.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n dtype: An optional `tf.DType` from: `tf.bfloat16, tf.float32`. Defaults to `tf.float32`.\n Type of the output tensor. Currently Dequantize supports float and bfloat16.\n If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.", "type": "API"}, {"name": "tf.compat.v1.deserialize_many_sparse", "docs": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.\n\n The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where\n `N` is the minibatch size and the rows correspond to packed outputs of\n `serialize_sparse`. The ranks of the original `SparseTensor` objects\n must all match. When the final `SparseTensor` is created, it has rank one\n higher than the ranks of the incoming `SparseTensor` objects (they have been\n concatenated along a new row dimension).\n\n The output `SparseTensor` object's shape values for all dimensions but the\n first are the max across the input `SparseTensor` objects' shape values\n for the corresponding dimensions. Its first shape value is `N`, the minibatch\n size.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `sparse.reorder` to restore index ordering.\n\n For example, if the serialized input is a `[2, 3]` matrix representing two\n original `SparseTensor` objects:\n\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n\n and\n\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n\n then the final deserialized `SparseTensor` will be:\n\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n\n Args:\n serialized_sparse: 2-D `Tensor` of type `string` of shape `[N, 3]`.\n The serialized and packed `SparseTensor` objects.\n dtype: The `dtype` of the serialized `SparseTensor` objects.\n rank: (optional) Python int, the rank of the `SparseTensor` objects.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` representing the deserialized `SparseTensor`s,\n concatenated along the `SparseTensor`s' first dimension.\n\n All of the serialized `SparseTensor`s must have had the same rank and type.\n ", "desc": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.", "type": "API"}, {"name": "tf.compat.v1.device", "docs": "Wrapper for `Graph.device()` using the default graph.\n\n See `tf.Graph.device` for more details.\n\n Args:\n device_name_or_function: The device name or function to use in the context.\n\n Returns:\n A context manager that specifies the default device to use for newly\n created ops.\n\n Raises:\n RuntimeError: If eager execution is enabled and a function is passed in.\n ", "desc": "Wrapper for `Graph.device()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.DeviceSpec", "docs": "Represents a (possibly partial) specification for a TensorFlow device.\n\n `DeviceSpec`s are used throughout TensorFlow to describe where state is stored\n and computations occur. Using `DeviceSpec` allows you to parse device spec\n strings to verify their validity, merge them or compose them programmatically.\n\n Example:\n\n ```python\n # Place the operations on device \"GPU:0\" in the \"ps\" job.\n device_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n with tf.device(device_spec.to_string()):\n # Both my_var and squared_var will be placed on /job:ps/device:GPU:0.\n my_var = tf.Variable(..., name=\"my_variable\")\n squared_var = tf.square(my_var)\n ```\n\n With eager execution disabled (by default in TensorFlow 1.x and by calling\n disable_eager_execution() in TensorFlow 2.x), the following syntax\n can be used:\n\n ```python\n tf.compat.v1.disable_eager_execution()\n\n # Same as previous\n device_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n # No need of .to_string() method.\n with tf.device(device_spec):\n my_var = tf.Variable(..., name=\"my_variable\")\n squared_var = tf.square(my_var)\n ```\n\n If a `DeviceSpec` is partially specified, it will be merged with other\n `DeviceSpec`s according to the scope in which it is defined. `DeviceSpec`\n components defined in inner scopes take precedence over those defined in\n outer scopes.\n\n ```python\n gpu0_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n with tf.device(DeviceSpec(job=\"train\").to_string()):\n with tf.device(gpu0_spec.to_string()):\n # Nodes created here will be assigned to /job:ps/device:GPU:0.\n with tf.device(DeviceSpec(device_type=\"GPU\", device_index=1).to_string()):\n # Nodes created here will be assigned to /job:train/device:GPU:1.\n ```\n\n A `DeviceSpec` consists of 5 components -- each of\n which is optionally specified:\n\n * Job: The job name.\n * Replica: The replica index.\n * Task: The task index.\n * Device type: The device type string (e.g. \"CPU\" or \"GPU\").\n * Device index: The device index.\n ", "desc": "Represents a (possibly partial) specification for a TensorFlow device.", "type": "API"}, {"name": "tf.compat.v1.diag", "docs": "Returns a diagonal tensor with a given diagonal values.\n\n Given a `diagonal`, this operation returns a tensor with the `diagonal` and\n everything else padded with zeros. The diagonal is computed as follows:\n\n Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of\n rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:\n\n `output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.\n\n For example:\n\n ```\n # 'diagonal' is [1, 2, 3, 4]\n tf.diag(diagonal) ==> [[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]]\n ```\n\n Args:\n diagonal: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n Rank k tensor where k is at most 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a diagonal tensor with a given diagonal values.", "type": "API"}, {"name": "tf.compat.v1.diag_part", "docs": "Returns the diagonal part of the tensor.\n\n This operation returns a tensor with the `diagonal` part\n of the `input`. The `diagonal` part is computed as follows:\n\n Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a\n tensor of rank `k` with dimensions `[D1,..., Dk]` where:\n\n `diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.\n\n For a rank 2 tensor, `linalg.diag_part` and `linalg.tensor_diag_part`\n produce the same result. For rank 3 and higher, linalg.diag_part extracts\n the diagonal of each inner-most matrix in the tensor. An example where\n they differ is given below.\n\n >>> x = [[[[1111,1112],[1121,1122]],\n ... [[1211,1212],[1221,1222]]],\n ... [[[2111, 2112], [2121, 2122]],\n ... [[2211, 2212], [2221, 2222]]]\n ... ]\n >>> tf.linalg.tensor_diag_part(x)\n \n >>> tf.linalg.diag_part(x).shape\n TensorShape([2, 2, 2])\n\n Args:\n input: A `Tensor` with rank `2k`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`, and\n rank `k`.\n ", "desc": "Returns the diagonal part of the tensor.", "type": "API"}, {"name": "tf.compat.v1.digamma", "docs": "Computes Psi, the derivative of Lgamma (the log of the absolute value of\n\n `Gamma(x)`), element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes Psi, the derivative of Lgamma (the log of the absolute value of", "type": "API"}, {"name": "tf.compat.v1.Dimension", "docs": "Represents the value of one dimension in a TensorShape.\n\n @compatibility(TF2)\n In TF2, members of a `TensorShape` object are integers. The `Dimension` class\n is not part of TF2's data model.\n\n Please refer to the [TensorShape section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/index#tensorshape) on common code\n patterns adapting Dimension objects to a TF2 syntax.\n @end_compatibility\n ", "desc": "Represents the value of one dimension in a TensorShape.", "type": "API"}, {"name": "tf.compat.v1.dimension_at_index", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n If you want to retrieve the Dimension instance corresponding to a certain\n index in a TensorShape instance, use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n dim = tensor_shape[i]\n\n # Use `dimension_at_index` as direct replacement compatible with both V1 & V2:\n dim = dimension_at_index(tensor_shape, i)\n\n # Another possibility would be this, but WARNING: it only works if the\n # tensor_shape instance has a defined rank.\n dim = tensor_shape.dims[i] # `dims` may be None if the rank is undefined!\n\n # In native V2 code, we recommend instead being more explicit:\n if tensor_shape.rank is None:\n dim = Dimension(None)\n else:\n dim = tensor_shape.dims[i]\n\n # Being more explicit will save you from the following trap (present in V1):\n # you might do in-place modifications to `dim` and expect them to be reflected\n # in `tensor_shape[i]`, but they would not be (as the Dimension object was\n # instantiated on the fly.\n ```\n\n Args:\n shape: A TensorShape instance.\n index: An integer index.\n\n Returns:\n A dimension object.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.v1.dimension_value", "docs": "Compatibility utility required to allow for both V1 and V2 behavior in TF.\n\n Until the release of TF 2.0, we need the legacy behavior of `TensorShape` to\n coexist with the new behavior. This utility is a bridge between the two.\n\n When accessing the value of a TensorShape dimension,\n use this utility, like this:\n\n ```\n # If you had this in your V1 code:\n value = tensor_shape[i].value\n\n # Use `dimension_value` as direct replacement compatible with both V1 & V2:\n value = dimension_value(tensor_shape[i])\n\n # This would be the V2 equivalent:\n value = tensor_shape[i] # Warning: this will return the dim value in V2!\n ```\n\n Args:\n dimension: Either a `Dimension` instance, an integer, or None.\n\n Returns:\n A plain value, i.e. an integer or None.\n ", "desc": "Compatibility utility required to allow for both V1 and V2 behavior in TF.", "type": "API"}, {"name": "tf.compat.v1.disable_control_flow_v2", "docs": "Opts out of control flow v2.\n\n Note: v2 control flow is always enabled inside of tf.function. Calling this\n function has no effect in that case.\n\n If your code needs tf.disable_control_flow_v2() to be called to work\n properly please file a bug.\n ", "desc": "Opts out of control flow v2.", "type": "API"}, {"name": "tf.compat.v1.disable_eager_execution", "docs": "Disables eager execution.\n\n This function can only be called before any Graphs, Ops, or Tensors have been\n created.\n\n @compatibility(TF2)\n This function is not necessary if you are using TF2. Eager execution is\n enabled by default. If you want to use Graph mode please consider\n [tf.function](https://www.tensorflow.org/api_docs/python/tf/function).\n @end_compatibility\n ", "desc": "Disables eager execution.", "type": "API"}, {"name": "tf.compat.v1.disable_resource_variables", "docs": "Opts out of resource variables. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nnon-resource variables are not supported in the long term\n\nIf your code needs tf.disable_resource_variables() to be called to work\nproperly please file a bug.", "desc": "Opts out of resource variables. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.disable_tensor_equality", "docs": "Compare Tensors by their id and be hashable.\n\n This is a legacy behaviour of TensorFlow and is highly discouraged.\n ", "desc": "Compare Tensors by their id and be hashable.", "type": "API"}, {"name": "tf.compat.v1.disable_v2_behavior", "docs": "Disables TensorFlow 2.x behaviors.\n\n This function can be called at the beginning of the program (before `Tensors`,\n `Graphs` or other structures have been created, and before devices have been\n initialized. It switches all global behaviors that are different between\n TensorFlow 1.x and 2.x to behave as intended for 1.x.\n\n User can call this function to disable 2.x behavior during complex migrations.\n\n @compatibility(TF2)\n Using this function indicates that your software is not compatible\n with eager execution and `tf.function` in TF2.\n\n To migrate to TF2, rewrite your code to be compatible with eager execution.\n Please refer to the [migration guide]\n (https://www.tensorflow.org/guide/migrate) for additional resource on the\n topic.\n @end_compatibility\n ", "desc": "Disables TensorFlow 2.x behaviors.", "type": "API"}, {"name": "tf.compat.v1.disable_v2_tensorshape", "docs": "Disables the V2 TensorShape behavior and reverts to V1 behavior.\n\n See docstring for `enable_v2_tensorshape` for details about the new behavior.\n ", "desc": "Disables the V2 TensorShape behavior and reverts to V1 behavior.", "type": "API"}, {"name": "tf.compat.v1.distribute", "docs": "Library for running a computation across multiple devices.\n\nThe intent of this library is that you can write an algorithm in a stylized way\nand it will be usable with a variety of different `tf.distribute.Strategy`\nimplementations. Each descendant will implement a different strategy for\ndistributing the algorithm across multiple devices/machines. Furthermore, these\nchanges can be hidden inside the specific layers and other library classes that\nneed special treatment to run in a distributed setting, so that most users'\nmodel definition code can run unchanged. The `tf.distribute.Strategy` API works\nthe same way with eager and graph execution.\n\n*Guides*\n\n* [TensorFlow v2.x](https://www.tensorflow.org/guide/distributed_training)\n* [TensorFlow v1.x](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb)\n\n*Tutorials*\n\n* [Distributed Training Tutorials](https://www.tensorflow.org/tutorials/distribute/)\n\n The tutorials cover how to use `tf.distribute.Strategy` to do distributed\n training with native Keras APIs, custom training loops,\n and Estimator APIs. They also cover how to save/load model when using\n `tf.distribute.Strategy`.\n\n*Glossary*\n\n* _Data parallelism_ is where we run multiple copies of the model\n on different slices of the input data. This is in contrast to\n _model parallelism_ where we divide up a single copy of a model\n across multiple devices.\n Note: we only support data parallelism for now, but\n hope to add support for model parallelism in the future.\n* A _device_ is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that\n TensorFlow can run operations on (see e.g. `tf.device`). You may have multiple\n devices on a single machine, or be connected to devices on multiple\n machines. Devices used to run computations are called _worker devices_.\n Devices used to store variables are _parameter devices_. For some strategies,\n such as `tf.distribute.MirroredStrategy`, the worker and parameter devices\n will be the same (see mirrored variables below). For others they will be\n different. For example, `tf.distribute.experimental.CentralStorageStrategy`\n puts the variables on a single device (which may be a worker device or may be\n the CPU), and `tf.distribute.experimental.ParameterServerStrategy` puts the\n variables on separate machines called _parameter servers_ (see below).\n* A _replica_ is one copy of the model, running on one slice of the\n input data. Right now each replica is executed on its own\n worker device, but once we add support for model parallelism\n a replica may span multiple worker devices.\n* A _host_ is the CPU device on a machine with worker devices, typically\n used for running input pipelines.\n* A _worker_ is defined to be the physical machine(s) containing the physical\n devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A\n worker may contain one or more replicas, but contains at least one\n replica. Typically one worker will correspond to one machine, but in the case\n of very large models with model parallelism, one worker may span multiple\n machines. We typically run one input pipeline per worker, feeding all the\n replicas on that worker.\n* _Synchronous_, or more commonly _sync_, training is where the updates from\n each replica are aggregated together before updating the model variables. This\n is in contrast to _asynchronous_, or _async_ training, where each replica\n updates the model variables independently. You may also have replicas\n partitioned into groups which are in sync within each group but async between\n groups.\n* _Parameter servers_: These are machines that hold a single copy of\n parameters/variables, used by some strategies (right now just\n `tf.distribute.experimental.ParameterServerStrategy`). All replicas that want\n to operate on a variable retrieve it at the beginning of a step and send an\n update to be applied at the end of the step. These can in principle support\n either sync or async training, but right now we only have support for async\n training with parameter servers. Compare to\n `tf.distribute.experimental.CentralStorageStrategy`, which puts all variables\n on a single device on the same machine (and does sync training), and\n `tf.distribute.MirroredStrategy`, which mirrors variables to multiple devices\n (see below).\n\n* _Replica context_ vs. _Cross-replica context_ vs _Update context_\n\n A _replica context_ applies\n when you execute the computation function that was called with `strategy.run`.\n Conceptually, you're in replica context when executing the computation\n function that is being replicated.\n\n An _update context_ is entered in a `tf.distribute.StrategyExtended.update`\n call.\n\n An _cross-replica context_ is entered when you enter a `strategy.scope`. This\n is useful for calling `tf.distribute.Strategy` methods which operate across\n the replicas (like `reduce_to()`). By default you start in a _replica context_\n (the \"default single _replica context_\") and then some methods can switch you\n back and forth.\n\n* _Distributed value_: Distributed value is represented by the base class\n `tf.distribute.DistributedValues`. `tf.distribute.DistributedValues` is useful\n to represent values on multiple devices, and it contains a map from replica id\n to values. Two representative kinds of `tf.distribute.DistributedValues` are\n \"PerReplica\" and \"Mirrored\" values.\n\n \"PerReplica\" values exist on the worker\n devices, with a different value for each replica. They are produced by\n iterating through a distributed dataset returned by\n `tf.distribute.Strategy.experimental_distribute_dataset` and\n `tf.distribute.Strategy.distribute_datasets_from_function`. They\n are also the typical result returned by\n `tf.distribute.Strategy.run`.\n\n \"Mirrored\" values are like \"PerReplica\" values, except we know that the value\n on all replicas are the same. We can safely read a \"Mirrored\" value in a\n cross-replica context by using the value on any replica.\n\n* _Unwrapping_ and _merging_: Consider calling a function `fn` on multiple\n replicas, like `strategy.run(fn, args=[w])` with an\n argument `w` that is a `tf.distribute.DistributedValues`. This means `w` will\n have a map taking replica id `0` to `w0`, replica id `1` to `w1`, etc.\n `strategy.run()` unwraps `w` before calling `fn`, so it calls `fn(w0)` on\n device `d0`, `fn(w1)` on device `d1`, etc. It then merges the return\n values from `fn()`, which leads to one common object if the returned values\n are the same object from every replica, or a `DistributedValues` object\n otherwise.\n\n* _Reductions_ and _all-reduce_: A _reduction_ is a method of aggregating\n multiple values into one value, like \"sum\" or \"mean\". If a strategy is doing\n sync training, we will perform a reduction on the gradients to a parameter\n from all replicas before applying the update. _All-reduce_ is an algorithm for\n performing a reduction on values from multiple devices and making the result\n available on all of those devices.\n\n* _Mirrored variables_: These are variables that are created on multiple\n devices, where we keep the variables in sync by applying the same\n updates to every copy. Mirrored variables are created with\n `tf.Variable(...synchronization=tf.VariableSynchronization.ON_WRITE...)`.\n Normally they are only used in synchronous training.\n\n* _SyncOnRead variables_\n\n _SyncOnRead variables_ are created by\n `tf.Variable(...synchronization=tf.VariableSynchronization.ON_READ...)`, and\n they are created on multiple devices. In replica context, each\n component variable on the local replica can perform reads and writes without\n synchronization with each other. When the\n _SyncOnRead variable_ is read in cross-replica context, the values from\n component variables are aggregated and returned.\n\n _SyncOnRead variables_ bring a lot of custom configuration difficulty to the\n underlying logic, so we do not encourage users to instantiate and use\n _SyncOnRead variable_ on their own. We have mainly used _SyncOnRead\n variables_ for use cases such as batch norm and metrics. For performance\n reasons, we often don't need to keep these statistics in sync every step and\n they can be accumulated on each replica independently. The only time we want\n to sync them is reporting or checkpointing, which typically happens in\n cross-replica context. _SyncOnRead variables_ are also often used by advanced\n users who want to control when variable values are aggregated. For example,\n users sometimes want to maintain gradients independently on each replica for a\n couple of steps without aggregation.\n\n* _Distribute-aware layers_\n\n Layers are generally called in a replica context, except when defining a\n Keras functional model. `tf.distribute.in_cross_replica_context` will let you\n determine which case you are in. If in a replica context,\n the `tf.distribute.get_replica_context` function will return the default\n replica context outside a strategy scope, `None` within a strategy scope, and\n a `tf.distribute.ReplicaContext` object inside a strategy scope and within a\n `tf.distribute.Strategy.run` function. The `ReplicaContext` object has an\n `all_reduce` method for aggregating across all replicas.\n\n\nNote that we provide a default version of `tf.distribute.Strategy` that is\nused when no other strategy is in scope, that provides the same API with\nreasonable default behavior.\n\n", "desc": "Library for running a computation across multiple devices.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver", "docs": "Library imports for ClusterResolvers.\n\n This library contains all implementations of ClusterResolvers.\n ClusterResolvers are a way of specifying cluster information for distributed\n execution. Built on top of existing `ClusterSpec` framework, ClusterResolvers\n are a way for TensorFlow to communicate with various cluster management\n systems (e.g. GCE, AWS, etc...).\n\n", "desc": "Library imports for ClusterResolvers.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.ClusterResolver", "docs": "Abstract class for all implementations of ClusterResolvers.\n\n This defines the skeleton for all implementations of ClusterResolvers.\n ClusterResolvers are a way for TensorFlow to communicate with various cluster\n management systems (e.g. GCE, AWS, etc...) and gives TensorFlow necessary\n information to set up distributed training.\n\n By letting TensorFlow communicate with these systems, we will be able to\n automatically discover and resolve IP addresses for various TensorFlow\n workers. This will eventually allow us to automatically recover from\n underlying machine failures and scale TensorFlow worker clusters up and down.\n\n Note to Implementors of `tf.distribute.cluster_resolver.ClusterResolver`\n subclass: In addition to these abstract methods, when task_type, task_id, and\n rpc_layer attributes are applicable, you should also implement them either as\n properties with getters or setters, or directly set the attributes\n `self._task_type`, `self._task_id`, or `self._rpc_layer` so the base class'\n getters and setters are used. See\n `tf.distribute.cluster_resolver.SimpleClusterResolver.__init__` for an\n example.\n\n In general, multi-client tf.distribute strategies such as\n `tf.distribute.experimental.MultiWorkerMirroredStrategy` require task_type and\n task_id properties to be available in the `ClusterResolver` they are using. On\n the other hand, these concepts are not applicable in single-client strategies,\n such as `tf.distribute.experimental.TPUStrategy`, because the program is only\n expected to be run on one task, so there should not be a need to have code\n branches according to task type and task id.\n\n - task_type is the name of the server's current named job (e.g. 'worker',\n 'ps' in a distributed parameterized training job).\n - task_id is the ordinal index of the server within the task type.\n - rpc_layer is the protocol used by TensorFlow to communicate with other\n TensorFlow servers in a distributed environment.\n ", "desc": "Abstract class for all implementations of ClusterResolvers.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver", "docs": "ClusterResolver for Google Compute Engine.\n\n This is an implementation of cluster resolvers for the Google Compute Engine\n instance group platform. By specifying a project, zone, and instance group,\n this will retrieve the IP address of all the instances within the instance\n group and return a ClusterResolver object suitable for use for distributed\n TensorFlow.\n\n Note: this cluster resolver cannot retrieve `task_type`, `task_id` or\n `rpc_layer`. To use it with some distribution strategies like\n `tf.distribute.experimental.MultiWorkerMirroredStrategy`, you will need to\n specify `task_type` and `task_id` in the constructor.\n\n Usage example with tf.distribute.Strategy:\n\n ```Python\n # On worker 0\n cluster_resolver = GCEClusterResolver(\"my-project\", \"us-west1\",\n \"my-instance-group\",\n task_type=\"worker\", task_id=0)\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = GCEClusterResolver(\"my-project\", \"us-west1\",\n \"my-instance-group\",\n task_type=\"worker\", task_id=1)\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "ClusterResolver for Google Compute Engine.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver", "docs": "ClusterResolver for Kubernetes.\n\n This is an implementation of cluster resolvers for Kubernetes. When given the\n the Kubernetes namespace and label selector for pods, we will retrieve the\n pod IP addresses of all running pods matching the selector, and return a\n ClusterSpec based on that information.\n\n Note: it cannot retrieve `task_type`, `task_id` or `rpc_layer`. To use it\n with some distribution strategies like\n `tf.distribute.experimental.MultiWorkerMirroredStrategy`, you will need to\n specify `task_type` and `task_id` by setting these attributes.\n\n Usage example with tf.distribute.Strategy:\n\n ```Python\n # On worker 0\n cluster_resolver = KubernetesClusterResolver(\n {\"worker\": [\"job-name=worker-cluster-a\", \"job-name=worker-cluster-b\"]})\n cluster_resolver.task_type = \"worker\"\n cluster_resolver.task_id = 0\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = KubernetesClusterResolver(\n {\"worker\": [\"job-name=worker-cluster-a\", \"job-name=worker-cluster-b\"]})\n cluster_resolver.task_type = \"worker\"\n cluster_resolver.task_id = 1\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "ClusterResolver for Kubernetes.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver", "docs": "Simple implementation of ClusterResolver that accepts all attributes.\n\n Please see the base class for documentation of arguments of its constructor.\n\n It is useful if you want to specify some or all attributes.\n\n Usage example with `tf.distribute.Strategy`:\n\n ```Python\n cluster = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\"]})\n\n # On worker 0\n cluster_resolver = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=0,\n num_accelerators={\"GPU\": 8},\n rpc_layer=\"grpc\")\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=1,\n num_accelerators={\"GPU\": 8},\n rpc_layer=\"grpc\")\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "Simple implementation of ClusterResolver that accepts all attributes.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver", "docs": "ClusterResolver for system with Slurm workload manager.\n\n This is an implementation of ClusterResolver for Slurm clusters. This allows\n the specification of jobs and task counts, number of tasks per node, number\n of GPUs on each node and number of GPUs for each task. It retrieves system\n attributes by Slurm environment variables, resolves allocated computing node\n names, constructs a cluster and returns a ClusterResolver object which can be\n used for distributed TensorFlow.\n ", "desc": "ClusterResolver for system with Slurm workload manager.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver", "docs": "Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar.\n\n This is an implementation of cluster resolvers when using TF_CONFIG to set\n information about the cluster. The cluster spec returned will be\n initialized from the TF_CONFIG environment variable.\n\n An example to set TF_CONFIG is:\n\n ```Python\n os.environ['TF_CONFIG'] = json.dumps({\n 'cluster': {\n 'worker': [\"localhost:12345\", \"localhost:23456\"]\n },\n 'task': {'type': 'worker', 'index': 0}\n })\n ```\n\n However, sometimes the container orchestration framework will set TF_CONFIG\n for you. In this case, you can just create an instance without passing in any\n arguments. You can find an example here to let Kuburnetes set TF_CONFIG for\n you: https://github.com/tensorflow/ecosystem/tree/master/kubernetes. Then you\n can use it with `tf.distribute.Strategy` as:\n\n ```Python\n # `TFConfigClusterResolver` is already the default one in the following\n # strategy.\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=TFConfigClusterResolver())\n ```\n ", "desc": "Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver", "docs": "Cluster Resolver for Google Cloud TPUs.\n\n This is an implementation of cluster resolvers for the Google Cloud TPU\n service.\n\n TPUClusterResolver supports the following distinct environments:\n Google Compute Engine\n Google Kubernetes Engine\n Google internal\n\n It can be passed into `tf.distribute.TPUStrategy` to support TF2 training on\n Cloud TPUs.\n ", "desc": "Cluster Resolver for Google Cloud TPUs.", "type": "API"}, {"name": "tf.compat.v1.distribute.cluster_resolver.UnionResolver", "docs": "Performs a union on underlying ClusterResolvers.\n\n This class performs a union given two or more existing ClusterResolvers. It\n merges the underlying ClusterResolvers, and returns one unified ClusterSpec\n when cluster_spec is called. The details of the merge function is\n documented in the cluster_spec function.\n\n For additional ClusterResolver properties such as task type, task index,\n rpc layer, environment, etc..., we will return the value from the first\n ClusterResolver in the union.\n\n An example to combine two cluster resolvers:\n\n ```Python\n cluster_0 = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\"]})\n cluster_resolver_0 = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=0,\n rpc_layer=\"grpc\")\n\n cluster_1 = tf.train.ClusterSpec({\"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n cluster_resolver_1 = SimpleClusterResolver(cluster, task_type=\"ps\",\n task_id=0,\n rpc_layer=\"grpc\")\n\n # Its task type would be \"worker\".\n cluster_resolver = UnionClusterResolver(cluster_resolver_0,\n cluster_resolver_1)\n ```\n\n An example to override the number of GPUs in a TFConfigClusterResolver\n instance:\n\n ```Python\n tf_config = TFConfigClusterResolver()\n gpu_override = SimpleClusterResolver(tf_config.cluster_spec(),\n num_accelerators={\"GPU\": 1})\n cluster_resolver = UnionResolver(gpu_override, tf_config)\n ```\n ", "desc": "Performs a union on underlying ClusterResolvers.", "type": "API"}, {"name": "tf.compat.v1.distribute.CrossDeviceOps", "docs": "Base class for cross-device reduction and broadcasting algorithms.\n\n The main purpose of this class is to be passed to\n `tf.distribute.MirroredStrategy` in order to choose among different cross\n device communication implementations. Prefer using the methods of\n `tf.distribute.Strategy` instead of the ones of this class.\n\n Implementations:\n * `tf.distribute.ReductionToOneDevice`\n * `tf.distribute.NcclAllReduce`\n * `tf.distribute.HierarchicalCopyAllReduce`\n ", "desc": "Base class for cross-device reduction and broadcasting algorithms.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental", "docs": "Experimental Distribution Strategy library.\n", "desc": "Experimental Distribution Strategy library.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.CentralStorageStrategy", "docs": "A one-machine strategy that puts all variables on a single device.\n\n Variables are assigned to local CPU or the only GPU. If there is more\n than one GPU, compute operations (other than variable update operations)\n will be replicated across all GPUs.\n\n For Example:\n ```\n strategy = tf.distribute.experimental.CentralStorageStrategy()\n # Create a dataset\n ds = tf.data.Dataset.range(5).batch(2)\n # Distribute that dataset\n dist_dataset = strategy.experimental_distribute_dataset(ds)\n\n with strategy.scope():\n @tf.function\n def train_step(val):\n return val + 1\n\n # Iterate over the distributed dataset\n for x in dist_dataset:\n # process dataset elements\n strategy.run(train_step, args=(x,))\n ```\n ", "desc": "A one-machine strategy that puts all variables on a single device.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.CollectiveCommunication", "docs": "Cross device communication implementation.\n\n Warning: The alias `tf.distribute.experimental.CollectiveCommunication` is\n deprecated and will be removed in a future version. Use\n `tf.distribute.experimental.CommunicationImplementation` instead.\n\n * `AUTO`: Automatically chosen by Tensorflow.\n * `RING`: TensorFlow's ring algorithms for all-reduce and\n all-gather.\n * `NCCL`: NVIDIA\u00ae's NCCL library. This is now only used for all-reduce on\n GPUs; all-reduce on CPU, all-gather and broadcast fallbacks to RING.\n ", "desc": "Cross device communication implementation.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.CollectiveHints", "docs": "Hints for collective operations like AllReduce.\n\n This can be passed to methods like\n `tf.distribute.get_replica_context().all_reduce()` to optimize collective\n operation performance. Note that these are only hints, which may or may not\n change the actual behavior. Some options only apply to certain strategy and\n are ignored by others.\n\n One common optimization is to break gradients all-reduce into multiple packs\n so that weight updates can overlap with gradient all-reduce.\n\n Examples:\n\n - bytes_per_pack\n\n ```python\n hints = tf.distribute.experimental.CollectiveHints(\n bytes_per_pack=50 * 1024 * 1024)\n grads = tf.distribute.get_replica_context().all_reduce(\n 'sum', grads, experimental_hints=hints)\n optimizer.apply_gradients(zip(grads, vars),\n experimental_aggregate_gradients=False)\n ```\n\n - timeout_seconds\n\n ```python\n strategy = tf.distribute.MirroredStrategy()\n hints = tf.distribute.experimental.CollectiveHints(\n timeout_seconds=120.0)\n try:\n strategy.reduce(\"sum\", v, axis=None, experimental_hints=hints)\n except tf.errors.DeadlineExceededError:\n do_something()\n ```\n\n ", "desc": "Hints for collective operations like AllReduce.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.CommunicationImplementation", "docs": "Cross device communication implementation.\n\n Warning: The alias `tf.distribute.experimental.CollectiveCommunication` is\n deprecated and will be removed in a future version. Use\n `tf.distribute.experimental.CommunicationImplementation` instead.\n\n * `AUTO`: Automatically chosen by Tensorflow.\n * `RING`: TensorFlow's ring algorithms for all-reduce and\n all-gather.\n * `NCCL`: NVIDIA\u00ae's NCCL library. This is now only used for all-reduce on\n GPUs; all-reduce on CPU, all-gather and broadcast fallbacks to RING.\n ", "desc": "Cross device communication implementation.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.CommunicationOptions", "docs": "Options for cross device communications like All-reduce.\n\n This can be passed to methods like\n `tf.distribute.get_replica_context().all_reduce()` to optimize collective\n operation performance. Note that these are only hints, which may or may not\n change the actual behavior. Some options only apply to certain strategy and\n are ignored by others.\n\n One common optimization is to break gradients all-reduce into multiple packs\n so that weight updates can overlap with gradient all-reduce.\n\n Examples:\n\n ```python\n options = tf.distribute.experimental.CommunicationOptions(\n bytes_per_pack=50 * 1024 * 1024,\n timeout_seconds=120.0,\n implementation=tf.distribute.experimental.CommunicationImplementation.NCCL\n )\n grads = tf.distribute.get_replica_context().all_reduce(\n 'sum', grads, options=options)\n optimizer.apply_gradients(zip(grads, vars),\n experimental_aggregate_gradients=False)\n ```\n\n ", "desc": "Options for cross device communications like All-reduce.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy", "docs": "A distribution strategy for synchronous training on multiple workers.\n\n This strategy implements synchronous distributed training across multiple\n workers, each with potentially multiple GPUs. Similar to\n `tf.distribute.MirroredStrategy`, it replicates all variables and computations\n to each local device. The difference is that it uses a distributed collective\n implementation (e.g. all-reduce), so that multiple workers can work together.\n\n You need to launch your program on each worker and configure\n `cluster_resolver` correctly. For example, if you are using\n `tf.distribute.cluster_resolver.TFConfigClusterResolver`, each worker needs to\n have its corresponding `task_type` and `task_id` set in the `TF_CONFIG`\n environment variable. An example TF_CONFIG on worker-0 of a two worker cluster\n is:\n\n ```\n TF_CONFIG = '{\"cluster\": {\"worker\": [\"localhost:12345\", \"localhost:23456\"]}, \"task\": {\"type\": \"worker\", \"index\": 0} }'\n ```\n\n Your program runs on each worker as-is. Note that collectives require each\n worker to participate. All `tf.distribute` and non `tf.distribute` API may use\n collectives internally, e.g. checkpointing and saving since reading a\n `tf.Variable` with `tf.VariableSynchronization.ON_READ` all-reduces the value.\n Therefore it's recommended to run exactly the same program on each worker.\n Dispatching based on `task_type` or `task_id` of the worker is error-prone.\n\n `cluster_resolver.num_accelerators()` determines the number of GPUs the\n strategy uses. If it's zero, the strategy uses the CPU. All workers need to\n use the same number of devices, otherwise the behavior is undefined.\n\n This strategy is not intended for TPU. Use `tf.distribute.TPUStrategy`\n instead.\n\n After setting up TF_CONFIG, using this strategy is similar to using\n `tf.distribute.MirroredStrategy` and `tf.distribute.TPUStrategy`.\n\n ```\n strategy = tf.distribute.MultiWorkerMirroredStrategy()\n\n with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(2, input_shape=(5,)),\n ])\n optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n def dataset_fn(ctx):\n x = np.random.random((2, 5)).astype(np.float32)\n y = np.random.randint(2, size=(2, 1))\n dataset = tf.data.Dataset.from_tensor_slices((x, y))\n return dataset.repeat().batch(1, drop_remainder=True)\n dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)\n\n model.compile()\n model.fit(dist_dataset)\n ```\n\n You can also write your own training loop:\n\n ```\n @tf.function\n def train_step(iterator):\n\n def step_fn(inputs):\n features, labels = inputs\n with tf.GradientTape() as tape:\n logits = model(features, training=True)\n loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, logits)\n\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\n\n strategy.run(step_fn, args=(next(iterator),))\n\n for _ in range(NUM_STEP):\n train_step(iterator)\n ```\n\n See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras)\n for a detailed tutorial.\n\n __Saving__\n\n You need to save and checkpoint on all workers instead of just one. This is\n because variables whose synchronization=ON_READ triggers aggregation during\n saving. It's recommended to save to a different path on each worker to avoid\n race conditions. Each worker saves the same thing. See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading)\n tutorial for examples.\n\n __Known Issues__\n\n * `tf.distribute.cluster_resolver.TFConfigClusterResolver` does not return the\n correct number of accelerators. The strategy uses all available GPUs if\n `cluster_resolver` is `tf.distribute.cluster_resolver.TFConfigClusterResolver`\n or `None`.\n * In eager mode, the strategy needs to be created before calling any other\n Tensorflow API.\n\n ", "desc": "A distribution strategy for synchronous training on multiple workers.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.ParameterServerStrategy", "docs": "An asynchronous multi-worker parameter server tf.distribute strategy.\n\n This strategy requires two roles: workers and parameter servers. Variables and\n updates to those variables will be assigned to parameter servers and other\n operations are assigned to workers.\n\n When each worker has more than one GPU, operations will be replicated on all\n GPUs. Even though operations may be replicated, variables are not and each\n worker shares a common view for which parameter server a variable is assigned\n to.\n\n By default it uses `TFConfigClusterResolver` to detect configurations for\n multi-worker training. This requires a 'TF_CONFIG' environment variable and\n the 'TF_CONFIG' must have a cluster spec.\n\n This class assumes each worker is running the same code independently, but\n parameter servers are running a standard server. This means that while each\n worker will synchronously compute a single gradient update across all GPUs,\n updates between workers proceed asynchronously. Operations that occur only on\n the first replica (such as incrementing the global step), will occur on the\n first replica *of every worker*.\n\n It is expected to call `call_for_each_replica(fn, ...)` for any\n operations which potentially can be replicated across replicas (i.e. multiple\n GPUs) even if there is only CPU or one GPU. When defining the `fn`, extra\n caution needs to be taken:\n\n 1) It is generally not recommended to open a device scope under the strategy's\n scope. A device scope (i.e. calling `tf.device`) will be merged with or\n override the device for operations but will not change the device for\n variables.\n\n 2) It is also not recommended to open a colocation scope (i.e. calling\n `tf.compat.v1.colocate_with`) under the strategy's scope. For colocating\n variables, use `strategy.extended.colocate_vars_with` instead. Colocation of\n ops will possibly create device assignment conflicts.\n\n Note: This strategy only works with the Estimator API. Pass an instance of\n this strategy to the `experimental_distribute` argument when you create the\n `RunConfig`. This instance of `RunConfig` should then be passed to the\n `Estimator` instance on which `train_and_evaluate` is called.\n\n For Example:\n ```\n strategy = tf.distribute.experimental.ParameterServerStrategy()\n run_config = tf.estimator.RunConfig(\n experimental_distribute.train_distribute=strategy)\n estimator = tf.estimator.Estimator(config=run_config)\n tf.estimator.train_and_evaluate(estimator,...)\n ```\n ", "desc": "An asynchronous multi-worker parameter server tf.distribute strategy.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental.TPUStrategy", "docs": "TPU distribution strategy implementation.", "desc": "TPU distribution strategy implementation.", "type": "API"}, {"name": "tf.compat.v1.distribute.experimental_set_strategy", "docs": "Set a `tf.distribute.Strategy` as current without `with strategy.scope()`.\n\n ```\n tf.distribute.experimental_set_strategy(strategy1)\n f()\n tf.distribute.experimental_set_strategy(strategy2)\n g()\n tf.distribute.experimental_set_strategy(None)\n h()\n ```\n\n is equivalent to:\n\n ```\n with strategy1.scope():\n f()\n with strategy2.scope():\n g()\n h()\n ```\n\n In general, you should use the `with strategy.scope():` API, but this\n alternative may be convenient in notebooks where you would have to put\n each cell in a `with strategy.scope():` block.\n\n Note: This should only be called outside of any TensorFlow scope to\n avoid improper nesting.\n\n Args:\n strategy: A `tf.distribute.Strategy` object or None.\n\n Raises:\n RuntimeError: If called inside a `with strategy.scope():`.\n ", "desc": "Set a `tf.distribute.Strategy` as current without `with strategy.scope()`.", "type": "API"}, {"name": "tf.compat.v1.distribute.get_loss_reduction", "docs": "`tf.distribute.ReduceOp` corresponding to the last loss reduction.\n\n This is used to decide whether loss should be scaled in optimizer (used only\n for estimator + v1 optimizer use case).\n\n Returns:\n `tf.distribute.ReduceOp` corresponding to the last loss reduction for\n estimator and v1 optimizer use case. `tf.distribute.ReduceOp.SUM` otherwise.\n ", "desc": "`tf.distribute.ReduceOp` corresponding to the last loss reduction.", "type": "API"}, {"name": "tf.compat.v1.distribute.get_replica_context", "docs": "Returns the current `tf.distribute.ReplicaContext` or `None`.\n\n Returns `None` if in a cross-replica context.\n\n Note that execution:\n\n 1. starts in the default (single-replica) replica context (this function\n will return the default `ReplicaContext` object);\n 2. switches to cross-replica context (in which case this will return\n `None`) when entering a `with tf.distribute.Strategy.scope():` block;\n 3. switches to a (non-default) replica context inside `strategy.run(fn, ...)`;\n 4. if `fn` calls `get_replica_context().merge_call(merge_fn, ...)`, then\n inside `merge_fn` you are back in the cross-replica context (and again\n this function will return `None`).\n\n Most `tf.distribute.Strategy` methods may only be executed in\n a cross-replica context, in a replica context you should use the\n API of the `tf.distribute.ReplicaContext` object returned by this\n method instead.\n\n ```\n assert tf.distribute.get_replica_context() is not None # default\n with strategy.scope():\n assert tf.distribute.get_replica_context() is None\n\n def f():\n replica_context = tf.distribute.get_replica_context() # for strategy\n assert replica_context is not None\n tf.print(\"Replica id: \", replica_context.replica_id_in_sync_group,\n \" of \", replica_context.num_replicas_in_sync)\n\n strategy.run(f)\n ```\n\n Returns:\n The current `tf.distribute.ReplicaContext` object when in a replica context\n scope, else `None`.\n\n Within a particular block, exactly one of these two things will be true:\n\n * `get_replica_context()` returns non-`None`, or\n * `tf.distribute.is_cross_replica_context()` returns True.\n ", "desc": "Returns the current `tf.distribute.ReplicaContext` or `None`.", "type": "API"}, {"name": "tf.compat.v1.distribute.get_strategy", "docs": "Returns the current `tf.distribute.Strategy` object.\n\n Typically only used in a cross-replica context:\n\n ```\n if tf.distribute.in_cross_replica_context():\n strategy = tf.distribute.get_strategy()\n ...\n ```\n\n Returns:\n A `tf.distribute.Strategy` object. Inside a `with strategy.scope()` block,\n it returns `strategy`, otherwise it returns the default (single-replica)\n `tf.distribute.Strategy` object.\n ", "desc": "Returns the current `tf.distribute.Strategy` object.", "type": "API"}, {"name": "tf.compat.v1.distribute.has_strategy", "docs": "Return if there is a current non-default `tf.distribute.Strategy`.\n\n ```\n assert not tf.distribute.has_strategy()\n with strategy.scope():\n assert tf.distribute.has_strategy()\n ```\n\n Returns:\n True if inside a `with strategy.scope():`.\n ", "desc": "Return if there is a current non-default `tf.distribute.Strategy`.", "type": "API"}, {"name": "tf.compat.v1.distribute.HierarchicalCopyAllReduce", "docs": "Hierarchical copy all-reduce implementation of CrossDeviceOps.\n\n It reduces to one GPU along edges in some hierarchy and broadcasts back to\n each GPU along the same path. For the batch API, tensors will be repacked or\n aggregated for more efficient cross-device transportation.\n\n This is a reduction created for Nvidia DGX-1 which assumes GPUs connects like\n that on DGX-1 machine. If you have different GPU inter-connections, it is\n likely that it would be slower than `tf.distribute.ReductionToOneDevice`.\n\n For reduces that are not all-reduce, it falls back to\n `tf.distribute.ReductionToOneDevice`.\n\n Here is how you can use `HierarchicalCopyAllReduce` in\n `tf.distribute.MirroredStrategy`:\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())\n ```\n ", "desc": "Hierarchical copy all-reduce implementation of CrossDeviceOps.", "type": "API"}, {"name": "tf.compat.v1.distribute.in_cross_replica_context", "docs": "Returns `True` if in a cross-replica context.\n\n See `tf.distribute.get_replica_context` for details.\n\n ```\n assert not tf.distribute.in_cross_replica_context()\n with strategy.scope():\n assert tf.distribute.in_cross_replica_context()\n\n def f():\n assert not tf.distribute.in_cross_replica_context()\n\n strategy.run(f)\n ```\n\n Returns:\n `True` if in a cross-replica context (`get_replica_context()` returns\n `None`), or `False` if in a replica context (`get_replica_context()` returns\n non-`None`).\n ", "desc": "Returns `True` if in a cross-replica context.", "type": "API"}, {"name": "tf.compat.v1.distribute.InputContext", "docs": "A class wrapping information needed by an input function.\n\n This is a context class that is passed to the user's input function and\n contains information about the compute replicas and input pipelines. The\n number of compute replicas (in sync training) helps compute the local batch\n size from the desired global batch size for each replica. The input pipeline\n information can be used to return a different subset of the input in each\n replica (for e.g. shard the input pipeline, use a different input\n source etc).\n ", "desc": "A class wrapping information needed by an input function.", "type": "API"}, {"name": "tf.compat.v1.distribute.InputReplicationMode", "docs": "Replication mode for input function.\n\n * `PER_WORKER`: The input function will be called on each worker\n independently, creating as many input pipelines as number of workers.\n Replicas will dequeue from the local Dataset on their worker.\n `tf.distribute.Strategy` doesn't manage any state sharing between such\n separate input pipelines.\n * `PER_REPLICA`: The input function will be called on each replica separately.\n `tf.distribute.Strategy` doesn't manage any state sharing between such\n separate input pipelines.\n ", "desc": "Replication mode for input function.", "type": "API"}, {"name": "tf.compat.v1.distribute.MirroredStrategy", "docs": "Synchronous training across multiple replicas on one machine.\n\n This strategy is typically used for training on one\n machine with multiple GPUs. For TPUs, use\n `tf.distribute.TPUStrategy`. To use `MirroredStrategy` with multiple workers,\n please refer to `tf.distribute.experimental.MultiWorkerMirroredStrategy`.\n\n For example, a variable created under a `MirroredStrategy` is a\n `MirroredVariable`. If no devices are specified in the constructor argument of\n the strategy then it will use all the available GPUs. If no GPUs are found, it\n will use the available CPUs. Note that TensorFlow treats all CPUs on a\n machine as a single device, and uses threads internally for parallelism.\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> with strategy.scope():\n ... x = tf.Variable(1.)\n >>> x\n MirroredVariable:{\n 0: ,\n 1: \n }\n\n While using distribution strategies, all the variable creation should be done\n within the strategy's scope. This will replicate the variables across all the\n replicas and keep them in sync using an all-reduce algorithm.\n\n Variables created inside a `MirroredStrategy` which is wrapped with a\n `tf.function` are still `MirroredVariables`.\n\n >>> x = []\n >>> @tf.function # Wrap the function with tf.function.\n ... def create_variable():\n ... if not x:\n ... x.append(tf.Variable(1.))\n ... return x[0]\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> with strategy.scope():\n ... _ = create_variable()\n ... print(x[0])\n MirroredVariable:{\n 0: ,\n 1: \n }\n\n `experimental_distribute_dataset` can be used to distribute the dataset across\n the replicas when writing your own training loop. If you are using `.fit` and\n `.compile` methods available in `tf.keras`, then `tf.keras` will handle the\n distribution for you.\n\n For example:\n\n ```python\n my_strategy = tf.distribute.MirroredStrategy()\n with my_strategy.scope():\n @tf.function\n def distribute_train_epoch(dataset):\n def replica_fn(input):\n # process input and return result\n return result\n\n total_result = 0\n for x in dataset:\n per_replica_result = my_strategy.run(replica_fn, args=(x,))\n total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,\n per_replica_result, axis=None)\n return total_result\n\n dist_dataset = my_strategy.experimental_distribute_dataset(dataset)\n for _ in range(EPOCHS):\n train_result = distribute_train_epoch(dist_dataset)\n ```\n\n Args:\n devices: a list of device strings such as `['/gpu:0', '/gpu:1']`. If\n `None`, all available GPUs are used. If no GPUs are found, CPU is used.\n cross_device_ops: optional, a descedant of `CrossDeviceOps`. If this is not\n set, `NcclAllReduce()` will be used by default. One would customize this\n if NCCL isn't available or if a special implementation that exploits\n the particular hardware is available.\n ", "desc": "Synchronous training across multiple replicas on one machine.", "type": "API"}, {"name": "tf.compat.v1.distribute.NcclAllReduce", "docs": "NCCL all-reduce implementation of CrossDeviceOps.\n\n It uses Nvidia NCCL for all-reduce. For the batch API, tensors will be\n repacked or aggregated for more efficient cross-device transportation.\n\n For reduces that are not all-reduce, it falls back to\n `tf.distribute.ReductionToOneDevice`.\n\n Here is how you can use `NcclAllReduce` in `tf.distribute.MirroredStrategy`:\n\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.NcclAllReduce())\n ```\n ", "desc": "NCCL all-reduce implementation of CrossDeviceOps.", "type": "API"}, {"name": "tf.compat.v1.distribute.OneDeviceStrategy", "docs": "A distribution strategy for running on a single device.\n\n Using this strategy will place any variables created in its scope on the\n specified device. Input distributed through this strategy will be\n prefetched to the specified device. Moreover, any functions called via\n `strategy.run` will also be placed on the specified device\n as well.\n\n Typical usage of this strategy could be testing your code with the\n tf.distribute.Strategy API before switching to other strategies which\n actually distribute to multiple devices/machines.\n\n For example:\n ```\n tf.enable_eager_execution()\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n\n with strategy.scope():\n v = tf.Variable(1.0)\n print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0\n\n def step_fn(x):\n return x * 2\n\n result = 0\n for i in range(10):\n result += strategy.run(step_fn, args=(i,))\n print(result) # 90\n ```\n ", "desc": "A distribution strategy for running on a single device.", "type": "API"}, {"name": "tf.compat.v1.distribute.ReduceOp", "docs": "Indicates how a set of values should be reduced.\n\n * `SUM`: Add all the values.\n * `MEAN`: Take the arithmetic mean (\"average\") of the values.\n ", "desc": "Indicates how a set of values should be reduced.", "type": "API"}, {"name": "tf.compat.v1.distribute.ReductionToOneDevice", "docs": "A CrossDeviceOps implementation that copies values to one device to reduce.\n\n This implementation always copies values to one device to reduce them, then\n broadcast reduced values to the destinations. It doesn't support efficient\n batching.\n\n Here is how you can use `ReductionToOneDevice` in\n `tf.distribute.MirroredStrategy`:\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.ReductionToOneDevice())\n ```\n ", "desc": "A CrossDeviceOps implementation that copies values to one device to reduce.", "type": "API"}, {"name": "tf.compat.v1.distribute.ReplicaContext", "docs": "A class with a collection of APIs that can be called in a replica context.\n\n You can use `tf.distribute.get_replica_context` to get an instance of\n `ReplicaContext`, which can only be called inside the function passed to\n `tf.distribute.Strategy.run`.\n\n >>> strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1'])\n >>> def func():\n ... replica_context = tf.distribute.get_replica_context()\n ... return replica_context.replica_id_in_sync_group\n >>> strategy.run(func)\n PerReplica:{\n 0: ,\n 1: \n }\n ", "desc": "A class with a collection of APIs that can be called in a replica context.", "type": "API"}, {"name": "tf.compat.v1.distribute.RunOptions", "docs": "Run options for `strategy.run`.\n\n This can be used to hold some strategy specific configs.\n\n Attributes:\n experimental_enable_dynamic_batch_size: Boolean. Only applies to\n TPUStrategy. Default to True. If True, TPUStrategy will enable dynamic\n padder to support dynamic batch size for the inputs. Otherwise only static\n shape inputs are allowed.\n experimental_bucketizing_dynamic_shape: Boolean. Only applies to\n TPUStrategy. Default to False. If True, TPUStrategy will automatic\n bucketize inputs passed into `run` if the input shape is\n dynamic. This is a performance optimization to reduce XLA recompilation,\n which should not have impact on correctness.\n experimental_xla_options: A `tf.tpu.XLAOptions` instance. Only applies to\n TPUStrategy. Controls the XLA compiling options on TPUs. Default to None.\n ", "desc": "Run options for `strategy.run`.", "type": "API"}, {"name": "tf.compat.v1.distribute.Server", "docs": "An in-process TensorFlow server, for use in distributed training.\n\n A `tf.distribute.Server` instance encapsulates a set of devices and a\n `tf.compat.v1.Session` target that\n can participate in distributed training. A server belongs to a\n cluster (specified by a `tf.train.ClusterSpec`), and\n corresponds to a particular task in a named job. The server can\n communicate with any other server in the same cluster.\n ", "desc": "An in-process TensorFlow server, for use in distributed training.", "type": "API"}, {"name": "tf.compat.v1.distribute.Strategy", "docs": "A list of devices with a state & compute distribution policy.\n\n See [the guide](https://www.tensorflow.org/guide/distribute_strategy)\n for overview and examples.\n\n Note: Not all `tf.distribute.Strategy` implementations currently support\n TensorFlow's partitioned variables (where a single variable is split across\n multiple devices) at this time.\n ", "desc": "A list of devices with a state & compute distribution policy.", "type": "API"}, {"name": "tf.compat.v1.distribute.StrategyExtended", "docs": "Additional APIs for algorithms that need to be distribution-aware.\n\n Note: For most usage of `tf.distribute.Strategy`, there should be no need to\n call these methods, since TensorFlow libraries (such as optimizers) already\n call these methods when needed on your behalf.\n\n\n Some common use cases of functions on this page:\n\n * _Locality_\n\n `tf.distribute.DistributedValues` can have the same _locality_ as a\n _distributed variable_, which leads to a mirrored value residing on the same\n devices as the variable (as opposed to the compute devices). Such values may\n be passed to a call to `tf.distribute.StrategyExtended.update` to update the\n value of a variable. You may use\n `tf.distribute.StrategyExtended.colocate_vars_with` to give a variable the\n same locality as another variable. You may convert a \"PerReplica\" value to a\n variable's locality by using `tf.distribute.StrategyExtended.reduce_to` or\n `tf.distribute.StrategyExtended.batch_reduce_to`.\n\n * _How to update a distributed variable_\n\n A distributed variable is variables created on multiple devices. As discussed\n in the [glossary](https://www.tensorflow.org/api_docs/python/tf/distribute),\n mirrored variable and SyncOnRead variable are two examples. The standard\n pattern for updating distributed variables is to:\n\n 1. In your function passed to `tf.distribute.Strategy.run`,\n compute a list of (update, variable) pairs. For example, the update might\n be a gradient of the loss with respect to the variable.\n 2. Switch to cross-replica mode by calling\n `tf.distribute.get_replica_context().merge_call()` with the updates and\n variables as arguments.\n 3. Call\n `tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v)`\n (for one variable) or `tf.distribute.StrategyExtended.batch_reduce_to`\n (for a list of variables) to sum the updates.\n 4. Call `tf.distribute.StrategyExtended.update(v)` for each variable to update\n its value.\n\n Steps 2 through 4 are done automatically by class\n `tf.keras.optimizers.Optimizer` if you call its\n `tf.keras.optimizers.Optimizer.apply_gradients` method in a replica context.\n\n In fact, a higher-level solution to update a distributed variable is by\n calling `assign` on the variable as you would do to a regular `tf.Variable`.\n You can call the method in both _replica context_ and _cross-replica context_.\n For a _mirrored variable_, calling `assign` in _replica context_ requires you\n to specify the `aggregation` type in the variable constructor. In that case,\n the context switching and sync described in steps 2 through 4 are handled for\n you. If you call `assign` on _mirrored variable_ in _cross-replica context_,\n you can only assign a single value or assign values from another mirrored\n variable or a mirrored `tf.distribute.DistributedValues`. For a _SyncOnRead\n variable_, in _replica context_, you can simply call `assign` on it and no\n aggregation happens under the hood. In _cross-replica context_, you can only\n assign a single value to a SyncOnRead variable. One example case is restoring\n from a checkpoint: if the `aggregation` type of the variable is\n `tf.VariableAggregation.SUM`, it is assumed that replica values were added\n before checkpointing, so at the time of restoring, the value is divided by\n the number of replicas and then assigned to each replica; if the `aggregation`\n type is `tf.VariableAggregation.MEAN`, the value is assigned to each replica\n directly.\n\n ", "desc": "Additional APIs for algorithms that need to be distribution-aware.", "type": "API"}, {"name": "tf.compat.v1.distributions", "docs": "Core module for TensorFlow distribution objects and helpers.\n", "desc": "Core module for TensorFlow distribution objects and helpers.", "type": "API"}, {"name": "tf.compat.v1.distributions.Bernoulli", "docs": "Bernoulli distribution.\n\n The Bernoulli distribution with `probs` parameter, i.e., the probability of a\n `1` outcome (vs a `0` outcome).\n ", "desc": "Bernoulli distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Beta", "docs": "Beta distribution.\n\n The Beta distribution is defined over the `(0, 1)` interval using parameters\n `concentration1` (aka \"alpha\") and `concentration0` (aka \"beta\").\n\n #### Mathematical Details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z\n Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)\n ```\n\n where:\n\n * `concentration1 = alpha`,\n * `concentration0 = beta`,\n * `Z` is the normalization constant, and,\n * `Gamma` is the [gamma function](\n https://en.wikipedia.org/wiki/Gamma_function).\n\n The concentration parameters represent mean total counts of a `1` or a `0`,\n i.e.,\n\n ```none\n concentration1 = alpha = mean * total_concentration\n concentration0 = beta = (1. - mean) * total_concentration\n ```\n\n where `mean` in `(0, 1)` and `total_concentration` is a positive real number\n representing a mean `total_count = concentration1 + concentration0`.\n\n Distribution parameters are automatically broadcast in all functions; see\n examples for details.\n\n Warning: The samples can be zero due to finite precision.\n This happens more often when some of the concentrations are very small.\n Make sure to round the samples to `np.finfo(dtype).tiny` before computing the\n density.\n\n Samples of this distribution are reparameterized (pathwise differentiable).\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n #### Examples\n\n ```python\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Create a batch of three Beta distributions.\n alpha = [1, 2, 3]\n beta = [1, 2, 3]\n dist = tfd.Beta(alpha, beta)\n\n dist.sample([4, 5]) # Shape [4, 5, 3]\n\n # `x` has three batch entries, each with two samples.\n x = [[.1, .4, .5],\n [.2, .3, .5]]\n # Calculate the probability of each pair of samples under the corresponding\n # distribution in `dist`.\n dist.prob(x) # Shape [2, 3]\n ```\n\n ```python\n # Create batch_shape=[2, 3] via parameter broadcast:\n alpha = [[1.], [2]] # Shape [2, 1]\n beta = [3., 4, 5] # Shape [3]\n dist = tfd.Beta(alpha, beta)\n\n # alpha broadcast as: [[1., 1, 1,],\n # [2, 2, 2]]\n # beta broadcast as: [[3., 4, 5],\n # [3, 4, 5]]\n # batch_Shape [2, 3]\n dist.sample([4, 5]) # Shape [4, 5, 2, 3]\n\n x = [.2, .3, .5]\n # x will be broadcast as [[.2, .3, .5],\n # [.2, .3, .5]],\n # thus matching batch_shape [2, 3].\n dist.prob(x) # Shape [2, 3]\n ```\n\n Compute the gradients of samples w.r.t. the parameters:\n\n ```python\n alpha = tf.constant(1.0)\n beta = tf.constant(2.0)\n dist = tfd.Beta(alpha, beta)\n samples = dist.sample(5) # Shape [5]\n loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function\n # Unbiased stochastic gradients of the loss function\n grads = tf.gradients(loss, [alpha, beta])\n ```\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Beta distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Categorical", "docs": "Categorical distribution.\n\n The Categorical distribution is parameterized by either probabilities or\n log-probabilities of a set of `K` classes. It is defined over the integers\n `{0, 1, ..., K}`.\n\n The Categorical distribution is closely related to the `OneHotCategorical` and\n `Multinomial` distributions. The Categorical distribution can be intuited as\n generating samples according to `argmax{ OneHotCategorical(probs) }` itself\n being identical to `argmax{ Multinomial(probs, total_count=1) }`.\n\n #### Mathematical Details\n\n The probability mass function (pmf) is,\n\n ```none\n pmf(k; pi) = prod_j pi_j**[k == j]\n ```\n\n #### Pitfalls\n\n The number of classes, `K`, must not exceed:\n - the largest integer representable by `self.dtype`, i.e.,\n `2**(mantissa_bits+1)` (IEEE 754),\n - the maximum `Tensor` index, i.e., `2**31-1`.\n\n In other words,\n\n ```python\n K <= min(2**31-1, {\n tf.float16: 2**11,\n tf.float32: 2**24,\n tf.float64: 2**53 }[param.dtype])\n ```\n\n Note: This condition is validated only when `self.validate_args = True`.\n\n #### Examples\n\n Creates a 3-class distribution with the 2nd class being most likely.\n\n ```python\n dist = Categorical(probs=[0.1, 0.5, 0.4])\n n = 1e4\n empirical_prob = tf.cast(\n tf.histogram_fixed_width(\n dist.sample(int(n)),\n [0., 2],\n nbins=3),\n dtype=tf.float32) / n\n # ==> array([ 0.1005, 0.5037, 0.3958], dtype=float32)\n ```\n\n Creates a 3-class distribution with the 2nd class being most likely.\n Parameterized by [logits](https://en.wikipedia.org/wiki/Logit) rather than\n probabilities.\n\n ```python\n dist = Categorical(logits=np.log([0.1, 0.5, 0.4])\n n = 1e4\n empirical_prob = tf.cast(\n tf.histogram_fixed_width(\n dist.sample(int(n)),\n [0., 2],\n nbins=3),\n dtype=tf.float32) / n\n # ==> array([0.1045, 0.5047, 0.3908], dtype=float32)\n ```\n\n Creates a 3-class distribution with the 3rd class being most likely.\n The distribution functions can be evaluated on counts.\n\n ```python\n # counts is a scalar.\n p = [0.1, 0.4, 0.5]\n dist = Categorical(probs=p)\n dist.prob(0) # Shape []\n\n # p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts.\n counts = [1, 0]\n dist.prob(counts) # Shape [2]\n\n # p will be broadcast to shape [3, 5, 7, 3] to match counts.\n counts = [[...]] # Shape [5, 7, 3]\n dist.prob(counts) # Shape [5, 7, 3]\n ```\n\n ", "desc": "Categorical distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Dirichlet", "docs": "Dirichlet distribution.\n\n The Dirichlet distribution is defined over the\n [`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive,\n length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the\n Beta distribution when `k = 2`.\n\n #### Mathematical Details\n\n The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e.,\n\n ```none\n S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.\n ```\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z\n Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)\n ```\n\n where:\n\n * `x in S^{k-1}`, i.e., the `(k-1)`-simplex,\n * `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,\n * `Z` is the normalization constant aka the [multivariate beta function](\n https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),\n and,\n * `Gamma` is the [gamma function](\n https://en.wikipedia.org/wiki/Gamma_function).\n\n The `concentration` represents mean total counts of class occurrence, i.e.,\n\n ```none\n concentration = alpha = mean * total_concentration\n ```\n\n where `mean` in `S^{k-1}` and `total_concentration` is a positive real number\n representing a mean total count.\n\n Distribution parameters are automatically broadcast in all functions; see\n examples for details.\n\n Warning: Some components of the samples can be zero due to finite precision.\n This happens more often when some of the concentrations are very small.\n Make sure to round the samples to `np.finfo(dtype).tiny` before computing the\n density.\n\n Samples of this distribution are reparameterized (pathwise differentiable).\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n #### Examples\n\n ```python\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Create a single trivariate Dirichlet, with the 3rd class being three times\n # more frequent than the first. I.e., batch_shape=[], event_shape=[3].\n alpha = [1., 2, 3]\n dist = tfd.Dirichlet(alpha)\n\n dist.sample([4, 5]) # shape: [4, 5, 3]\n\n # x has one sample, one batch, three classes:\n x = [.2, .3, .5] # shape: [3]\n dist.prob(x) # shape: []\n\n # x has two samples from one batch:\n x = [[.1, .4, .5],\n [.2, .3, .5]]\n dist.prob(x) # shape: [2]\n\n # alpha will be broadcast to shape [5, 7, 3] to match x.\n x = [[...]] # shape: [5, 7, 3]\n dist.prob(x) # shape: [5, 7]\n ```\n\n ```python\n # Create batch_shape=[2], event_shape=[3]:\n alpha = [[1., 2, 3],\n [4, 5, 6]] # shape: [2, 3]\n dist = tfd.Dirichlet(alpha)\n\n dist.sample([4, 5]) # shape: [4, 5, 2, 3]\n\n x = [.2, .3, .5]\n # x will be broadcast as [[.2, .3, .5],\n # [.2, .3, .5]],\n # thus matching batch_shape [2, 3].\n dist.prob(x) # shape: [2]\n ```\n\n Compute the gradients of samples w.r.t. the parameters:\n\n ```python\n alpha = tf.constant([1.0, 2.0, 3.0])\n dist = tfd.Dirichlet(alpha)\n samples = dist.sample(5) # Shape [5, 3]\n loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function\n # Unbiased stochastic gradients of the loss function\n grads = tf.gradients(loss, alpha)\n ```\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Dirichlet distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.DirichletMultinomial", "docs": "Dirichlet-Multinomial compound distribution.\n\n The Dirichlet-Multinomial distribution is parameterized by a (batch of)\n length-`K` `concentration` vectors (`K > 1`) and a `total_count` number of\n trials, i.e., the number of trials per draw from the DirichletMultinomial. It\n is defined over a (batch of) length-`K` vector `counts` such that\n `tf.reduce_sum(counts, -1) = total_count`. The Dirichlet-Multinomial is\n identically the Beta-Binomial distribution when `K = 2`.\n\n #### Mathematical Details\n\n The Dirichlet-Multinomial is a distribution over `K`-class counts, i.e., a\n length-`K` vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`.\n\n The probability mass function (pmf) is,\n\n ```none\n pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z\n Z = Beta(alpha) / N!\n ```\n\n where:\n\n * `concentration = alpha = [alpha_0, ..., alpha_{K-1}]`, `alpha_j > 0`,\n * `total_count = N`, `N` a positive integer,\n * `N!` is `N` factorial, and,\n * `Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the\n [multivariate beta function](\n https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),\n and,\n * `Gamma` is the [gamma function](\n https://en.wikipedia.org/wiki/Gamma_function).\n\n Dirichlet-Multinomial is a [compound distribution](\n https://en.wikipedia.org/wiki/Compound_probability_distribution), i.e., its\n samples are generated as follows.\n\n 1. Choose class probabilities:\n `probs = [p_0,...,p_{K-1}] ~ Dir(concentration)`\n 2. Draw integers:\n `counts = [n_0,...,n_{K-1}] ~ Multinomial(total_count, probs)`\n\n The last `concentration` dimension parametrizes a single Dirichlet-Multinomial\n distribution. When calling distribution functions (e.g., `dist.prob(counts)`),\n `concentration`, `total_count` and `counts` are broadcast to the same shape.\n The last dimension of `counts` corresponds single Dirichlet-Multinomial\n distributions.\n\n Distribution parameters are automatically broadcast in all functions; see\n examples for details.\n\n #### Pitfalls\n\n The number of classes, `K`, must not exceed:\n - the largest integer representable by `self.dtype`, i.e.,\n `2**(mantissa_bits+1)` (IEE754),\n - the maximum `Tensor` index, i.e., `2**31-1`.\n\n In other words,\n\n ```python\n K <= min(2**31-1, {\n tf.float16: 2**11,\n tf.float32: 2**24,\n tf.float64: 2**53 }[param.dtype])\n ```\n\n Note: This condition is validated only when `self.validate_args = True`.\n\n #### Examples\n\n ```python\n alpha = [1., 2., 3.]\n n = 2.\n dist = DirichletMultinomial(n, alpha)\n ```\n\n Creates a 3-class distribution, with the 3rd class is most likely to be\n drawn.\n The distribution functions can be evaluated on counts.\n\n ```python\n # counts same shape as alpha.\n counts = [0., 0., 2.]\n dist.prob(counts) # Shape []\n\n # alpha will be broadcast to [[1., 2., 3.], [1., 2., 3.]] to match counts.\n counts = [[1., 1., 0.], [1., 0., 1.]]\n dist.prob(counts) # Shape [2]\n\n # alpha will be broadcast to shape [5, 7, 3] to match counts.\n counts = [[...]] # Shape [5, 7, 3]\n dist.prob(counts) # Shape [5, 7]\n ```\n\n Creates a 2-batch of 3-class distributions.\n\n ```python\n alpha = [[1., 2., 3.], [4., 5., 6.]] # Shape [2, 3]\n n = [3., 3.]\n dist = DirichletMultinomial(n, alpha)\n\n # counts will be broadcast to [[2., 1., 0.], [2., 1., 0.]] to match alpha.\n counts = [2., 1., 0.]\n dist.prob(counts) # Shape [2]\n ```\n\n ", "desc": "Dirichlet-Multinomial compound distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Distribution", "docs": "A generic probability distribution base class.\n\n `Distribution` is a base class for constructing and organizing properties\n (e.g., mean, variance) of random variables (e.g, Bernoulli, Gaussian).\n\n #### Subclassing\n\n Subclasses are expected to implement a leading-underscore version of the\n same-named function. The argument signature should be identical except for\n the omission of `name=\"...\"`. For example, to enable `log_prob(value,\n name=\"log_prob\")` a subclass should implement `_log_prob(value)`.\n\n Subclasses can append to public-level docstrings by providing\n docstrings for their method specializations. For example:\n\n ```python\n @util.AppendDocstring(\"Some other details.\")\n def _log_prob(self, value):\n ...\n ```\n\n would add the string \"Some other details.\" to the `log_prob` function\n docstring. This is implemented as a simple decorator to avoid python\n linter complaining about missing Args/Returns/Raises sections in the\n partial docstrings.\n\n #### Broadcasting, batching, and shapes\n\n All distributions support batches of independent distributions of that type.\n The batch shape is determined by broadcasting together the parameters.\n\n The shape of arguments to `__init__`, `cdf`, `log_cdf`, `prob`, and\n `log_prob` reflect this broadcasting, as does the return value of `sample` and\n `sample_n`.\n\n `sample_n_shape = [n] + batch_shape + event_shape`, where `sample_n_shape` is\n the shape of the `Tensor` returned from `sample_n`, `n` is the number of\n samples, `batch_shape` defines how many independent distributions there are,\n and `event_shape` defines the shape of samples from each of those independent\n distributions. Samples are independent along the `batch_shape` dimensions, but\n not necessarily so along the `event_shape` dimensions (depending on the\n particulars of the underlying distribution).\n\n Using the `Uniform` distribution as an example:\n\n ```python\n minval = 3.0\n maxval = [[4.0, 6.0],\n [10.0, 12.0]]\n\n # Broadcasting:\n # This instance represents 4 Uniform distributions. Each has a lower bound at\n # 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape.\n u = Uniform(minval, maxval)\n\n # `event_shape` is `TensorShape([])`.\n event_shape = u.event_shape\n # `event_shape_t` is a `Tensor` which will evaluate to [].\n event_shape_t = u.event_shape_tensor()\n\n # Sampling returns a sample per distribution. `samples` has shape\n # [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5,\n # batch_shape=[2, 2], and event_shape=[].\n samples = u.sample_n(5)\n\n # The broadcasting holds across methods. Here we use `cdf` as an example. The\n # same holds for `log_cdf` and the likelihood functions.\n\n # `cum_prob` has shape [2, 2] as the `value` argument was broadcasted to the\n # shape of the `Uniform` instance.\n cum_prob_broadcast = u.cdf(4.0)\n\n # `cum_prob`'s shape is [2, 2], one per distribution. No broadcasting\n # occurred.\n cum_prob_per_dist = u.cdf([[4.0, 5.0],\n [6.0, 7.0]])\n\n # INVALID as the `value` argument is not broadcastable to the distribution's\n # shape.\n cum_prob_invalid = u.cdf([4.0, 5.0, 6.0])\n ```\n\n #### Shapes\n\n There are three important concepts associated with TensorFlow Distributions\n shapes:\n - Event shape describes the shape of a single draw from the distribution;\n it may be dependent across dimensions. For scalar distributions, the event\n shape is `[]`. For a 5-dimensional MultivariateNormal, the event shape is\n `[5]`.\n - Batch shape describes independent, not identically distributed draws, aka a\n \"collection\" or \"bunch\" of distributions.\n - Sample shape describes independent, identically distributed draws of batches\n from the distribution family.\n\n The event shape and the batch shape are properties of a Distribution object,\n whereas the sample shape is associated with a specific call to `sample` or\n `log_prob`.\n\n For detailed usage examples of TensorFlow Distributions shapes, see\n [this tutorial](\n https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb)\n\n #### Parameter values leading to undefined statistics or distributions.\n\n Some distributions do not have well-defined statistics for all initialization\n parameter values. For example, the beta distribution is parameterized by\n positive real numbers `concentration1` and `concentration0`, and does not have\n well-defined mode if `concentration1 < 1` or `concentration0 < 1`.\n\n The user is given the option of raising an exception or returning `NaN`.\n\n ```python\n a = tf.exp(tf.matmul(logits, weights_a))\n b = tf.exp(tf.matmul(logits, weights_b))\n\n # Will raise exception if ANY batch member has a < 1 or b < 1.\n dist = distributions.beta(a, b, allow_nan_stats=False)\n mode = dist.mode().eval()\n\n # Will return NaN for batch members with either a < 1 or b < 1.\n dist = distributions.beta(a, b, allow_nan_stats=True) # Default behavior\n mode = dist.mode().eval()\n ```\n\n In all cases, an exception is raised if *invalid* parameters are passed, e.g.\n\n ```python\n # Will raise an exception if any Op is run.\n negative_a = -1.0 * a # beta distribution by definition has a > 0.\n dist = distributions.beta(negative_a, b, allow_nan_stats=True)\n dist.mean().eval()\n ```\n\n ", "desc": "A generic probability distribution base class.", "type": "API"}, {"name": "tf.compat.v1.distributions.Exponential", "docs": "Exponential distribution.\n\n The Exponential distribution is parameterized by an event `rate` parameter.\n\n #### Mathematical Details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; lambda, x > 0) = exp(-lambda x) / Z\n Z = 1 / lambda\n ```\n\n where `rate = lambda` and `Z` is the normalizaing constant.\n\n The Exponential distribution is a special case of the Gamma distribution,\n i.e.,\n\n ```python\n Exponential(rate) = Gamma(concentration=1., rate)\n ```\n\n The Exponential distribution uses a `rate` parameter, or \"inverse scale\",\n which can be intuited as,\n\n ```none\n X ~ Exponential(rate=1)\n Y = X / rate\n ```\n\n ", "desc": "Exponential distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Gamma", "docs": "Gamma distribution.\n\n The Gamma distribution is defined over positive real numbers using\n parameters `concentration` (aka \"alpha\") and `rate` (aka \"beta\").\n\n #### Mathematical Details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z\n Z = Gamma(alpha) beta**(-alpha)\n ```\n\n where:\n\n * `concentration = alpha`, `alpha > 0`,\n * `rate = beta`, `beta > 0`,\n * `Z` is the normalizing constant, and,\n * `Gamma` is the [gamma function](\n https://en.wikipedia.org/wiki/Gamma_function).\n\n The cumulative density function (cdf) is,\n\n ```none\n cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha)\n ```\n\n where `GammaInc` is the [lower incomplete Gamma function](\n https://en.wikipedia.org/wiki/Incomplete_gamma_function).\n\n The parameters can be intuited via their relationship to mean and stddev,\n\n ```none\n concentration = alpha = (mean / stddev)**2\n rate = beta = mean / stddev**2 = concentration / mean\n ```\n\n Distribution parameters are automatically broadcast in all functions; see\n examples for details.\n\n Warning: The samples of this distribution are always non-negative. However,\n the samples that are smaller than `np.finfo(dtype).tiny` are rounded\n to this value, so it appears more often than it should.\n This should only be noticeable when the `concentration` is very small, or the\n `rate` is very large. See note in `tf.random.gamma` docstring.\n\n Samples of this distribution are reparameterized (pathwise differentiable).\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n #### Examples\n\n ```python\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n dist = tfd.Gamma(concentration=3.0, rate=2.0)\n dist2 = tfd.Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])\n ```\n\n Compute the gradients of samples w.r.t. the parameters:\n\n ```python\n concentration = tf.constant(3.0)\n rate = tf.constant(2.0)\n dist = tfd.Gamma(concentration, rate)\n samples = dist.sample(5) # Shape [5]\n loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function\n # Unbiased stochastic gradients of the loss function\n grads = tf.gradients(loss, [concentration, rate])\n ```\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Gamma distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.kl_divergence", "docs": "Get the KL-divergence KL(distribution_a || distribution_b). (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2019-01-01.\nInstructions for updating:\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\n\nIf there is no KL method registered specifically for `type(distribution_a)`\nand `type(distribution_b)`, then the class hierarchies of these types are\nsearched.\n\nIf one KL method is registered between any pairs of classes in these two\nparent hierarchies, it is used.\n\nIf more than one such registered method exists, the method whose registered\nclasses have the shortest sum MRO paths to the input types is used.\n\nIf more than one such shortest path exists, the first method\nidentified in the search is used (favoring a shorter MRO distance to\n`type(distribution_a)`).\n\nArgs:\n distribution_a: The first distribution.\n distribution_b: The second distribution.\n allow_nan_stats: Python `bool`, default `True`. When `True`,\n statistics (e.g., mean, mode, variance) use the value \"`NaN`\" to\n indicate the result is undefined. When `False`, an exception is raised\n if one or more of the statistic's batch members are undefined.\n name: Python `str` name prefixed to Ops created by this class.\n\nReturns:\n A Tensor with the batchwise KL-divergence between `distribution_a`\n and `distribution_b`.\n\nRaises:\n NotImplementedError: If no KL method is defined for distribution types\n of `distribution_a` and `distribution_b`.", "desc": "Get the KL-divergence KL(distribution_a || distribution_b). (deprecated)", "type": "API"}, {"name": "tf.compat.v1.distributions.Laplace", "docs": "The Laplace distribution with location `loc` and `scale` parameters.\n\n #### Mathematical details\n\n The probability density function (pdf) of this distribution is,\n\n ```none\n pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z\n Z = 2 sigma\n ```\n\n where `loc = mu`, `scale = sigma`, and `Z` is the normalization constant.\n\n Note that the Laplace distribution can be thought of two exponential\n distributions spliced together \"back-to-back.\"\n\n The Lpalce distribution is a member of the [location-scale family](\n https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be\n constructed as,\n\n ```none\n X ~ Laplace(loc=0, scale=1)\n Y = loc + scale * X\n ```\n\n ", "desc": "The Laplace distribution with location `loc` and `scale` parameters.", "type": "API"}, {"name": "tf.compat.v1.distributions.Multinomial", "docs": "Multinomial distribution.\n\n This Multinomial distribution is parameterized by `probs`, a (batch of)\n length-`K` `prob` (probability) vectors (`K > 1`) such that\n `tf.reduce_sum(probs, -1) = 1`, and a `total_count` number of trials, i.e.,\n the number of trials per draw from the Multinomial. It is defined over a\n (batch of) length-`K` vector `counts` such that\n `tf.reduce_sum(counts, -1) = total_count`. The Multinomial is identically the\n Binomial distribution when `K = 2`.\n\n #### Mathematical Details\n\n The Multinomial is a distribution over `K`-class counts, i.e., a length-`K`\n vector of non-negative integer `counts = n = [n_0, ..., n_{K-1}]`.\n\n The probability mass function (pmf) is,\n\n ```none\n pmf(n; pi, N) = prod_j (pi_j)**n_j / Z\n Z = (prod_j n_j!) / N!\n ```\n\n where:\n * `probs = pi = [pi_0, ..., pi_{K-1}]`, `pi_j > 0`, `sum_j pi_j = 1`,\n * `total_count = N`, `N` a positive integer,\n * `Z` is the normalization constant, and,\n * `N!` denotes `N` factorial.\n\n Distribution parameters are automatically broadcast in all functions; see\n examples for details.\n\n #### Pitfalls\n\n The number of classes, `K`, must not exceed:\n - the largest integer representable by `self.dtype`, i.e.,\n `2**(mantissa_bits+1)` (IEE754),\n - the maximum `Tensor` index, i.e., `2**31-1`.\n\n In other words,\n\n ```python\n K <= min(2**31-1, {\n tf.float16: 2**11,\n tf.float32: 2**24,\n tf.float64: 2**53 }[param.dtype])\n ```\n\n Note: This condition is validated only when `self.validate_args = True`.\n\n #### Examples\n\n Create a 3-class distribution, with the 3rd class is most likely to be drawn,\n using logits.\n\n ```python\n logits = [-50., -43, 0]\n dist = Multinomial(total_count=4., logits=logits)\n ```\n\n Create a 3-class distribution, with the 3rd class is most likely to be drawn.\n\n ```python\n p = [.2, .3, .5]\n dist = Multinomial(total_count=4., probs=p)\n ```\n\n The distribution functions can be evaluated on counts.\n\n ```python\n # counts same shape as p.\n counts = [1., 0, 3]\n dist.prob(counts) # Shape []\n\n # p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts.\n counts = [[1., 2, 1], [2, 2, 0]]\n dist.prob(counts) # Shape [2]\n\n # p will be broadcast to shape [5, 7, 3] to match counts.\n counts = [[...]] # Shape [5, 7, 3]\n dist.prob(counts) # Shape [5, 7]\n ```\n\n Create a 2-batch of 3-class distributions.\n\n ```python\n p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3]\n dist = Multinomial(total_count=[4., 5], probs=p)\n\n counts = [[2., 1, 1], [3, 1, 1]]\n dist.prob(counts) # Shape [2]\n\n dist.sample(5) # Shape [5, 2, 3]\n ```\n ", "desc": "Multinomial distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Normal", "docs": "The Normal distribution with location `loc` and `scale` parameters.\n\n #### Mathematical details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z\n Z = (2 pi sigma**2)**0.5\n ```\n\n where `loc = mu` is the mean, `scale = sigma` is the std. deviation, and, `Z`\n is the normalization constant.\n\n The Normal distribution is a member of the [location-scale family](\n https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be\n constructed as,\n\n ```none\n X ~ Normal(loc=0, scale=1)\n Y = loc + scale * X\n ```\n\n #### Examples\n\n Examples of initialization of one or a batch of distributions.\n\n ```python\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Define a single scalar Normal distribution.\n dist = tfd.Normal(loc=0., scale=3.)\n\n # Evaluate the cdf at 1, returning a scalar.\n dist.cdf(1.)\n\n # Define a batch of two scalar valued Normals.\n # The first has mean 1 and standard deviation 11, the second 2 and 22.\n dist = tfd.Normal(loc=[1, 2.], scale=[11, 22.])\n\n # Evaluate the pdf of the first distribution on 0, and the second on 1.5,\n # returning a length two tensor.\n dist.prob([0, 1.5])\n\n # Get 3 samples, returning a 3 x 2 tensor.\n dist.sample([3])\n ```\n\n Arguments are broadcast when possible.\n\n ```python\n # Define a batch of two scalar valued Normals.\n # Both have mean 1, but different standard deviations.\n dist = tfd.Normal(loc=1., scale=[11, 22.])\n\n # Evaluate the pdf of both distributions on the same point, 3.0,\n # returning a length 2 tensor.\n dist.prob(3.0)\n ```\n\n ", "desc": "The Normal distribution with location `loc` and `scale` parameters.", "type": "API"}, {"name": "tf.compat.v1.distributions.RegisterKL", "docs": "Decorator to register a KL divergence implementation function.\n\n Usage:\n\n @distributions.RegisterKL(distributions.Normal, distributions.Normal)\n def _kl_normal_mvn(norm_a, norm_b):\n # Return KL(norm_a || norm_b)\n ", "desc": "Decorator to register a KL divergence implementation function.", "type": "API"}, {"name": "tf.compat.v1.distributions.ReparameterizationType", "docs": "Instances of this class represent how sampling is reparameterized.\n\n Two static instances exist in the distributions library, signifying\n one of two possible properties for samples from a distribution:\n\n `FULLY_REPARAMETERIZED`: Samples from the distribution are fully\n reparameterized, and straight-through gradients are supported.\n\n `NOT_REPARAMETERIZED`: Samples from the distribution are not fully\n reparameterized, and straight-through gradients are either partially\n unsupported or are not supported at all. In this case, for purposes of\n e.g. RL or variational inference, it is generally safest to wrap the\n sample results in a `stop_gradients` call and use policy\n gradients / surrogate loss instead.\n ", "desc": "Instances of this class represent how sampling is reparameterized.", "type": "API"}, {"name": "tf.compat.v1.distributions.StudentT", "docs": "Student's t-distribution.\n\n This distribution has parameters: degree of freedom `df`, location `loc`,\n and `scale`.\n\n #### Mathematical details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z\n where,\n y = (x - mu) / sigma\n Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1))\n ```\n\n where:\n * `loc = mu`,\n * `scale = sigma`, and,\n * `Z` is the normalization constant, and,\n * `Gamma` is the [gamma function](\n https://en.wikipedia.org/wiki/Gamma_function).\n\n The StudentT distribution is a member of the [location-scale family](\n https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be\n constructed as,\n\n ```none\n X ~ StudentT(df, loc=0, scale=1)\n Y = loc + scale * X\n ```\n\n Notice that `scale` has semantics more similar to standard deviation than\n variance. However it is not actually the std. deviation; the Student's\n t-distribution std. dev. is `scale sqrt(df / (df - 2))` when `df > 2`.\n\n Samples of this distribution are reparameterized (pathwise differentiable).\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n #### Examples\n\n Examples of initialization of one or a batch of distributions.\n\n ```python\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Define a single scalar Student t distribution.\n single_dist = tfd.StudentT(df=3)\n\n # Evaluate the pdf at 1, returning a scalar Tensor.\n single_dist.prob(1.)\n\n # Define a batch of two scalar valued Student t's.\n # The first has degrees of freedom 2, mean 1, and scale 11.\n # The second 3, 2 and 22.\n multi_dist = tfd.StudentT(df=[2, 3], loc=[1, 2.], scale=[11, 22.])\n\n # Evaluate the pdf of the first distribution on 0, and the second on 1.5,\n # returning a length two tensor.\n multi_dist.prob([0, 1.5])\n\n # Get 3 samples, returning a 3 x 2 tensor.\n multi_dist.sample(3)\n ```\n\n Arguments are broadcast when possible.\n\n ```python\n # Define a batch of two Student's t distributions.\n # Both have df 2 and mean 1, but different scales.\n dist = tfd.StudentT(df=2, loc=1, scale=[11, 22.])\n\n # Evaluate the pdf of both distributions on the same point, 3.0,\n # returning a length 2 tensor.\n dist.prob(3.0)\n ```\n\n Compute the gradients of samples w.r.t. the parameters:\n\n ```python\n df = tf.constant(2.0)\n loc = tf.constant(2.0)\n scale = tf.constant(11.0)\n dist = tfd.StudentT(df=df, loc=loc, scale=scale)\n samples = dist.sample(5) # Shape [5]\n loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function\n # Unbiased stochastic gradients of the loss function\n grads = tf.gradients(loss, [df, loc, scale])\n ```\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf](http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Student's t-distribution.", "type": "API"}, {"name": "tf.compat.v1.distributions.Uniform", "docs": "Uniform distribution with `low` and `high` parameters.\n\n #### Mathematical Details\n\n The probability density function (pdf) is,\n\n ```none\n pdf(x; a, b) = I[a <= x < b] / Z\n Z = b - a\n ```\n\n where\n\n - `low = a`,\n - `high = b`,\n - `Z` is the normalizing constant, and\n - `I[predicate]` is the [indicator function](\n https://en.wikipedia.org/wiki/Indicator_function) for `predicate`.\n\n The parameters `low` and `high` must be shaped in a way that supports\n broadcasting (e.g., `high - low` is a valid operation).\n\n #### Examples\n\n ```python\n # Without broadcasting:\n u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4]\n u2 = Uniform(low=[1.0, 2.0],\n high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4]\n u3 = Uniform(low=[[1.0, 2.0],\n [3.0, 4.0]],\n high=[[1.5, 2.5],\n [3.5, 4.5]]) # 4 distributions\n ```\n\n ```python\n # With broadcasting:\n u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions\n ```\n\n ", "desc": "Uniform distribution with `low` and `high` parameters.", "type": "API"}, {"name": "tf.compat.v1.div", "docs": "Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nDeprecated in favor of operator or tf.math.divide.\n\n@compatibility(TF2)\nThis function is deprecated in TF2. Prefer using the Tensor division operator,\n`tf.divide`, or `tf.math.divide`, which obey the Python 3 division operator\nsemantics.\n@end_compatibility\n\n\nThis function divides `x` and `y`, forcing Python 2 semantics. That is, if `x`\nand `y` are both integers then the result will be an integer. This is in\ncontrast to Python 3, where division with `/` is always a float while division\nwith `//` is always an integer.\n\nArgs:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n\nReturns:\n `x / y` returns the quotient of x and y.", "desc": "Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)", "type": "API"}, {"name": "tf.compat.v1.div_no_nan", "docs": "Computes a safe divide which returns 0 if `y` (denominator) is zero.\n\n For example:\n\n >>> tf.constant(3.0) / 0.0\n \n >>> tf.math.divide_no_nan(3.0, 0.0)\n \n\n Note that 0 is returned if `y` is 0 even if `x` is nonfinite:\n\n >>> tf.math.divide_no_nan(np.nan, 0.0)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for the operation (optional).\n\n Returns:\n The element-wise value of the x divided by y.\n ", "desc": "Computes a safe divide which returns 0 if `y` (denominator) is zero.", "type": "API"}, {"name": "tf.compat.v1.divide", "docs": "Computes Python style division of `x` by `y`.\n\n For example:\n\n >>> x = tf.constant([16, 12, 11])\n >>> y = tf.constant([4, 6, 2])\n >>> tf.divide(x,y)\n \n\n Args:\n x: A `Tensor`\n y: A `Tensor`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with same shape as input\n ", "desc": "Computes Python style division of `x` by `y`.", "type": "API"}, {"name": "tf.compat.v1.DType", "docs": "Represents the type of the elements in a `Tensor`.\n\n `DType`'s are used to specify the output data type for operations which\n require it, or to inspect the data type of existing `Tensor`'s.\n\n Examples:\n\n >>> tf.constant(1, dtype=tf.int64)\n \n >>> tf.constant(1.0).dtype\n tf.float32\n\n See `tf.dtypes` for a complete list of `DType`'s defined.\n ", "desc": "Represents the type of the elements in a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.dtypes", "docs": "Public API for tf.dtypes namespace.\n", "desc": "Public API for tf.dtypes namespace.", "type": "API"}, {"name": "tf.compat.v1.dtypes.as_dtype", "docs": "Converts the given `type_value` to a `DType`.\n\n Note: `DType` values are interned. When passed a new `DType` object,\n `as_dtype` always returns the interned value.\n\n Args:\n type_value: A value that can be converted to a `tf.DType` object. This may\n currently be a `tf.DType` object, a [`DataType`\n enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),\n a string type name, or a [`numpy.dtype`](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html).\n\n Returns:\n A `DType` corresponding to `type_value`.\n\n Raises:\n TypeError: If `type_value` cannot be converted to a `DType`.\n ", "desc": "Converts the given `type_value` to a `DType`.", "type": "API"}, {"name": "tf.compat.v1.dtypes.as_string", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.compat.v1.dtypes.cast", "docs": "Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.cast(x, tf.int32)\n \n\n Notice `tf.cast` has an alias `tf.dtypes.cast`:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.dtypes.cast(x, tf.int32)\n \n\n The operation supports data types (for `x` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`.\n In case of casting from complex types (`complex64`, `complex128`) to real\n types, only the real part of `x` is returned. In case of casting from real\n types to complex types (`complex64`, `complex128`), the imaginary part of the\n returned value is set to `0`. The handling of complex types here matches the\n behavior of numpy.\n\n Note casting nan and inf values to integral types has undefined behavior.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices` of numeric type. It could\n be `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`,\n `int64`, `float16`, `float32`, `float64`, `complex64`, `complex128`,\n `bfloat16`.\n dtype: The destination type. The list of supported dtypes is the same as\n `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` and\n same type as `dtype`.\n\n Raises:\n TypeError: If `x` cannot be cast to the `dtype`.\n ", "desc": "Casts a tensor to a new type.", "type": "API"}, {"name": "tf.compat.v1.dtypes.complex", "docs": "Converts two real numbers to a complex number.\n\n Given a tensor `real` representing the real part of a complex number, and a\n tensor `imag` representing the imaginary part of a complex number, this\n operation returns complex numbers elementwise of the form \\\\(a + bj\\\\), where\n *a* represents the `real` part and *b* represents the `imag` part.\n\n The input tensors `real` and `imag` must have the same shape.\n\n For example:\n\n ```python\n real = tf.constant([2.25, 3.25])\n imag = tf.constant([4.75, 5.75])\n tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]]\n ```\n\n Args:\n real: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n imag: A `Tensor`. Must have the same type as `real`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64` or `complex128`.\n\n Raises:\n TypeError: Real and imag must be correct types\n ", "desc": "Converts two real numbers to a complex number.", "type": "API"}, {"name": "tf.compat.v1.dtypes.DType", "docs": "Represents the type of the elements in a `Tensor`.\n\n `DType`'s are used to specify the output data type for operations which\n require it, or to inspect the data type of existing `Tensor`'s.\n\n Examples:\n\n >>> tf.constant(1, dtype=tf.int64)\n \n >>> tf.constant(1.0).dtype\n tf.float32\n\n See `tf.dtypes` for a complete list of `DType`'s defined.\n ", "desc": "Represents the type of the elements in a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.dtypes.saturate_cast", "docs": "Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underflow in the cast, this op\n applies the appropriate clamping before the cast.\n\n Args:\n value: A `Tensor`.\n dtype: The desired output `DType`.\n name: A name for the operation (optional).\n\n Returns:\n `value` safely cast to `dtype`.\n ", "desc": "Performs a safe saturating cast of `value` to `dtype`.", "type": "API"}, {"name": "tf.compat.v1.dynamic_partition", "docs": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.\n\n For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`\n becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`\n are placed in `outputs[i]` in lexicographic order of `js`, and the first\n dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.\n In detail,\n\n ```python\n outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]\n\n outputs[i] = pack([data[js, ...] for js if partitions[js] == i])\n ```\n\n `data.shape` must start with `partitions.shape`.\n\n For example:\n\n ```python\n # Scalar partitions.\n partitions = 1\n num_partitions = 2\n data = [10, 20]\n outputs[0] = [] # Empty with shape [0, 2]\n outputs[1] = [[10, 20]]\n\n # Vector partitions.\n partitions = [0, 0, 1, 1, 0]\n num_partitions = 2\n data = [10, 20, 30, 40, 50]\n outputs[0] = [10, 20, 50]\n outputs[1] = [30, 40]\n ```\n\n See `dynamic_stitch` for an example on how to merge partitions back.\n\n
\n \n
\n\n Args:\n data: A `Tensor`.\n partitions: A `Tensor` of type `int32`.\n Any shape. Indices in the range `[0, num_partitions)`.\n num_partitions: An `int` that is `>= 1`.\n The number of partitions to output.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_partitions` `Tensor` objects with the same type as `data`.\n ", "desc": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.", "type": "API"}, {"name": "tf.compat.v1.dynamic_stitch", "docs": "Interleave the values from the `data` tensors into a single tensor.\n\n Builds a merged tensor such that\n\n ```python\n merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]\n ```\n\n For example, if each `indices[m]` is scalar or vector, we have\n\n ```python\n # Scalar indices:\n merged[indices[m], ...] = data[m][...]\n\n # Vector indices:\n merged[indices[m][i], ...] = data[m][i, ...]\n ```\n\n Each `data[i].shape` must start with the corresponding `indices[i].shape`,\n and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we\n must have `data[i].shape = indices[i].shape + constant`. In terms of this\n `constant`, the output shape is\n\n merged.shape = [max(indices)] + constant\n\n Values are merged in order, so if an index appears in both `indices[m][i]` and\n `indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the\n merged result. If you do not need this guarantee, ParallelDynamicStitch might\n perform better on some devices.\n\n For example:\n\n ```python\n indices[0] = 6\n indices[1] = [4, 1]\n indices[2] = [[5, 2], [0, 3]]\n data[0] = [61, 62]\n data[1] = [[41, 42], [11, 12]]\n data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]\n merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],\n [51, 52], [61, 62]]\n ```\n\n This method can be used to merge partitions created by `dynamic_partition`\n as illustrated on the following example:\n\n ```python\n # Apply function (increments x_i) on elements for which a certain condition\n # apply (x_i != -1 in this example).\n x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])\n condition_mask=tf.not_equal(x,tf.constant(-1.))\n partitioned_data = tf.dynamic_partition(\n x, tf.cast(condition_mask, tf.int32) , 2)\n partitioned_data[1] = partitioned_data[1] + 1.0\n condition_indices = tf.dynamic_partition(\n tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)\n x = tf.dynamic_stitch(condition_indices, partitioned_data)\n # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain\n # unchanged.\n ```\n\n
\n \n
\n\n Args:\n indices: A list of at least 1 `Tensor` objects with type `int32`.\n data: A list with the same length as `indices` of `Tensor` objects with the same type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Interleave the values from the `data` tensors into a single tensor.", "type": "API"}, {"name": "tf.compat.v1.edit_distance", "docs": "Computes the Levenshtein distance between sequences.\n\n This operation takes variable-length sequences (`hypothesis` and `truth`),\n each provided as a `SparseTensor`, and computes the Levenshtein distance.\n You can normalize the edit distance by length of `truth` by setting\n `normalize` to true.\n\n For example:\n\n Given the following input,\n * `hypothesis` is a `tf.SparseTensor` of shape `[2, 1, 1]`\n * `truth` is a `tf.SparseTensor` of shape `[2, 2, 2]`\n\n >>> hypothesis = tf.SparseTensor(\n ... [[0, 0, 0],\n ... [1, 0, 0]],\n ... [\"a\", \"b\"],\n ... (2, 1, 1))\n >>> truth = tf.SparseTensor(\n ... [[0, 1, 0],\n ... [1, 0, 0],\n ... [1, 0, 1],\n ... [1, 1, 0]],\n ... [\"a\", \"b\", \"c\", \"a\"],\n ... (2, 2, 2))\n >>> tf.edit_distance(hypothesis, truth, normalize=True)\n \n\n The operation returns a dense Tensor of shape `[2, 2]` with\n edit distances normalized by `truth` lengths.\n\n **Note**: It is possible to calculate edit distance between two\n sparse tensors with variable-length values. However, attempting to create\n them while eager execution is enabled will result in a `ValueError`.\n\n For the following inputs,\n\n ```python\n # 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:\n # (0,0) = [\"a\"]\n # (1,0) = [\"b\"]\n hypothesis = tf.sparse.SparseTensor(\n [[0, 0, 0],\n [1, 0, 0]],\n [\"a\", \"b\"],\n (2, 1, 1))\n\n # 'truth' is a tensor of shape `[2, 2]` with variable-length values:\n # (0,0) = []\n # (0,1) = [\"a\"]\n # (1,0) = [\"b\", \"c\"]\n # (1,1) = [\"a\"]\n truth = tf.sparse.SparseTensor(\n [[0, 1, 0],\n [1, 0, 0],\n [1, 0, 1],\n [1, 1, 0]],\n [\"a\", \"b\", \"c\", \"a\"],\n (2, 2, 2))\n\n normalize = True\n\n # The output would be a dense Tensor of shape `(2,)`, with edit distances\n normalized by 'truth' lengths.\n # output => array([0., 0.5], dtype=float32)\n ```\n\n Args:\n hypothesis: A `SparseTensor` containing hypothesis sequences.\n truth: A `SparseTensor` containing truth sequences.\n normalize: A `bool`. If `True`, normalizes the Levenshtein distance by\n length of `truth.`\n name: A name for the operation (optional).\n\n Returns:\n A dense `Tensor` with rank `R - 1`, where R is the rank of the\n `SparseTensor` inputs `hypothesis` and `truth`.\n\n Raises:\n TypeError: If either `hypothesis` or `truth` are not a `SparseTensor`.\n ", "desc": "Computes the Levenshtein distance between sequences.", "type": "API"}, {"name": "tf.compat.v1.einsum", "docs": "Tensor contraction over specified indices and outer product.\n\n Einsum allows defining Tensors by defining their element-wise computation.\n This computation is defined by `equation`, a shorthand form based on Einstein\n summation. As an example, consider multiplying two matrices A and B to form a\n matrix C. The elements of C are given by:\n\n $$ C_{i,k} = \\sum_j A_{i,j} B_{j,k} $$\n\n or\n\n ```\n C[i,k] = sum_j A[i,j] * B[j,k]\n ```\n\n The corresponding einsum `equation` is:\n\n ```\n ij,jk->ik\n ```\n\n In general, to convert the element-wise equation into the `equation` string,\n use the following procedure (intermediate strings for matrix multiplication\n example provided in parentheses):\n\n 1. remove variable names, brackets, and commas, (`ik = sum_j ij * jk`)\n 2. replace \"*\" with \",\", (`ik = sum_j ij , jk`)\n 3. drop summation signs, and (`ik = ij, jk`)\n 4. move the output to the right, while replacing \"=\" with \"->\". (`ij,jk->ik`)\n\n Note: If the output indices are not specified repeated indices are summed.\n So `ij,jk->ik` can be simplified to `ij,jk`.\n\n Many common operations can be expressed in this way. For example:\n\n **Matrix multiplication**\n\n >>> m0 = tf.random.normal(shape=[2, 3])\n >>> m1 = tf.random.normal(shape=[3, 5])\n >>> e = tf.einsum('ij,jk->ik', m0, m1)\n >>> # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n Repeated indices are summed if the output indices are not specified.\n\n >>> e = tf.einsum('ij,jk', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n\n **Dot product**\n\n >>> u = tf.random.normal(shape=[5])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,i->', u, v) # output = sum_i u[i]*v[i]\n >>> print(e.shape)\n ()\n\n **Outer product**\n\n >>> u = tf.random.normal(shape=[3])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]\n >>> print(e.shape)\n (3, 5)\n\n **Transpose**\n\n >>> m = tf.ones(2,3)\n >>> e = tf.einsum('ij->ji', m0) # output[j,i] = m0[i,j]\n >>> print(e.shape)\n (3, 2)\n\n **Diag**\n\n >>> m = tf.reshape(tf.range(9), [3,3])\n >>> diag = tf.einsum('ii->i', m)\n >>> print(diag.shape)\n (3,)\n\n **Trace**\n\n >>> # Repeated indices are summed.\n >>> trace = tf.einsum('ii', m) # output[j,i] = trace(m) = sum_i m[i, i]\n >>> assert trace == sum(diag)\n >>> print(trace.shape)\n ()\n\n **Batch matrix multiplication**\n\n >>> s = tf.random.normal(shape=[7,5,3])\n >>> t = tf.random.normal(shape=[7,3,2])\n >>> e = tf.einsum('bij,bjk->bik', s, t)\n >>> # output[a,i,k] = sum_j s[a,i,j] * t[a, j, k]\n >>> print(e.shape)\n (7, 5, 2)\n\n This method does not support broadcasting on named-axes. All axes with\n matching labels should have the same length. If you have length-1 axes,\n use `tf.squeeze` or `tf.reshape` to eliminate them.\n\n To write code that is agnostic to the number of indices in the input\n use an ellipsis. The ellipsis is a placeholder for \"whatever other indices\n fit here\".\n\n For example, to perform a NumPy-style broadcasting-batch-matrix multiplication\n where the matrix multiply acts on the last two axes of the input, use:\n\n >>> s = tf.random.normal(shape=[11, 7, 5, 3])\n >>> t = tf.random.normal(shape=[11, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Einsum **will** broadcast over axes covered by the ellipsis.\n\n >>> s = tf.random.normal(shape=[11, 1, 5, 3])\n >>> t = tf.random.normal(shape=[1, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Args:\n equation: a `str` describing the contraction, in the same format as\n `numpy.einsum`.\n *inputs: the inputs to contract (each one a `Tensor`), whose shapes should\n be consistent with `equation`.\n **kwargs:\n - optimize: Optimization strategy to use to find contraction path using\n opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or\n 'auto'. (optional, default: 'greedy').\n - name: A name for the operation (optional).\n\n Returns:\n The contracted `Tensor`, with shape determined by `equation`.\n\n Raises:\n ValueError: If\n - the format of `equation` is incorrect,\n - number of inputs or their shapes are inconsistent with `equation`.\n ", "desc": "Tensor contraction over specified indices and outer product.", "type": "API"}, {"name": "tf.compat.v1.enable_control_flow_v2", "docs": "Use control flow v2.\n\n control flow v2 (cfv2) is an improved version of control flow in TensorFlow\n with support for higher order derivatives. Enabling cfv2 will change the\n graph/function representation of control flow, e.g., `tf.while_loop` and\n `tf.cond` will generate functional `While` and `If` ops instead of low-level\n `Switch`, `Merge` etc. ops. Note: Importing and running graphs exported\n with old control flow will still be supported.\n\n Calling tf.enable_control_flow_v2() lets you opt-in to this TensorFlow 2.0\n feature.\n\n Note: v2 control flow is always enabled inside of tf.function. Calling this\n function is not required.\n ", "desc": "Use control flow v2.", "type": "API"}, {"name": "tf.compat.v1.enable_eager_execution", "docs": "Enables eager execution for the lifetime of this program.\n\n Eager execution provides an imperative interface to TensorFlow. With eager\n execution enabled, TensorFlow functions execute operations immediately (as\n opposed to adding to a graph to be executed later in a `tf.compat.v1.Session`)\n and\n return concrete values (as opposed to symbolic references to a node in a\n computational graph).\n\n For example:\n\n ```python\n tf.compat.v1.enable_eager_execution()\n\n # After eager execution is enabled, operations are executed as they are\n # defined and Tensor objects hold concrete values, which can be accessed as\n # numpy.ndarray`s through the numpy() method.\n assert tf.multiply(6, 7).numpy() == 42\n ```\n\n Eager execution cannot be enabled after TensorFlow APIs have been used to\n create or execute graphs. It is typically recommended to invoke this function\n at program startup and not in a library (as most libraries should be usable\n both with and without eager execution).\n\n @compatibility(TF2)\n This function is not necessary if you are using TF2. Eager execution is\n enabled by default.\n @end_compatibility\n\n Args:\n config: (Optional.) A `tf.compat.v1.ConfigProto` to use to configure the\n environment in which operations are executed. Note that\n `tf.compat.v1.ConfigProto` is also used to configure graph execution (via\n `tf.compat.v1.Session`) and many options within `tf.compat.v1.ConfigProto`\n are not implemented (or are irrelevant) when eager execution is enabled.\n device_policy: (Optional.) Policy controlling how operations requiring\n inputs on a specific device (e.g., a GPU 0) handle inputs on a different\n device (e.g. GPU 1 or CPU). When set to None, an appropriate value will\n be picked automatically. The value picked may change between TensorFlow\n releases.\n Valid values:\n - tf.contrib.eager.DEVICE_PLACEMENT_EXPLICIT: raises an error if the\n placement is not correct.\n - tf.contrib.eager.DEVICE_PLACEMENT_WARN: copies the tensors which are not\n on the right device but logs a warning.\n - tf.contrib.eager.DEVICE_PLACEMENT_SILENT: silently copies the tensors.\n Note that this may hide performance problems as there is no notification\n provided when operations are blocked on the tensor being copied between\n devices.\n - tf.contrib.eager.DEVICE_PLACEMENT_SILENT_FOR_INT32: silently copies\n int32 tensors, raising errors on the other ones.\n execution_mode: (Optional.) Policy controlling how operations dispatched are\n actually executed. When set to None, an appropriate value will be picked\n automatically. The value picked may change between TensorFlow releases.\n Valid values:\n - tf.contrib.eager.SYNC: executes each operation synchronously.\n - tf.contrib.eager.ASYNC: executes each operation asynchronously. These\n operations may return \"non-ready\" handles.\n\n Raises:\n ValueError: If eager execution is enabled after creating/executing a\n TensorFlow graph, or if options provided conflict with a previous call\n to this function.\n ", "desc": "Enables eager execution for the lifetime of this program.", "type": "API"}, {"name": "tf.compat.v1.enable_resource_variables", "docs": "Creates resource variables by default.\n\n Resource variables are improved versions of TensorFlow variables with a\n well-defined memory model. Accessing a resource variable reads its value, and\n all ops which access a specific read value of the variable are guaranteed to\n see the same value for that tensor. Writes which happen after a read (by\n having a control or data dependency on the read) are guaranteed not to affect\n the value of the read tensor, and similarly writes which happen before a read\n are guaranteed to affect the value. No guarantees are made about unordered\n read/write pairs.\n\n Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0\n feature.\n ", "desc": "Creates resource variables by default.", "type": "API"}, {"name": "tf.compat.v1.enable_tensor_equality", "docs": "Compare Tensors with element-wise comparison and thus be unhashable.\n\n Comparing tensors with element-wise allows comparisons such as\n tf.Variable(1.0) == 1.0. Element-wise equality implies that tensors are\n unhashable. Thus tensors can no longer be directly used in sets or as a key in\n a dictionary.\n ", "desc": "Compare Tensors with element-wise comparison and thus be unhashable.", "type": "API"}, {"name": "tf.compat.v1.enable_v2_behavior", "docs": "Enables TensorFlow 2.x behaviors.\n\n This function can be called at the beginning of the program (before `Tensors`,\n `Graphs` or other structures have been created, and before devices have been\n initialized. It switches all global behaviors that are different between\n TensorFlow 1.x and 2.x to behave as intended for 2.x.\n\n This function is called in the main TensorFlow `__init__.py` file, user should\n not need to call it, except during complex migrations.\n\n @compatibility(TF2)\n This function is not necessary if you are using TF2. V2 behavior is enabled by\n default.\n @end_compatibility\n ", "desc": "Enables TensorFlow 2.x behaviors.", "type": "API"}, {"name": "tf.compat.v1.enable_v2_tensorshape", "docs": "In TensorFlow 2.0, iterating over a TensorShape instance returns values.\n\n This enables the new behavior.\n\n Concretely, `tensor_shape[i]` returned a Dimension instance in V1, but\n it V2 it returns either an integer, or None.\n\n Examples:\n\n ```\n #######################\n # If you had this in V1:\n value = tensor_shape[i].value\n\n # Do this in V2 instead:\n value = tensor_shape[i]\n\n #######################\n # If you had this in V1:\n for dim in tensor_shape:\n value = dim.value\n print(value)\n\n # Do this in V2 instead:\n for value in tensor_shape:\n print(value)\n\n #######################\n # If you had this in V1:\n dim = tensor_shape[i]\n dim.assert_is_compatible_with(other_shape) # or using any other shape method\n\n # Do this in V2 instead:\n if tensor_shape.rank is None:\n dim = Dimension(None)\n else:\n dim = tensor_shape.dims[i]\n dim.assert_is_compatible_with(other_shape) # or using any other shape method\n\n # The V2 suggestion above is more explicit, which will save you from\n # the following trap (present in V1):\n # you might do in-place modifications to `dim` and expect them to be reflected\n # in `tensor_shape[i]`, but they would not be.\n ```\n ", "desc": "In TensorFlow 2.0, iterating over a TensorShape instance returns values.", "type": "API"}, {"name": "tf.compat.v1.encode_base64", "docs": "Encode strings into web-safe base64 format.\n\n Refer to [this article](https://en.wikipedia.org/wiki/Base64) for more information on\n base64 format. Base64 strings may have padding with '=' at the\n end so that the encoded has length multiple of 4. See Padding section of the\n link above.\n\n Web-safe means that the encoder uses - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Strings to be encoded.\n pad: An optional `bool`. Defaults to `False`.\n Bool whether padding is applied at the ends.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode strings into web-safe base64 format.", "type": "API"}, {"name": "tf.compat.v1.ensure_shape", "docs": "Updates the shape of a tensor and checks at runtime that the shape holds.\n\n When executed, this operation asserts that the input tensor `x`'s shape\n is compatible with the `shape` argument.\n See `tf.TensorShape.is_compatible_with` for details.\n\n >>> x = tf.constant([[1, 2, 3],\n ... [4, 5, 6]])\n >>> x = tf.ensure_shape(x, [2, 3])\n\n Use `None` for unknown dimensions:\n\n >>> x = tf.ensure_shape(x, [None, 3])\n >>> x = tf.ensure_shape(x, [2, None])\n\n If the tensor's shape is not compatible with the `shape` argument, an error\n is raised:\n\n >>> x = tf.ensure_shape(x, [5])\n Traceback (most recent call last):\n ...\n tf.errors.InvalidArgumentError: Shape of tensor dummy_input [3] is not\n compatible with expected shape [5]. [Op:EnsureShape]\n\n During graph construction (typically tracing a `tf.function`),\n `tf.ensure_shape` updates the static-shape of the **result** tensor by\n merging the two shapes. See `tf.TensorShape.merge_with` for details.\n\n This is most useful when **you** know a shape that can't be determined\n statically by TensorFlow.\n\n The following trivial `tf.function` prints the input tensor's\n static-shape before and after `ensure_shape` is applied.\n\n >>> @tf.function\n ... def f(tensor):\n ... print(\"Static-shape before:\", tensor.shape)\n ... tensor = tf.ensure_shape(tensor, [None, 3])\n ... print(\"Static-shape after:\", tensor.shape)\n ... return tensor\n\n This lets you see the effect of `tf.ensure_shape` when the function is traced:\n >>> cf = f.get_concrete_function(tf.TensorSpec([None, None]))\n Static-shape before: (None, None)\n Static-shape after: (None, 3)\n\n >>> cf(tf.zeros([3, 3])) # Passes\n >>> cf(tf.constant([1, 2, 3])) # fails\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Shape of tensor x [3] is not compatible with expected shape [3,3].\n\n The above example raises `tf.errors.InvalidArgumentError`, because `x`'s\n shape, `(3,)`, is not compatible with the `shape` argument, `(None, 3)`\n\n Inside a `tf.function` or `v1.Graph` context it checks both the buildtime and\n runtime shapes. This is stricter than `tf.Tensor.set_shape` which only\n checks the buildtime shape.\n\n Note: This differs from `tf.Tensor.set_shape` in that it sets the static shape\n of the resulting tensor and enforces it at runtime, raising an error if the\n tensor's runtime shape is incompatible with the specified shape.\n `tf.Tensor.set_shape` sets the static shape of the tensor without enforcing it\n at runtime, which may result in inconsistencies between the statically-known\n shape of tensors and the runtime value of tensors.\n\n For example, of loading images of a known size:\n\n >>> @tf.function\n ... def decode_image(png):\n ... image = tf.image.decode_png(png, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... image = tf.ensure_shape(image,[28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n When tracing a function, no ops are being executed, shapes may be unknown.\n See the [Concrete Functions Guide](https://www.tensorflow.org/guide/concrete_function)\n for details.\n\n >>> concrete_decode = decode_image.get_concrete_function(\n ... tf.TensorSpec([], dtype=tf.string))\n Initial shape: (None, None, 3)\n Final shape: (28, 28, 3)\n\n >>> image = tf.random.uniform(maxval=255, shape=[28, 28, 3], dtype=tf.int32)\n >>> image = tf.cast(image,tf.uint8)\n >>> png = tf.image.encode_png(image)\n >>> image2 = concrete_decode(png)\n >>> print(image2.shape)\n (28, 28, 3)\n\n >>> image = tf.concat([image,image], axis=0)\n >>> print(image.shape)\n (56, 28, 3)\n >>> png = tf.image.encode_png(image)\n >>> image2 = concrete_decode(png)\n Traceback (most recent call last):\n ...\n tf.errors.InvalidArgumentError: Shape of tensor DecodePng [56,28,3] is not\n compatible with expected shape [28,28,3].\n\n Caution: if you don't use the result of `tf.ensure_shape` the check may not\n run.\n\n >>> @tf.function\n ... def bad_decode_image(png):\n ... image = tf.image.decode_png(png, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... # BAD: forgot to use the returned tensor.\n ... tf.ensure_shape(image,[28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n >>> image = bad_decode_image(png)\n Initial shape: (None, None, 3)\n Final shape: (None, None, 3)\n >>> print(image.shape)\n (56, 28, 3)\n\n Args:\n x: A `Tensor`.\n shape: A `TensorShape` representing the shape of this tensor, a\n `TensorShapeProto`, a list, a tuple, or None.\n name: A name for this operation (optional). Defaults to \"EnsureShape\".\n\n Returns:\n A `Tensor`. Has the same type and contents as `x`.\n\n Raises:\n tf.errors.InvalidArgumentError: If `shape` is incompatible with the shape\n of `x`.\n ", "desc": "Updates the shape of a tensor and checks at runtime that the shape holds.", "type": "API"}, {"name": "tf.compat.v1.equal", "docs": "Returns the truth value of (x == y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise equality comparison, returning a Tensor of\n boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x == y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.erf", "docs": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.\n\n For example:\n\n >>> tf.math.erf([[1.0, 2.0, 3.0], [0.0, -1.0, -2.0]])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.erf(x.values, ...), x.dense_shape)`", "desc": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.", "type": "API"}, {"name": "tf.compat.v1.erfc", "docs": "Computes the complementary error function of `x` element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the complementary error function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.errors", "docs": "Exception types for TensorFlow errors.\n", "desc": "Exception types for TensorFlow errors.", "type": "API"}, {"name": "tf.compat.v1.errors.AbortedError", "docs": "The operation was aborted, typically due to a concurrent action.\n\n For example, running a\n `tf.QueueBase.enqueue`\n operation may raise `AbortedError` if a\n `tf.QueueBase.close` operation\n previously ran.\n\n @@__init__\n ", "desc": "The operation was aborted, typically due to a concurrent action.", "type": "API"}, {"name": "tf.compat.v1.errors.AlreadyExistsError", "docs": "Raised when an entity that we attempted to create already exists.\n\n For example, running an operation that saves a file\n (e.g. `tf.train.Saver.save`)\n could potentially raise this exception if an explicit filename for an\n existing file was passed.\n\n @@__init__\n ", "desc": "Raised when an entity that we attempted to create already exists.", "type": "API"}, {"name": "tf.compat.v1.errors.CancelledError", "docs": "Raised when an operation or step is cancelled.\n\n For example, a long-running operation (e.g.\n `tf.QueueBase.enqueue` may be\n cancelled by running another operation (e.g.\n `tf.QueueBase.close`,\n or by `tf.Session.close`.\n A step that is running such a long-running operation will fail by raising\n `CancelledError`.\n\n @@__init__\n ", "desc": "Raised when an operation or step is cancelled.", "type": "API"}, {"name": "tf.compat.v1.errors.DataLossError", "docs": "Raised when unrecoverable data loss or corruption is encountered.\n\n For example, this may be raised by running a\n `tf.WholeFileReader.read`\n operation, if the file is truncated while it is being read.\n\n @@__init__\n ", "desc": "Raised when unrecoverable data loss or corruption is encountered.", "type": "API"}, {"name": "tf.compat.v1.errors.DeadlineExceededError", "docs": "Raised when a deadline expires before an operation could complete.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "Raised when a deadline expires before an operation could complete.", "type": "API"}, {"name": "tf.compat.v1.errors.error_code_from_exception_type", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.errors.exception_type_from_error_code", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.errors.FailedPreconditionError", "docs": "Operation was rejected because the system is not in a state to execute it.\n\n This exception is most commonly raised when running an operation\n that reads a `tf.Variable`\n before it has been initialized.\n\n @@__init__\n ", "desc": "Operation was rejected because the system is not in a state to execute it.", "type": "API"}, {"name": "tf.compat.v1.errors.InternalError", "docs": "Raised when the system experiences an internal error.\n\n This exception is raised when some invariant expected by the runtime\n has been broken. Catching this exception is not recommended.\n\n @@__init__\n ", "desc": "Raised when the system experiences an internal error.", "type": "API"}, {"name": "tf.compat.v1.errors.InvalidArgumentError", "docs": "Raised when an operation receives an invalid argument.\n\n This error is typically raised when an op receives mismatched arguments.\n\n Example:\n\n >>> tf.reshape([1, 2, 3], (2,))\n Traceback (most recent call last):\n ...\n InvalidArgumentError: ...\n\n @@__init__\n ", "desc": "Raised when an operation receives an invalid argument.", "type": "API"}, {"name": "tf.compat.v1.errors.NotFoundError", "docs": "Raised when a requested entity (e.g., a file or directory) was not found.\n\n For example, running the\n `tf.WholeFileReader.read`\n operation could raise `NotFoundError` if it receives the name of a file that\n does not exist.\n\n @@__init__\n ", "desc": "Raised when a requested entity (e.g., a file or directory) was not found.", "type": "API"}, {"name": "tf.compat.v1.errors.OpError", "docs": "The base class for TensorFlow exceptions.\n\n Usually, TensorFlow will raise a more specific subclass of `OpError` from the\n `tf.errors` module.\n ", "desc": "The base class for TensorFlow exceptions.", "type": "API"}, {"name": "tf.compat.v1.errors.OutOfRangeError", "docs": "Raised when an operation iterates past the valid input range.\n\n This exception is raised in \"end-of-file\" conditions, such as when a\n `tf.QueueBase.dequeue`\n operation is blocked on an empty queue, and a\n `tf.QueueBase.close`\n operation executes.\n\n @@__init__\n ", "desc": "Raised when an operation iterates past the valid input range.", "type": "API"}, {"name": "tf.compat.v1.errors.PermissionDeniedError", "docs": "Raised when the caller does not have permission to run an operation.\n\n For example, running the\n `tf.WholeFileReader.read`\n operation could raise `PermissionDeniedError` if it receives the name of a\n file for which the user does not have the read file permission.\n\n @@__init__\n ", "desc": "Raised when the caller does not have permission to run an operation.", "type": "API"}, {"name": "tf.compat.v1.errors.raise_exception_on_not_ok_status", "docs": "Context manager to check for C API status.", "desc": "Context manager to check for C API status.", "type": "API"}, {"name": "tf.compat.v1.errors.ResourceExhaustedError", "docs": "Some resource has been exhausted.\n\n For example, this error might be raised if a per-user quota is\n exhausted, or perhaps the entire file system is out of space.\n\n @@__init__\n ", "desc": "Some resource has been exhausted.", "type": "API"}, {"name": "tf.compat.v1.errors.UnauthenticatedError", "docs": "The request does not have valid authentication credentials.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "The request does not have valid authentication credentials.", "type": "API"}, {"name": "tf.compat.v1.errors.UnavailableError", "docs": "Raised when the runtime is currently unavailable.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "Raised when the runtime is currently unavailable.", "type": "API"}, {"name": "tf.compat.v1.errors.UnimplementedError", "docs": "Raised when an operation has not been implemented.\n\n Some operations may raise this error when passed otherwise-valid\n arguments that it does not currently support. For example, running\n the `tf.nn.max_pool2d` operation\n would raise this error if pooling was requested on the batch dimension,\n because this is not yet supported.\n\n @@__init__\n ", "desc": "Raised when an operation has not been implemented.", "type": "API"}, {"name": "tf.compat.v1.errors.UnknownError", "docs": "Unknown error.\n\n An example of where this error may be returned is if a Status value\n received from another address space belongs to an error-space that\n is not known to this address space. Also, errors raised by APIs that\n do not return enough error information may be converted to this\n error.\n\n @@__init__\n ", "desc": "Unknown error.", "type": "API"}, {"name": "tf.compat.v1.estimator", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.estimator.add_metrics", "docs": "Creates a new `tf.estimator.Estimator` which has given metrics.\n\n Example:\n\n ```python\n def my_auc(labels, predictions):\n auc_metric = tf.keras.metrics.AUC(name=\"my_auc\")\n auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'])\n return {'auc': auc_metric}\n\n estimator = tf.estimator.DNNClassifier(...)\n estimator = tf.estimator.add_metrics(estimator, my_auc)\n estimator.train(...)\n estimator.evaluate(...)\n ```\n Example usage of custom metric which uses features:\n\n ```python\n def my_auc(labels, predictions, features):\n auc_metric = tf.keras.metrics.AUC(name=\"my_auc\")\n auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'],\n sample_weight=features['weight'])\n return {'auc': auc_metric}\n\n estimator = tf.estimator.DNNClassifier(...)\n estimator = tf.estimator.add_metrics(estimator, my_auc)\n estimator.train(...)\n estimator.evaluate(...)\n ```\n\n Args:\n estimator: A `tf.estimator.Estimator` object.\n metric_fn: A function which should obey the following signature:\n - Args: can only have following four arguments in any order:\n * predictions: Predictions `Tensor` or dict of `Tensor` created by given\n `estimator`.\n * features: Input `dict` of `Tensor` objects created by `input_fn` which\n is given to `estimator.evaluate` as an argument.\n * labels: Labels `Tensor` or dict of `Tensor` created by `input_fn`\n which is given to `estimator.evaluate` as an argument.\n * config: config attribute of the `estimator`.\n - Returns: Dict of metric results keyed by name. Final metrics are a\n union of this and `estimator's` existing metrics. If there is a name\n conflict between this and `estimator`s existing metrics, this will\n override the existing one. The values of the dict are the results of\n calling a metric function, namely a `(metric_tensor, update_op)` tuple.\n\n Returns:\n A new `tf.estimator.Estimator` which has a union of original metrics with\n given ones.\n ", "desc": "Creates a new `tf.estimator.Estimator` which has given metrics.", "type": "API"}, {"name": "tf.compat.v1.estimator.BaselineClassifier", "docs": "A classifier that can establish a simple baseline.\n\n This classifier ignores feature values and will learn to predict the average\n value of each label. For single-label problems, this will predict the\n probability distribution of the classes as seen in the labels. For multi-label\n problems, this will predict the fraction of examples that are positive for\n each class.\n\n Example:\n\n ```python\n\n # Build BaselineClassifier\n classifier = tf.estimator.BaselineClassifier(n_classes=3)\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n classifier.train(input_fn=input_fn_train)\n\n # Evaluate cross entropy between the test and train labels.\n loss = classifier.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # predict outputs the probability distribution of the classes as seen in\n # training.\n predictions = classifier.predict(new_samples)\n\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A classifier that can establish a simple baseline.", "type": "API"}, {"name": "tf.compat.v1.estimator.BaselineEstimator", "docs": "An estimator that can establish a simple baseline.\n\n The estimator uses a user-specified head.\n\n This estimator ignores feature values and will learn to predict the average\n value of each label. E.g. for single-label classification problems, this will\n predict the probability distribution of the classes as seen in the labels.\n For multi-label classification problems, it will predict the ratio of examples\n that contain each class.\n\n Example:\n\n ```python\n\n # Build baseline multi-label classifier.\n estimator = tf.estimator.BaselineEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3))\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n estimator.train(input_fn=input_fn_train)\n\n # Evaluates cross entropy between the test and train labels.\n loss = estimator.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # For each class, predicts the ratio of training examples that contain the\n # class.\n predictions = estimator.predict(new_samples)\n\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is specified in the `head` constructor (and not None) for\n the head passed to BaselineEstimator's constructor, a feature with\n `key=weight_column` whose value is a `Tensor`.\n ", "desc": "An estimator that can establish a simple baseline.", "type": "API"}, {"name": "tf.compat.v1.estimator.BaselineRegressor", "docs": "A regressor that can establish a simple baseline.\n\n This regressor ignores feature values and will learn to predict the average\n value of each label.\n\n Example:\n\n ```python\n\n # Build BaselineRegressor\n regressor = tf.estimator.BaselineRegressor()\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n regressor.train(input_fn=input_fn_train)\n\n # Evaluate squared-loss between the test and train targets.\n loss = regressor.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # predict outputs the mean value seen during training.\n predictions = regressor.predict(new_samples)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A regressor that can establish a simple baseline.", "type": "API"}, {"name": "tf.compat.v1.estimator.BestExporter", "docs": "This class exports the serving graph and checkpoints of the best models.\n\n This class performs a model export everytime the new model is better than any\n existing model.\n ", "desc": "This class exports the serving graph and checkpoints of the best models.", "type": "API"}, {"name": "tf.compat.v1.estimator.BinaryClassHead", "docs": "Creates a `Head` for single label binary classification.\n\n Uses `sigmoid_cross_entropy_with_logits` loss.\n\n The head expects `logits` with shape `[D0, D1, ... DN, 1]`.\n In many applications, the shape is `[batch_size, 1]`.\n\n `labels` must be a dense `Tensor` with shape matching `logits`, namely\n `[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string\n `Tensor` with values from the vocabulary. If `label_vocabulary` is not given,\n `labels` must be float `Tensor` with values in the interval `[0, 1]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n The loss is the weighted sum over the input dimensions. Namely, if the input\n labels have shape `[batch_size, 1]`, the loss is the weighted sum over\n `batch_size`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns loss\n with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support float `labels` with\n shape `[D0, D1, ... DN, 1]`. Namely, the head applies `label_vocabulary` to\n the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> head = tf.estimator.BinaryClassHead()\n >>> logits = np.array(((45,), (-41,),), dtype=np.float32)\n >>> labels = np.array(((1,), (1,),), dtype=np.int32)\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size\n >>> # = sum(0, 41) / 2 = 41 / 2 = 20.50\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 20.50\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n accuracy : 0.50\n accuracy_baseline : 1.00\n auc : 0.00\n auc_precision_recall : 1.00\n average_loss : 20.50\n label/mean : 1.00\n precision : 1.00\n prediction/mean : 0.50\n recall : 0.50\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[ 45.]\n [-41.]], shape=(2, 1), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.BinaryClassHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.BinaryClassHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n thresholds: Iterable of floats in the range `(0, 1)`. For binary\n classification metrics such as precision and recall, an eval metric is\n generated for each threshold value. This threshold is applied to the\n logistic values to determine the binary classification (i.e., above the\n threshold is `true`, below is `false`.\n label_vocabulary: A list or tuple of strings representing possible label\n values. If it is not given, that means labels are already encoded within\n [0, 1]. If given, labels must be string type and have any value in\n `label_vocabulary`. Note that errors will be raised if `label_vocabulary`\n is not provided but labels are strings.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by `batch size * label_dimension`.\n loss_fn: Optional loss function.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for single label binary classification.", "type": "API"}, {"name": "tf.compat.v1.estimator.CheckpointSaverHook", "docs": "Saves checkpoints every N steps or seconds.", "desc": "Saves checkpoints every N steps or seconds.", "type": "API"}, {"name": "tf.compat.v1.estimator.CheckpointSaverListener", "docs": "Interface for listeners that take action before or after checkpoint save.\n\n `CheckpointSaverListener` triggers only in steps when `CheckpointSaverHook` is\n triggered, and provides callbacks at the following points:\n - before using the session\n - before each call to `Saver.save()`\n - after each call to `Saver.save()`\n - at the end of session\n\n To use a listener, implement a class and pass the listener to a\n `CheckpointSaverHook`, as in this example:\n\n ```python\n class ExampleCheckpointSaverListener(CheckpointSaverListener):\n def begin(self):\n # You can add ops to the graph here.\n print('Starting the session.')\n self.your_tensor = ...\n\n def before_save(self, session, global_step_value):\n print('About to write a checkpoint')\n\n def after_save(self, session, global_step_value):\n print('Done writing checkpoint.')\n if decided_to_stop_training():\n return True\n\n def end(self, session, global_step_value):\n print('Done with the session.')\n\n ...\n listener = ExampleCheckpointSaverListener()\n saver_hook = tf.estimator.CheckpointSaverHook(\n checkpoint_dir, listeners=[listener])\n with\n tf.compat.v1.train.MonitoredTrainingSession(chief_only_hooks=[saver_hook]):\n ...\n ```\n\n A `CheckpointSaverListener` may simply take some action after every\n checkpoint save. It is also possible for the listener to use its own schedule\n to act less frequently, e.g. based on global_step_value. In this case,\n implementors should implement the `end()` method to handle actions related to\n the last checkpoint save. But the listener should not act twice if\n `after_save()` already handled this last checkpoint save.\n\n A `CheckpointSaverListener` can request training to be stopped, by returning\n True in `after_save`. Please note that, in replicated distributed training\n setting, only `chief` should use this behavior. Otherwise each worker will do\n their own evaluation, which may be wasteful of resources.\n ", "desc": "Interface for listeners that take action before or after checkpoint save.", "type": "API"}, {"name": "tf.compat.v1.estimator.classifier_parse_example_spec", "docs": "Generates parsing spec for tf.parse_example to be used with classifiers.\n\n If users keep data in tf.Example format, they need to call tf.parse_example\n with a proper feature spec. There are two main things that this utility helps:\n\n * Users need to combine parsing spec of features with labels and weights\n (if any) since they are all parsed from same tf.Example instance. This\n utility combines these specs.\n * It is difficult to map expected label by a classifier such as\n `DNNClassifier` to corresponding tf.parse_example spec. This utility encodes\n it by getting related information from users (key, dtype).\n\n Example output of parsing spec:\n\n ```python\n # Define features and transformations\n feature_b = tf.feature_column.numeric_column(...)\n feature_c_bucketized = tf.feature_column.bucketized_column(\n tf.feature_column.numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = tf.feature_column.crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]\n parsing_spec = tf.estimator.classifier_parse_example_spec(\n feature_columns, label_key='my-label', label_dtype=tf.string)\n\n # For the above example, classifier_parse_example_spec would return the dict:\n assert parsing_spec == {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n \"my-label\" : parsing_ops.FixedLenFeature([1], dtype=tf.string)\n }\n ```\n\n Example usage with a classifier:\n\n ```python\n feature_columns = # define features via tf.feature_column\n estimator = DNNClassifier(\n n_classes=1000,\n feature_columns=feature_columns,\n weight_column='example-weight',\n label_vocabulary=['photos', 'keep', ...],\n hidden_units=[256, 64, 16])\n # This label configuration tells the classifier the following:\n # * weights are retrieved with key 'example-weight'\n # * label is string and can be one of the following ['photos', 'keep', ...]\n # * integer id for label 'photos' is 0, 'keep' is 1, ...\n\n\n # Input builders\n def input_fn_train(): # Returns a tuple of features and labels.\n features = tf.contrib.learn.read_keyed_batch_features(\n file_pattern=train_files,\n batch_size=batch_size,\n # creates parsing configuration for tf.parse_example\n features=tf.estimator.classifier_parse_example_spec(\n feature_columns,\n label_key='my-label',\n label_dtype=tf.string,\n weight_column='example-weight'),\n reader=tf.RecordIOReader)\n labels = features.pop('my-label')\n return features, labels\n\n estimator.train(input_fn=input_fn_train)\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `FeatureColumn`.\n label_key: A string identifying the label. It means tf.Example stores labels\n with this key.\n label_dtype: A `tf.dtype` identifies the type of labels. By default it is\n `tf.int64`. If user defines a `label_vocabulary`, this should be set as\n `tf.string`. `tf.float32` labels are only supported for binary\n classification.\n label_default: used as label if label_key does not exist in given\n tf.Example. An example usage: let's say `label_key` is 'clicked' and\n tf.Example contains clicked data only for positive examples in following\n format `key:clicked, value:1`. This means that if there is no data with\n key 'clicked' it should count as negative example by setting\n `label_deafault=0`. Type of this value should be compatible with\n `label_dtype`.\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. If it is a string, it is\n used as a key to fetch weight tensor from the `features`. If it is a\n `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then\n weight_column.normalizer_fn is applied on it to get weight tensor.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If label is used in `feature_columns`.\n ValueError: If weight_column is used in `feature_columns`.\n ValueError: If any of the given `feature_columns` is not a `_FeatureColumn`\n instance.\n ValueError: If `weight_column` is not a `NumericColumn` instance.\n ValueError: if label_key is None.\n ", "desc": "Generates parsing spec for tf.parse_example to be used with classifiers.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNClassifier", "docs": "A classifier for TensorFlow DNN models.\n\n Example:\n\n ```python\n categorical_feature_a = categorical_column_with_hash_bucket(...)\n categorical_feature_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A classifier for TensorFlow DNN models.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNEstimator", "docs": "An estimator for TensorFlow DNN models with user-specified head.\n\n Example:\n\n ```python\n sparse_feature_a = sparse_column_with_hash_bucket(...)\n sparse_feature_b = sparse_column_with_hash_bucket(...)\n\n sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,\n ...)\n sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,\n ...)\n\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss and predicted output are determined by the specified head.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow DNN models with user-specified head.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNLinearCombinedClassifier", "docs": "An estimator for TensorFlow Linear and DNN joined classification models.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_id_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedClassifier(\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...),\n # warm-start settings\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined classification models.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNLinearCombinedEstimator", "docs": "An estimator for TensorFlow Linear and DNN joined models with custom head.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...))\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined models with custom head.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNLinearCombinedRegressor", "docs": "An estimator for TensorFlow Linear and DNN joined models for regression.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedRegressor(\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...),\n # warm-start settings\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined models for regression.", "type": "API"}, {"name": "tf.compat.v1.estimator.DNNRegressor", "docs": "A regressor for TensorFlow DNN models.\n\n Example:\n\n ```python\n categorical_feature_a = categorical_column_with_hash_bucket(...)\n categorical_feature_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A regressor for TensorFlow DNN models.", "type": "API"}, {"name": "tf.compat.v1.estimator.Estimator", "docs": "Estimator class to train and evaluate TensorFlow models.\n\n The `Estimator` object wraps a model which is specified by a `model_fn`,\n which, given inputs and a number of other parameters, returns the ops\n necessary to perform training, evaluation, or predictions.\n\n All outputs (checkpoints, event files, etc.) are written to `model_dir`, or a\n subdirectory thereof. If `model_dir` is not set, a temporary directory is\n used.\n\n The `config` argument can be passed `tf.estimator.RunConfig` object containing\n information about the execution environment. It is passed on to the\n `model_fn`, if the `model_fn` has a parameter named \"config\" (and input\n functions in the same manner). If the `config` parameter is not passed, it is\n instantiated by the `Estimator`. Not passing config means that defaults useful\n for local execution are used. `Estimator` makes config available to the model\n (for instance, to allow specialization based on the number of workers\n available), and also uses some of its fields to control internals, especially\n regarding checkpointing.\n\n The `params` argument contains hyperparameters. It is passed to the\n `model_fn`, if the `model_fn` has a parameter named \"params\", and to the input\n functions in the same manner. `Estimator` only passes params along, it does\n not inspect it. The structure of `params` is therefore entirely up to the\n developer.\n\n None of `Estimator`'s methods can be overridden in subclasses (its\n constructor enforces this). Subclasses should use `model_fn` to configure\n the base class, and may add methods implementing specialized functionality.\n\n See [estimators](https://tensorflow.org/guide/estimator) for more\n information.\n\n To warm-start an `Estimator`:\n\n ```python\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n ```\n\n For more details on warm-start configuration, see\n `tf.estimator.WarmStartSettings`.\n\n @compatibility(eager)\n Calling methods of `Estimator` will work while eager execution is enabled.\n However, the `model_fn` and `input_fn` is not executed eagerly, `Estimator`\n will switch to graph mode before calling all user-provided functions (incl.\n hooks), so their code has to be compatible with graph mode execution. Note\n that `input_fn` code using `tf.data` generally works in both graph and eager\n modes.\n @end_compatibility\n ", "desc": "Estimator class to train and evaluate TensorFlow models.", "type": "API"}, {"name": "tf.compat.v1.estimator.EstimatorSpec", "docs": "Ops and objects returned from a `model_fn` and passed to an `Estimator`.\n\n `EstimatorSpec` fully defines the model to be run by an `Estimator`.\n ", "desc": "Ops and objects returned from a `model_fn` and passed to an `Estimator`.", "type": "API"}, {"name": "tf.compat.v1.estimator.EvalSpec", "docs": "Configuration for the \"eval\" part for the `train_and_evaluate` call.\n\n `EvalSpec` combines details of evaluation of the trained model as well as its\n export. Evaluation consists of computing metrics to judge the performance of\n the trained model. Export writes out the trained model on to external\n storage.\n ", "desc": "Configuration for the \"eval\" part for the `train_and_evaluate` call.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental", "docs": "Public API for tf.estimator.experimental namespace.\n", "desc": "Public API for tf.estimator.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn", "docs": "Build a supervised_input_receiver_fn for raw features and labels.\n\n This function wraps tensor placeholders in a supervised_receiver_fn\n with the expectation that the features and labels appear precisely as\n the model_fn expects them. Features and labels can therefore be dicts of\n tensors, or raw tensors.\n\n Args:\n features: a dict of string to `Tensor` or `Tensor`.\n labels: a dict of string to `Tensor` or `Tensor`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A supervised_input_receiver_fn.\n\n Raises:\n ValueError: if features and labels have overlapping keys.\n ", "desc": "Build a supervised_input_receiver_fn for raw features and labels.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.call_logit_fn", "docs": "Calls logit_fn (experimental).\n\n THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs\n for logit and model composition.\n\n A utility function that calls the provided logit_fn with the relevant subset\n of provided arguments. Similar to tf.estimator._call_model_fn().\n\n Args:\n logit_fn: A logit_fn as defined above.\n features: The features dict.\n mode: TRAIN / EVAL / PREDICT ModeKeys.\n params: The hyperparameter dict.\n config: The configuration object.\n\n Returns:\n A logit Tensor, the output of logit_fn.\n\n Raises:\n ValueError: if logit_fn does not return a Tensor or a dictionary mapping\n strings to Tensors.\n ", "desc": "Calls logit_fn (experimental).", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.dnn_logit_fn_builder", "docs": "Function builder for a dnn logit_fn.\n\n Args:\n units: An int indicating the dimension of the logit layer. In the MultiHead\n case, this should be the sum of all component Heads' logit dimensions.\n hidden_units: Iterable of integer number of hidden units per layer.\n feature_columns: Iterable of `feature_column._FeatureColumn` model inputs.\n activation_fn: Activation function applied to each layer.\n dropout: When not `None`, the probability we will drop out a given\n coordinate.\n input_layer_partitioner: Partitioner for input layer.\n batch_norm: Whether to use batch normalization after each hidden layer.\n\n Returns:\n A logit_fn (see below).\n\n Raises:\n ValueError: If units is not an int.\n ", "desc": "Function builder for a dnn logit_fn.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook", "docs": "Hook to run evaluation in training without a checkpoint.\n\n Example:\n\n ```python\n def train_input_fn():\n ...\n return train_dataset\n\n def eval_input_fn():\n ...\n return eval_dataset\n\n estimator = tf.estimator.DNNClassifier(...)\n\n evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(\n estimator, eval_input_fn)\n estimator.train(train_input_fn, hooks=[evaluator])\n ```\n\n Current limitations of this approach are:\n\n * It doesn't support multi-node distributed mode.\n * It doesn't support saveable objects other than variables (such as boosted\n tree support)\n * It doesn't support custom saver logic (such as ExponentialMovingAverage\n support)\n\n ", "desc": "Hook to run evaluation in training without a checkpoint.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.KMeans", "docs": "An Estimator for K-Means clustering.\n\n Example:\n ```\n import numpy as np\n import tensorflow as tf\n\n num_points = 100\n dimensions = 2\n points = np.random.uniform(0, 1000, [num_points, dimensions])\n\n def input_fn():\n return tf.compat.v1.train.limit_epochs(\n tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)\n\n num_clusters = 5\n kmeans = tf.compat.v1.estimator.experimental.KMeans(\n num_clusters=num_clusters, use_mini_batch=False)\n\n # train\n num_iterations = 10\n previous_centers = None\n for _ in xrange(num_iterations):\n kmeans.train(input_fn)\n cluster_centers = kmeans.cluster_centers()\n if previous_centers is not None:\n print 'delta:', cluster_centers - previous_centers\n previous_centers = cluster_centers\n print 'score:', kmeans.score(input_fn)\n print 'cluster centers:', cluster_centers\n\n # map the input points to their clusters\n cluster_indices = list(kmeans.predict_cluster_index(input_fn))\n for i, point in enumerate(points):\n cluster_index = cluster_indices[i]\n center = cluster_centers[cluster_index]\n print 'point:', point, 'is in cluster', cluster_index, 'centered at', center\n ```\n\n The `SavedModel` saved by the `export_saved_model` method does not include the\n cluster centers. However, the cluster centers may be retrieved by the\n latest checkpoint saved during training. Specifically,\n ```\n kmeans.cluster_centers()\n ```\n is equivalent to\n ```\n tf.train.load_variable(\n kmeans.model_dir, KMeansClustering.CLUSTER_CENTERS_VAR_NAME)\n ```\n ", "desc": "An Estimator for K-Means clustering.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.linear_logit_fn_builder", "docs": "Function builder for a linear logit_fn.\n\n Args:\n units: An int indicating the dimension of the logit layer.\n feature_columns: An iterable containing all the feature columns used by the\n model.\n sparse_combiner: A string specifying how to reduce if a categorical column\n is multivalent. One of \"mean\", \"sqrtn\", and \"sum\".\n\n Returns:\n A logit_fn (see below).\n\n ", "desc": "Function builder for a linear logit_fn.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.LinearSDCA", "docs": "Stochastic Dual Coordinate Ascent helper for linear estimators.\n\n Objects of this class are intended to be provided as the optimizer argument\n (though LinearSDCA objects do not implement the `tf.train.Optimizer`\n interface)\n when creating `tf.estimator.LinearClassifier` or\n `tf.estimator.LinearRegressor`.\n\n SDCA can only be used with `LinearClassifier` and `LinearRegressor` under the\n following conditions:\n\n - Feature columns are of type V2.\n - Multivalent categorical columns are not normalized. In other words the\n `sparse_combiner` argument in the estimator constructor should be \"sum\".\n - For classification: binary label.\n - For regression: one-dimensional label.\n\n Example usage:\n\n ```python\n real_feature_column = numeric_column(...)\n sparse_feature_column = categorical_column_with_hash_bucket(...)\n linear_sdca = tf.estimator.experimental.LinearSDCA(\n example_id_column='example_id',\n num_loss_partitions=1,\n num_table_shards=1,\n symmetric_l2_regularization=2.0)\n classifier = tf.estimator.LinearClassifier(\n feature_columns=[real_feature_column, sparse_feature_column],\n weight_column=...,\n optimizer=linear_sdca)\n classifier.train(input_fn_train, steps=50)\n classifier.evaluate(input_fn=input_fn_eval)\n ```\n\n Here the expectation is that the `input_fn_*` functions passed to train and\n evaluate return a pair (dict, label_tensor) where dict has `example_id_column`\n as `key` whose value is a `Tensor` of shape [batch_size] and dtype string.\n num_loss_partitions defines sigma' in eq (11) of [3]. Convergence of (global)\n loss is guaranteed if `num_loss_partitions` is larger or equal to the product\n `(#concurrent train ops/per worker) x (#workers)`. Larger values for\n `num_loss_partitions` lead to slower convergence. The recommended value for\n `num_loss_partitions` in `tf.estimator` (where currently there is one process\n per worker) is the number of workers running the train steps. It defaults to 1\n (single machine).\n `num_table_shards` defines the number of shards for the internal state\n table, typically set to match the number of parameter servers for large\n data sets.\n\n The SDCA algorithm was originally introduced in [1] and it was followed by\n the L1 proximal step [2], a distributed version [3] and adaptive sampling [4].\n [1] www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf\n [2] https://arxiv.org/pdf/1309.2375.pdf\n [3] https://arxiv.org/pdf/1502.03508.pdf\n [4] https://arxiv.org/pdf/1502.08053.pdf\n Details specific to this implementation are provided in:\n https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear_optimizer/doc/sdca.ipynb\n ", "desc": "Stochastic Dual Coordinate Ascent helper for linear estimators.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.make_early_stopping_hook", "docs": "Creates early-stopping hook.\n\n Returns a `SessionRunHook` that stops training when `should_stop_fn` returns\n `True`.\n\n Usage example:\n\n ```python\n estimator = ...\n hook = early_stopping.make_early_stopping_hook(\n estimator, should_stop_fn=make_stop_fn(...))\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n should_stop_fn: `callable`, function that takes no arguments and returns a\n `bool`. If the function returns `True`, stopping will be initiated by the\n chief.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n A `SessionRunHook` that periodically executes `should_stop_fn` and initiates\n early stopping if the function returns `True`.\n\n Raises:\n TypeError: If `estimator` is not of type `tf.estimator.Estimator`.\n ValueError: If both `run_every_secs` and `run_every_steps` are set.\n ", "desc": "Creates early-stopping hook.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook", "docs": "Creates a proper StopAtCheckpointStepHook based on chief status.", "desc": "Creates a proper StopAtCheckpointStepHook based on chief status.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.stop_if_higher_hook", "docs": "Creates hook to stop if the given metric is higher than the threshold.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if accuracy becomes higher than 0.9.\n hook = early_stopping.stop_if_higher_hook(estimator, \"accuracy\", 0.9)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n threshold: Numeric threshold for the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric is higher than specified threshold and initiates\n early stopping if true.\n ", "desc": "Creates hook to stop if the given metric is higher than the threshold.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.stop_if_lower_hook", "docs": "Creates hook to stop if the given metric is lower than the threshold.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if loss becomes lower than 100.\n hook = early_stopping.stop_if_lower_hook(estimator, \"loss\", 100)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n threshold: Numeric threshold for the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric is lower than specified threshold and initiates\n early stopping if true.\n ", "desc": "Creates hook to stop if the given metric is lower than the threshold.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook", "docs": "Creates hook to stop if metric does not decrease within given max steps.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if loss does not decrease in over 100000 steps.\n hook = early_stopping.stop_if_no_decrease_hook(estimator, \"loss\", 100000)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n max_steps_without_decrease: `int`, maximum number of training steps with no\n decrease in the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric shows no decrease over given maximum number of\n training steps, and initiates early stopping if true.\n ", "desc": "Creates hook to stop if metric does not decrease within given max steps.", "type": "API"}, {"name": "tf.compat.v1.estimator.experimental.stop_if_no_increase_hook", "docs": "Creates hook to stop if metric does not increase within given max steps.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if accuracy does not increase in over 100000 steps.\n hook = early_stopping.stop_if_no_increase_hook(estimator, \"accuracy\", 100000)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n max_steps_without_increase: `int`, maximum number of training steps with no\n increase in the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric shows no increase over given maximum number of\n training steps, and initiates early stopping if true.\n ", "desc": "Creates hook to stop if metric does not increase within given max steps.", "type": "API"}, {"name": "tf.compat.v1.estimator.export", "docs": "All public utility methods for exporting Estimator to SavedModel.\n\nThis file includes functions and constants from core (model_utils) and export.py\n\n", "desc": "All public utility methods for exporting Estimator to SavedModel.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn", "docs": "Build a serving_input_receiver_fn expecting fed tf.Examples.\n\n Creates a serving_input_receiver_fn that expects a serialized tf.Example fed\n into a string placeholder. The function parses the tf.Example according to\n the provided feature_spec, and returns all parsed Tensors as features.\n\n Args:\n feature_spec: a dict of string to `VarLenFeature`/`FixedLenFeature`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A serving_input_receiver_fn suitable for use in serving.\n ", "desc": "Build a serving_input_receiver_fn expecting fed tf.Examples.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn", "docs": "Build a serving_input_receiver_fn expecting feature Tensors.\n\n Creates an serving_input_receiver_fn that expects all features to be fed\n directly.\n\n Args:\n features: a dict of string to `Tensor`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A serving_input_receiver_fn.\n ", "desc": "Build a serving_input_receiver_fn expecting feature Tensors.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.ClassificationOutput", "docs": "Represents the output of a classification head.\n\n Either classes or scores or both must be set.\n\n The classes `Tensor` must provide string labels, not integer class IDs.\n\n If only classes is set, it is interpreted as providing top-k results in\n descending order.\n\n If only scores is set, it is interpreted as providing a score for every class\n in order of class ID.\n\n If both classes and scores are set, they are interpreted as zipped, so each\n score corresponds to the class at the same index. Clients should not depend\n on the order of the entries.\n ", "desc": "Represents the output of a classification head.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.EvalOutput", "docs": "Represents the output of a supervised eval process.\n\n This class generates the appropriate signature def for exporting\n eval output by type-checking and wrapping loss, predictions, and metrics\n values.\n ", "desc": "Represents the output of a supervised eval process.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.ExportOutput", "docs": "Represents an output of a model that can be served.\n\n These typically correspond to model heads.\n ", "desc": "Represents an output of a model that can be served.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.PredictOutput", "docs": "Represents the output of a generic prediction head.\n\n A generic prediction need not be either a classification or a regression.\n\n Named outputs must be provided as a dict from string to `Tensor`,\n ", "desc": "Represents the output of a generic prediction head.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.RegressionOutput", "docs": "Represents the output of a regression head.", "desc": "Represents the output of a regression head.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.ServingInputReceiver", "docs": "A return type for a serving_input_receiver_fn.\n\n Attributes:\n features: A `Tensor`, `SparseTensor`, or dict of string or int to `Tensor`\n or `SparseTensor`, specifying the features to be passed to the model.\n Note: if `features` passed is not a dict, it will be wrapped in a dict\n with a single entry, using 'feature' as the key. Consequently, the\n model\n must accept a feature dict of the form {'feature': tensor}. You may use\n `TensorServingInputReceiver` if you want the tensor to be passed as is.\n receiver_tensors: A `Tensor`, `SparseTensor`, or dict of string to `Tensor`\n or `SparseTensor`, specifying input nodes where this receiver expects to\n be fed by default. Typically, this is a single placeholder expecting\n serialized `tf.Example` protos.\n receiver_tensors_alternatives: a dict of string to additional groups of\n receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict\n of string to `Tensor` or`SparseTensor`. These named receiver tensor\n alternatives generate additional serving signatures, which may be used to\n feed inputs at different points within the input receiver subgraph. A\n typical usage is to allow feeding raw feature `Tensor`s *downstream* of\n the tf.parse_example() op. Defaults to None.\n ", "desc": "A return type for a serving_input_receiver_fn.", "type": "API"}, {"name": "tf.compat.v1.estimator.export.TensorServingInputReceiver", "docs": "A return type for a serving_input_receiver_fn.\n\n This is for use with models that expect a single `Tensor` or `SparseTensor`\n as an input feature, as opposed to a dict of features.\n\n The normal `ServingInputReceiver` always returns a feature dict, even if it\n contains only one entry, and so can be used only with models that accept such\n a dict. For models that accept only a single raw feature, the\n `serving_input_receiver_fn` provided to `Estimator.export_saved_model()`\n should return this `TensorServingInputReceiver` instead. See:\n https://github.com/tensorflow/tensorflow/issues/11674\n\n Note that the receiver_tensors and receiver_tensor_alternatives arguments\n will be automatically converted to the dict representation in either case,\n because the SavedModel format requires each input `Tensor` to have a name\n (provided by the dict key).\n\n Attributes:\n features: A single `Tensor` or `SparseTensor`, representing the feature to\n be passed to the model.\n receiver_tensors: A `Tensor`, `SparseTensor`, or dict of string to `Tensor`\n or `SparseTensor`, specifying input nodes where this receiver expects to\n be fed by default. Typically, this is a single placeholder expecting\n serialized `tf.Example` protos.\n receiver_tensors_alternatives: a dict of string to additional groups of\n receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict\n of string to `Tensor` or`SparseTensor`. These named receiver tensor\n alternatives generate additional serving signatures, which may be used to\n feed inputs at different points within the input receiver subgraph. A\n typical usage is to allow feeding raw feature `Tensor`s *downstream* of\n the tf.parse_example() op. Defaults to None.\n ", "desc": "A return type for a serving_input_receiver_fn.", "type": "API"}, {"name": "tf.compat.v1.estimator.Exporter", "docs": "A class representing a type of model export.", "desc": "A class representing a type of model export.", "type": "API"}, {"name": "tf.compat.v1.estimator.FeedFnHook", "docs": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "desc": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "type": "API"}, {"name": "tf.compat.v1.estimator.FinalExporter", "docs": "This class exports the serving graph and checkpoints at the end.\n\n This class performs a single export at the end of training.\n ", "desc": "This class exports the serving graph and checkpoints at the end.", "type": "API"}, {"name": "tf.compat.v1.estimator.FinalOpsHook", "docs": "A hook which evaluates `Tensors` at the end of a session.", "desc": "A hook which evaluates `Tensors` at the end of a session.", "type": "API"}, {"name": "tf.compat.v1.estimator.GlobalStepWaiterHook", "docs": "Delays execution until global step reaches `wait_until_step`.\n\n This hook delays execution until global step reaches to `wait_until_step`. It\n is used to gradually start workers in distributed settings. One example usage\n would be setting `wait_until_step=int(K*log(task_id+1))` assuming that\n task_id=0 is the chief.\n ", "desc": "Delays execution until global step reaches `wait_until_step`.", "type": "API"}, {"name": "tf.compat.v1.estimator.Head", "docs": "Interface for the head/top of a model.\n\n Head sits on top of the model network and handles computing the outputs of\n the network. Given logits (or output of a hidden layer), a Head knows how to\n compute predictions, loss, train_op, metrics and export outputs. It is meant\n to:\n\n 1. Simplify writing model_fn and to make model_fn more configurable for\n Estimator.\n 2. Simpilfy creating loss and metrics for the train and test loop in Eager\n execution.\n 3. Support wide range of machine learning models. Since most heads can work\n with logits, they can support DNN, RNN, Wide, Wide&Deep,\n Global objectives, Gradient boosted trees and many other types\n of machine learning models.\n\n Common usage:\n Here is simplified model_fn to build a DNN regression model.\n ```python\n def _my_dnn_model_fn(features, labels, mode, params, config=None):\n # Optionally your callers can pass head to model_fn as a param.\n head = tf.estimator.RegressionHead(...)\n\n feature_columns = tf.feature_column.numeric_column(...)\n feature_layer = tf.keras.layers.DenseFeatures(feature_columns)\n inputs = feature_layer(features)\n\n # Compute logits with tf.keras.layers API\n hidden_layer0 = tf.keras.layers.Dense(\n units=1000, activation=\"relu\")(inputs)\n hidden_layer1 = tf.keras.layers.Dense(\n units=500, activation=\"relu\")(hidden_layer0)\n logits = tf.keras.layers.Dense(\n units=head.logits_dimension, activation=None)(hidden_layer1)\n\n # Or use Keras model for logits computation\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(units=1000, activation=\"relu\"))\n model.add(tf.keras.layers.Dense(units=500, activation=\"relu\"))\n model.add(tf.keras.layers.Dense(\n units=head.logits_dimension, activation=None))\n logits = model(inputs)\n\n return head.create_estimator_spec(\n features=features,\n labels=labels,\n mode=mode,\n logits=logits,\n optimizer=optimizer)\n ```\n ", "desc": "Interface for the head/top of a model.", "type": "API"}, {"name": "tf.compat.v1.estimator.inputs", "docs": "Utility methods to create simple input_fns.\n", "desc": "Utility methods to create simple input_fns.", "type": "API"}, {"name": "tf.compat.v1.estimator.inputs.numpy_input_fn", "docs": "Returns input function that would feed dict of numpy arrays into the model.\n\n This returns a function outputting `features` and `targets` based on the dict\n of numpy arrays. The dict `features` has the same keys as the `x`. The dict\n `targets` has the same keys as the `y` if `y` is a dict.\n\n Example:\n\n ```python\n age = np.arange(4) * 1.0\n height = np.arange(32, 36)\n x = {'age': age, 'height': height}\n y = np.arange(-32, -28)\n\n with tf.Session() as session:\n input_fn = numpy_io.numpy_input_fn(\n x, y, batch_size=2, shuffle=False, num_epochs=1)\n ```\n\n Args:\n x: numpy array object or dict of numpy array objects. If an array, the array\n will be treated as a single feature.\n y: numpy array object or dict of numpy array object. `None` if absent.\n batch_size: Integer, size of batches to return.\n num_epochs: Integer, number of epochs to iterate over data. If `None` will\n run forever.\n shuffle: Boolean, if True shuffles the queue. Avoid shuffle at prediction\n time.\n queue_capacity: Integer, size of queue to accumulate.\n num_threads: Integer, number of threads used for reading and enqueueing. In\n order to have predicted and repeatable order of reading and enqueueing,\n such as in prediction and evaluation mode, `num_threads` should be 1.\n\n Returns:\n Function, that has signature of ()->(dict of `features`, `targets`)\n\n Raises:\n ValueError: if the shape of `y` mismatches the shape of values in `x` (i.e.,\n values in `x` have same shape).\n ValueError: if duplicate keys are in both `x` and `y` when `y` is a dict.\n ValueError: if x or y is an empty dict.\n TypeError: `x` is not a dict or array.\n ValueError: if 'shuffle' is not provided or a bool.\n ", "desc": "Returns input function that would feed dict of numpy arrays into the model.", "type": "API"}, {"name": "tf.compat.v1.estimator.inputs.pandas_input_fn", "docs": "Returns input function that would feed Pandas DataFrame into the model.\n\n Note: `y`'s index must match `x`'s index.\n\n Args:\n x: pandas `DataFrame` object.\n y: pandas `Series` object or `DataFrame`. `None` if absent.\n batch_size: int, size of batches to return.\n num_epochs: int, number of epochs to iterate over data. If not `None`, read\n attempts that would exceed this value will raise `OutOfRangeError`.\n shuffle: bool, whether to read the records in random order.\n queue_capacity: int, size of the read queue. If `None`, it will be set\n roughly to the size of `x`.\n num_threads: Integer, number of threads used for reading and enqueueing. In\n order to have predicted and repeatable order of reading and enqueueing,\n such as in prediction and evaluation mode, `num_threads` should be 1.\n target_column: str, name to give the target column `y`. This parameter is\n not used when `y` is a `DataFrame`.\n\n Returns:\n Function, that has signature of ()->(dict of `features`, `target`)\n\n Raises:\n ValueError: if `x` already contains a column with the same name as `y`, or\n if the indexes of `x` and `y` don't match.\n ValueError: if 'shuffle' is not provided or a bool.\n ", "desc": "Returns input function that would feed Pandas DataFrame into the model.", "type": "API"}, {"name": "tf.compat.v1.estimator.LatestExporter", "docs": "This class regularly exports the serving graph and checkpoints.\n\n In addition to exporting, this class also garbage collects stale exports.\n ", "desc": "This class regularly exports the serving graph and checkpoints.", "type": "API"}, {"name": "tf.compat.v1.estimator.LinearClassifier", "docs": "Linear classifier model.\n\n Train a linear model to classify instances into one of multiple possible\n classes. When number of possible classes is 2, this is binary classification.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.exponential_decay(\n learning_rate=0.1,\n global_step=tf.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `SparseColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedSparseColumn`, two features: the first with\n `key` the id column name, the second with `key` the weight column name.\n Both features' `value` must be a `SparseTensor`.\n - if `column` is a `RealValuedColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "Linear classifier model.", "type": "API"}, {"name": "tf.compat.v1.estimator.LinearEstimator", "docs": "An estimator for TensorFlow linear models with user-specified head.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss and predicted output are determined by the specified head.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow linear models with user-specified head.", "type": "API"}, {"name": "tf.compat.v1.estimator.LinearRegressor", "docs": "An estimator for TensorFlow Linear regression problems.\n\n Train a linear regression model to predict label value given observation of\n feature values.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a KeyError:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `SparseColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedSparseColumn`, two features: the first with\n `key` the id column name, the second with `key` the weight column name.\n Both features' `value` must be a `SparseTensor`.\n - if `column` is a `RealValuedColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear regression problems.", "type": "API"}, {"name": "tf.compat.v1.estimator.LoggingTensorHook", "docs": "Prints the given tensors every N local steps, every N seconds, or at end.\n\n The tensors will be printed to the log, with `INFO` severity. If you are not\n seeing the logs, you might want to add the following line after your imports:\n\n ```python\n tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)\n ```\n\n Note that if `at_end` is True, `tensors` should not include any tensor\n whose evaluation produces a side effect such as consuming additional inputs.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n\n ", "desc": "Prints the given tensors every N local steps, every N seconds, or at end.", "type": "API"}, {"name": "tf.compat.v1.estimator.LogisticRegressionHead", "docs": "Creates a `Head` for logistic regression.\n\n Uses `sigmoid_cross_entropy_with_logits` loss, which is the same as\n `BinaryClassHead`. The differences compared to `BinaryClassHead` are:\n\n * Does not support `label_vocabulary`. Instead, labels must be float in the\n range [0, 1].\n * Does not calculate some metrics that do not make sense, such as AUC.\n * In `PREDICT` mode, only returns logits and predictions\n (`=tf.sigmoid(logits)`), whereas `BinaryClassHead` also returns\n probabilities, classes, and class_ids.\n * Export output defaults to `RegressionOutput`, whereas `BinaryClassHead`\n defaults to `PredictOutput`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, 1]`.\n In many applications, the shape is `[batch_size, 1]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.\n\n This is implemented as a generalized linear model, see\n https://en.wikipedia.org/wiki/Generalized_linear_model.\n\n The head can be used with a canned estimator. Example:\n\n ```python\n my_head = tf.estimator.LogisticRegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.LogisticRegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch\n size * label_dimension`.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for logistic regression.", "type": "API"}, {"name": "tf.compat.v1.estimator.ModeKeys", "docs": "Standard names for Estimator model modes.\n\n The following standard keys are defined:\n\n * `TRAIN`: training/fitting mode.\n * `EVAL`: testing/evaluation mode.\n * `PREDICT`: predication/inference mode.\n ", "desc": "Standard names for Estimator model modes.", "type": "API"}, {"name": "tf.compat.v1.estimator.MultiClassHead", "docs": "Creates a `Head` for multi class classification.\n\n Uses `sparse_softmax_cross_entropy` loss.\n\n The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`.\n In many applications, the shape is `[batch_size, n_classes]`.\n\n `labels` must be a dense `Tensor` with shape matching `logits`, namely\n `[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string\n `Tensor` with values from the vocabulary. If `label_vocabulary` is not given,\n `labels` must be an integer `Tensor` with values specifying the class index.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n The loss is the weighted sum over the input dimensions. Namely, if the input\n labels have shape `[batch_size, 1]`, the loss is the weighted sum over\n `batch_size`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns\n unreduced loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support\n integer `labels` with shape `[D0, D1, ... DN, 1]`. Namely, the head applies\n `label_vocabulary` to the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> n_classes = 3\n >>> head = tf.estimator.MultiClassHead(n_classes)\n >>> logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)\n >>> labels = np.array(((1,), (1,)), dtype=np.int64)\n >>> features = {'x': np.array(((42,),), dtype=np.int32)}\n >>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size\n >>> # = sum(10, 0) / 2 = 5.\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 5.00\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n accuracy : 0.50\n average_loss : 5.00\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[10. 0. 0.]\n [ 0. 10. 0.]], shape=(2, 3), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.MultiClassHead(n_classes=3)\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.MultiClassHead(n_classes=3)\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n n_classes: Number of classes, must be greater than 2 (for 2 classes, use\n `BinaryClassHead`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_vocabulary: A list or tuple of strings representing possible label\n values. If it is not given, that means labels are already encoded as an\n integer within [0, n_classes). If given, labels must be of string type and\n have any value in `label_vocabulary`. Note that errors will be raised if\n `label_vocabulary` is not provided but labels are strings. If both\n `n_classes` and `label_vocabulary` are provided, `label_vocabulary` should\n contain exactly `n_classes` items.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by `batch size * label_dimension`.\n loss_fn: Optional loss function.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for multi class classification.", "type": "API"}, {"name": "tf.compat.v1.estimator.MultiHead", "docs": "Creates a `Head` for multi-objective learning.\n\n This class merges the output of multiple `Head` objects. Specifically:\n\n * For training, sums losses of each head, calls `train_op_fn` with this\n final loss.\n * For eval, merges metrics by adding `head.name` suffix to the keys in eval\n metrics, such as `precision/head1.name`, `precision/head2.name`.\n * For prediction, merges predictions and updates keys in prediction dict to a\n 2-tuple, `(head.name, prediction_key)`. Merges `export_outputs` such that\n by default the first head is served.\n\n Usage:\n\n >>> head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1')\n >>> head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2')\n >>> multi_head = tf.estimator.MultiHead([head1, head2])\n >>> logits = {\n ... 'head1': np.array([[-10., 10.], [-15., 10.]], dtype=np.float32),\n ... 'head2': np.array([[20., -20., 20.], [-30., 20., -20.]],\n ... dtype=np.float32),}\n >>> labels = {\n ... 'head1': np.array([[1, 0], [1, 1]], dtype=np.int64),\n ... 'head2': np.array([[0, 1, 0], [1, 1, 0]], dtype=np.int64),}\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # For large logits, sigmoid cross entropy loss is approximated as:\n >>> # loss = labels * (logits < 0) * (-logits) +\n >>> # (1 - labels) * (logits > 0) * logits =>\n >>> # head1: expected_unweighted_loss = [[10., 10.], [15., 0.]]\n >>> # loss1 = ((10 + 10) / 2 + (15 + 0) / 2) / 2 = 8.75\n >>> # head2: expected_unweighted_loss = [[20., 20., 20.], [30., 0., 0]]\n >>> # loss2 = ((20 + 20 + 20) / 3 + (30 + 0 + 0) / 3) / 2 = 15.00\n >>> # loss = loss1 + loss2 = 8.75 + 15.00 = 23.75\n >>> loss = multi_head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 23.75\n >>> eval_metrics = multi_head.metrics()\n >>> updated_metrics = multi_head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n auc/head1 : 0.17\n auc/head2 : 0.33\n auc_precision_recall/head1 : 0.60\n auc_precision_recall/head2 : 0.40\n average_loss/head1 : 8.75\n average_loss/head2 : 15.00\n loss/head1 : 8.75\n loss/head2 : 15.00\n >>> preds = multi_head.predictions(logits)\n >>> print(preds[('head1', 'logits')])\n tf.Tensor(\n [[-10. 10.]\n [-15. 10.]], shape=(2, 2), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n # In `input_fn`, specify labels as a dict keyed by head name:\n def input_fn():\n features = ...\n labels1 = ...\n labels2 = ...\n return features, {'head1.name': labels1, 'head2.name': labels2}\n\n # In `model_fn`, specify logits as a dict keyed by head name:\n def model_fn(features, labels, mode):\n # Create simple heads and specify head name.\n head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')\n head2 = tf.estimator.BinaryClassHead(name='head2')\n # Create MultiHead from two simple heads.\n head = tf.estimator.MultiHead([head1, head2])\n # Create logits for each head, and combine them into a dict.\n logits1, logits2 = logit_fn()\n logits = {'head1.name': logits1, 'head2.name': logits2}\n # Return the merged EstimatorSpec\n return head.create_estimator_spec(..., logits=logits, ...)\n\n # Create an estimator with this model_fn.\n estimator = tf.estimator.Estimator(model_fn=model_fn)\n estimator.train(input_fn=input_fn)\n ```\n\n Also supports `logits` as a `Tensor` of shape\n `[D0, D1, ... DN, logits_dimension]`. It will split the `Tensor` along the\n last dimension and distribute it appropriately among the heads. E.g.:\n\n ```python\n # Input logits.\n logits = np.array([[-1., 1., 2., -2., 2.], [-1.5, 1., -3., 2., -2.]],\n dtype=np.float32)\n # Suppose head1 and head2 have the following logits dimension.\n head1.logits_dimension = 2\n head2.logits_dimension = 3\n # After splitting, the result will be:\n logits_dict = {'head1_name': [[-1., 1.], [-1.5, 1.]],\n 'head2_name': [[2., -2., 2.], [-3., 2., -2.]]}\n ```\n\n Usage:\n\n ```python\n def model_fn(features, labels, mode):\n # Create simple heads and specify head name.\n head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')\n head2 = tf.estimator.BinaryClassHead(name='head2')\n # Create multi-head from two simple heads.\n head = tf.estimator.MultiHead([head1, head2])\n # Create logits for the multihead. The result of logits is a `Tensor`.\n logits = logit_fn(logits_dimension=head.logits_dimension)\n # Return the merged EstimatorSpec\n return head.create_estimator_spec(..., logits=logits, ...)\n ```\n\n Args:\n heads: List or tuple of `Head` instances. All heads must have `name`\n specified. The first head in the list is the default used at serving time.\n head_weights: Optional list of weights, same length as `heads`. Used when\n merging losses to calculate the weighted sum of losses from each head. If\n `None`, all losses are weighted equally.\n ", "desc": "Creates a `Head` for multi-objective learning.", "type": "API"}, {"name": "tf.compat.v1.estimator.MultiLabelHead", "docs": "Creates a `Head` for multi-label classification.\n\n Multi-label classification handles the case where each example may have zero\n or more associated labels, from a discrete set. This is distinct from\n `MultiClassHead` which has exactly one label per example.\n\n Uses `sigmoid_cross_entropy` loss average over classes and weighted sum over\n the batch. Namely, if the input logits have shape `[batch_size, n_classes]`,\n the loss is the average over `n_classes` and the weighted sum over\n `batch_size`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. In many\n applications, the shape is `[batch_size, n_classes]`.\n\n Labels can be:\n\n * A multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`\n * An integer `SparseTensor` of class indices. The `dense_shape` must be\n `[D0, D1, ... DN, ?]` and the values within `[0, n_classes)`.\n * If `label_vocabulary` is given, a string `SparseTensor`. The `dense_shape`\n must be `[D0, D1, ... DN, ?]` and the values within `label_vocabulary` or a\n multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features)` as arguments and returns unreduced loss with\n shape `[D0, D1, ... DN, 1]`. `loss_fn` must support indicator `labels` with\n shape `[D0, D1, ... DN, n_classes]`. Namely, the head applies\n `label_vocabulary` to the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> n_classes = 2\n >>> head = tf.estimator.MultiLabelHead(n_classes)\n >>> logits = np.array([[-1., 1.], [-1.5, 1.5]], dtype=np.float32)\n >>> labels = np.array([[1, 0], [1, 1]], dtype=np.int64)\n >>> features = {'x': np.array([[41], [42]], dtype=np.int32)}\n >>> # expected_loss = sum(_sigmoid_cross_entropy(labels, logits)) / batch_size\n >>> # = sum(1.31326169, 0.9514133) / 2 = 1.13\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 1.13\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n auc : 0.33\n auc_precision_recall : 0.77\n average_loss : 1.13\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[-1. 1. ]\n [-1.5 1.5]], shape=(2, 2), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.MultiLabelHead(n_classes=3)\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.MultiLabelHead(n_classes=3)\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n n_classes: Number of classes, must be greater than 1 (for 1 class, use\n `BinaryClassHead`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. Per-class weighting is not\n supported.\n thresholds: Iterable of floats in the range `(0, 1)`. Accuracy, precision\n and recall metrics are evaluated for each threshold value. The threshold\n is applied to the predicted probabilities, i.e. above the threshold is\n `true`, below is `false`.\n label_vocabulary: A list of strings represents possible label values. If it\n is not given, that means labels are already encoded as integer within [0,\n n_classes) or multi-hot Tensor. If given, labels must be SparseTensor\n `string` type and have any value in `label_vocabulary`. Also there will be\n errors if vocabulary is not provided and labels are string.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by batch size.\n loss_fn: Optional loss function.\n classes_for_class_based_metrics: List of integer class IDs or string class\n names for which per-class metrics are evaluated. If integers, all must be\n in the range `[0, n_classes - 1]`. If strings, all must be in\n `label_vocabulary`.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for multi-label classification.", "type": "API"}, {"name": "tf.compat.v1.estimator.NanLossDuringTrainingError", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.estimator.NanTensorHook", "docs": "Monitors the loss tensor and stops training if loss is NaN.\n\n Can either fail with exception or just stop training.\n ", "desc": "Monitors the loss tensor and stops training if loss is NaN.", "type": "API"}, {"name": "tf.compat.v1.estimator.PoissonRegressionHead", "docs": "Creates a `Head` for poisson regression using `tf.nn.log_poisson_loss`.\n\n The loss is the weighted sum over all input dimensions. Namely, if the input\n labels have shape `[batch_size, label_dimension]`, the loss is the weighted\n sum over both `batch_size` and `label_dimension`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`.\n In many applications, the shape is `[batch_size, label_dimension]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape\n `[D0, D1, ... DN]` is also supported.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or\n `[D0, D1, ... DN, label_dimension]`.\n\n This is implemented as a generalized linear model, see\n https://en.wikipedia.org/wiki/Generalized_linear_model.\n\n The head can be used with a canned estimator. Example:\n\n ```python\n my_head = tf.estimator.PoissonRegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.PoissonRegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_dimension: Number of regression labels per example. This is the size\n of the last dimension of the labels `Tensor` (typically, this has shape\n `[batch_size, label_dimension]`).\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch\n size * label_dimension`.\n compute_full_loss: Whether to include the constant `log(z!)` term in\n computing the poisson loss. See `tf.nn.log_poisson_loss` for the full\n documentation.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for poisson regression using `tf.nn.log_poisson_loss`.", "type": "API"}, {"name": "tf.compat.v1.estimator.ProfilerHook", "docs": "Captures CPU/GPU profiling information every N steps or seconds.\n\n This produces files called \"timeline-.json\", which are in Chrome\n Trace format.\n\n For more information see:\n https://github.com/catapult-project/catapult/blob/master/tracing/README.md\n ", "desc": "Captures CPU/GPU profiling information every N steps or seconds.", "type": "API"}, {"name": "tf.compat.v1.estimator.RegressionHead", "docs": "Creates a `Head` for regression using the `mean_squared_error` loss.\n\n The loss is the weighted sum over all input dimensions. Namely, if the input\n labels have shape `[batch_size, label_dimension]`, the loss is the weighted\n sum over both `batch_size` and `label_dimension`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`.\n In many applications, the shape is `[batch_size, label_dimension]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape\n `[D0, D1, ... DN]` is also supported.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or\n `[D0, D1, ... DN, label_dimension]`.\n\n Supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns\n unreduced loss with shape `[D0, D1, ... DN, label_dimension]`.\n\n Also supports custom `inverse_link_fn`, also known as 'mean function'.\n `inverse_link_fn` is only used in `PREDICT` mode. It takes `logits` as\n argument and returns predicted values. This function is the inverse of the\n link function defined in\n https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function\n Namely, for poisson regression, set `inverse_link_fn=tf.exp`.\n\n Usage:\n\n >>> head = tf.estimator.RegressionHead()\n >>> logits = np.array(((45,), (41,),), dtype=np.float32)\n >>> labels = np.array(((43,), (44,),), dtype=np.int32)\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # expected_loss = weighted_loss / batch_size\n >>> # = (43-45)^2 + (44-41)^2 / 2 = 6.50\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 6.50\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n average_loss : 6.50\n label/mean : 43.50\n prediction/mean : 43.00\n >>> preds = head.predictions(logits)\n >>> print(preds['predictions'])\n tf.Tensor(\n [[45.]\n [41.]], shape=(2, 1), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.RegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.RegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_dimension: Number of regression labels per example. This is the size\n of the last dimension of the labels `Tensor` (typically, this has shape\n `[batch_size, label_dimension]`).\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by\n `batch_size * label_dimension`.\n loss_fn: Optional loss function. Defaults to `mean_squared_error`.\n inverse_link_fn: Optional inverse link function, also known as 'mean\n function'. Defaults to identity.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for regression using the `mean_squared_error` loss.", "type": "API"}, {"name": "tf.compat.v1.estimator.regressor_parse_example_spec", "docs": "Generates parsing spec for tf.parse_example to be used with regressors.\n\n If users keep data in tf.Example format, they need to call tf.parse_example\n with a proper feature spec. There are two main things that this utility helps:\n\n * Users need to combine parsing spec of features with labels and weights\n (if any) since they are all parsed from same tf.Example instance. This\n utility combines these specs.\n * It is difficult to map expected label by a regressor such as `DNNRegressor`\n to corresponding tf.parse_example spec. This utility encodes it by getting\n related information from users (key, dtype).\n\n Example output of parsing spec:\n\n ```python\n # Define features and transformations\n feature_b = tf.feature_column.numeric_column(...)\n feature_c_bucketized = tf.feature_column.bucketized_column(\n tf.feature_column.numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = tf.feature_column.crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]\n parsing_spec = tf.estimator.regressor_parse_example_spec(\n feature_columns, label_key='my-label')\n\n # For the above example, regressor_parse_example_spec would return the dict:\n assert parsing_spec == {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n \"my-label\" : parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n }\n ```\n\n Example usage with a regressor:\n\n ```python\n feature_columns = # define features via tf.feature_column\n estimator = DNNRegressor(\n hidden_units=[256, 64, 16],\n feature_columns=feature_columns,\n weight_column='example-weight',\n label_dimension=3)\n # This label configuration tells the regressor the following:\n # * weights are retrieved with key 'example-weight'\n # * label is a 3 dimension tensor with float32 dtype.\n\n\n # Input builders\n def input_fn_train(): # Returns a tuple of features and labels.\n features = tf.contrib.learn.read_keyed_batch_features(\n file_pattern=train_files,\n batch_size=batch_size,\n # creates parsing configuration for tf.parse_example\n features=tf.estimator.classifier_parse_example_spec(\n feature_columns,\n label_key='my-label',\n label_dimension=3,\n weight_column='example-weight'),\n reader=tf.RecordIOReader)\n labels = features.pop('my-label')\n return features, labels\n\n estimator.train(input_fn=input_fn_train)\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `_FeatureColumn`.\n label_key: A string identifying the label. It means tf.Example stores labels\n with this key.\n label_dtype: A `tf.dtype` identifies the type of labels. By default it is\n `tf.float32`.\n label_default: used as label if label_key does not exist in given\n tf.Example. By default default_value is none, which means\n `tf.parse_example` will error out if there is any missing label.\n label_dimension: Number of regression targets per example. This is the size\n of the last dimension of the labels and logits `Tensor` objects\n (typically, these have shape `[batch_size, label_dimension]`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. If it is a string, it is\n used as a key to fetch weight tensor from the `features`. If it is a\n `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then\n weight_column.normalizer_fn is applied on it to get weight tensor.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If label is used in `feature_columns`.\n ValueError: If weight_column is used in `feature_columns`.\n ValueError: If any of the given `feature_columns` is not a `_FeatureColumn`\n instance.\n ValueError: If `weight_column` is not a `NumericColumn` instance.\n ValueError: if label_key is None.\n ", "desc": "Generates parsing spec for tf.parse_example to be used with regressors.", "type": "API"}, {"name": "tf.compat.v1.estimator.RunConfig", "docs": "This class specifies the configurations for an `Estimator` run.", "desc": "This class specifies the configurations for an `Estimator` run.", "type": "API"}, {"name": "tf.compat.v1.estimator.SecondOrStepTimer", "docs": "Timer that triggers at most once every N seconds or once every N steps.\n\n This symbol is also exported to v2 in tf.estimator namespace. See\n https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/hooks/basic_session_run_hooks.py\n ", "desc": "Timer that triggers at most once every N seconds or once every N steps.", "type": "API"}, {"name": "tf.compat.v1.estimator.SessionRunArgs", "docs": "Represents arguments to be added to a `Session.run()` call.\n\n Args:\n fetches: Exactly like the 'fetches' argument to Session.Run().\n Can be a single tensor or op, a list of 'fetches' or a dictionary\n of fetches. For example:\n fetches = global_step_tensor\n fetches = [train_op, summary_op, global_step_tensor]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n Note that this can recurse as expected:\n fetches = {'step': global_step_tensor,\n 'ops': [train_op, check_nan_op]}\n feed_dict: Exactly like the `feed_dict` argument to `Session.Run()`\n options: Exactly like the `options` argument to `Session.run()`, i.e., a\n config_pb2.RunOptions proto.\n ", "desc": "Represents arguments to be added to a `Session.run()` call.", "type": "API"}, {"name": "tf.compat.v1.estimator.SessionRunContext", "docs": "Provides information about the `session.run()` call being made.\n\n Provides information about original request to `Session.Run()` function.\n SessionRunHook objects can stop the loop by calling `request_stop()` of\n `run_context`. In the future we may use this object to add more information\n about run without changing the Hook API.\n ", "desc": "Provides information about the `session.run()` call being made.", "type": "API"}, {"name": "tf.compat.v1.estimator.SessionRunHook", "docs": "Hook to extend calls to MonitoredSession.run().", "desc": "Hook to extend calls to MonitoredSession.run().", "type": "API"}, {"name": "tf.compat.v1.estimator.SessionRunValues", "docs": "Contains the results of `Session.run()`.\n\n In the future we may use this object to add more information about result of\n run without changing the Hook API.\n\n Args:\n results: The return values from `Session.run()` corresponding to the fetches\n attribute returned in the RunArgs. Note that this has the same shape as\n the RunArgs fetches. For example:\n fetches = global_step_tensor\n => results = nparray(int)\n fetches = [train_op, summary_op, global_step_tensor]\n => results = [None, nparray(string), nparray(int)]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n => results = {'step': nparray(int), 'summ': nparray(string)}\n options: `RunOptions` from the `Session.run()` call.\n run_metadata: `RunMetadata` from the `Session.run()` call.\n ", "desc": "Contains the results of `Session.run()`.", "type": "API"}, {"name": "tf.compat.v1.estimator.StepCounterHook", "docs": "Hook that counts steps per second.", "desc": "Hook that counts steps per second.", "type": "API"}, {"name": "tf.compat.v1.estimator.StopAtStepHook", "docs": "Hook that requests stop at a specified step.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n ", "desc": "Hook that requests stop at a specified step.", "type": "API"}, {"name": "tf.compat.v1.estimator.SummarySaverHook", "docs": "Saves summaries every N steps.", "desc": "Saves summaries every N steps.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu", "docs": "Public API for tf.estimator.tpu namespace.\n", "desc": "Public API for tf.estimator.tpu namespace.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.experimental", "docs": "Public API for tf.estimator.tpu.experimental namespace.\n", "desc": "Public API for tf.estimator.tpu.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec", "docs": "Class to keep track of the specification for TPU embeddings.\n\n Pass this class to `tf.estimator.tpu.TPUEstimator` via the\n `embedding_config_spec` parameter. At minimum you need to specify\n `feature_columns` and `optimization_parameters`. The feature columns passed\n should be created with some combination of\n `tf.tpu.experimental.embedding_column` and\n `tf.tpu.experimental.shared_embedding_columns`.\n\n TPU embeddings do not support arbitrary Tensorflow optimizers and the\n main optimizer you use for your model will be ignored for the embedding table\n variables. Instead TPU embeddigns support a fixed set of predefined optimizers\n that you can select from and set the parameters of. These include adagrad,\n adam and stochastic gradient descent. Each supported optimizer has a\n `Parameters` class in the `tf.tpu.experimental` namespace.\n\n ```\n column_a = tf.feature_column.categorical_column_with_identity(...)\n column_b = tf.feature_column.categorical_column_with_identity(...)\n column_c = tf.feature_column.categorical_column_with_identity(...)\n tpu_shared_columns = tf.tpu.experimental.shared_embedding_columns(\n [column_a, column_b], 10)\n tpu_non_shared_column = tf.tpu.experimental.embedding_column(\n column_c, 10)\n tpu_columns = [tpu_non_shared_column] + tpu_shared_columns\n ...\n def model_fn(features):\n dense_features = tf.keras.layers.DenseFeature(tpu_columns)\n embedded_feature = dense_features(features)\n ...\n\n estimator = tf.estimator.tpu.TPUEstimator(\n model_fn=model_fn,\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n column=tpu_columns,\n optimization_parameters=(\n tf.estimator.tpu.experimental.AdagradParameters(0.1))))\n ```\n\n @compatibility(TF2)\n TPU Estimator manages its own TensorFlow graph and session, so it is not\n compatible with TF2 behaviors. We recommend that you migrate to the newer\n `tf.distribute.TPUStrategy`. See the\n [TPU guide](https://www.tensorflow.org/guide/tpu) for details.\n @end_compatibility\n ", "desc": "Class to keep track of the specification for TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.InputPipelineConfig", "docs": "Please see the definition of these values in TPUConfig.\n\n @compatibility(TF2)\n TPU Estimator manages its own TensorFlow graph and session, so it is not\n compatible with TF2 behaviors. We recommend that you migrate to the newer\n `tf.distribute.TPUStrategy`. See the\n [TPU guide](https://www.tensorflow.org/guide/tpu) for details.\n @end_compatibility\n ", "desc": "Please see the definition of these values in TPUConfig.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.RunConfig", "docs": "RunConfig with TPU support.", "desc": "RunConfig with TPU support.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.TPUConfig", "docs": "TPU related configuration required by `TPUEstimator`.\n\n Args:\n iterations_per_loop: This is the number of train steps running in TPU system\n before returning to CPU host for each `Session.run`. This means global\n step is increased `iterations_per_loop` times in one `Session.run`. It is\n recommended to be set as number of global steps for next checkpoint. Note\n that in evaluation don't use this value, instead we run total eval `steps`\n on TPU for a single `Session.run`.\n [Experimental]: `iterations_per_loop` can be specified as a time interval.\n To specify N seconds in one `Session.run`, one can specify it as `Ns`\n and substitute the N with the N with the number of desired seconds.\n Alternatively, the unit of time can also be specified in minutes or\n hours, e.g. `3600s` or `60m` or `1h`.\n num_shards: (Deprecated, ignored by TPUEstimator). The number of model\n replicas in the system. For non-model-parallelism case, this number equals\n the total number of TPU cores. For model-parallelism, the total number of\n TPU cores equals num_cores_per_replica * num_shards.\n num_cores_per_replica: Defaults to `None`, which disables model parallelism.\n An integer which describes the number of TPU cores per model replica. This\n is required by model-parallelism which enables partitioning the model to\n multiple cores. Currently num_cores_per_replica must be 1, 2, 4, or 8.\n per_host_input_for_training: If `True`, for `PER_HOST_V1`, the `input_fn` is\n invoked once on each host, and the number of hosts must be smaller or\n equal to the number of replicas. For PER_HOST_V2, the `input_fn` is\n invoked once for each host (if the number of hosts is less than the number\n of replicas) or replica (if the number of replicas is less than the number\n of hosts. With the per-core input pipeline configuration, it is invoked\n once for each core. With a global batch size `train_batch_size` in\n `TPUEstimator` constructor, the batch size for each shard is\n `train_batch_size` // #hosts in the `True` or `PER_HOST_V1` mode. In\n `PER_HOST_V2` mode, it is `train_batch_size` // #cores. In `BROADCAST`\n mode, `input_fn` is only invoked once on host 0 and the tensors are\n broadcasted to all other replicas. The batch size equals to\n `train_batch_size`. With the per-core input pipeline configuration, the\n shard batch size is also `train_batch_size` // #cores.\n Note: per_host_input_for_training==PER_SHARD_V1 only supports mode.TRAIN.\n tpu_job_name: The name of the TPU job. Typically, this name is auto-inferred\n within TPUEstimator, however when using ClusterSpec propagation in more\n esoteric cluster configurations, you may need to specify the job name as a\n string.\n initial_infeed_sleep_secs: The number of seconds the infeed thread should\n wait before enqueueing the first batch. This helps avoid timeouts for\n models that require a long compilation time.\n input_partition_dims: A nested list to describe the partition dims for all\n the tensors from input_fn(). The structure of input_partition_dims must\n match the structure of `features` and `labels` from input_fn(). The total\n number of partitions must match\n `num_cores_per_replica`. For example, if input_fn() returns two tensors:\n images with shape [N, H, W, C] and labels [N]. input_partition_dims =\n [[1, 2, 2, 1], None] will split the images to 4 pieces and feed into 4\n TPU cores. labels tensor are directly broadcasted to all the TPU cores\n since the partition dims is `None`.\n Current limitations: This feature is only supported with the PER_HOST_V2\n input mode.\n eval_training_input_configuration: If `SLICED`, `input_fn` is only invoked\n once on host 0 and the tensors are broadcasted to all other replicas.\n Unlike per_host_input_for_training=BROADCAST, each replica will only get a\n slice of the data instead of a whole copy. If `PER_HOST_V1`, the behaviour\n is determined by per_host_input_for_training.\n experimental_host_call_every_n_steps: Within a training loop, this argument\n sets how often host calls are performed during training. Host calls will\n be evaluated every n steps within a training loop where n is the value of\n this argument.\n experimental_allow_per_host_v2_parallel_get_next: When enabled, allows\n concurrent execution of dataset get next calls when using PER_HOST_V2\n input. May result in a performance increase for models with a small step\n time, but as a consequence TPUEstimator may non-deterministically\n distribute batches to different cores, rather than guaranteeing round\n robin behavior.\n experimental_feed_hook: This is a class which user can provide to the TPU\n estimator to override the default TPUInfeedOutfeedSessionHook implementation\n and add customized implementatioin to handle infeed outfeed logic. If\n given class is None, TPU estimator uses default TPUInfeedOutfeedSessionHook\n implementation in tpu_estimator.py. If not None, TPU estimator uses this\n customized tpu infeed outfeed session hook class rather to override the\n default one.\n\n Raises:\n ValueError: If `num_cores_per_replica` is not 1, 2, 4, 8, ..., 128.\n\n @compatibility(TF2)\n TPU Estimator manages its own TensorFlow graph and session, so it is not\n compatible with TF2 behaviors. We recommend that you migrate to the newer\n `tf.distribute.TPUStrategy`. See the\n [TPU guide](https://www.tensorflow.org/guide/tpu) for details.\n @end_compatibility\n ", "desc": "TPU related configuration required by `TPUEstimator`.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.TPUEstimator", "docs": "Estimator with TPU support.\n\n TPUEstimator also supports training on CPU and GPU. You don't need to define\n a separate `tf.estimator.Estimator`.\n\n TPUEstimator handles many of the details of running on TPU devices, such as\n replicating inputs and models for each core, and returning to host\n periodically to run hooks.\n\n TPUEstimator transforms a global batch size in params to a per-shard batch\n size when calling the `input_fn` and `model_fn`. Users should specify\n global batch size in constructor, and then get the batch size for each shard\n in `input_fn` and `model_fn` by `params['batch_size']`.\n\n - For training, `model_fn` gets per-core batch size; `input_fn` may get\n per-core or per-host batch size depending on `per_host_input_for_training`\n in `TPUConfig` (See docstring for TPUConfig for details).\n\n - For evaluation and prediction, `model_fn` gets per-core batch size and\n `input_fn` get per-host batch size.\n\n Evaluation\n ==========\n\n `model_fn` should return `TPUEstimatorSpec`, which expects the `eval_metrics`\n for TPU evaluation. If eval_on_tpu is False, the evaluation will execute on\n CPU or GPU; in this case the following discussion on TPU evaluation does not\n apply.\n\n `TPUEstimatorSpec.eval_metrics` is a tuple of `metric_fn` and `tensors`, where\n `tensors` could be a list of any nested structure of `Tensor`s (See\n `TPUEstimatorSpec` for details). `metric_fn` takes the `tensors` and returns\n a dict from metric string name to the result of calling a metric function,\n namely a `(metric_tensor, update_op)` tuple.\n\n One can set `use_tpu` to `False` for testing. All training, evaluation, and\n predict will be executed on CPU. `input_fn` and `model_fn` will receive\n `train_batch_size` or `eval_batch_size` unmodified as `params['batch_size']`.\n\n Current limitations:\n --------------------\n\n 1. TPU evaluation only works on a single host (one TPU worker) except\n BROADCAST mode.\n\n 2. `input_fn` for evaluation should **NOT** raise an end-of-input exception\n (`OutOfRangeError` or `StopIteration`). And all evaluation steps and all\n batches should have the same size.\n\n Example (MNIST):\n ----------------\n\n ```\n # The metric Fn which runs on CPU.\n def metric_fn(labels, logits):\n predictions = tf.argmax(logits, 1)\n return {\n 'accuracy': tf.compat.v1.metrics.precision(\n labels=labels, predictions=predictions),\n }\n\n # Your model Fn which runs on TPU (eval_metrics is list in this example)\n def model_fn(features, labels, mode, config, params):\n ...\n logits = ...\n\n if mode = tf.estimator.ModeKeys.EVAL:\n return tpu_estimator.TPUEstimatorSpec(\n mode=mode,\n loss=loss,\n eval_metrics=(metric_fn, [labels, logits]))\n\n # or specify the eval_metrics tensors as dict.\n def model_fn(features, labels, mode, config, params):\n ...\n final_layer_output = ...\n\n if mode = tf.estimator.ModeKeys.EVAL:\n return tpu_estimator.TPUEstimatorSpec(\n mode=mode,\n loss=loss,\n eval_metrics=(metric_fn, {\n 'labels': labels,\n 'logits': final_layer_output,\n }))\n ```\n\n Prediction\n ==========\n\n Prediction on TPU is an experimental feature to support large batch inference.\n It is not designed for latency-critical system. In addition, due to some\n usability issues, for prediction with small dataset, CPU `.predict`, i.e.,\n creating a new `TPUEstimator` instance with `use_tpu=False`, might be more\n convenient.\n\n Note: In contrast to TPU training/evaluation, the `input_fn` for prediction\n *should* raise an end-of-input exception (`OutOfRangeError` or\n `StopIteration`), which serves as the stopping signal to `TPUEstimator`. To be\n precise, the ops created by `input_fn` produce one batch of the data.\n The `predict()` API processes one batch at a time. When reaching the end of\n the data source, an end-of-input exception should be raised by one of these\n operations. The user usually does not need to do this manually. As long as the\n dataset is not repeated forever, the `tf.data` API will raise an end-of-input\n exception automatically after the last batch has been produced.\n\n Note: Estimator.predict returns a Python generator. Please consume all the\n data from the generator so that TPUEstimator can shutdown the TPU system\n properly for user.\n\n Current limitations:\n --------------------\n 1. TPU prediction only works on a single host (one TPU worker).\n\n 2. `input_fn` must return a `Dataset` instance rather than `features`. In\n fact, .train() and .evaluate() also support Dataset as return value.\n\n Example (MNIST):\n ----------------\n ```\n height = 32\n width = 32\n total_examples = 100\n\n def predict_input_fn(params):\n batch_size = params['batch_size']\n\n images = tf.random.uniform(\n [total_examples, height, width, 3], minval=-1, maxval=1)\n\n dataset = tf.data.Dataset.from_tensor_slices(images)\n dataset = dataset.map(lambda images: {'image': images})\n\n dataset = dataset.batch(batch_size)\n return dataset\n\n def model_fn(features, labels, params, mode):\n # Generate predictions, called 'output', from features['image']\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n return tf.contrib.tpu.TPUEstimatorSpec(\n mode=mode,\n predictions={\n 'predictions': output,\n 'is_padding': features['is_padding']\n })\n\n tpu_est = TPUEstimator(\n model_fn=model_fn,\n ...,\n predict_batch_size=16)\n\n # Fully consume the generator so that TPUEstimator can shutdown the TPU\n # system.\n for item in tpu_est.predict(input_fn=input_fn):\n # Filter out item if the `is_padding` is 1.\n # Process the 'predictions'\n ```\n\n Exporting\n =========\n\n `export_saved_model` exports 2 metagraphs, one with `saved_model.SERVING`, and\n another with `saved_model.SERVING` and `saved_model.TPU` tags. At serving\n time, these tags are used to select the appropriate metagraph to load.\n\n Before running the graph on TPU, the TPU system needs to be initialized. If\n TensorFlow Serving model-server is used, this is done automatically. If not,\n please use `session.run(tpu.initialize_system())`.\n\n There are two versions of the API: 1 or 2.\n\n In V1, the exported CPU graph is `model_fn` as it is. The exported TPU graph\n wraps `tpu.rewrite()` and `TPUPartitionedCallOp` around `model_fn` so\n `model_fn` is on TPU by default. To place ops on CPU,\n `tpu.outside_compilation(host_call, logits)` can be used.\n\n Example:\n ----------------\n\n ```\n def model_fn(features, labels, mode, config, params):\n ...\n logits = ...\n export_outputs = {\n 'logits': export_output_lib.PredictOutput(\n {'logits': logits})\n }\n\n def host_call(logits):\n class_ids = math_ops.argmax(logits)\n classes = string_ops.as_string(class_ids)\n export_outputs['classes'] =\n export_output_lib.ClassificationOutput(classes=classes)\n\n tpu.outside_compilation(host_call, logits)\n\n ...\n ```\n\n In V2, `export_saved_model()` sets up `params['use_tpu']` flag to let the user\n know if the code is exporting to TPU (or not). When `params['use_tpu']` is\n `True`, users need to call `tpu.rewrite()`, `TPUPartitionedCallOp` and/or\n `batch_function()`.\n\n TIP: V2 is recommended as it is more flexible (eg: batching, etc).\n\n @compatibility(TF2)\n TPU Estimator manages its own TensorFlow graph and session, so it is not\n compatible with TF2 behaviors. We recommend that you migrate to the newer\n `tf.distribute.TPUStrategy`. See the\n [TPU guide](https://www.tensorflow.org/guide/tpu) for details.\n @end_compatibility\n ", "desc": "Estimator with TPU support.", "type": "API"}, {"name": "tf.compat.v1.estimator.tpu.TPUEstimatorSpec", "docs": "Ops and objects returned from a `model_fn` and passed to `TPUEstimator`.\n\n See `EstimatorSpec` for `mode`, `predictions`, `loss`, `train_op`, and\n `export_outputs`.\n\n For evaluation, `eval_metrics `is a tuple of `metric_fn` and `tensors`, where\n `metric_fn` runs on CPU to generate metrics and `tensors` represents the\n `Tensor`s transferred from TPU system to CPU host and passed to `metric_fn`.\n To be precise, TPU evaluation expects a slightly different signature from the\n `tf.estimator.Estimator`. While `EstimatorSpec.eval_metric_ops` expects a\n dict, `TPUEstimatorSpec.eval_metrics` is a tuple of `metric_fn` and `tensors`.\n The `tensors` could be a list of `Tensor`s or dict of names to `Tensor`s. The\n `tensors` usually specify the model logits, which are transferred back from\n TPU system to CPU host. All tensors must have be batch-major, i.e., the batch\n size is the first dimension. Once all tensors are available at CPU host from\n all shards, they are concatenated (on CPU) and passed as positional arguments\n to the `metric_fn` if `tensors` is list or keyword arguments if `tensors` is\n a dict. `metric_fn` takes the `tensors` and returns a dict from metric string\n name to the result of calling a metric function, namely a `(metric_tensor,\n update_op)` tuple. See `TPUEstimator` for MNIST example how to specify the\n `eval_metrics`.\n\n `scaffold_fn` is a function running on CPU to generate the `Scaffold`. This\n function should not capture any Tensors in `model_fn`.\n\n `host_call` is a tuple of a `function` and a list or dictionary of `tensors`\n to pass to that function and returns a list of Tensors. `host_call` currently\n works for train() and evaluate(). The Tensors returned by the function is\n executed on the CPU on every step, so there is communication overhead when\n sending tensors from TPU to CPU. To reduce the overhead, try reducing the\n size of the tensors. The `tensors` are concatenated along their major (batch)\n dimension, and so must be >= rank 1. The `host_call` is useful for writing\n summaries with `tf.contrib.summary.create_file_writer`.\n\n @compatibility(TF2)\n TPU Estimator manages its own TensorFlow graph and session, so it is not\n compatible with TF2 behaviors. We recommend that you migrate to the newer\n `tf.distribute.TPUStrategy`. See the\n [TPU guide](https://www.tensorflow.org/guide/tpu) for details.\n @end_compatibility\n ", "desc": "Ops and objects returned from a `model_fn` and passed to `TPUEstimator`.", "type": "API"}, {"name": "tf.compat.v1.estimator.train_and_evaluate", "docs": "Train and evaluate the `estimator`.\n\n This utility function trains, evaluates, and (optionally) exports the model by\n using the given `estimator`. All training related specification is held in\n `train_spec`, including training `input_fn` and training max steps, etc. All\n evaluation and export related specification is held in `eval_spec`, including\n evaluation `input_fn`, steps, etc.\n\n This utility function provides consistent behavior for both local\n (non-distributed) and distributed configurations. The default distribution\n configuration is parameter server-based between-graph replication. For other\n types of distribution configurations such as all-reduce training, please use\n [DistributionStrategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/distribute).\n\n Overfitting: In order to avoid overfitting, it is recommended to set up the\n training `input_fn` to shuffle the training data properly.\n\n Stop condition: In order to support both distributed and non-distributed\n configuration reliably, the only supported stop condition for model\n training is `train_spec.max_steps`. If `train_spec.max_steps` is `None`, the\n model is trained forever. *Use with care* if model stop condition is\n different. For example, assume that the model is expected to be trained with\n one epoch of training data, and the training `input_fn` is configured to throw\n `OutOfRangeError` after going through one epoch, which stops the\n `Estimator.train`. For a three-training-worker distributed configuration, each\n training worker is likely to go through the whole epoch independently. So, the\n model will be trained with three epochs of training data instead of one epoch.\n\n Example of local (non-distributed) training:\n\n ```python\n # Set up feature columns.\n categorial_feature_a = categorial_column_with_hash_bucket(...)\n categorial_feature_a_emb = embedding_column(\n categorical_column=categorial_feature_a, ...)\n ... # other feature columns\n\n estimator = DNNClassifier(\n feature_columns=[categorial_feature_a_emb, ...],\n hidden_units=[1024, 512, 256])\n\n # Or set up the model directory\n # estimator = DNNClassifier(\n # config=tf.estimator.RunConfig(\n # model_dir='/my_model', save_summary_steps=100),\n # feature_columns=[categorial_feature_a_emb, ...],\n # hidden_units=[1024, 512, 256])\n\n # Input pipeline for train and evaluate.\n def train_input_fn(): # returns x, y\n # please shuffle the data.\n pass\n def eval_input_fn(): # returns x, y\n pass\n\n train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)\n eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)\n\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n ```\n Note that in current implementation `estimator.evaluate` will be called\n multiple times. This means that evaluation graph (including eval_input_fn)\n will be re-created for each `evaluate` call. `estimator.train` will be called\n only once.\n\n Example of distributed training:\n\n Regarding the example of distributed training, the code above can be used\n without a change (Please do make sure that the `RunConfig.model_dir` for all\n workers is set to the same directory, i.e., a shared file system all workers\n can read and write). The only extra work to do is setting the environment\n variable `TF_CONFIG` properly for each worker correspondingly.\n\n Also see\n [Distributed TensorFlow](https://www.tensorflow.org/deploy/distributed).\n\n Setting environment variable depends on the platform. For example, on Linux,\n it can be done as follows (`$` is the shell prompt):\n\n ```\n $ TF_CONFIG='' python train_model.py\n ```\n\n For the content in `TF_CONFIG`, assume that the training cluster spec looks\n like:\n\n ```\n cluster = {\"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]}\n ```\n\n Example of `TF_CONFIG` for chief training worker (must have one and only one):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"chief\", \"index\": 0}\n }'\n ```\n Note that the chief worker also does the model training job, similar to other\n non-chief training workers (see next paragraph). In addition to the model\n training, it manages some extra work, e.g., checkpoint saving and restoring,\n writing summaries, etc.\n\n Example of `TF_CONFIG` for non-chief training worker (optional, could be\n multiple):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"worker\", \"index\": 0}\n }'\n ```\n where the `task.index` should be set as 0, 1, 2, in this example, respectively\n for non-chief training workers.\n\n Example of `TF_CONFIG` for parameter server, aka ps (could be multiple):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"ps\", \"index\": 0}\n }'\n ```\n where the `task.index` should be set as 0 and 1, in this example, respectively\n for parameter servers.\n\n Example of `TF_CONFIG` for evaluator task. Evaluator is a special task that is\n not part of the training cluster. There could be only one. It is used for\n model evaluation.\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"evaluator\", \"index\": 0}\n }'\n ```\n\n When `distribute` or `experimental_distribute.train_distribute` and\n `experimental_distribute.remote_cluster` is set, this method will start a\n client running on the current host which connects to the `remote_cluster` for\n training and evaluation.\n\n Args:\n estimator: An `Estimator` instance to train and evaluate.\n train_spec: A `TrainSpec` instance to specify the training specification.\n eval_spec: A `EvalSpec` instance to specify the evaluation and export\n specification.\n\n Returns:\n A tuple of the result of the `evaluate` call to the `Estimator` and the\n export results using the specified `Exporter`s.\n Currently, the return value is undefined for distributed training mode.\n\n Raises:\n ValueError: if environment variable `TF_CONFIG` is incorrectly set.\n ", "desc": "Train and evaluate the `estimator`.", "type": "API"}, {"name": "tf.compat.v1.estimator.TrainSpec", "docs": "Configuration for the \"train\" part for the `train_and_evaluate` call.\n\n `TrainSpec` determines the input data for the training, as well as the\n duration. Optional hooks run at various stages of training.\n\n Usage:\n\n >>> train_spec = tf.estimator.TrainSpec(\n ... input_fn=lambda: 1,\n ... max_steps=100,\n ... hooks=[_StopAtSecsHook(stop_after_secs=10)],\n ... saving_listeners=[_NewCheckpointListenerForEvaluate(None, 20, None)])\n >>> train_spec.saving_listeners[0]._eval_throttle_secs\n 20\n >>> train_spec.hooks[0]._stop_after_secs\n 10\n >>> train_spec.max_steps\n 100\n ", "desc": "Configuration for the \"train\" part for the `train_and_evaluate` call.", "type": "API"}, {"name": "tf.compat.v1.estimator.VocabInfo", "docs": "Vocabulary information for warm-starting.\n\n See `tf.estimator.WarmStartSettings` for examples of using\n VocabInfo to warm-start.\n\n Args:\n new_vocab: [Required] A path to the new vocabulary file (used with the model\n to be trained).\n new_vocab_size: [Required] An integer indicating how many entries of the new\n vocabulary will used in training.\n num_oov_buckets: [Required] An integer indicating how many OOV buckets are\n associated with the vocabulary.\n old_vocab: [Required] A path to the old vocabulary file (used with the\n checkpoint to be warm-started from).\n old_vocab_size: [Optional] An integer indicating how many entries of the old\n vocabulary were used in the creation of the checkpoint. If not provided,\n the entire old vocabulary will be used.\n backup_initializer: [Optional] A variable initializer used for variables\n corresponding to new vocabulary entries and OOV. If not provided, these\n entries will be zero-initialized.\n axis: [Optional] Denotes what axis the vocabulary corresponds to. The\n default, 0, corresponds to the most common use case (embeddings or\n linear weights for binary classification / regression). An axis of 1\n could be used for warm-starting output layers with class vocabularies.\n\n Returns:\n A `VocabInfo` which represents the vocabulary information for warm-starting.\n\n Raises:\n ValueError: `axis` is neither 0 or 1.\n\n Example Usage:\n```python\n embeddings_vocab_info = tf.VocabInfo(\n new_vocab='embeddings_vocab',\n new_vocab_size=100,\n num_oov_buckets=1,\n old_vocab='pretrained_embeddings_vocab',\n old_vocab_size=10000,\n backup_initializer=tf.compat.v1.truncated_normal_initializer(\n mean=0.0, stddev=(1 / math.sqrt(embedding_dim))),\n axis=0)\n\n softmax_output_layer_kernel_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.glorot_uniform_initializer(),\n axis=1)\n\n softmax_output_layer_bias_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.zeros_initializer(),\n axis=0)\n\n #Currently, only axis=0 and axis=1 are supported.\n ```\n ", "desc": "Vocabulary information for warm-starting.", "type": "API"}, {"name": "tf.compat.v1.estimator.WarmStartSettings", "docs": "Settings for warm-starting in `tf.estimator.Estimators`.\n\n Example Use with canned `tf.estimator.DNNEstimator`:\n\n ```\n emb_vocab_file = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_vocabulary_file(\n \"sc_vocab_file\", \"new_vocab.txt\", vocab_size=100),\n dimension=8)\n emb_vocab_list = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_vocabulary_list(\n \"sc_vocab_list\", vocabulary_list=[\"a\", \"b\"]),\n dimension=8)\n estimator = tf.estimator.DNNClassifier(\n hidden_units=[128, 64], feature_columns=[emb_vocab_file, emb_vocab_list],\n warm_start_from=ws)\n ```\n\n where `ws` could be defined as:\n\n Warm-start all weights in the model (input layer and hidden weights).\n Either the directory or a specific checkpoint can be provided (in the case\n of the former, the latest checkpoint will be used):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\")\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp/model-1000\")\n ```\n\n Warm-start only the embeddings (input layer):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=\".*input_layer.*\")\n ```\n\n Warm-start all weights but the embedding parameters corresponding to\n `sc_vocab_file` have a different vocab from the one used in the current\n model:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\"\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start only `sc_vocab_file` embeddings (and no other variables), which\n have a different vocab from the one used in the current model:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\"\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=None,\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start all weights but the parameters corresponding to `sc_vocab_file`\n have a different vocab from the one used in current checkpoint, and only\n 100 of those entries were used:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\",\n old_vocab_size=100\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start all weights but the parameters corresponding to `sc_vocab_file`\n have a different vocab from the one used in current checkpoint and the\n parameters corresponding to `sc_vocab_list` have a different name from the\n current checkpoint:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\",\n old_vocab_size=100\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n },\n var_name_to_prev_var_name={\n \"input_layer/sc_vocab_list_embedding/embedding_weights\":\n \"old_tensor_name\"\n })\n ```\n\n Warm-start all TRAINABLE variables:\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=\".*\")\n ```\n\n Warm-start all variables (including non-TRAINABLE):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=[\".*\"])\n ```\n\n Warm-start non-TRAINABLE variables \"v1\", \"v1/Momentum\", and \"v2\" but not\n \"v2/momentum\":\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=[\"v1\", \"v2[^/]\"])\n ```\n\n Attributes:\n ckpt_to_initialize_from: [Required] A string specifying the directory with\n checkpoint file(s) or path to checkpoint from which to warm-start the\n model parameters.\n vars_to_warm_start: [Optional] One of the following:\n\n * A regular expression (string) that captures which variables to\n warm-start (see tf.compat.v1.get_collection). This expression will only\n consider variables in the TRAINABLE_VARIABLES collection -- if you need\n to warm-start non_TRAINABLE vars (such as optimizer accumulators or\n batch norm statistics), please use the below option.\n * A list of strings, each a regex scope provided to\n tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see\n tf.compat.v1.get_collection). For backwards compatibility reasons, this\n is separate from the single-string argument type.\n * A list of Variables to warm-start. If you do not have access to the\n `Variable` objects at the call site, please use the above option.\n * `None`, in which case only TRAINABLE variables specified in\n `var_name_to_vocab_info` will be warm-started.\n\n Defaults to `'.*'`, which warm-starts all variables in the\n TRAINABLE_VARIABLES collection. Note that this excludes variables such as\n accumulators and moving statistics from batch norm.\n var_name_to_vocab_info: [Optional] Dict of variable names (strings) to\n `tf.estimator.VocabInfo`. The variable names should be \"full\" variables,\n not the names of the partitions. If not explicitly provided, the variable\n is assumed to have no (changes to) vocabulary.\n var_name_to_prev_var_name: [Optional] Dict of variable names (strings) to\n name of the previously-trained variable in `ckpt_to_initialize_from`. If\n not explicitly provided, the name of the variable is assumed to be same\n between previous checkpoint and current model. Note that this has no\n effect on the set of variables that is warm-started, and only controls\n name mapping (use `vars_to_warm_start` for controlling what variables to\n warm-start).\n ", "desc": "Settings for warm-starting in `tf.estimator.Estimators`.", "type": "API"}, {"name": "tf.compat.v1.Event", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.executing_eagerly", "docs": "Checks whether the current thread has eager execution enabled.\n\n Eager execution is typically enabled via\n `tf.compat.v1.enable_eager_execution`, but may also be enabled within the\n context of a Python function via tf.contrib.eager.py_func.\n\n When eager execution is enabled, returns `True` in most cases. However,\n this API might return `False` in the following use cases.\n\n * Executing inside `tf.function`, unless under `tf.init_scope` or\n `tf.config.run_functions_eagerly(True)` is previously called.\n * Executing inside a transformation function for `tf.dataset`.\n * `tf.compat.v1.disable_eager_execution()` is called.\n\n >>> tf.compat.v1.enable_eager_execution()\n\n General case:\n\n >>> print(tf.executing_eagerly())\n True\n\n Inside `tf.function`:\n\n >>> @tf.function\n ... def fn():\n ... with tf.init_scope():\n ... print(tf.executing_eagerly())\n ... print(tf.executing_eagerly())\n >>> fn()\n True\n False\n\n Inside `tf.function`\n after `tf.config.run_functions_eagerly(True)` is called:\n\n >>> tf.config.run_functions_eagerly(True)\n >>> @tf.function\n ... def fn():\n ... with tf.init_scope():\n ... print(tf.executing_eagerly())\n ... print(tf.executing_eagerly())\n >>> fn()\n True\n True\n >>> tf.config.run_functions_eagerly(False)\n\n Inside a transformation function for `tf.dataset`:\n\n >>> def data_fn(x):\n ... print(tf.executing_eagerly())\n ... return x\n >>> dataset = tf.data.Dataset.range(100)\n >>> dataset = dataset.map(data_fn)\n False\n\n Returns:\n `True` if the current thread has eager execution enabled.\n ", "desc": "Checks whether the current thread has eager execution enabled.", "type": "API"}, {"name": "tf.compat.v1.executing_eagerly_outside_functions", "docs": "Returns True if executing eagerly, even if inside a graph function.\n\n This function will check the outermost context for the program and see if\n it is in eager mode. It is useful comparing to `tf.executing_eagerly()`,\n which checks the current context and will return `False` within a\n `tf.function` body. It can be used to build library that behave differently\n in eager runtime and v1 session runtime (deprecated).\n\n Example:\n\n >>> tf.compat.v1.enable_eager_execution()\n >>> @tf.function\n ... def func():\n ... # A function constructs TensorFlow graphs, it does not execute eagerly,\n ... # but the outer most context is still eager.\n ... assert not tf.executing_eagerly()\n ... return tf.compat.v1.executing_eagerly_outside_functions()\n >>> func()\n \n\n Returns:\n boolean, whether the outermost context is in eager mode.\n ", "desc": "Returns True if executing eagerly, even if inside a graph function.", "type": "API"}, {"name": "tf.compat.v1.exp", "docs": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).\n\n This function computes the exponential of the input tensor element-wise.\n i.e. `math.exp(x)` or \\\\(e^x\\\\), where `x` is the input tensor.\n \\\\(e\\\\) denotes Euler's number and is approximately equal to 2.718281.\n Output is positive for any real input.\n\n >>> x = tf.constant(2.0)\n >>> tf.math.exp(x)\n \n\n >>> x = tf.constant([2.0, 8.0])\n >>> tf.math.exp(x)\n \n\n For complex numbers, the exponential value is calculated as\n $$\n e^{x+iy} = {e^x} {e^{iy}} = {e^x} ({\\cos (y) + i \\sin (y)})\n $$\n\n For `1+1j` the value would be computed as:\n $$\n e^1 (\\cos (1) + i \\sin (1)) = 2.7182817 \\times (0.5403023+0.84147096j)\n $$\n\n >>> x = tf.constant(1 + 1j)\n >>> tf.math.exp(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.exp\n @end_compatibility\n ", "desc": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).", "type": "API"}, {"name": "tf.compat.v1.expand_dims", "docs": "Returns a tensor with a length 1 axis inserted at index `axis`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nGiven a tensor `input`, this operation inserts a dimension of length 1 at the\ndimension index `axis` of `input`'s shape. The dimension index follows Python\nindexing rules: It's zero-based, a negative index it is counted backward\nfrom the end.\n\nThis operation is useful to:\n\n* Add an outer \"batch\" dimension to a single element.\n* Align axes for broadcasting.\n* To add an inner vector length axis to a tensor of scalars.\n\nFor example:\n\nIf you have a single image of shape `[height, width, channels]`:\n\n>>> image = tf.zeros([10,10,3])\n\nYou can add an outer `batch` axis by passing `axis=0`:\n\n>>> tf.expand_dims(image, axis=0).shape.as_list()\n[1, 10, 10, 3]\n\nThe new axis location matches Python `list.insert(axis, 1)`:\n\n>>> tf.expand_dims(image, axis=1).shape.as_list()\n[10, 1, 10, 3]\n\nFollowing standard Python indexing rules, a negative `axis` counts from the\nend so `axis=-1` adds an inner most dimension:\n\n>>> tf.expand_dims(image, -1).shape.as_list()\n[10, 10, 3, 1]\n\nThis operation requires that `axis` is a valid index for `input.shape`,\nfollowing Python indexing rules:\n\n```\n-1-tf.rank(input) <= axis <= tf.rank(input)\n```\n\nThis operation is related to:\n\n* `tf.squeeze`, which removes dimensions of size 1.\n* `tf.reshape`, which provides more flexible reshaping capability.\n* `tf.sparse.expand_dims`, which provides this functionality for\n `tf.SparseTensor`\n\nArgs:\n input: A `Tensor`.\n axis: 0-D (scalar). Specifies the dimension index at which to expand the\n shape of `input`. Must be in the range `[-rank(input) - 1, rank(input)]`.\n name: The name of the output `Tensor` (optional).\n dim: 0-D (scalar). Equivalent to `axis`, to be deprecated.\n\nReturns:\n A `Tensor` with the same data as `input`, but its shape has an additional\n dimension of size 1 added.\n\nRaises:\n ValueError: if either both or neither of `dim` and `axis` are specified.", "desc": "Returns a tensor with a length 1 axis inserted at index `axis`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.experimental", "docs": "Public API for tf.experimental namespace.\n", "desc": "Public API for tf.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.experimental.async_clear_error", "docs": "Clear pending operations and error statuses in async execution.\n\n In async execution mode, an error in op/function execution can lead to errors\n in subsequent ops/functions that are scheduled but not yet executed. Calling\n this method clears all pending operations and reset the async execution state.\n\n Example:\n\n ```\n while True:\n try:\n # Step function updates the metric `loss` internally\n train_step_fn()\n except tf.errors.OutOfRangeError:\n tf.experimental.async_clear_error()\n break\n logging.info('loss = %s', loss.numpy())\n ```\n ", "desc": "Clear pending operations and error statuses in async execution.", "type": "API"}, {"name": "tf.compat.v1.experimental.async_scope", "docs": "Context manager for grouping async operations.\n\n Ops/function calls inside the scope can return before finishing the actual\n execution. When exiting the async scope, a synchronization barrier will be\n automatically added to ensure the completion of all async op and function\n execution, potentially raising exceptions if async execution results in\n an error state.\n\n Users may write the following code to asynchronously invoke `train_step_fn`\n and log the `loss` metric for every `num_steps` steps in a training loop.\n `train_step_fn` internally consumes data using `iterator.get_next()`, and may\n throw OutOfRangeError when running out of data. In the case:\n\n ```\n try:\n with tf.experimental.async_scope():\n for _ in range(num_steps):\n # Step function updates the metric `loss` internally\n train_step_fn()\n except tf.errors.OutOfRangeError:\n tf.experimental.async_clear_error()\n logging.info('loss = %s', loss.numpy())\n ```\n\n Yields:\n Context manager for grouping async operations.\n ", "desc": "Context manager for grouping async operations.", "type": "API"}, {"name": "tf.compat.v1.experimental.function_executor_type", "docs": "Context manager for setting the executor of eager defined functions.\n\n Eager defined functions are functions decorated by tf.contrib.eager.defun.\n\n Args:\n executor_type: a string for the name of the executor to be used to execute\n functions defined by tf.contrib.eager.defun.\n\n Yields:\n Context manager for setting the executor of eager defined functions.\n ", "desc": "Context manager for setting the executor of eager defined functions.", "type": "API"}, {"name": "tf.compat.v1.experimental.Optional", "docs": "Represents a value that may or may not be present.\n\n A `tf.experimental.Optional` can represent the result of an operation that may\n fail as a value, rather than raising an exception and halting execution. For\n example, `tf.data.Iterator.get_next_as_optional()` returns a\n `tf.experimental.Optional` that either contains the next element of an\n iterator if one exists, or an \"empty\" value that indicates the end of the\n sequence has been reached.\n\n `tf.experimental.Optional` can only be used with values that are convertible\n to `tf.Tensor` or `tf.CompositeTensor`.\n\n One can create a `tf.experimental.Optional` from a value using the\n `from_value()` method:\n\n >>> optional = tf.experimental.Optional.from_value(42)\n >>> print(optional.has_value())\n tf.Tensor(True, shape=(), dtype=bool)\n >>> print(optional.get_value())\n tf.Tensor(42, shape=(), dtype=int32)\n\n or without a value using the `empty()` method:\n\n >>> optional = tf.experimental.Optional.empty(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))\n >>> print(optional.has_value())\n tf.Tensor(False, shape=(), dtype=bool)\n ", "desc": "Represents a value that may or may not be present.", "type": "API"}, {"name": "tf.compat.v1.experimental.output_all_intermediates", "docs": "Whether to output all intermediates from functional control flow ops.\n\n The \"default\" behavior to is to output all intermediates when using v2 control\n flow inside Keras models in graph mode (possibly inside Estimators). This is\n needed to support taking gradients of v2 control flow. In graph mode, Keras\n can sometimes freeze the forward graph before the gradient computation which\n does not work for v2 control flow since it requires updating the forward ops\n to output the needed intermediates. We work around this by proactively\n outputting the needed intermediates when building the forward pass itself.\n Ideally any such extra tensors should be pruned out at runtime. However, if\n for any reason this doesn't work for you or if you have an inference-only\n model you can turn this behavior off using\n `tf.compat.v1.experimental.output_all_intermediates(False)`.\n\n If with the default behavior you are still seeing errors of the form\n \"Connecting to invalid output X of source node Y which has Z outputs\" try\n setting `tf.compat.v1.experimental.output_all_intermediates(True)` and\n please file an issue at https://github.com/tensorflow/tensorflow/issues.\n\n Args:\n state: True, False or None. None restores the default behavior.\n ", "desc": "Whether to output all intermediates from functional control flow ops.", "type": "API"}, {"name": "tf.compat.v1.experimental.register_filesystem_plugin", "docs": "Loads a TensorFlow FileSystem plugin.\n\n Args:\n plugin_location: Path to the plugin. Relative or absolute filesystem plugin\n path to a dynamic library file.\n\n Returns:\n None\n\n Raises:\n OSError: When the file to be loaded is not found.\n RuntimeError: when unable to load the library.\n ", "desc": "Loads a TensorFlow FileSystem plugin.", "type": "API"}, {"name": "tf.compat.v1.expm1", "docs": "Computes `exp(x) - 1` element-wise.\n\n i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor.\n `e` denotes Euler's number and is approximately equal to 2.718281.\n\n ```python\n x = tf.constant(2.0)\n tf.math.expm1(x) ==> 6.389056\n\n x = tf.constant([2.0, 8.0])\n tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)\n\n x = tf.constant(1 + 1j)\n tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes `exp(x) - 1` element-wise.", "type": "API"}, {"name": "tf.compat.v1.extract_image_patches", "docs": "Extract `patches` from `images` and put them in the \"depth\" output dimension.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`.\n 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 4`.\n The size of the sliding window for each dimension of `images`.\n strides: A list of `ints` that has length `>= 4`.\n How far the centers of two consecutive patches are in\n the images. Must be: `[1, stride_rows, stride_cols, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n Must be: `[1, rate_rows, rate_cols, 1]`. This is the\n input stride, specifying how far two consecutive patch samples are in the\n input. Equivalent to extracting patches with\n `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by\n subsampling them spatially by a factor of `rates`. This is equivalent to\n `rate` in dilated (a.k.a. Atrous) convolutions.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Extract `patches` from `images` and put them in the \"depth\" output dimension.", "type": "API"}, {"name": "tf.compat.v1.extract_volume_patches", "docs": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 5`.\n The size of the sliding window for each dimension of `input`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D of length 5. How far the centers of two consecutive patches are in\n `input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n\n The size-related attributes are specified as follows:\n\n ```python\n ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]\n strides = [1, stride_planes, strides_rows, strides_cols, 1]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.", "type": "API"}, {"name": "tf.compat.v1.eye", "docs": "Construct an identity matrix, or a batch of matrices.\n\n See also `tf.ones`, `tf.zeros`, `tf.fill`, `tf.one_hot`.\n\n ```python\n # Construct one identity matrix.\n tf.eye(2)\n ==> [[1., 0.],\n [0., 1.]]\n\n # Construct a batch of 3 identity matrices, each 2 x 2.\n # batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.\n batch_identity = tf.eye(2, batch_shape=[3])\n\n # Construct one 2 x 3 \"identity\" matrix\n tf.eye(2, num_columns=3)\n ==> [[ 1., 0., 0.],\n [ 0., 1., 0.]]\n ```\n\n Args:\n num_rows: Non-negative `int32` scalar `Tensor` giving the number of rows\n in each batch matrix.\n num_columns: Optional non-negative `int32` scalar `Tensor` giving the number\n of columns in each batch matrix. Defaults to `num_rows`.\n batch_shape: A list or tuple of Python integers or a 1-D `int32` `Tensor`.\n If provided, the returned `Tensor` will have leading batch dimensions of\n this shape.\n dtype: The type of an element in the resulting `Tensor`\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `Tensor` of shape `batch_shape + [num_rows, num_columns]`\n ", "desc": "Construct an identity matrix, or a batch of matrices.", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_args", "docs": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n Quantization is called fake since the output is still in floating point.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_args_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxArgs operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxArgs operation.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxArgs operation.", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_vars", "docs": "Fake-quantize the 'inputs' tensor of type float via global float scalars\n\n Fake-quantize the `inputs` tensor of type float via global float scalars\n `min` and `max` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via global float scalars", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_vars_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVars operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation.\n min, max: Quantization interval, scalar floats.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 8, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVars operation.", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_vars_per_channel", "docs": "Fake-quantize the 'inputs' tensor of type float via per-channel floats\n\n Fake-quantize the `inputs` tensor of type float per-channel and one of the\n shapes: `[d]`, `[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max`\n of shape `[d]` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via per-channel floats", "type": "API"}, {"name": "tf.compat.v1.fake_quant_with_min_max_vars_per_channel_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation,\n shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape\n same as `gradients`.\n min, max: Quantization interval, floats of shape `[d]`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 16, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.", "type": "API"}, {"name": "tf.compat.v1.feature_column", "docs": "Public API for tf.feature_column namespace.\n", "desc": "Public API for tf.feature_column namespace.", "type": "API"}, {"name": "tf.compat.v1.feature_column.bucketized_column", "docs": "Represents discretized dense input bucketed by `boundaries`.\n\n Buckets include the left boundary, and exclude the right boundary. Namely,\n `boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`,\n `[1., 2.)`, and `[2., +inf)`.\n\n For example, if the inputs are\n\n ```python\n boundaries = [0, 10, 100]\n input tensor = [[-5, 10000]\n [150, 10]\n [5, 100]]\n ```\n\n then the output will be\n\n ```python\n output = [[0, 3]\n [3, 2]\n [1, 3]]\n ```\n\n Example:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n bucketized_price = tf.feature_column.bucketized_column(\n price, boundaries=[...])\n columns = [bucketized_price, ...]\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)\n ```\n\n A `bucketized_column` can also be crossed with another categorical column\n using `crossed_column`:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n # bucketized_column converts numerical feature to a categorical one.\n bucketized_price = tf.feature_column.bucketized_column(\n price, boundaries=[...])\n # 'keywords' is a string feature.\n price_x_keywords = tf.feature_column.crossed_column(\n [bucketized_price, 'keywords'], 50K)\n columns = [price_x_keywords, ...]\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)\n linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor)\n ```\n\n Args:\n source_column: A one-dimensional dense column which is generated with\n `numeric_column`.\n boundaries: A sorted list or tuple of floats specifying the boundaries.\n\n Returns:\n A `BucketizedColumn`.\n\n Raises:\n ValueError: If `source_column` is not a numeric column, or if it is not\n one-dimensional.\n ValueError: If `boundaries` is not a sorted list or tuple.\n ", "desc": "Represents discretized dense input bucketed by `boundaries`.", "type": "API"}, {"name": "tf.compat.v1.feature_column.categorical_column_with_hash_bucket", "docs": "Represents sparse feature where ids are set by hashing.\n\n Use this when your sparse features are in string or integer format, and you\n want to distribute your inputs into a finite number of buckets by hashing.\n output_id = Hash(input_feature_string) % bucket_size for string type input.\n For int type input, the value is converted to its string representation first\n and then hashed by the same formula.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example:\n\n ```python\n import tensorflow as tf\n keywords = tf.feature_column.categorical_column_with_hash_bucket(\"keywords\",\n 10000)\n columns = [keywords]\n features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',\n 'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',\n 'LSTM', 'Keras', 'RNN']])}\n linear_prediction, _, _ = tf.compat.v1.feature_column.linear_model(features,\n columns)\n\n # or\n import tensorflow as tf\n keywords = tf.feature_column.categorical_column_with_hash_bucket(\"keywords\",\n 10000)\n keywords_embedded = tf.feature_column.embedding_column(keywords, 16)\n columns = [keywords_embedded]\n features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',\n 'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',\n 'LSTM', 'Keras', 'RNN']])}\n input_layer = tf.keras.layers.DenseFeatures(columns)\n dense_tensor = input_layer(features)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n hash_bucket_size: An int > 1. The number of buckets.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `HashedCategoricalColumn`.\n\n Raises:\n ValueError: `hash_bucket_size` is not greater than 1.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "Represents sparse feature where ids are set by hashing.", "type": "API"}, {"name": "tf.compat.v1.feature_column.categorical_column_with_identity", "docs": "A `CategoricalColumn` that returns identity values.\n\n Use this when your inputs are integers in the range `[0, num_buckets)`, and\n you want to use the input value itself as the categorical ID. Values outside\n this range will result in `default_value` if specified, otherwise it will\n fail.\n\n Typically, this is used for contiguous ranges of integer indexes, but\n it doesn't have to be. This might be inefficient, however, if many of IDs\n are unused. Consider `categorical_column_with_hash_bucket` in that case.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n In the following examples, each input in the range `[0, 1000000)` is assigned\n the same value. All other inputs are assigned `default_value` 0. Note that a\n literal 0 in inputs will result in the same default ID.\n\n Linear model:\n\n ```python\n import tensorflow as tf\n video_id = tf.feature_column.categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [video_id]\n features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],\n [33,78, 2, 73, 1]])}\n linear_prediction = tf.compat.v1.feature_column.linear_model(features,\n columns)\n ```\n\n Embedding for a DNN model:\n\n ```python\n import tensorflow as tf\n video_id = tf.feature_column.categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [tf.feature_column.embedding_column(video_id, 9)]\n features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],\n [33,78, 2, 73, 1]])}\n input_layer = tf.keras.layers.DenseFeatures(columns)\n dense_tensor = input_layer(features)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n num_buckets: Range of inputs and outputs is `[0, num_buckets)`.\n default_value: If set, values outside of range `[0, num_buckets)` will\n be replaced with this value. If not set, values >= num_buckets will\n cause a failure while values < 0 will be dropped.\n\n Returns:\n A `CategoricalColumn` that returns identity values.\n\n Raises:\n ValueError: if `num_buckets` is less than one.\n ValueError: if `default_value` is not in range `[0, num_buckets)`.\n ", "desc": "A `CategoricalColumn` that returns identity values.", "type": "API"}, {"name": "tf.compat.v1.feature_column.categorical_column_with_vocabulary_file", "docs": "A `CategoricalColumn` with a vocabulary file.\n\n Use this when your inputs are in string or integer format, and you have a\n vocabulary file that maps each value to an integer ID. By default,\n out-of-vocabulary values are ignored. Use either (but not both) of\n `num_oov_buckets` and `default_value` to specify how to include\n out-of-vocabulary values.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example with `num_oov_buckets`:\n File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state\n abbreviation. All inputs with values in that file are assigned an ID 0-49,\n corresponding to its line number. All other values are hashed and assigned an\n ID 50-54.\n\n ```python\n import tensorflow as tf\n states = tf.feature_column.categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='states.txt', vocabulary_size=5,\n num_oov_buckets=1)\n columns = [states]\n features = {'states':tf.constant([['california', 'georgia', 'michigan',\n 'texas', 'new york'], ['new york', 'georgia', 'california', 'michigan',\n 'texas']])}\n linear_prediction = tf.compat.v1.feature_column.linear_model(features,\n columns)\n ```\n\n Example with `default_value`:\n File '/us/states.txt' contains 51 lines - the first line is 'XX', and the\n other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX'\n in input, and other values missing from the file, will be assigned ID 0. All\n others are assigned the corresponding line number 1-50.\n\n ```python\n import tensorflow as tf\n states = tf.feature_column.categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='states.txt', vocabulary_size=6,\n default_value=0)\n columns = [states]\n features = {'states':tf.constant([['california', 'georgia', 'michigan',\n 'texas', 'new york'], ['new york', 'georgia', 'california', 'michigan',\n 'texas']])}\n linear_prediction = tf.compat.v1.feature_column.linear_model(features,\n columns)\n ```\n\n And to make an embedding with either:\n\n ```python\n import tensorflow as tf\n states = tf.feature_column.categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='states.txt', vocabulary_size=5,\n num_oov_buckets=1)\n columns = [tf.feature_column.embedding_column(states, 3)]\n features = {'states':tf.constant([['california', 'georgia', 'michigan',\n 'texas', 'new york'], ['new york', 'georgia', 'california', 'michigan',\n 'texas']])}\n input_layer = tf.keras.layers.DenseFeatures(columns)\n dense_tensor = input_layer(features)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n vocabulary_file: The vocabulary file name.\n vocabulary_size: Number of the elements in the vocabulary. This must be no\n greater than length of `vocabulary_file`, if less than length, later\n values are ignored. If None, it is set to the length of `vocabulary_file`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of\n the input value. A positive `num_oov_buckets` can not be specified with\n `default_value`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `CategoricalColumn` with a vocabulary file.\n\n Raises:\n ValueError: `vocabulary_file` is missing or cannot be opened.\n ValueError: `vocabulary_size` is missing or < 1.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A `CategoricalColumn` with a vocabulary file.", "type": "API"}, {"name": "tf.compat.v1.feature_column.categorical_column_with_vocabulary_list", "docs": "A `CategoricalColumn` with in-memory vocabulary.\n\n Use this when your inputs are in string or integer format, and you have an\n in-memory vocabulary mapping each value to an integer ID. By default,\n out-of-vocabulary values are ignored. Use either (but not both) of\n `num_oov_buckets` and `default_value` to specify how to include\n out-of-vocabulary values.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example with `num_oov_buckets`:\n In the following example, each input in `vocabulary_list` is assigned an ID\n 0-3 corresponding to its index (e.g., input 'B' produces output 2). All other\n inputs are hashed and assigned an ID 4-5.\n\n ```python\n colors = categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),\n num_oov_buckets=2)\n columns = [colors, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n Example with `default_value`:\n In the following example, each input in `vocabulary_list` is assigned an ID\n 0-4 corresponding to its index (e.g., input 'B' produces output 3). All other\n inputs are assigned `default_value` 0.\n\n\n ```python\n colors = categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0)\n columns = [colors, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n And to make an embedding with either:\n\n ```python\n columns = [embedding_column(colors, 3),...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the column\n name and the dictionary key for feature parsing configs, feature `Tensor`\n objects, and feature columns.\n vocabulary_list: An ordered iterable defining the vocabulary. Each feature\n is mapped to the index of its value (if present) in `vocabulary_list`.\n Must be castable to `dtype`.\n dtype: The type of features. Only string and integer types are supported. If\n `None`, it will be inferred from `vocabulary_list`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a\n hash of the input value. A positive `num_oov_buckets` can not be specified\n with `default_value`.\n\n Returns:\n A `CategoricalColumn` with in-memory vocabulary.\n\n Raises:\n ValueError: if `vocabulary_list` is empty, or contains duplicate keys.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: if `dtype` is not integer or string.\n ", "desc": "A `CategoricalColumn` with in-memory vocabulary.", "type": "API"}, {"name": "tf.compat.v1.feature_column.crossed_column", "docs": "Returns a column for performing crosses of categorical features.\n\n Crossed features will be hashed according to `hash_bucket_size`. Conceptually,\n the transformation can be thought of as:\n Hash(cartesian product of features) % `hash_bucket_size`\n\n For example, if the input features are:\n\n * SparseTensor referred by first key:\n\n ```python\n shape = [2, 2]\n {\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n }\n ```\n\n * SparseTensor referred by second key:\n\n ```python\n shape = [2, 1]\n {\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n }\n ```\n\n then crossed feature will look like:\n\n ```python\n shape = [2, 2]\n {\n [0, 0]: Hash64(\"d\", Hash64(\"a\")) % hash_bucket_size\n [1, 0]: Hash64(\"e\", Hash64(\"b\")) % hash_bucket_size\n [1, 1]: Hash64(\"e\", Hash64(\"c\")) % hash_bucket_size\n }\n ```\n\n Here is an example to create a linear model with crosses of string features:\n\n ```python\n keywords_x_doc_terms = crossed_column(['keywords', 'doc_terms'], 50K)\n columns = [keywords_x_doc_terms, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n You could also use vocabulary lookup before crossing:\n\n ```python\n keywords = categorical_column_with_vocabulary_file(\n 'keywords', '/path/to/vocabulary/file', vocabulary_size=1K)\n keywords_x_doc_terms = crossed_column([keywords, 'doc_terms'], 50K)\n columns = [keywords_x_doc_terms, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n If an input feature is of numeric type, you can use\n `categorical_column_with_identity`, or `bucketized_column`, as in the example:\n\n ```python\n # vertical_id is an integer categorical feature.\n vertical_id = categorical_column_with_identity('vertical_id', 10K)\n price = numeric_column('price')\n # bucketized_column converts numerical feature to a categorical one.\n bucketized_price = bucketized_column(price, boundaries=[...])\n vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K)\n columns = [vertical_id_x_price, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n To use crossed column in DNN model, you need to add it in an embedding column\n as in this example:\n\n ```python\n vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K)\n vertical_id_x_price_embedded = embedding_column(vertical_id_x_price, 10)\n dense_tensor = input_layer(features, [vertical_id_x_price_embedded, ...])\n ```\n\n Args:\n keys: An iterable identifying the features to be crossed. Each element can\n be either:\n * string: Will use the corresponding feature which must be of string type.\n * `CategoricalColumn`: Will use the transformed tensor produced by this\n column. Does not support hashed categorical column.\n hash_bucket_size: An int > 1. The number of buckets.\n hash_key: Specify the hash_key that will be used by the `FingerprintCat64`\n function to combine the crosses fingerprints on SparseCrossOp (optional).\n\n Returns:\n A `CrossedColumn`.\n\n Raises:\n ValueError: If `len(keys) < 2`.\n ValueError: If any of the keys is neither a string nor `CategoricalColumn`.\n ValueError: If any of the keys is `HashedCategoricalColumn`.\n ValueError: If `hash_bucket_size < 1`.\n ", "desc": "Returns a column for performing crosses of categorical features.", "type": "API"}, {"name": "tf.compat.v1.feature_column.embedding_column", "docs": "`DenseColumn` that converts from sparse, categorical input.\n\n Use this when your inputs are sparse, but you want to convert them to a dense\n representation (e.g., to feed to a DNN).\n\n Inputs must be a `CategoricalColumn` created by any of the\n `categorical_column_*` function. Here is an example of using\n `embedding_column` with `DNNClassifier`:\n\n ```python\n video_id = categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [embedding_column(video_id, 9),...]\n\n estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)\n\n label_column = ...\n def input_fn():\n features = tf.io.parse_example(\n ..., features=make_parse_example_spec(columns + [label_column]))\n labels = features.pop(label_column.name)\n return features, labels\n\n estimator.train(input_fn=input_fn, steps=100)\n ```\n\n Here is an example using `embedding_column` with model_fn:\n\n ```python\n def model_fn(features, ...):\n video_id = categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [embedding_column(video_id, 9),...]\n dense_tensor = input_layer(features, columns)\n # Form DNN layers, calculate loss, and return EstimatorSpec.\n ...\n ```\n\n Args:\n categorical_column: A `CategoricalColumn` created by a\n `categorical_column_with_*` function. This column produces the sparse IDs\n that are inputs to the embedding lookup.\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries in\n a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with\n 'mean' the default. 'sqrtn' often achieves good accuracy, in particular\n with bag-of-words columns. Each of this can be thought as example level\n normalizations on the column. For more information, see\n `tf.embedding_lookup_sparse`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `truncated_normal_initializer` with mean `0.0` and\n standard deviation `1/sqrt(dimension)`.\n ckpt_to_load_from: String representing checkpoint name/pattern from which to\n restore column weights. Required if `tensor_name_in_ckpt` is not `None`.\n tensor_name_in_ckpt: Name of the `Tensor` in `ckpt_to_load_from` from which\n to restore the column weights. Required if `ckpt_to_load_from` is not\n `None`.\n max_norm: If not `None`, embedding values are l2-normalized to this value.\n trainable: Whether or not the embedding is trainable. Default is True.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n `DenseColumn` that converts from sparse input.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt`\n is specified.\n ValueError: if `initializer` is specified and is not callable.\n RuntimeError: If eager execution is enabled.\n ", "desc": "`DenseColumn` that converts from sparse, categorical input.", "type": "API"}, {"name": "tf.compat.v1.feature_column.indicator_column", "docs": "Represents multi-hot representation of given categorical column.\n\n - For DNN model, `indicator_column` can be used to wrap any\n `categorical_column_*` (e.g., to feed to DNN). Consider to Use\n `embedding_column` if the number of buckets/unique(values) are large.\n\n - For Wide (aka linear) model, `indicator_column` is the internal\n representation for categorical column when passing categorical column\n directly (as any element in feature_columns) to `linear_model`. See\n `linear_model` for details.\n\n ```python\n name = indicator_column(categorical_column_with_vocabulary_list(\n 'name', ['bob', 'george', 'wanda']))\n columns = [name, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n\n dense_tensor == [[1, 0, 0]] # If \"name\" bytes_list is [\"bob\"]\n dense_tensor == [[1, 0, 1]] # If \"name\" bytes_list is [\"bob\", \"wanda\"]\n dense_tensor == [[2, 0, 0]] # If \"name\" bytes_list is [\"bob\", \"bob\"]\n ```\n\n Args:\n categorical_column: A `CategoricalColumn` which is created by\n `categorical_column_with_*` or `crossed_column` functions.\n\n Returns:\n An `IndicatorColumn`.\n\n Raises:\n ValueError: If `categorical_column` is not CategoricalColumn type.\n ", "desc": "Represents multi-hot representation of given categorical column.", "type": "API"}, {"name": "tf.compat.v1.feature_column.input_layer", "docs": "Returns a dense `Tensor` as input layer based on given `feature_columns`.\n\n Generally a single example in training data is described with FeatureColumns.\n At the first layer of the model, this column oriented data should be converted\n to a single `Tensor`.\n\n Example:\n\n ```python\n price = numeric_column('price')\n keywords_embedded = embedding_column(\n categorical_column_with_hash_bucket(\"keywords\", 10K), dimensions=16)\n columns = [price, keywords_embedded, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n for units in [128, 64, 32]:\n dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu)\n prediction = tf.compat.v1.layers.dense(dense_tensor, 1)\n ```\n\n Args:\n features: A mapping from key to tensors. `_FeatureColumn`s look up via these\n keys. For example `numeric_column('price')` will look at 'price' key in\n this dict. Values can be a `SparseTensor` or a `Tensor` depends on\n corresponding `_FeatureColumn`.\n feature_columns: An iterable containing the FeatureColumns to use as inputs\n to your model. All items should be instances of classes derived from\n `_DenseColumn` such as `numeric_column`, `embedding_column`,\n `bucketized_column`, `indicator_column`. If you have categorical features,\n you can wrap them with an `embedding_column` or `indicator_column`.\n weight_collections: A list of collection names to which the Variable will be\n added. Note that variables will also be added to collections\n `tf.GraphKeys.GLOBAL_VARIABLES` and `ops.GraphKeys.MODEL_VARIABLES`.\n trainable: If `True` also add the variable to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n cols_to_vars: If not `None`, must be a dictionary that will be filled with a\n mapping from `_FeatureColumn` to list of `Variable`s. For example, after\n the call, we might have cols_to_vars =\n {_EmbeddingColumn(\n categorical_column=_HashedCategoricalColumn(\n key='sparse_feature', hash_bucket_size=5, dtype=tf.string),\n dimension=10): [],\n 'bias': [],\n _NumericColumn(\n key='numeric_feature2', shape=(2,)):\n []}\n If a column creates no variables, its value will be an empty list. Note\n that cols_to_vars will also contain a string key 'bias' that maps to a\n list of Variables.\n\n Returns:\n A `Tensor` which represents predictions/logits of a linear model. Its shape\n is (batch_size, units) and its dtype is `float32`.\n\n Raises:\n ValueError: if an item in `feature_columns` is neither a `_DenseColumn`\n nor `_CategoricalColumn`.\n ", "desc": "Returns a linear prediction `Tensor` based on given `feature_columns`.", "type": "API"}, {"name": "tf.compat.v1.feature_column.make_parse_example_spec", "docs": "Creates parsing spec dictionary from input feature_columns.\n\n The returned dictionary can be used as arg 'features' in\n `tf.io.parse_example`.\n\n Typical usage example:\n\n ```python\n # Define features and transformations\n feature_a = categorical_column_with_vocabulary_file(...)\n feature_b = numeric_column(...)\n feature_c_bucketized = bucketized_column(numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = set(\n [feature_b, feature_c_bucketized, feature_a_x_feature_c])\n features = tf.io.parse_example(\n serialized=serialized_examples,\n features=make_parse_example_spec(feature_columns))\n ```\n\n For the above example, make_parse_example_spec would return the dict:\n\n ```python\n {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n }\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `_FeatureColumn`.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If any of the given `feature_columns` is not a `_FeatureColumn`\n instance.\n ", "desc": "Creates parsing spec dictionary from input feature_columns.", "type": "API"}, {"name": "tf.compat.v1.feature_column.numeric_column", "docs": "Represents real valued or numerical features.\n\n Example:\n\n Assume we have data with two features `a` and `b`.\n\n >>> data = {'a': [15, 9, 17, 19, 21, 18, 25, 30],\n ... 'b': [5.0, 6.4, 10.5, 13.6, 15.7, 19.9, 20.3 , 0.0]}\n\n Let us represent the features `a` and `b` as numerical features.\n\n >>> a = tf.feature_column.numeric_column('a')\n >>> b = tf.feature_column.numeric_column('b')\n\n Feature column describe a set of transformations to the inputs.\n\n For example, to \"bucketize\" feature `a`, wrap the `a` column in a\n `feature_column.bucketized_column`.\n Providing `5` bucket boundaries, the bucketized_column api\n will bucket this feature in total of `6` buckets.\n\n >>> a_buckets = tf.feature_column.bucketized_column(a,\n ... boundaries=[10, 15, 20, 25, 30])\n\n Create a `DenseFeatures` layer which will apply the transformations\n described by the set of `tf.feature_column` objects:\n\n >>> feature_layer = tf.keras.layers.DenseFeatures([a_buckets, b])\n >>> print(feature_layer(data))\n tf.Tensor(\n [[ 0. 0. 1. 0. 0. 0. 5. ]\n [ 1. 0. 0. 0. 0. 0. 6.4]\n [ 0. 0. 1. 0. 0. 0. 10.5]\n [ 0. 0. 1. 0. 0. 0. 13.6]\n [ 0. 0. 0. 1. 0. 0. 15.7]\n [ 0. 0. 1. 0. 0. 0. 19.9]\n [ 0. 0. 0. 0. 1. 0. 20.3]\n [ 0. 0. 0. 0. 0. 1. 0. ]], shape=(8, 7), dtype=float32)\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n shape: An iterable of integers specifies the shape of the `Tensor`. An\n integer can be given which means a single dimension `Tensor` with given\n width. The `Tensor` representing the column will have the shape of\n [batch_size] + `shape`.\n default_value: A single value compatible with `dtype` or an iterable of\n values compatible with `dtype` which the column takes on during\n `tf.Example` parsing if data is missing. A default value of `None` will\n cause `tf.io.parse_example` to fail if an example does not contain this\n column. If a single value is provided, the same value will be applied as\n the default value for every item. If an iterable of values is provided,\n the shape of the `default_value` should be equal to the given `shape`.\n dtype: defines the type of values. Default value is `tf.float32`. Must be a\n non-quantized, real integer or floating point type.\n normalizer_fn: If not `None`, a function that can be used to normalize the\n value of the tensor after `default_value` is applied for parsing.\n Normalizer function takes the input `Tensor` as its argument, and returns\n the output `Tensor`. (e.g. lambda x: (x - 3.0) / 4.2). Please note that\n even though the most common use case of this function is normalization, it\n can be used for any kind of Tensorflow transformations.\n\n Returns:\n A `NumericColumn`.\n\n Raises:\n TypeError: if any dimension in shape is not an int\n ValueError: if any dimension in shape is not a positive integer\n TypeError: if `default_value` is an iterable but not compatible with `shape`\n TypeError: if `default_value` is not compatible with `dtype`.\n ValueError: if `dtype` is not convertible to `tf.float32`.\n ", "desc": "Represents real valued or numerical features.", "type": "API"}, {"name": "tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket", "docs": "A sequence of categorical terms where ids are set by hashing.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n tokens = sequence_categorical_column_with_hash_bucket(\n 'tokens', hash_bucket_size=1000)\n tokens_embedding = embedding_column(tokens, dimension=10)\n columns = [tokens_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n hash_bucket_size: An int > 1. The number of buckets.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: `hash_bucket_size` is not greater than 1.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A sequence of categorical terms where ids are set by hashing.", "type": "API"}, {"name": "tf.compat.v1.feature_column.sequence_categorical_column_with_identity", "docs": "Returns a feature column that represents sequences of integers.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n watches = sequence_categorical_column_with_identity(\n 'watches', num_buckets=1000)\n watches_embedding = embedding_column(watches, dimension=10)\n columns = [watches_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n num_buckets: Range of inputs. Namely, inputs are expected to be in the\n range `[0, num_buckets)`.\n default_value: If `None`, this column's graph operations will fail for\n out-of-range inputs. Otherwise, this value must be in the range\n `[0, num_buckets)`, and will replace out-of-range inputs.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: if `num_buckets` is less than one.\n ValueError: if `default_value` is not in range `[0, num_buckets)`.\n ", "desc": "Returns a feature column that represents sequences of integers.", "type": "API"}, {"name": "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file", "docs": "A sequence of categorical terms where ids use a vocabulary file.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n states = sequence_categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,\n num_oov_buckets=5)\n states_embedding = embedding_column(states, dimension=10)\n columns = [states_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n vocabulary_file: The vocabulary file name.\n vocabulary_size: Number of the elements in the vocabulary. This must be no\n greater than length of `vocabulary_file`, if less than length, later\n values are ignored. If None, it is set to the length of `vocabulary_file`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of\n the input value. A positive `num_oov_buckets` can not be specified with\n `default_value`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: `vocabulary_file` is missing or cannot be opened.\n ValueError: `vocabulary_size` is missing or < 1.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A sequence of categorical terms where ids use a vocabulary file.", "type": "API"}, {"name": "tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list", "docs": "A sequence of categorical terms where ids use an in-memory list.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n colors = sequence_categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),\n num_oov_buckets=2)\n colors_embedding = embedding_column(colors, dimension=3)\n columns = [colors_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n vocabulary_list: An ordered iterable defining the vocabulary. Each feature\n is mapped to the index of its value (if present) in `vocabulary_list`.\n Must be castable to `dtype`.\n dtype: The type of features. Only string and integer types are supported.\n If `None`, it will be inferred from `vocabulary_list`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a\n hash of the input value. A positive `num_oov_buckets` can not be specified\n with `default_value`.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: if `vocabulary_list` is empty, or contains duplicate keys.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: if `dtype` is not integer or string.\n ", "desc": "A sequence of categorical terms where ids use an in-memory list.", "type": "API"}, {"name": "tf.compat.v1.feature_column.sequence_numeric_column", "docs": "Returns a feature column that represents sequences of numeric data.\n\n Example:\n\n ```python\n temperature = sequence_numeric_column('temperature')\n columns = [temperature]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input features.\n shape: The shape of the input data per sequence id. E.g. if `shape=(2,)`,\n each example must contain `2 * sequence_length` values.\n default_value: A single value compatible with `dtype` that is used for\n padding the sparse data into a dense `Tensor`.\n dtype: The type of values.\n normalizer_fn: If not `None`, a function that can be used to normalize the\n value of the tensor after `default_value` is applied for parsing.\n Normalizer function takes the input `Tensor` as its argument, and returns\n the output `Tensor`. (e.g. lambda x: (x - 3.0) / 4.2). Please note that\n even though the most common use case of this function is normalization, it\n can be used for any kind of Tensorflow transformations.\n\n Returns:\n A `SequenceNumericColumn`.\n\n Raises:\n TypeError: if any dimension in shape is not an int.\n ValueError: if any dimension in shape is not a positive integer.\n ValueError: if `dtype` is not convertible to `tf.float32`.\n ", "desc": "Returns a feature column that represents sequences of numeric data.", "type": "API"}, {"name": "tf.compat.v1.feature_column.shared_embedding_columns", "docs": "List of dense columns that convert from sparse, categorical input.\n\n This is similar to `embedding_column`, except that it produces a list of\n embedding columns that share the same embedding weights.\n\n Use this when your inputs are sparse and of the same type (e.g. watched and\n impression video IDs that share the same vocabulary), and you want to convert\n them to a dense representation (e.g., to feed to a DNN).\n\n Inputs must be a list of categorical columns created by any of the\n `categorical_column_*` function. They must all be of the same type and have\n the same arguments except `key`. E.g. they can be\n categorical_column_with_vocabulary_file with the same vocabulary_file. Some or\n all columns could also be weighted_categorical_column.\n\n Here is an example embedding of two features for a DNNClassifier model:\n\n ```python\n watched_video_id = categorical_column_with_vocabulary_file(\n 'watched_video_id', video_vocabulary_file, video_vocabulary_size)\n impression_video_id = categorical_column_with_vocabulary_file(\n 'impression_video_id', video_vocabulary_file, video_vocabulary_size)\n columns = shared_embedding_columns(\n [watched_video_id, impression_video_id], dimension=10)\n\n estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)\n\n label_column = ...\n def input_fn():\n features = tf.io.parse_example(\n ..., features=make_parse_example_spec(columns + [label_column]))\n labels = features.pop(label_column.name)\n return features, labels\n\n estimator.train(input_fn=input_fn, steps=100)\n ```\n\n Here is an example using `shared_embedding_columns` with model_fn:\n\n ```python\n def model_fn(features, ...):\n watched_video_id = categorical_column_with_vocabulary_file(\n 'watched_video_id', video_vocabulary_file, video_vocabulary_size)\n impression_video_id = categorical_column_with_vocabulary_file(\n 'impression_video_id', video_vocabulary_file, video_vocabulary_size)\n columns = shared_embedding_columns(\n [watched_video_id, impression_video_id], dimension=10)\n dense_tensor = input_layer(features, columns)\n # Form DNN layers, calculate loss, and return EstimatorSpec.\n ...\n ```\n\n Args:\n categorical_columns: List of categorical columns created by a\n `categorical_column_with_*` function. These columns produce the sparse IDs\n that are inputs to the embedding lookup. All columns must be of the same\n type and have the same arguments except `key`. E.g. they can be\n categorical_column_with_vocabulary_file with the same vocabulary_file.\n Some or all columns could also be weighted_categorical_column.\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries in\n a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with\n 'mean' the default. 'sqrtn' often achieves good accuracy, in particular\n with bag-of-words columns. Each of this can be thought as example level\n normalizations on the column. For more information, see\n `tf.embedding_lookup_sparse`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `truncated_normal_initializer` with mean `0.0` and\n standard deviation `1/sqrt(dimension)`.\n shared_embedding_collection_name: Optional name of the collection where\n shared embedding weights are added. If not given, a reasonable name will\n be chosen based on the names of `categorical_columns`. This is also used\n in `variable_scope` when creating shared embedding weights.\n ckpt_to_load_from: String representing checkpoint name/pattern from which to\n restore column weights. Required if `tensor_name_in_ckpt` is not `None`.\n tensor_name_in_ckpt: Name of the `Tensor` in `ckpt_to_load_from` from which\n to restore the column weights. Required if `ckpt_to_load_from` is not\n `None`.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is larger\n than this value, before combining.\n trainable: Whether or not the embedding is trainable. Default is True.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n A list of dense columns that converts from sparse input. The order of\n results follows the ordering of `categorical_columns`.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if any of the given `categorical_columns` is of different type\n or has different arguments than the others.\n ValueError: if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt`\n is specified.\n ValueError: if `initializer` is specified and is not callable.\n RuntimeError: if eager execution is enabled.\n ", "desc": "List of dense columns that convert from sparse, categorical input.", "type": "API"}, {"name": "tf.compat.v1.feature_column.weighted_categorical_column", "docs": "Applies weight values to a `CategoricalColumn`.\n\n Use this when each of your sparse inputs has both an ID and a value. For\n example, if you're representing text documents as a collection of word\n frequencies, you can provide 2 parallel sparse input features ('terms' and\n 'frequencies' below).\n\n Example:\n\n Input `tf.Example` objects:\n\n ```proto\n [\n features {\n feature {\n key: \"terms\"\n value {bytes_list {value: \"very\" value: \"model\"}}\n }\n feature {\n key: \"frequencies\"\n value {float_list {value: 0.3 value: 0.1}}\n }\n },\n features {\n feature {\n key: \"terms\"\n value {bytes_list {value: \"when\" value: \"course\" value: \"human\"}}\n }\n feature {\n key: \"frequencies\"\n value {float_list {value: 0.4 value: 0.1 value: 0.2}}\n }\n }\n ]\n ```\n\n ```python\n categorical_column = categorical_column_with_hash_bucket(\n column_name='terms', hash_bucket_size=1000)\n weighted_column = weighted_categorical_column(\n categorical_column=categorical_column, weight_feature_key='frequencies')\n columns = [weighted_column, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n This assumes the input dictionary contains a `SparseTensor` for key\n 'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have\n the same indices and dense shape.\n\n Args:\n categorical_column: A `CategoricalColumn` created by\n `categorical_column_with_*` functions.\n weight_feature_key: String key for weight values.\n dtype: Type of weights, such as `tf.float32`. Only float and integer weights\n are supported.\n\n Returns:\n A `CategoricalColumn` composed of two sparse features: one represents id,\n the other represents weight (value) of the id feature in that example.\n\n Raises:\n ValueError: if `dtype` is not convertible to float.\n ", "desc": "Applies weight values to a `CategoricalColumn`.", "type": "API"}, {"name": "tf.compat.v1.fft", "docs": "Fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform over the inner-most\n dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.fft2d", "docs": "2D fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform over the inner-most\n 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.fft3d", "docs": "3D fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform over the inner-most 3\n dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.FIFOQueue", "docs": "A queue implementation that dequeues elements in first-in first-out order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in first-in first-out order.", "type": "API"}, {"name": "tf.compat.v1.fill", "docs": "Creates a tensor filled with a scalar value.\n\n See also `tf.ones`, `tf.zeros`, `tf.one_hot`, `tf.eye`.\n\n This operation creates a tensor of shape `dims` and fills it with `value`.\n\n For example:\n\n >>> tf.fill([2, 3], 9)\n \n\n `tf.fill` evaluates at graph runtime and supports dynamic shapes based on\n other runtime `tf.Tensors`, unlike `tf.constant(value, shape=dims)`, which\n embeds the value as a `Const` node.\n\n Args:\n dims: A 1-D sequence of non-negative numbers. Represents the shape of the\n output `tf.Tensor`. Entries should be of type: `int32`, `int64`.\n value: A value to fill the returned `tf.Tensor`.\n name: Optional string. The name of the output `tf.Tensor`.\n\n Returns:\n A `tf.Tensor` with shape `dims` and the same dtype as `value`.\n\n Raises:\n InvalidArgumentError: `dims` contains negative entries.\n NotFoundError: `dims` contains non-integer entries.\n\n @compatibility(numpy)\n Similar to `np.full`. In `numpy`, more parameters are supported. Passing a\n number argument as the shape (`np.full(5, value)`) is valid in `numpy` for\n specifying a 1-D shaped result, while TensorFlow does not support this syntax.\n @end_compatibility\n ", "desc": "Creates a tensor filled with a scalar value.", "type": "API"}, {"name": "tf.compat.v1.fingerprint", "docs": "Generates fingerprint values.\n\n Generates fingerprint values of `data`.\n\n Fingerprint op considers the first dimension of `data` as the batch dimension,\n and `output[i]` contains the fingerprint value generated from contents in\n `data[i, ...]` for all `i`.\n\n Fingerprint op writes fingerprint values as byte arrays. For example, the\n default method `farmhash64` generates a 64-bit fingerprint value at a time.\n This 8-byte value is written out as an `tf.uint8` array of size 8, in\n little-endian order.\n\n For example, suppose that `data` has data type `tf.int32` and shape (2, 3, 4),\n and that the fingerprint method is `farmhash64`. In this case, the output\n shape is (2, 8), where 2 is the batch dimension size of `data`, and 8 is the\n size of each fingerprint value in bytes. `output[0, :]` is generated from\n 12 integers in `data[0, :, :]` and similarly `output[1, :]` is generated from\n other 12 integers in `data[1, :, :]`.\n\n Note that this op fingerprints the raw underlying buffer, and it does not\n fingerprint Tensor's metadata such as data type and/or shape. For example, the\n fingerprint values are invariant under reshapes and bitcasts as long as the\n batch dimension remain the same:\n\n ```python\n tf.fingerprint(data) == tf.fingerprint(tf.reshape(data, ...))\n tf.fingerprint(data) == tf.fingerprint(tf.bitcast(data, ...))\n ```\n\n For string data, one should expect `tf.fingerprint(data) !=\n tf.fingerprint(tf.string.reduce_join(data))` in general.\n\n Args:\n data: A `Tensor`. Must have rank 1 or higher.\n method: A `Tensor` of type `tf.string`. Fingerprint method used by this op.\n Currently available method is `farmhash64`.\n name: A name for the operation (optional).\n\n Returns:\n A two-dimensional `Tensor` of type `tf.uint8`. The first dimension equals to\n `data`'s first dimension, and the second dimension size depends on the\n fingerprint algorithm.\n ", "desc": "Generates fingerprint values.", "type": "API"}, {"name": "tf.compat.v1.fixed_size_partitioner", "docs": "Partitioner to specify a fixed number of shards along given axis.\n\n @compatibility(TF2)\n This API is deprecated in TF2. In TF2, partitioner is no longer part of\n the variable declaration via `tf.Variable`.\n [ParameterServer Training]\n (https://www.tensorflow.org/tutorials/distribute/parameter_server_training)\n handles partitioning of variables. The corresponding TF2 partitioner class of\n `fixed_size_partitioner` is\n `tf.distribute.experimental.partitioners.FixedShardsPartitioner`.\n\n Check the [migration guide]\n (https://www.tensorflow.org/guide/migrate#2_use_python_objects_to_track_variables_and_losses)\n on the differences in treatment of variables and losses between TF1 and TF2.\n\n Before:\n\n ```\n x = tf.compat.v1.get_variable(\n \"x\", shape=(2,), partitioner=tf.compat.v1.fixed_size_partitioner(2)\n )\n ```\n After:\n\n ```\n partitioner = (\n tf.distribute.experimental.partitioners.FixedShardsPartitioner(\n num_shards=2)\n )\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=cluster_resolver,\n variable_partitioner=partitioner)\n\n with strategy.scope():\n x = tf.Variable([1.0, 2.0])\n ```\n @end_compatibility\n\n Args:\n num_shards: `int`, number of shards to partition variable.\n axis: `int`, axis to partition on.\n\n Returns:\n A partition function usable as the `partitioner` argument to\n `variable_scope` and `get_variable`.\n ", "desc": "Partitioner to specify a fixed number of shards along given axis.", "type": "API"}, {"name": "tf.compat.v1.FixedLenFeature", "docs": "Configuration for parsing a fixed-length input feature.\n\n To treat sparse input as dense, provide a `default_value`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data.\n dtype: Data type of input.\n default_value: Value to be used if an example is missing this feature. It\n must be compatible with `dtype` and of the specified `shape`.\n ", "desc": "Configuration for parsing a fixed-length input feature.", "type": "API"}, {"name": "tf.compat.v1.FixedLengthRecordReader", "docs": "A Reader that outputs fixed-length records from a file.\n\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs fixed-length records from a file.", "type": "API"}, {"name": "tf.compat.v1.FixedLenSequenceFeature", "docs": "Configuration for parsing a variable-length input feature into a `Tensor`.\n\n The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has\n a static `shape` of `[None] + shape` and the specified `dtype`.\n The resulting `Tensor` of parsing a `batch_size` many `Example`s has\n a static `shape` of `[batch_size, None] + shape` and the specified `dtype`.\n The entries in the `batch` from different `Examples` will be padded with\n `default_value` to the maximum length present in the `batch`.\n\n To treat a sparse input as dense, provide `allow_missing=True`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data for dimension 2 and higher. First dimension is\n of variable length `None`.\n dtype: Data type of input.\n allow_missing: Whether to allow this feature to be missing from a feature\n list item. Is available only for parsing `SequenceExample` not for\n parsing `Examples`.\n default_value: Scalar value to be used to pad multiple `Example`s to their\n maximum length. Irrelevant for parsing a single `Example` or\n `SequenceExample`. Defaults to \"\" for dtype string and 0 otherwise\n (optional).\n ", "desc": "Configuration for parsing a variable-length input feature into a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.flags", "docs": "Import router for absl.flags. See https://github.com/abseil/abseil-py.", "desc": "Import router for absl.flags. See https://github.com/abseil/abseil-py.", "type": "API"}, {"name": "tf.compat.v1.flags.adopt_module_key_flags", "docs": "Declares that all flags key to a module are key to the current module.\n\n Args:\n module: module, the module object from which all key flags will be declared\n as key flags to the current module.\n flag_values: FlagValues, the FlagValues instance in which the flags will be\n declared as key flags. This should almost never need to be overridden.\n\n Raises:\n Error: Raised when given an argument that is a module name (a string),\n instead of a module object.\n ", "desc": "Declares that all flags key to a module are key to the current module.", "type": "API"}, {"name": "tf.compat.v1.flags.ArgumentParser", "docs": "Base class used to parse and convert arguments.\n\n The parse() method checks to make sure that the string argument is a\n legal value and convert it to a native type. If the value cannot be\n converted, it should throw a 'ValueError' exception with a human\n readable explanation of why the value is illegal.\n\n Subclasses should also define a syntactic_help string which may be\n presented to the user to describe the form of the legal values.\n\n Argument parser classes must be stateless, since instances are cached\n and shared between flags. Initializer arguments are allowed, but all\n member variables must be derived from initializer arguments only.\n ", "desc": "Base class used to parse and convert arguments.", "type": "API"}, {"name": "tf.compat.v1.flags.ArgumentSerializer", "docs": "Base class for generating string representations of a flag value.", "desc": "Base class for generating string representations of a flag value.", "type": "API"}, {"name": "tf.compat.v1.flags.BaseListParser", "docs": "Base class for a parser of lists of strings.\n\n To extend, inherit from this class; from the subclass __init__, call\n\n BaseListParser.__init__(self, token, name)\n\n where token is a character used to tokenize, and name is a description\n of the separator.\n ", "desc": "Base class for a parser of lists of strings.", "type": "API"}, {"name": "tf.compat.v1.flags.BooleanFlag", "docs": "Basic boolean flag.\n\n Boolean flags do not take any arguments, and their value is either\n True (1) or False (0). The false value is specified on the command\n line by prepending the word 'no' to either the long or the short flag\n name.\n\n For example, if a Boolean flag was created whose long name was\n 'update' and whose short name was 'x', then this flag could be\n explicitly unset through either --noupdate or --nox.\n ", "desc": "Basic boolean flag.", "type": "API"}, {"name": "tf.compat.v1.flags.BooleanParser", "docs": "Parser of boolean values.", "desc": "Parser of boolean values.", "type": "API"}, {"name": "tf.compat.v1.flags.CantOpenFlagFileError", "docs": "Raised when flagfile fails to open.\n\n E.g. the file doesn't exist, or has wrong permissions.\n ", "desc": "Raised when flagfile fails to open.", "type": "API"}, {"name": "tf.compat.v1.flags.CsvListSerializer", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.flags.declare_key_flag", "docs": "Declares one flag as key to the current module.\n\n Key flags are flags that are deemed really important for a module.\n They are important when listing help messages; e.g., if the\n --helpshort command-line flag is used, then only the key flags of the\n main module are listed (instead of all flags, as in the case of\n --helpfull).\n\n Sample usage:\n\n flags.declare_key_flag('flag_1')\n\n Args:\n flag_name: str, the name of an already declared flag. (Redeclaring flags as\n key, including flags implicitly key because they were declared in this\n module, is a no-op.)\n flag_values: FlagValues, the FlagValues instance in which the flag will be\n declared as a key flag. This should almost never need to be overridden.\n\n Raises:\n ValueError: Raised if flag_name not defined as a Python flag.\n ", "desc": "Declares one flag as key to the current module.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE", "docs": "Registers a generic Flag object.\n\n NOTE: in the docstrings of all DEFINE* functions, \"registers\" is short\n for \"creates a new flag and registers it\".\n\n Auxiliary function: clients should use the specialized DEFINE_\n function instead.\n\n Args:\n parser: ArgumentParser, used to parse the flag arguments.\n name: str, the flag name.\n default: The default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n serializer: ArgumentSerializer, the flag serializer instance.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a generic Flag object.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_alias", "docs": "Defines an alias flag for an existing one.\n\n Args:\n name: str, the flag name.\n original_name: str, the original flag name.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the module that defines this flag.\n\n Returns:\n a handle to defined flag.\n\n Raises:\n flags.FlagError:\n UnrecognizedFlagError: if the referenced flag doesn't exist.\n DuplicateFlagError: if the alias name has been used by some existing flag.\n ", "desc": "Defines an alias flag for an existing one.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_bool", "docs": "Registers a boolean flag.\n\n Such a boolean flag does not take an argument. If a user wants to\n specify a false value explicitly, the long option beginning with 'no'\n must be used: i.e. --noflag\n\n This flag will have a value of None, True or False. None is possible\n if default=None and the user does not specify the flag on the command\n line.\n\n Args:\n name: str, the flag name.\n default: bool|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a boolean flag.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_boolean", "docs": "Registers a boolean flag.\n\n Such a boolean flag does not take an argument. If a user wants to\n specify a false value explicitly, the long option beginning with 'no'\n must be used: i.e. --noflag\n\n This flag will have a value of None, True or False. None is possible\n if default=None and the user does not specify the flag on the command\n line.\n\n Args:\n name: str, the flag name.\n default: bool|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a boolean flag.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_enum", "docs": "Registers a flag whose value can be any string from enum_values.\n\n Instead of a string enum, prefer `DEFINE_enum_class`, which allows\n defining enums from an `enum.Enum` class.\n\n Args:\n name: str, the flag name.\n default: str|None, the default value of the flag.\n enum_values: [str], a non-empty list of strings with the possible values for\n the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be any string from enum_values.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_enum_class", "docs": "Registers a flag whose value can be the name of enum members.\n\n Args:\n name: str, the flag name.\n default: Enum|str|None, the default value of the flag.\n enum_class: class, the Enum class with all the possible values for the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n case_sensitive: bool, whether to map strings to members of the enum_class\n without considering case.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to Flag __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be the name of enum members.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_flag", "docs": "Registers a 'Flag' object with a 'FlagValues' object.\n\n By default, the global FLAGS 'FlagValue' object is used.\n\n Typical users will use one of the more specialized DEFINE_xxx\n functions, such as DEFINE_string or DEFINE_integer. But developers\n who need to create Flag objects themselves should use this function\n to register their flags.\n\n Args:\n flag: Flag, a flag that is key to the module.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: str, the name of the Python module declaring this flag. If not\n provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a 'Flag' object with a 'FlagValues' object.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_float", "docs": "Registers a flag whose value must be a float.\n\n If lower_bound or upper_bound are set, then this flag must be\n within the given range.\n\n Args:\n name: str, the flag name.\n default: float|str|None, the default value of the flag.\n help: str, the help message.\n lower_bound: float, min value of the flag.\n upper_bound: float, max value of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to DEFINE.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value must be a float.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_integer", "docs": "Registers a flag whose value must be an integer.\n\n If lower_bound, or upper_bound are set, then this flag must be\n within the given range.\n\n Args:\n name: str, the flag name.\n default: int|str|None, the default value of the flag.\n help: str, the help message.\n lower_bound: int, min value of the flag.\n upper_bound: int, max value of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: dict, the extra keyword args that are passed to DEFINE.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value must be an integer.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_list", "docs": "Registers a flag whose value is a comma-separated list of strings.\n\n The flag value is parsed with a CSV parser.\n\n Args:\n name: str, the flag name.\n default: list|str|None, the default value of the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value is a comma-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi", "docs": "Registers a generic MultiFlag that parses its args with a given parser.\n\n Auxiliary function. Normal users should NOT use it directly.\n\n Developers who need to create their own 'Parser' classes for options\n which can appear multiple times can call this module function to\n register their flags.\n\n Args:\n parser: ArgumentParser, used to parse the flag arguments.\n serializer: ArgumentSerializer, the flag serializer instance.\n name: str, the flag name.\n default: Union[Iterable[T], Text, None], the default value of the flag. If\n the value is text, it will be parsed as if it was provided from the\n command line. If the value is a non-string iterable, it will be iterated\n over to create a shallow copy of the values. If it is None, it is left\n as-is.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the Python module declaring this flag. If\n not provided, it will be computed using the stack trace of this call.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a generic MultiFlag that parses its args with a given parser.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi_enum", "docs": "Registers a flag whose value can be a list strings from enum_values.\n\n Use the flag on the command line multiple times to place multiple\n enum values into the list. The 'default' may be a single string\n (which will be converted into a single-element list) or a list of\n strings.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Text], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n enum_values: [str], a non-empty list of strings with the possible values for\n the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n case_sensitive: Whether or not the enum is to be case-sensitive.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list strings from enum_values.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi_enum_class", "docs": "Registers a flag whose value can be a list of enum members.\n\n Use the flag on the command line multiple times to place multiple\n enum values into the list.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the\n default value of the flag; see `DEFINE_multi`; only differences are\n documented here. If the value is a single Enum, it is treated as a\n single-item list of that Enum value. If it is an iterable, text values\n within the iterable will be converted to the equivalent Enum objects.\n enum_class: class, the Enum class with all the possible values for the flag.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n module_name: A string, the name of the Python module declaring this flag. If\n not provided, it will be computed using the stack trace of this call.\n case_sensitive: bool, whether to map strings to members of the enum_class\n without considering case.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of enum members.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi_float", "docs": "Registers a flag whose value can be a list of arbitrary floats.\n\n Use the flag on the command line multiple times to place multiple\n float values into the list. The 'default' may be a single float\n (which will be converted into a single-element list) or a list of\n floats.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[float], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n lower_bound: float, min values of the flag.\n upper_bound: float, max values of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of arbitrary floats.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi_integer", "docs": "Registers a flag whose value can be a list of arbitrary integers.\n\n Use the flag on the command line multiple times to place multiple\n integer values into the list. The 'default' may be a single integer\n (which will be converted into a single-element list) or a list of\n integers.\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[int], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n lower_bound: int, min values of the flag.\n upper_bound: int, max values of the flag.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of arbitrary integers.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_multi_string", "docs": "Registers a flag whose value can be a list of any strings.\n\n Use the flag on the command line multiple times to place multiple\n string values into the list. The 'default' may be a single string\n (which will be converted into a single-element list) or a list of\n strings.\n\n\n Args:\n name: str, the flag name.\n default: Union[Iterable[Text], Text, None], the default value of the flag;\n see `DEFINE_multi`.\n help: str, the help message.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value can be a list of any strings.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_spaceseplist", "docs": "Registers a flag whose value is a whitespace-separated list of strings.\n\n Any whitespace can be used as a separator.\n\n Args:\n name: str, the flag name.\n default: list|str|None, the default value of the flag.\n help: str, the help message.\n comma_compat: bool - Whether to support comma as an additional separator. If\n false then only whitespace is supported. This is intended only for\n backwards compatibility with flags that used to be comma-separated.\n flag_values: FlagValues, the FlagValues instance with which the flag will be\n registered. This should almost never need to be overridden.\n required: bool, is this a required flag. This must be used as a keyword\n argument.\n **args: Dictionary with extra keyword args that are passed to the Flag\n __init__.\n\n Returns:\n a handle to defined flag.\n ", "desc": "Registers a flag whose value is a whitespace-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.flags.DEFINE_string", "docs": "Registers a flag whose value can be any string.", "desc": "Registers a flag whose value can be any string.", "type": "API"}, {"name": "tf.compat.v1.flags.disclaim_key_flags", "docs": "Declares that the current module will not define any more key flags.\n\n Normally, the module that calls the DEFINE_xxx functions claims the\n flag to be its key flag. This is undesirable for modules that\n define additional DEFINE_yyy functions with its own flag parsers and\n serializers, since that module will accidentally claim flags defined\n by DEFINE_yyy as its key flags. After calling this function, the\n module disclaims flag definitions thereafter, so the key flags will\n be correctly attributed to the caller of DEFINE_yyy.\n\n After calling this function, the module will not be able to define\n any more flags. This function will affect all FlagValues objects.\n ", "desc": "Declares that the current module will not define any more key flags.", "type": "API"}, {"name": "tf.compat.v1.flags.doc_to_help", "docs": "Takes a __doc__ string and reformats it as help.", "desc": "Takes a __doc__ string and reformats it as help.", "type": "API"}, {"name": "tf.compat.v1.flags.DuplicateFlagError", "docs": "Raised if there is a flag naming conflict.", "desc": "Raised if there is a flag naming conflict.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumClassFlag", "docs": "Basic enum flag; its value is an enum class's member.", "desc": "Basic enum flag; its value is an enum class's member.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumClassListSerializer", "docs": "A serializer for MultiEnumClass flags.\n\n This serializer simply joins the output of `EnumClassSerializer` using a\n provided seperator.\n ", "desc": "A serializer for MultiEnumClass flags.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumClassParser", "docs": "Parser of an Enum class member.", "desc": "Parser of an Enum class member.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumClassSerializer", "docs": "Class for generating string representations of an enum class flag value.", "desc": "Class for generating string representations of an enum class flag value.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumFlag", "docs": "Basic enum flag; its value can be any string from list of enum_values.", "desc": "Basic enum flag; its value can be any string from list of enum_values.", "type": "API"}, {"name": "tf.compat.v1.flags.EnumParser", "docs": "Parser of a string enum value (a string value from a given set).", "desc": "Parser of a string enum value (a string value from a given set).", "type": "API"}, {"name": "tf.compat.v1.flags.Error", "docs": "The base class for all flags errors.", "desc": "The base class for all flags errors.", "type": "API"}, {"name": "tf.compat.v1.flags.Flag", "docs": "Information about a command-line flag.\n\n 'Flag' objects define the following fields:\n .name - the name for this flag;\n .default - the default value for this flag;\n .default_unparsed - the unparsed default value for this flag.\n .default_as_str - default value as repr'd string, e.g., \"'true'\" (or None);\n .value - the most recent parsed value of this flag; set by parse();\n .help - a help string or None if no help is available;\n .short_name - the single letter alias for this flag (or None);\n .boolean - if 'true', this flag does not accept arguments;\n .present - true if this flag was parsed from command line flags;\n .parser - an ArgumentParser object;\n .serializer - an ArgumentSerializer object;\n .allow_override - the flag may be redefined without raising an error, and\n newly defined flag overrides the old one.\n .allow_override_cpp - use the flag from C++ if available; the flag\n definition is replaced by the C++ flag after init;\n .allow_hide_cpp - use the Python flag despite having a C++ flag with\n the same name (ignore the C++ flag);\n .using_default_value - the flag value has not been set by user;\n .allow_overwrite - the flag may be parsed more than once without raising\n an error, the last set value will be used;\n .allow_using_method_names - whether this flag can be defined even if it has\n a name that conflicts with a FlagValues method.\n\n The only public method of a 'Flag' object is parse(), but it is\n typically only called by a 'FlagValues' object. The parse() method is\n a thin wrapper around the 'ArgumentParser' parse() method. The parsed\n value is saved in .value, and the .present attribute is updated. If\n this flag was already present, an Error is raised.\n\n parse() is also called during __init__ to parse the default value and\n initialize the .value attribute. This enables other python modules to\n safely use flags even if the __main__ module neglects to parse the\n command line arguments. The .present attribute is cleared after\n __init__ parsing. If the default value is set to None, then the\n __init__ parsing step is skipped and the .value attribute is\n initialized to None.\n\n Note: The default value is also presented to the user in the help\n string, so it is important that it be a legal value for this flag.\n ", "desc": "Information about a command-line flag.", "type": "API"}, {"name": "tf.compat.v1.flags.flag_dict_to_args", "docs": "Convert a dict of values into process call parameters.\n\n This method is used to convert a dictionary into a sequence of parameters\n for a binary that parses arguments using this module.\n\n Args:\n flag_map: dict, a mapping where the keys are flag names (strings).\n values are treated according to their type:\n * If value is None, then only the name is emitted.\n * If value is True, then only the name is emitted.\n * If value is False, then only the name prepended with 'no' is emitted.\n * If value is a string then --name=value is emitted.\n * If value is a collection, this will emit --name=value1,value2,value3,\n unless the flag name is in multi_flags, in which case this will emit\n --name=value1 --name=value2 --name=value3.\n * Everything else is converted to string an passed as such.\n multi_flags: set, names (strings) of flags that should be treated as\n multi-flags.\n Yields:\n sequence of string suitable for a subprocess execution.\n ", "desc": "Convert a dict of values into process call parameters.", "type": "API"}, {"name": "tf.compat.v1.flags.FlagHolder", "docs": "Holds a defined flag.\n\n This facilitates a cleaner api around global state. Instead of\n\n ```\n flags.DEFINE_integer('foo', ...)\n flags.DEFINE_integer('bar', ...)\n ...\n def method():\n # prints parsed value of 'bar' flag\n print(flags.FLAGS.foo)\n # runtime error due to typo or possibly bad coding style.\n print(flags.FLAGS.baz)\n ```\n\n it encourages code like\n\n ```\n FOO_FLAG = flags.DEFINE_integer('foo', ...)\n BAR_FLAG = flags.DEFINE_integer('bar', ...)\n ...\n def method():\n print(FOO_FLAG.value)\n print(BAR_FLAG.value)\n ```\n\n since the name of the flag appears only once in the source code.\n ", "desc": "Holds a defined flag.", "type": "API"}, {"name": "tf.compat.v1.flags.FlagNameConflictsWithMethodError", "docs": "Raised when a flag name conflicts with FlagValues methods.", "desc": "Raised when a flag name conflicts with FlagValues methods.", "type": "API"}, {"name": "tf.compat.v1.flags.FLAGS", "docs": "Registry of 'Flag' objects.\n\n A 'FlagValues' can then scan command line arguments, passing flag\n arguments through to the 'Flag' objects that it owns. It also\n provides easy access to the flag values. Typically only one\n 'FlagValues' object is needed by an application: flags.FLAGS\n\n This class is heavily overloaded:\n\n 'Flag' objects are registered via __setitem__:\n FLAGS['longname'] = x # register a new flag\n\n The .value attribute of the registered 'Flag' objects can be accessed\n as attributes of this 'FlagValues' object, through __getattr__. Both\n the long and short name of the original 'Flag' objects can be used to\n access its value:\n FLAGS.longname # parsed flag value\n FLAGS.x # parsed flag value (short name)\n\n Command line arguments are scanned and passed to the registered 'Flag'\n objects through the __call__ method. Unparsed arguments, including\n argv[0] (e.g. the program name) are returned.\n argv = FLAGS(sys.argv) # scan command line arguments\n\n The original registered Flag objects can be retrieved through the use\n of the dictionary-like operator, __getitem__:\n x = FLAGS['longname'] # access the registered Flag object\n\n The str() operator of a 'FlagValues' object provides help for all of\n the registered 'Flag' objects.\n ", "desc": "Registry of 'Flag' objects.", "type": "API"}, {"name": "tf.compat.v1.flags.FlagValues", "docs": "Registry of 'Flag' objects.\n\n A 'FlagValues' can then scan command line arguments, passing flag\n arguments through to the 'Flag' objects that it owns. It also\n provides easy access to the flag values. Typically only one\n 'FlagValues' object is needed by an application: flags.FLAGS\n\n This class is heavily overloaded:\n\n 'Flag' objects are registered via __setitem__:\n FLAGS['longname'] = x # register a new flag\n\n The .value attribute of the registered 'Flag' objects can be accessed\n as attributes of this 'FlagValues' object, through __getattr__. Both\n the long and short name of the original 'Flag' objects can be used to\n access its value:\n FLAGS.longname # parsed flag value\n FLAGS.x # parsed flag value (short name)\n\n Command line arguments are scanned and passed to the registered 'Flag'\n objects through the __call__ method. Unparsed arguments, including\n argv[0] (e.g. the program name) are returned.\n argv = FLAGS(sys.argv) # scan command line arguments\n\n The original registered Flag objects can be retrieved through the use\n of the dictionary-like operator, __getitem__:\n x = FLAGS['longname'] # access the registered Flag object\n\n The str() operator of a 'FlagValues' object provides help for all of\n the registered 'Flag' objects.\n ", "desc": "Registry of 'Flag' objects.", "type": "API"}, {"name": "tf.compat.v1.flags.FloatParser", "docs": "Parser of floating point values.\n\n Parsed value may be bounded to a given upper and lower bound.\n ", "desc": "Parser of floating point values.", "type": "API"}, {"name": "tf.compat.v1.flags.get_help_width", "docs": "Returns the integer width of help lines that is used in TextWrap.", "desc": "Returns the integer width of help lines that is used in TextWrap.", "type": "API"}, {"name": "tf.compat.v1.flags.IllegalFlagValueError", "docs": "Raised when the flag command line argument is illegal.", "desc": "Raised when the flag command line argument is illegal.", "type": "API"}, {"name": "tf.compat.v1.flags.IntegerParser", "docs": "Parser of an integer value.\n\n Parsed value may be bounded to a given upper and lower bound.\n ", "desc": "Parser of an integer value.", "type": "API"}, {"name": "tf.compat.v1.flags.ListParser", "docs": "Parser for a comma-separated list of strings.", "desc": "Parser for a comma-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.flags.ListSerializer", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive", "docs": "Ensures that only one flag among flag_names is True.\n\n Args:\n flag_names: [str], names of the flags.\n required: bool. If true, exactly one flag must be True. Otherwise, at most\n one flag can be True, and it is valid for all flags to be False.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n ", "desc": "Ensures that only one flag among flag_names is True.", "type": "API"}, {"name": "tf.compat.v1.flags.mark_flag_as_required", "docs": "Ensures that flag is not None during program execution.\n\n Registers a flag validator, which will follow usual validator rules.\n Important note: validator will pass for any non-None value, such as False,\n 0 (zero), '' (empty string) and so on.\n\n If your module might be imported by others, and you only wish to make the flag\n required when the module is directly executed, call this method like this:\n\n if __name__ == '__main__':\n flags.mark_flag_as_required('your_flag_name')\n app.run()\n\n Args:\n flag_name: str, name of the flag\n flag_values: flags.FlagValues, optional FlagValues instance where the flag\n is defined.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "Ensures that flag is not None during program execution.", "type": "API"}, {"name": "tf.compat.v1.flags.mark_flags_as_mutual_exclusive", "docs": "Ensures that only one flag among flag_names is not None.\n\n Important note: This validator checks if flag values are None, and it does not\n distinguish between default and explicit values. Therefore, this validator\n does not make sense when applied to flags with default values other than None,\n including other false values (e.g. False, 0, '', []). That includes multi\n flags with a default value of [] instead of None.\n\n Args:\n flag_names: [str], names of the flags.\n required: bool. If true, exactly one of the flags must have a value other\n than None. Otherwise, at most one of the flags can have a value other\n than None, and it is valid for all of the flags to be None.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n ", "desc": "Ensures that only one flag among flag_names is not None.", "type": "API"}, {"name": "tf.compat.v1.flags.mark_flags_as_required", "docs": "Ensures that flags are not None during program execution.\n\n If your module might be imported by others, and you only wish to make the flag\n required when the module is directly executed, call this method like this:\n\n if __name__ == '__main__':\n flags.mark_flags_as_required(['flag1', 'flag2', 'flag3'])\n app.run()\n\n Args:\n flag_names: Sequence[str], names of the flags.\n flag_values: flags.FlagValues, optional FlagValues instance where the flags\n are defined.\n Raises:\n AttributeError: If any of flag name has not already been defined as a flag.\n ", "desc": "Ensures that flags are not None during program execution.", "type": "API"}, {"name": "tf.compat.v1.flags.multi_flags_validator", "docs": "A function decorator for defining a multi-flag validator.\n\n Registers the decorated function as a validator for flag_names, e.g.\n\n @flags.multi_flags_validator(['foo', 'bar'])\n def _CheckFooBar(flags_dict):\n ...\n\n See register_multi_flags_validator() for the specification of checker\n function.\n\n Args:\n flag_names: [str], a list of the flag names to be checked.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n\n Returns:\n A function decorator that registers its function argument as a validator.\n\n Raises:\n AttributeError: Raised when a flag is not registered as a valid flag name.\n ", "desc": "A function decorator for defining a multi-flag validator.", "type": "API"}, {"name": "tf.compat.v1.flags.MultiEnumClassFlag", "docs": "A multi_enum_class flag.\n\n See the __doc__ for MultiFlag for most behaviors of this class. In addition,\n this class knows how to handle enum.Enum instances as values for this flag\n type.\n ", "desc": "A multi_enum_class flag.", "type": "API"}, {"name": "tf.compat.v1.flags.MultiFlag", "docs": "A flag that can appear multiple time on the command-line.\n\n The value of such a flag is a list that contains the individual values\n from all the appearances of that flag on the command-line.\n\n See the __doc__ for Flag for most behavior of this class. Only\n differences in behavior are described here:\n\n * The default value may be either a single value or an iterable of values.\n A single value is transformed into a single-item list of that value.\n\n * The value of the flag is always a list, even if the option was\n only supplied once, and even if the default value is a single\n value\n ", "desc": "A flag that can appear multiple time on the command-line.", "type": "API"}, {"name": "tf.compat.v1.flags.register_multi_flags_validator", "docs": "Adds a constraint to multiple flags.\n\n The constraint is validated when flags are initially parsed, and after each\n change of the corresponding flag's value.\n\n Args:\n flag_names: [str], a list of the flag names to be checked.\n multi_flags_checker: callable, a function to validate the flag.\n input - dict, with keys() being flag_names, and value for each key\n being the value of the corresponding flag (string, boolean, etc).\n output - bool, True if validator constraint is satisfied.\n If constraint is not satisfied, it should either return False or\n raise flags.ValidationError.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n\n Raises:\n AttributeError: Raised when a flag is not registered as a valid flag name.\n ", "desc": "Adds a constraint to multiple flags.", "type": "API"}, {"name": "tf.compat.v1.flags.register_validator", "docs": "Adds a constraint, which will be enforced during program execution.\n\n The constraint is validated when flags are initially parsed, and after each\n change of the corresponding flag's value.\n Args:\n flag_name: str, name of the flag to be checked.\n checker: callable, a function to validate the flag.\n input - A single positional argument: The value of the corresponding\n flag (string, boolean, etc. This value will be passed to checker\n by the library).\n output - bool, True if validator constraint is satisfied.\n If constraint is not satisfied, it should either return False or\n raise flags.ValidationError(desired_error_message).\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "Adds a constraint, which will be enforced during program execution.", "type": "API"}, {"name": "tf.compat.v1.flags.text_wrap", "docs": "Wraps a given text to a maximum line length and returns it.\n\n It turns lines that only contain whitespace into empty lines, keeps new lines,\n and expands tabs using 4 spaces.\n\n Args:\n text: str, text to wrap.\n length: int, maximum length of a line, includes indentation.\n If this is None then use get_help_width()\n indent: str, indent for all but first line.\n firstline_indent: str, indent for first line; if None, fall back to indent.\n\n Returns:\n str, the wrapped text.\n\n Raises:\n ValueError: Raised if indent or firstline_indent not shorter than length.\n ", "desc": "Wraps a given text to a maximum line length and returns it.", "type": "API"}, {"name": "tf.compat.v1.flags.tf_decorator", "docs": "Base TFDecorator class and utility functions for working with decorators.\n\nThere are two ways to create decorators that TensorFlow can introspect into.\nThis is important for documentation generation purposes, so that function\nsignatures aren't obscured by the (*args, **kwds) signature that decorators\noften provide.\n\n1. Call `tf_decorator.make_decorator` on your wrapper function. If your\ndecorator is stateless, or can capture all of the variables it needs to work\nwith through lexical closure, this is the simplest option. Create your wrapper\nfunction as usual, but instead of returning it, return\n`tf_decorator.make_decorator(target, your_wrapper)`. This will attach some\ndecorator introspection metadata onto your wrapper and return it.\n\nExample:\n\n def print_hello_before_calling(target):\n def wrapper(*args, **kwargs):\n print('hello')\n return target(*args, **kwargs)\n return tf_decorator.make_decorator(target, wrapper)\n\n2. Derive from TFDecorator. If your decorator needs to be stateful, you can\nimplement it in terms of a TFDecorator. Store whatever state you need in your\nderived class, and implement the `__call__` method to do your work before\ncalling into your target. You can retrieve the target via\n`super(MyDecoratorClass, self).decorated_target`, and call it with whatever\nparameters it needs.\n\nExample:\n\n class CallCounter(tf_decorator.TFDecorator):\n def __init__(self, target):\n super(CallCounter, self).__init__('count_calls', target)\n self.call_count = 0\n\n def __call__(self, *args, **kwargs):\n self.call_count += 1\n return super(CallCounter, self).decorated_target(*args, **kwargs)\n\n def count_calls(target):\n return CallCounter(target)\n", "desc": "Base TFDecorator class and utility functions for working with decorators.", "type": "API"}, {"name": "tf.compat.v1.flags.tf_decorator.make_decorator", "docs": "Make a decorator from a wrapper and a target.\n\n Args:\n target: The final callable to be wrapped.\n decorator_func: The wrapper function.\n decorator_name: The name of the decorator. If `None`, the name of the\n function calling make_decorator.\n decorator_doc: Documentation specific to this application of\n `decorator_func` to `target`.\n decorator_argspec: The new callable signature of this decorator.\n\n Returns:\n The `decorator_func` argument with new metadata attached.\n ", "desc": "Make a decorator from a wrapper and a target.", "type": "API"}, {"name": "tf.compat.v1.flags.tf_decorator.rewrap", "docs": "Injects a new target into a function built by make_decorator.\n\n This function allows replacing a function wrapped by `decorator_func`,\n assuming the decorator that wraps the function is written as described below.\n\n The decorator function must use `.__wrapped__` instead of the\n wrapped function that is normally used:\n\n Example:\n\n # Instead of this:\n def simple_parametrized_wrapper(*args, **kwds):\n return wrapped_fn(*args, **kwds)\n\n tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn)\n\n # Write this:\n def simple_parametrized_wrapper(*args, **kwds):\n return simple_parametrized_wrapper.__wrapped__(*args, **kwds)\n\n tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn)\n\n Note that this process modifies decorator_func.\n\n Args:\n decorator_func: Callable returned by `wrap`.\n previous_target: Callable that needs to be replaced.\n new_target: Callable to replace previous_target with.\n\n Returns:\n The updated decorator. If decorator_func is not a tf_decorator, new_target\n is returned.\n ", "desc": "Injects a new target into a function built by make_decorator.", "type": "API"}, {"name": "tf.compat.v1.flags.tf_decorator.TFDecorator", "docs": "Base class for all TensorFlow decorators.\n\n TFDecorator captures and exposes the wrapped target, and provides details\n about the current decorator.\n ", "desc": "Base class for all TensorFlow decorators.", "type": "API"}, {"name": "tf.compat.v1.flags.tf_decorator.unwrap", "docs": "Unwraps an object into a list of TFDecorators and a final target.\n\n Args:\n maybe_tf_decorator: Any callable object.\n\n Returns:\n A tuple whose first element is an list of TFDecorator-derived objects that\n were applied to the final callable target, and whose second element is the\n final undecorated callable target. If the `maybe_tf_decorator` parameter is\n not decorated by any TFDecorators, the first tuple element will be an empty\n list. The `TFDecorator` list is ordered from outermost to innermost\n decorators.\n ", "desc": "Unwraps an object into a list of TFDecorators and a final target.", "type": "API"}, {"name": "tf.compat.v1.flags.UnparsedFlagAccessError", "docs": "Raised when accessing the flag value from unparsed FlagValues.", "desc": "Raised when accessing the flag value from unparsed FlagValues.", "type": "API"}, {"name": "tf.compat.v1.flags.UnrecognizedFlagError", "docs": "Raised when a flag is unrecognized.\n\n Attributes:\n flagname: str, the name of the unrecognized flag.\n flagvalue: The value of the flag, empty if the flag is not defined.\n ", "desc": "Raised when a flag is unrecognized.", "type": "API"}, {"name": "tf.compat.v1.flags.ValidationError", "docs": "Raised when flag validator constraint is not satisfied.", "desc": "Raised when flag validator constraint is not satisfied.", "type": "API"}, {"name": "tf.compat.v1.flags.validator", "docs": "A function decorator for defining a flag validator.\n\n Registers the decorated function as a validator for flag_name, e.g.\n\n @flags.validator('foo')\n def _CheckFoo(foo):\n ...\n\n See register_validator() for the specification of checker function.\n\n Args:\n flag_name: str, name of the flag to be checked.\n message: str, error text to be shown to the user if checker returns False.\n If checker raises flags.ValidationError, message from the raised\n error will be shown.\n flag_values: flags.FlagValues, optional FlagValues instance to validate\n against.\n Returns:\n A function decorator that registers its function argument as a validator.\n Raises:\n AttributeError: Raised when flag_name is not registered as a valid flag\n name.\n ", "desc": "A function decorator for defining a flag validator.", "type": "API"}, {"name": "tf.compat.v1.flags.WhitespaceSeparatedListParser", "docs": "Parser for a whitespace-separated list of strings.", "desc": "Parser for a whitespace-separated list of strings.", "type": "API"}, {"name": "tf.compat.v1.floor", "docs": "Returns element-wise largest integer not greater than x.\n\n Both input range is `(-inf, inf)` and the\n output range consists of all integer values.\n\n For example:\n\n >>> x = tf.constant([1.3324, -1.5, 5.555, -2.532, 0.99, float(\"inf\")])\n >>> tf.floor(x).numpy()\n array([ 1., -2., 5., -3., 0., inf], dtype=float32)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Returns element-wise largest integer not greater than x.", "type": "API"}, {"name": "tf.compat.v1.floor_div", "docs": "Returns x // y element-wise.\n\n *NOTE*: `floor_div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x // y element-wise.", "type": "API"}, {"name": "tf.compat.v1.floordiv", "docs": "Divides `x / y` elementwise, rounding toward the most negative integer.\n\n Mathematically, this is equivalent to floor(x / y). For example:\n floor(8.4 / 4.0) = floor(2.1) = 2.0\n floor(-8.4 / 4.0) = floor(-2.1) = -3.0\n This is equivalent to the '//' operator in Python 3.0 and above.\n\n Note: `x` and `y` must have the same type, and the result will have the same\n type as well.\n\n Args:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` rounded toward -infinity.\n\n Raises:\n TypeError: If the inputs are complex.\n ", "desc": "Divides `x / y` elementwise, rounding toward the most negative integer.", "type": "API"}, {"name": "tf.compat.v1.floormod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.compat.v1.foldl", "docs": "foldl on the list of tensors unpacked from `elems` on dimension 0.\n\n This foldl operator repeatedly applies the callable `fn` to a sequence\n of elements from first to last. The elements are made of the tensors\n unpacked from `elems` on dimension 0. The callable fn takes two tensors as\n arguments. The first argument is the accumulated value computed from the\n preceding invocation of fn, and the second is the value at the current\n position of `elems`. If `initializer` is None, `elems` must contain at least\n one element, and its first element is used as the initializer.\n\n Suppose that `elems` is unpacked into `values`, a list of tensors. The shape\n of the result tensor is fn(initializer, values[0]).shape`.\n\n This method also allows multi-arity `elems` and output of `fn`. If `elems`\n is a (possibly nested) list or tuple of tensors, then each of these tensors\n must have a matching first (unpack) dimension. The signature of `fn` may\n match the structure of `elems`. That is, if `elems` is\n `(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:\n `fn = lambda (t1, [t2, t3, [t4, t5]]):`.\n\n Args:\n fn: The callable to be performed.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n as the initial value for the accumulator.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) True enables support for back propagation.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n name: (optional) Name prefix for the returned tensors.\n\n Returns:\n A tensor or (possibly nested) sequence of tensors, resulting from applying\n `fn` consecutively to the list of tensors unpacked from `elems`, from first\n to last.\n\n Raises:\n TypeError: if `fn` is not callable.\n\n Example:\n ```python\n elems = tf.constant([1, 2, 3, 4, 5, 6])\n sum = foldl(lambda a, x: a + x, elems)\n # sum == 21\n ```\n ", "desc": "foldl on the list of tensors unpacked from `elems` on dimension 0.", "type": "API"}, {"name": "tf.compat.v1.foldr", "docs": "foldr on the list of tensors unpacked from `elems` on dimension 0.\n\n This foldr operator repeatedly applies the callable `fn` to a sequence\n of elements from last to first. The elements are made of the tensors\n unpacked from `elems`. The callable fn takes two tensors as arguments.\n The first argument is the accumulated value computed from the preceding\n invocation of fn, and the second is the value at the current position of\n `elems`. If `initializer` is None, `elems` must contain at least one element,\n and its first element is used as the initializer.\n\n Suppose that `elems` is unpacked into `values`, a list of tensors. The shape\n of the result tensor is `fn(initializer, values[0]).shape`.\n\n This method also allows multi-arity `elems` and output of `fn`. If `elems`\n is a (possibly nested) list or tuple of tensors, then each of these tensors\n must have a matching first (unpack) dimension. The signature of `fn` may\n match the structure of `elems`. That is, if `elems` is\n `(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:\n `fn = lambda (t1, [t2, t3, [t4, t5]]):`.\n\n Args:\n fn: The callable to be performed.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n as the initial value for the accumulator.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) True enables support for back propagation.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n name: (optional) Name prefix for the returned tensors.\n\n Returns:\n A tensor or (possibly nested) sequence of tensors, resulting from applying\n `fn` consecutively to the list of tensors unpacked from `elems`, from last\n to first.\n\n Raises:\n TypeError: if `fn` is not callable.\n\n Example:\n ```python\n elems = [1, 2, 3, 4, 5, 6]\n sum = foldr(lambda a, x: a + x, elems)\n # sum == 21\n ```\n ", "desc": "foldr on the list of tensors unpacked from `elems` on dimension 0.", "type": "API"}, {"name": "tf.compat.v1.function", "docs": "Compiles a function into a callable TensorFlow graph. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(experimental_compile)`. They will be removed in a future version.\nInstructions for updating:\nexperimental_compile is deprecated, use jit_compile instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(experimental_relax_shapes)`. They will be removed in a future version.\nInstructions for updating:\nexperimental_relax_shapes is deprecated, use reduce_retracing instead\n\n`tf.function` constructs a `tf.types.experimental.GenericFunction` that\nexecutes a TensorFlow graph (`tf.Graph`) created by trace-compiling the\nTensorFlow operations in `func`. More information on the topic can be found\nin [Introduction to Graphs and tf.function]\n(https://www.tensorflow.org/guide/intro_to_graphs).\n\nSee [Better Performance with tf.function]\n(https://www.tensorflow.org/guide/function) for tips on performance and\nknown limitations.\n\nExample usage:\n\n>>> @tf.function\n... def f(x, y):\n... return x ** 2 + y\n>>> x = tf.constant([2, 3])\n>>> y = tf.constant([3, -2])\n>>> f(x, y)\n\n\nThe trace-compilation allows non-TensorFlow operations to execute, but under\nspecial conditions. In general, only TensorFlow operations are guaranteed to\nrun and create fresh results whenever the `GenericFunction` is called.\n\n## Features\n\n`func` may use data-dependent Python control flow statements, including `if`,\n`for`, `while` `break`, `continue` and `return`:\n\n>>> @tf.function\n... def f(x):\n... if tf.reduce_sum(x) > 0:\n... return x * x\n... else:\n... return -x // 2\n>>> f(tf.constant(-2))\n\n\n`func`'s closure may include `tf.Tensor` and `tf.Variable` objects:\n\n>>> @tf.function\n... def f():\n... return x ** 2 + y\n>>> x = tf.constant([-2, -3])\n>>> y = tf.Variable([3, -2])\n>>> f()\n\n\n`func` may also use ops with side effects, such as `tf.print`, `tf.Variable`\nand others:\n\n>>> v = tf.Variable(1)\n>>> @tf.function\n... def f(x):\n... for i in tf.range(x):\n... v.assign_add(i)\n>>> f(3)\n>>> v\n\n\nImportant: Any Python side-effects (appending to a list, printing with\n`print`, etc) will only happen once, when `func` is traced. To have\nside-effects executed into your `tf.function` they need to be written\nas TF ops:\n\n>>> l = []\n>>> @tf.function\n... def f(x):\n... for i in x:\n... l.append(i + 1) # Caution! Will only happen once when tracing\n>>> f(tf.constant([1, 2, 3]))\n>>> l\n[]\n\nInstead, use TensorFlow collections like `tf.TensorArray`:\n\n>>> @tf.function\n... def f(x):\n... ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)\n... for i in range(len(x)):\n... ta = ta.write(i, x[i] + 1)\n... return ta.stack()\n>>> f(tf.constant([1, 2, 3]))\n\n\n## `tf.function` creates polymorphic callables\n\nInternally, `tf.types.experimental.GenericFunction` may contain multiple\n`tf.types.experimental.ConcreteFunction`s, each specialized to arguments with\ndifferent data types or shapes, since TensorFlow can perform more\noptimizations on graphs of specific shapes, dtypes and values of constant\narguments. `tf.function` treats any pure Python values as opaque objects (best\nthought of as compile-time constants), and builds a separate `tf.Graph` for\neach set of Python arguments that it encounters.\nFor more information, see the\n[tf.function guide](https://www.tensorflow.org/guide/function#rules_of_tracing)\n\nExecuting a `GenericFunction` will select and execute the appropriate\n`ConcreteFunction` based on the argument types and values.\n\nTo obtain an individual `ConcreteFunction`, use the\n`GenericFunction.get_concrete_function` method. It can be called with the\nsame arguments as `func` and returns a\n`tf.types.experimental.ConcreteFunction`. `ConcreteFunction`s are backed by a\nsingle `tf.Graph`:\n\n>>> @tf.function\n... def f(x):\n... return x + 1\n>>> isinstance(f.get_concrete_function(1).graph, tf.Graph)\nTrue\n\n`ConcreteFunction`s can be executed just like `GenericFunction`s, but their\ninput is resticted to the types to which they're specialized.\n\n## Retracing\n\n`ConcreteFunctions` are built (traced) on the fly, as the `GenericFunction` is\ncalled with new TensorFlow types or shapes, or with new Python values as\narguments. When `GenericFunction` builds a new trace, it is said that `func`\nis retraced. Retracing is a frequent performance concern for `tf.function` as\nit can be considerably slower than executing a graph that's already been\ntraced. It is ideal to minimize the amount of retracing in your code.\n\nCaution: Passing python scalars or lists as arguments to `tf.function` will\nusually retrace. To avoid this, pass numeric arguments as Tensors whenever\npossible:\n\n>>> @tf.function\n... def f(x):\n... return tf.abs(x)\n>>> f1 = f.get_concrete_function(1)\n>>> f2 = f.get_concrete_function(2) # Slow - compiles new graph\n>>> f1 is f2\nFalse\n>>> f1 = f.get_concrete_function(tf.constant(1))\n>>> f2 = f.get_concrete_function(tf.constant(2)) # Fast - reuses f1\n>>> f1 is f2\nTrue\n\nPython numerical arguments should only be used when they take few distinct\nvalues, such as hyperparameters like the number of layers in a neural network.\n\n## Input signatures\n\nFor Tensor arguments, `GenericFunction`creates a new `ConcreteFunction` for\nevery unique set of input shapes and datatypes. The example below creates two\nseparate `ConcreteFunction`s, each specialized to a different shape:\n\n>>> @tf.function\n... def f(x):\n... return x + 1\n>>> vector = tf.constant([1.0, 1.0])\n>>> matrix = tf.constant([[3.0]])\n>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)\nFalse\n\nAn \"input signature\" can be optionally provided to `tf.function` to control\nthis process. The input signature specifies the shape and type of each\nTensor argument to the function using a `tf.TensorSpec` object. More general\nshapes can be used. This ensures only one `ConcreteFunction` is created, and\nrestricts the `GenericFunction` to the specified shapes and types. It is\nan effective way to limit retracing when Tensors have dynamic shapes.\n\n>>> @tf.function(\n... input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])\n... def f(x):\n... return x + 1\n>>> vector = tf.constant([1.0, 1.0])\n>>> matrix = tf.constant([[3.0]])\n>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)\nTrue\n\n## Variables may only be created once\n\n`tf.function` only allows creating new `tf.Variable` objects when it is called\nfor the first time:\n\n>>> class MyModule(tf.Module):\n... def __init__(self):\n... self.v = None\n...\n... @tf.function\n... def __call__(self, x):\n... if self.v is None:\n... self.v = tf.Variable(tf.ones_like(x))\n... return self.v * x\n\nIn general, it is recommended to create `tf.Variable`s outside of\n`tf.function`.\nIn simple cases, persisting state across `tf.function` boundaries may be\nimplemented using a pure functional style in which state is represented by\n`tf.Tensor`s passed as arguments and returned as return values.\n\nContrast the two styles below:\n\n>>> state = tf.Variable(1)\n>>> @tf.function\n... def f(x):\n... state.assign_add(x)\n>>> f(tf.constant(2)) # Non-pure functional style\n>>> state\n\n\n>>> state = tf.constant(1)\n>>> @tf.function\n... def f(state, x):\n... state += x\n... return state\n>>> state = f(state, tf.constant(2)) # Pure functional style\n>>> state\n\n\n## Python operations execute only once per trace\n\n`func` may contain TensorFlow operations mixed with pure Python operations.\nHowever, when the function is executed, only the TensorFlow operations will\nrun. The Python operations run only once, at trace time. If TensorFlow\noperations depend on results from Pyhton operations, those results will be\nfrozen into the graph.\n\n>>> @tf.function\n... def f(a, b):\n... print('this runs at trace time; a is', a, 'and b is', b)\n... return b\n>>> f(1, tf.constant(1))\nthis runs at trace time; a is 1 and b is Tensor(\"...\", shape=(), dtype=int32)\n\n\n>>> f(1, tf.constant(2))\n\n\n>>> f(2, tf.constant(1))\nthis runs at trace time; a is 2 and b is Tensor(\"...\", shape=(), dtype=int32)\n\n\n>>> f(2, tf.constant(2))\n\n\n## Using type annotations to improve performance\n\n'experimental_follow_type_hints` can be used along with type annotations to\nreduce retracing by automatically casting any Python values to `tf.Tensor`\n(something that is not done by default, unless you use input signatures).\n\n>>> @tf.function(experimental_follow_type_hints=True)\n... def f_with_hints(x: tf.Tensor):\n... print('Tracing')\n... return x\n>>> @tf.function(experimental_follow_type_hints=False)\n... def f_no_hints(x: tf.Tensor):\n... print('Tracing')\n... return x\n>>> f_no_hints(1)\nTracing\n\n>>> f_no_hints(2)\nTracing\n\n>>> f_with_hints(1)\nTracing\n\n>>> f_with_hints(2)\n\n\nArgs:\n func: the function to be compiled. If `func` is None, `tf.function` returns\n a decorator that can be invoked with a single argument - `func`. In other\n words, `tf.function(input_signature=...)(func)` is equivalent to\n `tf.function(func, input_signature=...)`. The former can be used as\n decorator.\n input_signature: A possibly nested sequence of `tf.TensorSpec` objects\n specifying the shapes and dtypes of the Tensors that will be supplied to\n this function. If `None`, a separate function is instantiated for each\n inferred input signature. If input_signature is specified, every input to\n `func` must be a `Tensor`, and `func` cannot accept `**kwargs`.\n autograph: Whether autograph should be applied on `func` before tracing a\n graph. Data-dependent Python control flow statements require\n `autograph=True`. For more information, see the\n [tf.function and AutoGraph guide](\n https://www.tensorflow.org/guide/function#autograph_transformations).\n jit_compile: If `True`, compiles the function using\n [XLA](https://tensorflow.org/xla). XLA performs compiler optimizations,\n such as fusion, and attempts to emit more efficient code. This may\n drastically improve the performance. If set to `True`,\n the whole function needs to be compilable by XLA, or an\n `errors.InvalidArgumentError` is thrown.\n If `None` (default), compiles the function with XLA when running on TPU\n and goes through the regular function execution path when running on\n other devices.\n If `False`, executes the function without XLA compilation. Set this value\n to `False` when directly running a multi-device function on TPUs (e.g. two\n TPU cores, one TPU core and its host CPU).\n Not all functions are compilable, see a list of\n [sharp corners](https://tensorflow.org/xla/known_issues).\n reduce_retracing: When True, `tf.function` attempts to reduce the\n amount of retracing, for example by using more generic shapes. This\n can be controlled for user objects by customizing their associated\n `tf.types.experimental.TraceType`.\n experimental_implements: If provided, contains a name of a \"known\" function\n this implements. For example \"mycompany.my_recurrent_cell\".\n This is stored as an attribute in inference function,\n which can then be detected when processing serialized function.\n See [standardizing composite ops](https://github.com/tensorflow/community/blob/master/rfcs/20190610-standardizing-composite_ops.md) # pylint: disable=line-too-long\n for details. For an example of utilizing this attribute see this\n [example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc)\n The code above automatically detects and substitutes function that\n implements \"embedded_matmul\" and allows TFLite to substitute its own\n implementations. For instance, a tensorflow user can use this\n attribute to mark that their function also implements\n `embedded_matmul` (perhaps more efficiently!)\n by specifying it using this parameter:\n `@tf.function(experimental_implements=\"embedded_matmul\")`\n This can either be specified as just the string name of the function or\n a NameAttrList corresponding to a list of key-value attributes associated\n with the function name. The name of the function will be in the 'name'\n field of the NameAttrList. To define a formal TF op for this function\n implements, try the experimental [composite TF](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tfr)\n project.\n experimental_autograph_options: Optional tuple of\n `tf.autograph.experimental.Feature` values.\n experimental_relax_shapes: Deprecated. Use `reduce_retracing`\n instead.\n experimental_compile: Deprecated alias to 'jit_compile'.\n experimental_follow_type_hints: When True, the function may use type\n annotations from `func` to optimize the tracing performance. For example,\n arguments annotated with `tf.Tensor` will automatically be converted\n to a Tensor.\n\nReturns:\n If `func` is not None, returns a `tf.types.experimental.GenericFunction`.\n If `func` is None, returns a decorator that, when invoked with a single\n `func` argument, returns a `tf.types.experimental.GenericFunction`.\n\nRaises:\n `ValueError` when attempting to use `jit_compile=True`, but XLA support is\n not available.", "desc": "Compiles a function into a callable TensorFlow graph. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.gather", "docs": "Gather slices from params axis `axis` according to indices. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(validate_indices)`. They will be removed in a future version.\nInstructions for updating:\nThe `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.\n\nGather slices from `params` axis `axis` according to `indices`. `indices`\nmust be an integer tensor of any dimension (often 1-D).\n\n`Tensor.__getitem__` works for scalars, `tf.newaxis`, and\n[python slices](https://numpy.org/doc/stable/reference/arrays.indexing.html#basic-slicing-and-indexing)\n\n`tf.gather` extends indexing to handle tensors of indices.\n\nIn the simplest case it's identical to scalar indexing:\n\n>>> params = tf.constant(['p0', 'p1', 'p2', 'p3', 'p4', 'p5'])\n>>> params[3].numpy()\nb'p3'\n>>> tf.gather(params, 3).numpy()\nb'p3'\n\nThe most common case is to pass a single axis tensor of indices (this\ncan't be expressed as a python slice because the indices are not sequential):\n\n>>> indices = [2, 0, 2, 5]\n>>> tf.gather(params, indices).numpy()\narray([b'p2', b'p0', b'p2', b'p5'], dtype=object)\n\n
\n\n
\n\nThe indices can have any shape. When the `params` has 1 axis, the\noutput shape is equal to the input shape:\n\n>>> tf.gather(params, [[2, 0], [2, 5]]).numpy()\narray([[b'p2', b'p0'],\n [b'p2', b'p5']], dtype=object)\n\nThe `params` may also have any shape. `gather` can select slices\nacross any axis depending on the `axis` argument (which defaults to 0).\nBelow it is used to gather first rows, then columns from a matrix:\n\n>>> params = tf.constant([[0, 1.0, 2.0],\n... [10.0, 11.0, 12.0],\n... [20.0, 21.0, 22.0],\n... [30.0, 31.0, 32.0]])\n>>> tf.gather(params, indices=[3,1]).numpy()\narray([[30., 31., 32.],\n [10., 11., 12.]], dtype=float32)\n>>> tf.gather(params, indices=[2,1], axis=1).numpy()\narray([[ 2., 1.],\n [12., 11.],\n [22., 21.],\n [32., 31.]], dtype=float32)\n\nMore generally: The output shape has the same shape as the input, with the\nindexed-axis replaced by the shape of the indices.\n\n>>> def result_shape(p_shape, i_shape, axis=0):\n... return p_shape[:axis] + i_shape + p_shape[axis+1:]\n>>>\n>>> result_shape([1, 2, 3], [], axis=1)\n[1, 3]\n>>> result_shape([1, 2, 3], [7], axis=1)\n[1, 7, 3]\n>>> result_shape([1, 2, 3], [7, 5], axis=1)\n[1, 7, 5, 3]\n\nHere are some examples:\n\n>>> params.shape.as_list()\n[4, 3]\n>>> indices = tf.constant([[0, 2]])\n>>> tf.gather(params, indices=indices, axis=0).shape.as_list()\n[1, 2, 3]\n>>> tf.gather(params, indices=indices, axis=1).shape.as_list()\n[4, 1, 2]\n\n>>> params = tf.random.normal(shape=(5, 6, 7, 8))\n>>> indices = tf.random.uniform(shape=(10, 11), maxval=7, dtype=tf.int32)\n>>> result = tf.gather(params, indices, axis=2)\n>>> result.shape.as_list()\n[5, 6, 10, 11, 8]\n\nThis is because each index takes a slice from `params`, and\nplaces it at the corresponding location in the output. For the above example\n\n>>> # For any location in indices\n>>> a, b = 0, 1\n>>> tf.reduce_all(\n... # the corresponding slice of the result\n... result[:, :, a, b, :] ==\n... # is equal to the slice of `params` along `axis` at the index.\n... params[:, :, indices[a, b], :]\n... ).numpy()\nTrue\n\n### Batching:\n\nThe `batch_dims` argument lets you gather different items from each element\nof a batch.\n\nUsing `batch_dims=1` is equivalent to having an outer loop over the first\naxis of `params` and `indices`:\n\n>>> params = tf.constant([\n... [0, 0, 1, 0, 2],\n... [3, 0, 0, 0, 4],\n... [0, 5, 0, 6, 0]])\n>>> indices = tf.constant([\n... [2, 4],\n... [0, 4],\n... [1, 3]])\n\n>>> tf.gather(params, indices, axis=1, batch_dims=1).numpy()\narray([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n\nThis is equivalent to:\n\n>>> def manually_batched_gather(params, indices, axis):\n... batch_dims=1\n... result = []\n... for p,i in zip(params, indices):\n... r = tf.gather(p, i, axis=axis-batch_dims)\n... result.append(r)\n... return tf.stack(result)\n>>> manually_batched_gather(params, indices, axis=1).numpy()\narray([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n\nHigher values of `batch_dims` are equivalent to multiple nested loops over\nthe outer axes of `params` and `indices`. So the overall shape function is\n\n>>> def batched_result_shape(p_shape, i_shape, axis=0, batch_dims=0):\n... return p_shape[:axis] + i_shape[batch_dims:] + p_shape[axis+1:]\n>>>\n>>> batched_result_shape(\n... p_shape=params.shape.as_list(),\n... i_shape=indices.shape.as_list(),\n... axis=1,\n... batch_dims=1)\n[3, 2]\n\n>>> tf.gather(params, indices, axis=1, batch_dims=1).shape.as_list()\n[3, 2]\n\nThis comes up naturally if you need to use the indices of an operation like\n`tf.argsort`, or `tf.math.top_k` where the last dimension of the indices\nindexes into the last dimension of input, at the corresponding location.\nIn this case you can use `tf.gather(values, indices, batch_dims=-1)`.\n\nSee also:\n\n* `tf.Tensor.__getitem__`: The direct tensor index operation (`t[]`), handles\n scalars and python-slices `tensor[..., 7, 1:-1]`\n* `tf.scatter`: A collection of operations similar to `__setitem__`\n (`t[i] = x`)\n* `tf.gather_nd`: An operation similar to `tf.gather` but gathers across\n multiple axis at once (it can gather elements of a matrix instead of rows\n or columns)\n* `tf.boolean_mask`, `tf.where`: Binary indexing.\n* `tf.slice` and `tf.strided_slice`: For lower level access to the\n implementation of `__getitem__`'s python-slice handling (`t[1:-1:2]`)\n\nArgs:\n params: The `Tensor` from which to gather values. Must be at least rank\n `axis + 1`.\n indices: The index `Tensor`. Must be one of the following types: `int32`,\n `int64`. The values must be in range `[0, params.shape[axis])`.\n validate_indices: Deprecated, does nothing. Indices are always validated on\n CPU, never validated on GPU.\n\n Caution: On CPU, if an out of bound index is found, an error is raised.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`. The\n `axis` in `params` to gather `indices` from. Must be greater than or equal\n to `batch_dims`. Defaults to the first non-batch dimension. Supports\n negative indexes.\n batch_dims: An `integer`. The number of batch dimensions. Must be less\n than or equal to `rank(indices)`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor`. Has the same type as `params`.", "desc": "Gather slices from params axis `axis` according to indices. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.gather_nd", "docs": "Gather slices from `params` into a Tensor with shape specified by `indices`.\n\n `indices` is a `Tensor` of indices into `params`. The index vectors are\n arranged along the last axis of `indices`.\n\n This is similar to `tf.gather`, in which `indices` defines slices into the\n first dimension of `params`. In `tf.gather_nd`, `indices` defines slices into the\n first `N` dimensions of `params`, where `N = indices.shape[-1]`.\n\n Caution: On CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n\n ## Gathering scalars\n\n In the simplest case the vectors in `indices` index the full rank of `params`:\n\n >>> tf.gather_nd(\n ... indices=[[0, 0],\n ... [1, 1]],\n ... params = [['a', 'b'],\n ... ['c', 'd']]).numpy()\n array([b'a', b'd'], dtype=object)\n\n In this case the result has 1-axis fewer than `indices`, and each index vector\n is replaced by the scalar indexed from `params`.\n\n In this case the shape relationship is:\n\n ```\n index_depth = indices.shape[-1]\n assert index_depth == params.shape.rank\n result_shape = indices.shape[:-1]\n ```\n\n If `indices` has a rank of `K`, it is helpful to think `indices` as a\n (K-1)-dimensional tensor of indices into `params`.\n\n ## Gathering slices\n\n If the index vectors do not index the full rank of `params` then each location\n in the result contains a slice of params. This example collects rows from a\n matrix:\n\n >>> tf.gather_nd(\n ... indices = [[1],\n ... [0]],\n ... params = [['a', 'b', 'c'],\n ... ['d', 'e', 'f']]).numpy()\n array([[b'd', b'e', b'f'],\n [b'a', b'b', b'c']], dtype=object)\n\n Here `indices` contains `[2]` index vectors, each with a length of `1`.\n The index vectors each refer to rows of the `params` matrix. Each\n row has a shape of `[3]` so the output shape is `[2, 3]`.\n\n In this case, the relationship between the shapes is:\n\n ```\n index_depth = indices.shape[-1]\n outer_shape = indices.shape[:-1]\n assert index_depth <= params.shape.rank\n inner_shape = params.shape[index_depth:]\n output_shape = outer_shape + inner_shape\n ```\n\n It is helpful to think of the results in this case as tensors-of-tensors.\n The shape of the outer tensor is set by the leading dimensions of `indices`.\n While the shape of the inner tensors is the shape of a single slice.\n\n ## Batches\n\n Additionally both `params` and `indices` can have `M` leading batch\n dimensions that exactly match. In this case `batch_dims` must be set to `M`.\n\n For example, to collect one row from each of a batch of matrices you could\n set the leading elements of the index vectors to be their location in the\n batch:\n\n >>> tf.gather_nd(\n ... indices = [[0, 1],\n ... [1, 0],\n ... [2, 4],\n ... [3, 2],\n ... [4, 1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n The `batch_dims` argument lets you omit those leading location dimensions\n from the index:\n\n >>> tf.gather_nd(\n ... batch_dims=1,\n ... indices = [[1],\n ... [0],\n ... [4],\n ... [2],\n ... [1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n This is equivalent to caling a separate `gather_nd` for each location in the\n batch dimensions.\n\n\n >>> params=tf.zeros([5, 7, 3])\n >>> indices=tf.zeros([5, 1])\n >>> batch_dims = 1\n >>>\n >>> index_depth = indices.shape[-1]\n >>> batch_shape = indices.shape[:batch_dims]\n >>> assert params.shape[:batch_dims] == batch_shape\n >>> outer_shape = indices.shape[batch_dims:-1]\n >>> assert index_depth <= params.shape.rank\n >>> inner_shape = params.shape[batch_dims + index_depth:]\n >>> output_shape = batch_shape + outer_shape + inner_shape\n >>> output_shape.as_list()\n [5, 3]\n\n ### More examples\n\n Indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'a1', b'b1'],\n [b'c1', b'd1']]], dtype=object)\n\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 1], [1, 0]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 0, 1], [1, 0, 1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([b'b0', b'b1'], dtype=object)\n\n The examples below are for the case when only indices have leading extra\n dimensions. If both 'params' and 'indices' have leading batch dimensions, use\n the 'batch_dims' parameter to run gather_nd in batch mode.\n\n Batched indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0]], [[0, 1]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[b'a'],\n [b'b']], dtype=object)\n\n\n\n Batched slice indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[[b'c', b'd']],\n [[b'a', b'b']]], dtype=object)\n\n\n Batched indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[[b'a1', b'b1'],\n [b'c1', b'd1']]],\n [[[b'a0', b'b0'],\n [b'c0', b'd0']]]], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0'],\n [b'a1', b'b1']],\n [[b'a0', b'b0'],\n [b'c1', b'd1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'b0', b'b1'],\n [b'd0', b'c1']], dtype=object)\n\n\n Examples with batched 'params' and 'indices':\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[1],\n ... [0]],\n ... params = [[['a0', 'b0'],\n ... ['c0', 'd0']],\n ... [['a1', 'b1'],\n ... ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0']],\n [[b'a1', b'b1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1, 0]], [[0, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0'],\n [b'b1']], dtype=object)\n\n\n See also `tf.gather`.\n\n Args:\n params: A `Tensor`. The tensor from which to gather values.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n name: A name for the operation (optional).\n batch_dims: An integer or a scalar 'Tensor'. The number of batch dimensions.\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` into a Tensor with shape specified by `indices`.", "type": "API"}, {"name": "tf.compat.v1.get_collection", "docs": "Wrapper for `Graph.get_collection()` using the default graph.\n\n See `tf.Graph.get_collection`\n for more details.\n\n Args:\n key: The key for the collection. For example, the `GraphKeys` class contains\n many standard names for collections.\n scope: (Optional.) If supplied, the resulting list is filtered to include\n only items whose `name` attribute matches using `re.match`. Items without\n a `name` attribute are never returned if a scope is supplied and the\n choice or `re.match` means that a `scope` without special tokens filters\n by prefix.\n\n Returns:\n The list of values in the collection with the given `name`, or\n an empty list if no value has been added to that collection. The\n list contains the values in the order under which they were\n collected.\n\n @compatibility(eager)\n Collections are not supported when eager execution is enabled.\n @end_compatibility\n ", "desc": "Wrapper for `Graph.get_collection()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.get_collection_ref", "docs": "Wrapper for `Graph.get_collection_ref()` using the default graph.\n\n See `tf.Graph.get_collection_ref`\n for more details.\n\n Args:\n key: The key for the collection. For example, the `GraphKeys` class contains\n many standard names for collections.\n\n Returns:\n The list of values in the collection with the given `name`, or an empty\n list if no value has been added to that collection. Note that this returns\n the collection list itself, which can be modified in place to change the\n collection.\n\n @compatibility(eager)\n Collections are not supported when eager execution is enabled.\n @end_compatibility\n ", "desc": "Wrapper for `Graph.get_collection_ref()` using the default graph.", "type": "API"}, {"name": "tf.compat.v1.get_default_graph", "docs": "Returns the default graph for the current thread.\n\n The returned graph will be the innermost graph on which a\n `Graph.as_default()` context has been entered, or a global default\n graph if none has been explicitly created.\n\n NOTE: The default graph is a property of the current thread. If you\n create a new thread, and wish to use the default graph in that\n thread, you must explicitly add a `with g.as_default():` in that\n thread's function.\n\n @compatibility(TF2)\n `get_default_graph` does not work with either eager execution or\n `tf.function`, and you should not invoke it directly. To migrate code that\n uses Graph-related functions to TF2, rewrite the code without them. See the\n [migration guide](https://www.tensorflow.org/guide/migrate) for more\n description about the behavior and semantic changes between Tensorflow 1 and\n Tensorflow 2.\n @end_compatibility\n\n Returns:\n The default `Graph` being used in the current thread.\n ", "desc": "Returns the default graph for the current thread.", "type": "API"}, {"name": "tf.compat.v1.get_default_session", "docs": "Returns the default session for the current thread.\n\n The returned `Session` will be the innermost session on which a\n `Session` or `Session.as_default()` context has been entered.\n\n NOTE: The default session is a property of the current thread. If you\n create a new thread, and wish to use the default session in that\n thread, you must explicitly add a `with sess.as_default():` in that\n thread's function.\n\n Returns:\n The default `Session` being used in the current thread.\n ", "desc": "Returns the default session for the current thread.", "type": "API"}, {"name": "tf.compat.v1.get_local_variable", "docs": "Gets an existing *local* variable or creates a new one.\n\n@compatibility(TF2)\nAlthough it is a legacy `compat.v1` api,\n`tf.compat.v1.get_variable` is mostly compatible with eager\nexecution and `tf.function` but only if you combine it with the\n`tf.compat.v1.keras.utils.track_tf1_style_variables` decorator. (Though\nit will behave as if reuse is always set to `AUTO_REUSE`.)\n\nSee the\n[model migration guide](https://www.tensorflow.org/guide/migrate/model_mapping)\nfor more info.\n\nIf you do not combine it with\n`tf.compat.v1.keras.utils.track_tf1_style_variables`, `get_variable` will create\na brand new variable every single time it is called and will never reuse\nvariables, regardless of variable names or `reuse` arguments.\n\nThe TF2 equivalent of this symbol would be `tf.Variable`, but note\nthat when using `tf.Variable` you must make sure you track your variables\n(and regularizer arguments) either manually or via `tf.Module` or\n`tf.keras.layers.Layer` mechanisms.\n\nA section of the\n[migration guide](https://www.tensorflow.org/guide/migrate/model_mapping#incremental_migration_to_native_tf2)\nprovides more details on incrementally migrating these usages to `tf.Variable`\nas well.\n\nNote: The `partitioner` arg is not compatible with TF2 behaviors even when\nusing `tf.compat.v1.keras.utils.track_tf1_style_variables`. It can be replaced\nby using `ParameterServerStrategy` and its partitioners. See the\n[multi-gpu migration guide](https://www.tensorflow.org/guide/migrate/multi_worker_cpu_gpu_training)\nand the ParameterServerStrategy guides it references for more info.\n@end_compatibility\n\nBehavior is the same as in `get_variable`, except that variables are\nadded to the `LOCAL_VARIABLES` collection and `trainable` is set to\n`False`.\nThis function prefixes the name with the current variable scope\nand performs reuse checks. See the\n[Variable Scope How To](https://tensorflow.org/guide/variables)\nfor an extensive description of how reusing works. Here is a basic example:\n\n```python\ndef foo():\n with tf.variable_scope(\"foo\", reuse=tf.AUTO_REUSE):\n v = tf.get_variable(\"v\", [1])\n return v\n\nv1 = foo() # Creates v.\nv2 = foo() # Gets the same, existing v.\nassert v1 == v2\n```\n\nIf initializer is `None` (the default), the default initializer passed in\nthe variable scope will be used. If that one is `None` too, a\n`glorot_uniform_initializer` will be used. The initializer can also be\na Tensor, in which case the variable is initialized to this value and shape.\n\nSimilarly, if the regularizer is `None` (the default), the default regularizer\npassed in the variable scope will be used (if that is `None` too,\nthen by default no regularization is performed).\n\nIf a partitioner is provided, a `PartitionedVariable` is returned.\nAccessing this object as a `Tensor` returns the shards concatenated along\nthe partition axis.\n\nSome useful partitioners are available. See, e.g.,\n`variable_axis_size_partitioner` and `min_max_variable_partitioner`.\n\nArgs:\n name: The name of the new or existing variable.\n shape: Shape of the new or existing variable.\n dtype: Type of the new or existing variable (defaults to `DT_FLOAT`).\n initializer: Initializer for the variable if one is created. Can either be\n an initializer object or a Tensor. If it's a Tensor, its shape must be known\n unless validate_shape is False.\n regularizer: A (Tensor -> Tensor or None) function; the result of\n applying it on a newly created variable will be added to the collection\n `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.\n collections: List of graph collections keys to add the Variable to.\n Defaults to `[GraphKeys.LOCAL_VARIABLES]` (see `tf.Variable`).\n caching_device: Optional device string or function describing where the\n Variable should be cached for reading. Defaults to the Variable's\n device. If not `None`, caches on another device. Typical use is to\n cache on the device where the Ops using the Variable reside, to\n deduplicate copying through `Switch` and other conditional statements.\n partitioner: Optional callable that accepts a fully defined `TensorShape`\n and `dtype` of the Variable to be created, and returns a list of\n partitions for each axis (currently only one axis can be partitioned).\n validate_shape: If False, allows the variable to be initialized with a\n value of unknown shape. If True, the default, the shape of initial_value\n must be known. For this to be used the initializer must be a Tensor and\n not an initializer object.\n use_resource: If False, creates a regular Variable. If true, creates an\n experimental ResourceVariable instead with well-defined semantics.\n Defaults to False (will later change to True). When eager execution is\n enabled this argument is always forced to be True.\n custom_getter: Callable that takes as a first argument the true getter, and\n allows overwriting the internal get_variable method.\n The signature of `custom_getter` should match that of this method,\n but the most future-proof version will allow for changes:\n `def custom_getter(getter, *args, **kwargs)`. Direct access to\n all `get_variable` parameters is also allowed:\n `def custom_getter(getter, name, *args, **kwargs)`. A simple identity\n custom getter that simply creates variables with modified names is:\n ```python\n def custom_getter(getter, name, *args, **kwargs):\n return getter(name + '_suffix', *args, **kwargs)\n ```\n constraint: An optional projection function to be applied to the variable\n after being updated by an `Optimizer` (e.g. used to implement norm\n constraints or value constraints for layer weights). The function must\n take as input the unprojected Tensor representing the value of the\n variable and return the Tensor for the projected value\n (which must have the same shape). Constraints are not safe to\n use when doing asynchronous distributed training.\n synchronization: Indicates when a distributed a variable will be\n aggregated. Accepted values are constants defined in the class\n `tf.VariableSynchronization`. By default the synchronization is set to\n `AUTO` and the current `DistributionStrategy` chooses\n when to synchronize.\n aggregation: Indicates how a distributed variable will be aggregated.\n Accepted values are constants defined in the class\n `tf.VariableAggregation`.\n\nReturns:\n The created or existing `Variable` (or `PartitionedVariable`, if a\n partitioner was used).\n\nRaises:\n ValueError: when creating a new variable and shape is not declared,\n when violating reuse during variable creation, or when `initializer` dtype\n and `dtype` don't match. Reuse is set inside `variable_scope`.\n", "desc": "Gets an existing *local* variable or creates a new one.", "type": "API"}, {"name": "tf.compat.v1.get_logger", "docs": "Return TF logger instance.", "desc": "Return TF logger instance.", "type": "API"}, {"name": "tf.compat.v1.get_seed", "docs": "Returns the local seeds an operation should use given an op-specific seed.\n\n Given operation-specific seed, `op_seed`, this helper function returns two\n seeds derived from graph-level and op-level seeds. Many random operations\n internally use the two seeds to allow user to change the seed globally for a\n graph, or for only specific operations.\n\n For details on how the graph-level seed interacts with op seeds, see\n `tf.compat.v1.random.set_random_seed`.\n\n Args:\n op_seed: integer.\n\n Returns:\n A tuple of two integers that should be used for the local seed of this\n operation.\n ", "desc": "Returns the local seeds an operation should use given an op-specific seed.", "type": "API"}, {"name": "tf.compat.v1.get_session_handle", "docs": "Return the handle of `data`.\n\n This is EXPERIMENTAL and subject to change.\n\n Keep `data` \"in-place\" in the runtime and create a handle that can be\n used to retrieve `data` in a subsequent run().\n\n Combined with `get_session_tensor`, we can keep a tensor produced in\n one run call in place, and use it as the input in a future run call.\n\n Args:\n data: A tensor to be stored in the session.\n name: Optional name prefix for the return tensor.\n\n Returns:\n A scalar string tensor representing a unique handle for `data`.\n\n Raises:\n TypeError: if `data` is not a Tensor.\n\n Example:\n\n ```python\n c = tf.multiply(a, b)\n h = tf.compat.v1.get_session_handle(c)\n h = sess.run(h)\n\n p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32)\n b = tf.multiply(a, 10)\n c = sess.run(b, feed_dict={p: h.handle})\n ```\n\n ", "desc": "Return the handle of `data`.", "type": "API"}, {"name": "tf.compat.v1.get_session_tensor", "docs": "Get the tensor of type `dtype` by feeding a tensor handle.\n\n This is EXPERIMENTAL and subject to change.\n\n Get the value of the tensor from a tensor handle. The tensor\n is produced in a previous run() and stored in the state of the\n session.\n\n Args:\n handle: The string representation of a persistent tensor handle.\n dtype: The type of the output tensor.\n name: Optional name prefix for the return tensor.\n\n Returns:\n A pair of tensors. The first is a placeholder for feeding a\n tensor handle and the second is the tensor in the session state\n keyed by the tensor handle.\n\n Example:\n\n ```python\n c = tf.multiply(a, b)\n h = tf.compat.v1.get_session_handle(c)\n h = sess.run(h)\n\n p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32)\n b = tf.multiply(a, 10)\n c = sess.run(b, feed_dict={p: h.handle})\n ```\n\n ", "desc": "Get the tensor of type `dtype` by feeding a tensor handle.", "type": "API"}, {"name": "tf.compat.v1.get_static_value", "docs": "Returns the constant value of the given tensor, if efficiently calculable.\n\n This function attempts to partially evaluate the given tensor, and\n returns its value as a numpy ndarray if this succeeds.\n\n Example usage:\n\n >>> a = tf.constant(10)\n >>> tf.get_static_value(a)\n 10\n >>> b = tf.constant(20)\n >>> tf.get_static_value(tf.add(a, b))\n 30\n\n >>> # `tf.Variable` is not supported.\n >>> c = tf.Variable(30)\n >>> print(tf.get_static_value(c))\n None\n\n Using `partial` option is most relevant when calling `get_static_value` inside\n a `tf.function`. Setting it to `True` will return the results but for the\n values that cannot be evaluated will be `None`. For example:\n\n ```python\n class Foo(object):\n def __init__(self):\n self.a = tf.Variable(1)\n self.b = tf.constant(2)\n\n @tf.function\n def bar(self, partial):\n packed = tf.raw_ops.Pack(values=[self.a, self.b])\n static_val = tf.get_static_value(packed, partial=partial)\n tf.print(static_val)\n\n f = Foo()\n f.bar(partial=True) # `array([None, array(2, dtype=int32)], dtype=object)`\n f.bar(partial=False) # `None`\n ```\n\n Compatibility(V1): If `constant_value(tensor)` returns a non-`None` result, it\n will no longer be possible to feed a different value for `tensor`. This allows\n the result of this function to influence the graph that is constructed, and\n permits static shape optimizations.\n\n Args:\n tensor: The Tensor to be evaluated.\n partial: If True, the returned numpy array is allowed to have partially\n evaluated values. Values that can't be evaluated will be None.\n\n Returns:\n A numpy ndarray containing the constant value of the given `tensor`,\n or None if it cannot be calculated.\n\n Raises:\n TypeError: if tensor is not an ops.Tensor.\n ", "desc": "Returns the constant value of the given tensor, if efficiently calculable.", "type": "API"}, {"name": "tf.compat.v1.get_variable", "docs": "Gets an existing variable with these parameters or create a new one.\n\n@compatibility(TF2)\nAlthough it is a legacy `compat.v1` api,\n`tf.compat.v1.get_variable` is mostly compatible with eager\nexecution and `tf.function` but only if you combine it with the\n`tf.compat.v1.keras.utils.track_tf1_style_variables` decorator. (Though\nit will behave as if reuse is always set to `AUTO_REUSE`.)\n\nSee the\n[model migration guide](https://www.tensorflow.org/guide/migrate/model_mapping)\nfor more info.\n\nIf you do not combine it with\n`tf.compat.v1.keras.utils.track_tf1_style_variables`, `get_variable` will create\na brand new variable every single time it is called and will never reuse\nvariables, regardless of variable names or `reuse` arguments.\n\nThe TF2 equivalent of this symbol would be `tf.Variable`, but note\nthat when using `tf.Variable` you must make sure you track your variables\n(and regularizer arguments) either manually or via `tf.Module` or\n`tf.keras.layers.Layer` mechanisms.\n\nA section of the\n[migration guide](https://www.tensorflow.org/guide/migrate/model_mapping#incremental_migration_to_native_tf2)\nprovides more details on incrementally migrating these usages to `tf.Variable`\nas well.\n\nNote: The `partitioner` arg is not compatible with TF2 behaviors even when\nusing `tf.compat.v1.keras.utils.track_tf1_style_variables`. It can be replaced\nby using `ParameterServerStrategy` and its partitioners. See the\n[multi-gpu migration guide](https://www.tensorflow.org/guide/migrate/multi_worker_cpu_gpu_training)\nand the ParameterServerStrategy guides it references for more info.\n@end_compatibility\n\nThis function prefixes the name with the current variable scope\nand performs reuse checks. See the\n[Variable Scope How To](https://tensorflow.org/guide/variables)\nfor an extensive description of how reusing works. Here is a basic example:\n\n```python\ndef foo():\n with tf.variable_scope(\"foo\", reuse=tf.AUTO_REUSE):\n v = tf.get_variable(\"v\", [1])\n return v\n\nv1 = foo() # Creates v.\nv2 = foo() # Gets the same, existing v.\nassert v1 == v2\n```\n\nIf initializer is `None` (the default), the default initializer passed in\nthe variable scope will be used. If that one is `None` too, a\n`glorot_uniform_initializer` will be used. The initializer can also be\na Tensor, in which case the variable is initialized to this value and shape.\n\nSimilarly, if the regularizer is `None` (the default), the default regularizer\npassed in the variable scope will be used (if that is `None` too,\nthen by default no regularization is performed).\n\nIf a partitioner is provided, a `PartitionedVariable` is returned.\nAccessing this object as a `Tensor` returns the shards concatenated along\nthe partition axis.\n\nSome useful partitioners are available. See, e.g.,\n`variable_axis_size_partitioner` and `min_max_variable_partitioner`.\n\nArgs:\n name: The name of the new or existing variable.\n shape: Shape of the new or existing variable.\n dtype: Type of the new or existing variable (defaults to `DT_FLOAT`).\n initializer: Initializer for the variable if one is created. Can either be\n an initializer object or a Tensor. If it's a Tensor, its shape must be known\n unless validate_shape is False.\n regularizer: A (Tensor -> Tensor or None) function; the result of\n applying it on a newly created variable will be added to the collection\n `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.\n trainable: If `True` also add the variable to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n collections: List of graph collections keys to add the Variable to.\n Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).\n caching_device: Optional device string or function describing where the\n Variable should be cached for reading. Defaults to the Variable's\n device. If not `None`, caches on another device. Typical use is to\n cache on the device where the Ops using the Variable reside, to\n deduplicate copying through `Switch` and other conditional statements.\n partitioner: Optional callable that accepts a fully defined `TensorShape`\n and `dtype` of the Variable to be created, and returns a list of\n partitions for each axis (currently only one axis can be partitioned).\n validate_shape: If False, allows the variable to be initialized with a\n value of unknown shape. If True, the default, the shape of initial_value\n must be known. For this to be used the initializer must be a Tensor and\n not an initializer object.\n use_resource: If False, creates a regular Variable. If true, creates an\n experimental ResourceVariable instead with well-defined semantics.\n Defaults to False (will later change to True). When eager execution is\n enabled this argument is always forced to be True.\n custom_getter: Callable that takes as a first argument the true getter, and\n allows overwriting the internal get_variable method.\n The signature of `custom_getter` should match that of this method,\n but the most future-proof version will allow for changes:\n `def custom_getter(getter, *args, **kwargs)`. Direct access to\n all `get_variable` parameters is also allowed:\n `def custom_getter(getter, name, *args, **kwargs)`. A simple identity\n custom getter that simply creates variables with modified names is:\n ```python\n def custom_getter(getter, name, *args, **kwargs):\n return getter(name + '_suffix', *args, **kwargs)\n ```\n constraint: An optional projection function to be applied to the variable\n after being updated by an `Optimizer` (e.g. used to implement norm\n constraints or value constraints for layer weights). The function must\n take as input the unprojected Tensor representing the value of the\n variable and return the Tensor for the projected value\n (which must have the same shape). Constraints are not safe to\n use when doing asynchronous distributed training.\n synchronization: Indicates when a distributed a variable will be\n aggregated. Accepted values are constants defined in the class\n `tf.VariableSynchronization`. By default the synchronization is set to\n `AUTO` and the current `DistributionStrategy` chooses\n when to synchronize.\n aggregation: Indicates how a distributed variable will be aggregated.\n Accepted values are constants defined in the class\n `tf.VariableAggregation`.\n\nReturns:\n The created or existing `Variable` (or `PartitionedVariable`, if a\n partitioner was used).\n\nRaises:\n ValueError: when creating a new variable and shape is not declared,\n when violating reuse during variable creation, or when `initializer` dtype\n and `dtype` don't match. Reuse is set inside `variable_scope`.\n", "desc": "Gets an existing variable with these parameters or create a new one.", "type": "API"}, {"name": "tf.compat.v1.get_variable_scope", "docs": "Returns the current variable scope.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` api,\n `tf.compat.v1.get_variable` is compatible with eager\n execution and `tf.function`\n\n However, to maintain variable-scope based variable reuse\n you will need to combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`. (Though\n it will behave as if reuse is always set to `tf.compat.v1.AUTO_REUSE`.)\n\n See the\n [migration guide](https://www.tensorflow.org/guide/migrate/model_mapping)\n for more info.\n\n The TF2 equivalent, if you are just trying to track\n variable name prefixes and not control `get_variable`-based variable reuse,\n would be to use `tf.name_scope` and capture the output of opening the\n scope (which represents the current name prefix).\n\n For example:\n ```python\n x = tf.name_scope('foo') as current_scope:\n ...\n ```\n @end_compatibility\n ", "desc": "Returns the current variable scope.", "type": "API"}, {"name": "tf.compat.v1.gfile", "docs": "Import router for file_io.\n", "desc": "Import router for file_io.", "type": "API"}, {"name": "tf.compat.v1.gfile.Copy", "docs": "Copies data from `src` to `dst`.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that you need to always specify a file name, even if moving into a new\n directory. This is because some cloud filesystems don't have the concept of a\n directory.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.mkdir(\"/tmp/new_dir\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/new_dir/y\")\n >>> tf.io.gfile.exists(\"/tmp/new_dir/y\")\n True\n >>> tf.io.gfile.rmtree(\"/tmp/new_dir\")\n\n If you want to prevent errors if the path already exists, you can use\n `overwrite` argument:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\", overwrite=True)\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that the above will still result in an error if you try to overwrite a\n directory with a file.\n\n Note that you cannot copy a directory, only file arguments are supported.\n\n Args:\n src: string, name of the file whose contents need to be copied\n dst: string, name of the file to which to copy to\n overwrite: boolean, if false it's an error for `dst` to be occupied by an\n existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Copies data from `src` to `dst`.", "type": "API"}, {"name": "tf.compat.v1.gfile.DeleteRecursively", "docs": "Deletes everything under dirname recursively.\n\n Args:\n dirname: string, a path to a directory\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Deletes everything under dirname recursively.", "type": "API"}, {"name": "tf.compat.v1.gfile.Exists", "docs": "Determines whether a path exists or not.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> # for a GCS filesystem path:\n >>> # tf.io.gfile.exists(\"gs://bucket/file\")\n >>> # for a local filesystem:\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"file:///tmp/x\")\n True\n\n This currently returns `True` for existing directories but don't rely on this\n behavior, especially if you are using cloud filesystems (e.g., GCS, S3,\n Hadoop):\n\n >>> tf.io.gfile.exists(\"/tmp\")\n True\n\n Args:\n path: string, a path\n\n Returns:\n True if the path exists, whether it's a file or a directory.\n False if the path does not exist and there are no filesystem errors.\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API.\n ", "desc": "Determines whether a path exists or not.", "type": "API"}, {"name": "tf.compat.v1.gfile.FastGFile", "docs": "File I/O wrappers without thread locking.\n\n Note, that this is somewhat like builtin Python file I/O, but\n there are semantic differences to make it more efficient for\n some backing filesystems. For example, a write mode file will\n not be opened until the first write call (to minimize RPC\n invocations in network filesystems).\n ", "desc": "File I/O wrappers without thread locking.", "type": "API"}, {"name": "tf.compat.v1.gfile.GFile", "docs": "File I/O wrappers without thread locking.\n\n The main roles of the `tf.io.gfile` module are:\n\n 1. To provide an API that is close to Python's file I/O objects, and\n 2. To provide an implementation based on TensorFlow's C++ FileSystem API.\n\n The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a `gs://` prefix, and\n HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`,\n so that you can use these implementations for saving and loading checkpoints,\n writing to TensorBoard logs, and accessing training data (among other uses).\n However, if all your files are local, you can use the regular Python file\n API without any problem.\n\n *Note*: though similar to Python's I/O implementation, there are semantic\n differences to make `tf.io.gfile` more efficient for backing filesystems. For\n example, a write mode file will not be opened until the first write call to\n minimize RPC invocations in network filesystems.\n\n Once you obtain a `GFile` object, you can use it in most ways as you would any\n Python's file object:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n 4\n >>> with tf.io.gfile.GFile(\"/tmp/x\") as f:\n ... f.read()\n 'asdf'\n\n The difference is that you can specify URI schemes to use other filesystems\n (e.g., `gs://` for GCS, `s3://` for S3, etc.), if they are supported. Using\n `file://` as an example, we have:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"qwert\")\n ... f.write(\"asdf\")\n >>> tf.io.gfile.GFile(\"file:///tmp/x\").read()\n 'qwertasdf'\n\n You can also read all lines of a file directly:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> tf.io.gfile.GFile(\"/tmp/x\").readlines()\n ['asdf\\n', 'qwer\\n']\n\n You can iterate over the lines:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> for line in tf.io.gfile.GFile(\"/tmp/x\"):\n ... print(line[:-1]) # removes the end of line character\n asdf\n qwer\n\n Random access read is possible if the underlying filesystem supports it:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdfqwer\")\n >>> f = tf.io.gfile.GFile(\"/tmp/x\")\n >>> f.read(3)\n 'asd'\n >>> f.seek(4)\n >>> f.tell()\n 4\n >>> f.read(3)\n 'qwe'\n >>> f.tell()\n 7\n >>> f.close()\n ", "desc": "File I/O wrappers without thread locking.", "type": "API"}, {"name": "tf.compat.v1.gfile.Glob", "docs": "Returns a list of files that match the given pattern(s).\n\n Args:\n filename: string or iterable of strings. The glob pattern(s).\n\n Returns:\n A list of strings containing filenames that match the given pattern(s).\n\n Raises:\n * errors.OpError: If there are filesystem / directory listing errors.\n * errors.NotFoundError: If pattern to be matched is an invalid directory.\n ", "desc": "Returns a list of files that match the given pattern(s).", "type": "API"}, {"name": "tf.compat.v1.gfile.IsDirectory", "docs": "Returns whether the path is a directory or not.\n\n Args:\n dirname: string, path to a potential directory\n\n Returns:\n True, if the path is a directory; False otherwise\n ", "desc": "Returns whether the path is a directory or not.", "type": "API"}, {"name": "tf.compat.v1.gfile.ListDirectory", "docs": "Returns a list of entries contained within a directory.\n\n The list is in arbitrary order. It does not contain the special entries \".\"\n and \"..\".\n\n Args:\n dirname: string, path to a directory\n\n Returns:\n [filename1, filename2, ... filenameN] as strings\n\n Raises:\n errors.NotFoundError if directory doesn't exist\n ", "desc": "Returns a list of entries contained within a directory.", "type": "API"}, {"name": "tf.compat.v1.gfile.MakeDirs", "docs": "Creates a directory and all parent/intermediate directories.\n\n It succeeds if dirname already exists and is writable.\n\n Args:\n dirname: string, name of the directory to be created\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory and all parent/intermediate directories.", "type": "API"}, {"name": "tf.compat.v1.gfile.MkDir", "docs": "Creates a directory with the name `dirname`.\n\n Args:\n dirname: string, name of the directory to be created\n\n Notes: The parent directories need to exist. Use `tf.io.gfile.makedirs`\n instead if there is the possibility that the parent dirs don't exist.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory with the name `dirname`.", "type": "API"}, {"name": "tf.compat.v1.gfile.Open", "docs": "File I/O wrappers without thread locking.\n\n The main roles of the `tf.io.gfile` module are:\n\n 1. To provide an API that is close to Python's file I/O objects, and\n 2. To provide an implementation based on TensorFlow's C++ FileSystem API.\n\n The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a `gs://` prefix, and\n HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`,\n so that you can use these implementations for saving and loading checkpoints,\n writing to TensorBoard logs, and accessing training data (among other uses).\n However, if all your files are local, you can use the regular Python file\n API without any problem.\n\n *Note*: though similar to Python's I/O implementation, there are semantic\n differences to make `tf.io.gfile` more efficient for backing filesystems. For\n example, a write mode file will not be opened until the first write call to\n minimize RPC invocations in network filesystems.\n\n Once you obtain a `GFile` object, you can use it in most ways as you would any\n Python's file object:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n 4\n >>> with tf.io.gfile.GFile(\"/tmp/x\") as f:\n ... f.read()\n 'asdf'\n\n The difference is that you can specify URI schemes to use other filesystems\n (e.g., `gs://` for GCS, `s3://` for S3, etc.), if they are supported. Using\n `file://` as an example, we have:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"qwert\")\n ... f.write(\"asdf\")\n >>> tf.io.gfile.GFile(\"file:///tmp/x\").read()\n 'qwertasdf'\n\n You can also read all lines of a file directly:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> tf.io.gfile.GFile(\"/tmp/x\").readlines()\n ['asdf\\n', 'qwer\\n']\n\n You can iterate over the lines:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> for line in tf.io.gfile.GFile(\"/tmp/x\"):\n ... print(line[:-1]) # removes the end of line character\n asdf\n qwer\n\n Random access read is possible if the underlying filesystem supports it:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdfqwer\")\n >>> f = tf.io.gfile.GFile(\"/tmp/x\")\n >>> f.read(3)\n 'asd'\n >>> f.seek(4)\n >>> f.tell()\n 4\n >>> f.read(3)\n 'qwe'\n >>> f.tell()\n 7\n >>> f.close()\n ", "desc": "File I/O wrappers without thread locking.", "type": "API"}, {"name": "tf.compat.v1.gfile.Remove", "docs": "Deletes the file located at 'filename'.\n\n Args:\n filename: string, a filename\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API. E.g.,\n `NotFoundError` if the file does not exist.\n ", "desc": "Deletes the file located at 'filename'.", "type": "API"}, {"name": "tf.compat.v1.gfile.Rename", "docs": "Rename or move a file / directory.\n\n Args:\n oldname: string, pathname for a file\n newname: string, pathname to which the file needs to be moved\n overwrite: boolean, if false it's an error for `newname` to be occupied by\n an existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Rename or move a file / directory.", "type": "API"}, {"name": "tf.compat.v1.gfile.Stat", "docs": "Returns file statistics for a given path.\n\n Args:\n filename: string, path to a file\n\n Returns:\n FileStatistics struct that contains information about the path\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Returns file statistics for a given path.", "type": "API"}, {"name": "tf.compat.v1.gfile.Walk", "docs": "Recursive directory tree generator for directories.\n\n Args:\n top: string, a Directory name\n in_order: bool, Traverse in order if True, post order if False. Errors that\n happen while listing directories are ignored.\n\n Yields:\n Each yield is a 3-tuple: the pathname of a directory, followed by lists of\n all its subdirectories and leaf files. That is, each yield looks like:\n `(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`.\n Each item is a string.\n ", "desc": "Recursive directory tree generator for directories.", "type": "API"}, {"name": "tf.compat.v1.global_norm", "docs": "Computes the global norm of multiple tensors.\n\n Given a tuple or list of tensors `t_list`, this operation returns the\n global norm of the elements in all tensors in `t_list`. The global norm is\n computed as:\n\n `global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`\n\n Any entries in `t_list` that are of type None are ignored.\n\n Args:\n t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.\n name: A name for the operation (optional).\n\n Returns:\n A 0-D (scalar) `Tensor` of type `float`.\n\n Raises:\n TypeError: If `t_list` is not a sequence.\n ", "desc": "Computes the global norm of multiple tensors.", "type": "API"}, {"name": "tf.compat.v1.global_variables", "docs": "Returns global variables.\n\n Global variables are variables that are shared across machines in a\n distributed environment. The `Variable()` constructor or `get_variable()`\n automatically adds new variables to the graph collection\n `GraphKeys.GLOBAL_VARIABLES`.\n This convenience function returns the contents of that collection.\n\n An alternative to global variables are local variables. See\n `tf.compat.v1.local_variables`\n\n @compatibility(TF2)\n Not compatible with eager execution and `tf.function`. In particular, Graph\n collections are deprecated in TF2. Instead please create a\n [tf.Module](https://www.tensorflow.org/guide/intro_to_modules)\n container for all your model state, including variables.\n You can then list all the variables in your `tf.Module` through the\n `variables` attribute.\n @end_compatibility\n\n Args:\n scope: (Optional.) A string. If supplied, the resulting list is filtered to\n include only items whose `name` attribute matches `scope` using\n `re.match`. Items without a `name` attribute are never returned if a scope\n is supplied. The choice of `re.match` means that a `scope` without special\n tokens filters by prefix.\n\n Returns:\n A list of `Variable` objects.\n ", "desc": "Returns global variables.", "type": "API"}, {"name": "tf.compat.v1.global_variables_initializer", "docs": "Returns an Op that initializes global variables.\n\n This is just a shortcut for `variables_initializer(global_variables())`\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Returns:\n An Op that initializes global variables in the graph.\n ", "desc": "Returns an Op that initializes global variables.", "type": "API"}, {"name": "tf.compat.v1.glorot_normal_initializer", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n It draws samples from a truncated normal distribution centered on 0\n with standard deviation (after truncation) given by\n `stddev = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number\n of input units in the weight tensor and `fan_out` is the number of\n output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.compat.v1.glorot_uniform_initializer", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n It draws samples from a uniform distribution within [-limit, limit]\n where `limit` is `sqrt(6 / (fan_in + fan_out))`\n where `fan_in` is the number of input units in the weight tensor\n and `fan_out` is the number of output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.compat.v1.GPUOptions", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.GPUOptions.Experimental", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.GPUOptions.Experimental.VirtualDevices", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.grad_pass_through", "docs": "Creates a grad-pass-through op with the forward behavior provided in f.\n\n Use this function to wrap any op, maintaining its behavior in the forward\n pass, but replacing the original op in the backward graph with an identity.\n For example:\n\n ```python\n x = tf.Variable(1.0, name=\"x\")\n z = tf.Variable(3.0, name=\"z\")\n\n with tf.GradientTape() as tape:\n # y will evaluate to 9.0\n y = tf.grad_pass_through(x.assign)(z**2)\n # grads will evaluate to 6.0\n grads = tape.gradient(y, z)\n ```\n\n Another example is a 'differentiable' moving average approximation, where\n gradients are allowed to flow into the last value fed to the moving average,\n but the moving average is still used for the forward pass:\n\n ```python\n x = ... # Some scalar value\n # A moving average object, we don't need to know how this is implemented\n moving_average = MovingAverage()\n with backprop.GradientTape() as tape:\n # mavg_x will evaluate to the current running average value\n mavg_x = tf.grad_pass_through(moving_average)(x)\n grads = tape.gradient(mavg_x, x) # grads will evaluate to 1.0\n ```\n\n Args:\n f: function `f(*x)` that returns a `Tensor` or nested structure of `Tensor`\n outputs.\n\n Returns:\n A function `h(x)` which returns the same values as `f(x)` and whose\n gradients are the same as those of an identity function.\n ", "desc": "Creates a grad-pass-through op with the forward behavior provided in f.", "type": "API"}, {"name": "tf.compat.v1.gradients", "docs": "Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`.\n\n `ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`\n is a list of `Tensor`, holding the gradients received by the\n `ys`. The list must be the same length as `ys`.\n\n `gradients()` adds ops to the graph to output the derivatives of `ys` with\n respect to `xs`. It returns a list of `Tensor` of length `len(xs)` where\n each tensor is the `sum(dy/dx)` for y in `ys` and for x in `xs`.\n\n `grad_ys` is a list of tensors of the same length as `ys` that holds\n the initial gradients for each y in `ys`. When `grad_ys` is None,\n we fill in a tensor of '1's of the shape of y for each y in `ys`. A\n user can provide their own initial `grad_ys` to compute the\n derivatives using a different initial gradient for each y (e.g., if\n one wanted to weight the gradient differently for each value in\n each y).\n\n `stop_gradients` is a `Tensor` or a list of tensors to be considered constant\n with respect to all `xs`. These tensors will not be backpropagated through,\n as though they had been explicitly disconnected using `stop_gradient`. Among\n other things, this allows computation of partial derivatives as opposed to\n total derivatives. For example:\n\n ```python\n a = tf.constant(0.)\n b = 2 * a\n g = tf.gradients(a + b, [a, b], stop_gradients=[a, b])\n ```\n\n Here the partial derivatives `g` evaluate to `[1.0, 1.0]`, compared to the\n total derivatives `tf.gradients(a + b, [a, b])`, which take into account the\n influence of `a` on `b` and evaluate to `[3.0, 1.0]`. Note that the above is\n equivalent to:\n\n ```python\n a = tf.stop_gradient(tf.constant(0.))\n b = tf.stop_gradient(2 * a)\n g = tf.gradients(a + b, [a, b])\n ```\n\n `stop_gradients` provides a way of stopping gradient after the graph has\n already been constructed, as compared to `tf.stop_gradient` which is used\n during graph construction. When the two approaches are combined,\n backpropagation stops at both `tf.stop_gradient` nodes and nodes in\n `stop_gradients`, whichever is encountered first.\n\n All integer tensors are considered constant with respect to all `xs`, as if\n they were included in `stop_gradients`.\n\n `unconnected_gradients` determines the value returned for each x in xs if it\n is unconnected in the graph to ys. By default this is None to safeguard\n against errors. Mathematically these gradients are zero which can be requested\n using the `'zero'` option. `tf.UnconnectedGradients` provides the\n following options and behaviors:\n\n ```python\n a = tf.ones([1, 2])\n b = tf.ones([3, 1])\n g1 = tf.gradients([b], [a], unconnected_gradients='none')\n sess.run(g1) # [None]\n\n g2 = tf.gradients([b], [a], unconnected_gradients='zero')\n sess.run(g2) # [array([[0., 0.]], dtype=float32)]\n ```\n\n Let us take one practical example which comes during the back propogation\n phase. This function is used to evaluate the derivatives of the cost function\n with respect to Weights `Ws` and Biases `bs`. Below sample implementation\n provides the exaplantion of what it is actually used for :\n\n ```python\n Ws = tf.constant(0.)\n bs = 2 * Ws\n cost = Ws + bs # This is just an example. So, please ignore the formulas.\n g = tf.gradients(cost, [Ws, bs])\n dCost_dW, dCost_db = g\n ```\n\n\n Args:\n ys: A `Tensor` or list of tensors to be differentiated.\n xs: A `Tensor` or list of tensors to be used for differentiation.\n grad_ys: Optional. A `Tensor` or list of tensors the same size as\n `ys` and holding the gradients computed for each y in `ys`.\n name: Optional name to use for grouping all the gradient ops together.\n defaults to 'gradients'.\n colocate_gradients_with_ops: If True, try colocating gradients with\n the corresponding op.\n gate_gradients: If True, add a tuple around the gradients returned\n for an operations. This avoids some race conditions.\n aggregation_method: Specifies the method used to combine gradient terms.\n Accepted values are constants defined in the class `AggregationMethod`.\n stop_gradients: Optional. A `Tensor` or list of tensors not to differentiate\n through.\n unconnected_gradients: Optional. Specifies the gradient value returned when\n the given input tensors are unconnected. Accepted values are constants\n defined in the class `tf.UnconnectedGradients` and the default value is\n `none`.\n\n Returns:\n A list of `Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`\n for y in `ys` and for x in `xs`.\n\n Raises:\n LookupError: if one of the operations between `x` and `y` does not\n have a registered gradient function.\n ValueError: if the arguments are invalid.\n RuntimeError: if called in Eager mode.\n\n ", "desc": "Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`.", "type": "API"}, {"name": "tf.compat.v1.GradientTape", "docs": "Record operations for automatic differentiation.\n\n Operations are recorded if they are executed within this context manager and\n at least one of their inputs is being \"watched\".\n\n Trainable variables (created by `tf.Variable` or `tf.compat.v1.get_variable`,\n where `trainable=True` is default in both cases) are automatically watched.\n Tensors can be manually watched by invoking the `watch` method on this context\n manager.\n\n For example, consider the function `y = x * x`. The gradient at `x = 3.0` can\n be computed as:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = x * x\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n GradientTapes can be nested to compute higher-order derivatives. For example,\n\n >>> x = tf.constant(5.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... with tf.GradientTape() as gg:\n ... gg.watch(x)\n ... y = x * x\n ... dy_dx = gg.gradient(y, x) # dy_dx = 2 * x\n >>> d2y_dx2 = g.gradient(dy_dx, x) # d2y_dx2 = 2\n >>> print(dy_dx)\n tf.Tensor(10.0, shape=(), dtype=float32)\n >>> print(d2y_dx2)\n tf.Tensor(2.0, shape=(), dtype=float32)\n\n By default, the resources held by a GradientTape are released as soon as\n GradientTape.gradient() method is called. To compute multiple gradients over\n the same computation, create a persistent gradient tape. This allows multiple\n calls to the gradient() method as resources are released when the tape object\n is garbage collected. For example:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape(persistent=True) as g:\n ... g.watch(x)\n ... y = x * x\n ... z = y * y\n >>> dz_dx = g.gradient(z, x) # (4*x^3 at x = 3)\n >>> print(dz_dx)\n tf.Tensor(108.0, shape=(), dtype=float32)\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n By default GradientTape will automatically watch any trainable variables that\n are accessed inside the context. If you want fine grained control over which\n variables are watched you can disable automatic tracking by passing\n `watch_accessed_variables=False` to the tape constructor:\n\n >>> x = tf.Variable(2.0)\n >>> w = tf.Variable(5.0)\n >>> with tf.GradientTape(\n ... watch_accessed_variables=False, persistent=True) as tape:\n ... tape.watch(x)\n ... y = x ** 2 # Gradients will be available for `x`.\n ... z = w ** 3 # No gradients will be available as `w` isn't being watched.\n >>> dy_dx = tape.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(4.0, shape=(), dtype=float32)\n >>> # No gradients will be available as `w` isn't being watched.\n >>> dz_dw = tape.gradient(z, w)\n >>> print(dz_dw)\n None\n\n Note that when using models you should ensure that your variables exist when\n using `watch_accessed_variables=False`. Otherwise it's quite easy to make your\n first iteration not have any gradients:\n\n ```python\n a = tf.keras.layers.Dense(32)\n b = tf.keras.layers.Dense(32)\n\n with tf.GradientTape(watch_accessed_variables=False) as tape:\n tape.watch(a.variables) # Since `a.build` has not been called at this point\n # `a.variables` will return an empty list and the\n # tape will not be watching anything.\n result = b(a(inputs))\n tape.gradient(result, a.variables) # The result of this computation will be\n # a list of `None`s since a's variables\n # are not being watched.\n ```\n\n Note that only tensors with real or complex dtypes are differentiable.\n ", "desc": "Record operations for automatic differentiation.", "type": "API"}, {"name": "tf.compat.v1.Graph", "docs": "A TensorFlow computation, represented as a dataflow graph.\n\n Graphs are used by `tf.function`s to represent the function's computations.\n Each graph contains a set of `tf.Operation` objects, which represent units of\n computation; and `tf.Tensor` objects, which represent the units of data that\n flow between operations.\n\n ### Using graphs directly (deprecated)\n\n A `tf.Graph` can be constructed and used directly without a `tf.function`, as\n was required in TensorFlow 1, but this is deprecated and it is recommended to\n use a `tf.function` instead. If a graph is directly used, other deprecated\n TensorFlow 1 classes are also required to execute the graph, such as a\n `tf.compat.v1.Session`.\n\n A default graph can be registered with the `tf.Graph.as_default` context\n manager. Then, operations will be added to the graph instead of being executed\n eagerly. For example:\n\n ```python\n g = tf.Graph()\n with g.as_default():\n # Define operations and tensors in `g`.\n c = tf.constant(30.0)\n assert c.graph is g\n ```\n\n `tf.compat.v1.get_default_graph()` can be used to obtain the default graph.\n\n Important note: This class *is not* thread-safe for graph construction. All\n operations should be created from a single thread, or external\n synchronization must be provided. Unless otherwise specified, all methods\n are not thread-safe.\n\n A `Graph` instance supports an arbitrary number of \"collections\"\n that are identified by name. For convenience when building a large\n graph, collections can store groups of related objects: for\n example, the `tf.Variable` uses a collection (named\n `tf.GraphKeys.GLOBAL_VARIABLES`) for\n all variables that are created during the construction of a graph. The caller\n may define additional collections by specifying a new name.\n ", "desc": "A TensorFlow computation, represented as a dataflow graph.", "type": "API"}, {"name": "tf.compat.v1.graph_util", "docs": "Helpers to manipulate a tensor graph in python.\n\n", "desc": "Helpers to manipulate a tensor graph in python.", "type": "API"}, {"name": "tf.compat.v1.graph_util.convert_variables_to_constants", "docs": "Replaces all the variables in a graph with constants of the same values. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.convert_variables_to_constants`\n\nIf you have a trained graph containing Variable ops, it can be convenient to\nconvert them all to Const ops holding the same values. This makes it possible\nto describe the network fully with a single GraphDef file, and allows the\nremoval of a lot of ops related to loading and saving the variables.\n\nArgs:\n sess: Active TensorFlow session containing the variables.\n input_graph_def: GraphDef object holding the network.\n output_node_names: List of name strings for the result nodes of the graph.\n variable_names_whitelist: The set of variable names to convert (by default,\n all variables are converted).\n variable_names_blacklist: The set of variable names to omit converting\n to constants.\n\nReturns:\n GraphDef containing a simplified version of the original.\n\nRaises:\n RuntimeError: if a DT_RESOURCE op is found whose ancestor Variables are both\n denylisted AND whitelisted for freezing.", "desc": "Replaces all the variables in a graph with constants of the same values. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.graph_util.extract_sub_graph", "docs": "Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.extract_sub_graph`\n\nArgs:\n graph_def: A graph_pb2.GraphDef proto.\n dest_nodes: An iterable of strings specifying the destination node names.\nReturns:\n The GraphDef of the sub-graph.\n\nRaises:\n TypeError: If 'graph_def' is not a graph_pb2.GraphDef proto.", "desc": "Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.graph_util.import_graph_def", "docs": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(op_dict)`. They will be removed in a future version.\nInstructions for updating:\nPlease file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.\n\nThis function provides a way to import a serialized TensorFlow\n[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)\nprotocol buffer, and extract individual objects in the `GraphDef` as\n`tf.Tensor` and `tf.Operation` objects. Once extracted,\nthese objects are placed into the current default `Graph`. See\n`tf.Graph.as_graph_def` for a way to create a `GraphDef`\nproto.\n\nArgs:\n graph_def: A `GraphDef` proto containing operations to be imported into\n the default graph.\n input_map: A dictionary mapping input names (as strings) in `graph_def`\n to `Tensor` objects. The values of the named input tensors in the\n imported graph will be re-mapped to the respective `Tensor` values.\n return_elements: A list of strings containing operation names in\n `graph_def` that will be returned as `Operation` objects; and/or\n tensor names in `graph_def` that will be returned as `Tensor` objects.\n name: (Optional.) A prefix that will be prepended to the names in\n `graph_def`. Note that this does not apply to imported function names.\n Defaults to `\"import\"`.\n op_dict: (Optional.) Deprecated, do not use.\n producer_op_list: (Optional.) An `OpList` proto with the (possibly stripped)\n list of `OpDef`s used by the producer of the graph. If provided,\n unrecognized attrs for ops in `graph_def` that have their default value\n according to `producer_op_list` will be removed. This will allow some more\n `GraphDef`s produced by later binaries to be accepted by earlier binaries.\n\nReturns:\n A list of `Operation` and/or `Tensor` objects from the imported graph,\n corresponding to the names in `return_elements`,\n and None if `returns_elements` is None.\n\nRaises:\n TypeError: If `graph_def` is not a `GraphDef` proto,\n `input_map` is not a dictionary mapping strings to `Tensor` objects,\n or `return_elements` is not a list of strings.\n ValueError: If `input_map`, or `return_elements` contains names that\n do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.\n it refers to an unknown tensor).", "desc": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.graph_util.must_run_on_cpu", "docs": "Returns True if the given node_def must run on CPU, otherwise False. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.must_run_on_cpu`\n\nArgs:\n node: The node to be assigned to a device. Could be either an ops.Operation\n or NodeDef.\n pin_variables_on_cpu: If True, this function will return False if node_def\n represents a variable-related op.\n\nReturns:\n True if the given node must run on CPU, otherwise False.", "desc": "Returns True if the given node_def must run on CPU, otherwise False. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.graph_util.remove_training_nodes", "docs": "Prunes out nodes that aren't needed for inference. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.remove_training_nodes`\n\nThere are nodes like Identity and CheckNumerics that are only useful\nduring training, and can be removed in graphs that will be used for\nnothing but inference. Here we identify and remove them, returning an\nequivalent graph. To be specific, CheckNumerics nodes are always removed, and\nIdentity nodes that aren't involved in control edges are spliced out so that\ntheir input and outputs are directly connected.\n\nArgs:\n input_graph: Model to analyze and prune.\n protected_nodes: An optional list of names of nodes to be kept\n unconditionally. This is for example useful to preserve Identity output\n nodes.\n\nReturns:\n A list of nodes with the unnecessary ones removed.", "desc": "Prunes out nodes that aren't needed for inference. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.graph_util.tensor_shape_from_node_def_name", "docs": "Convenience function to get a shape from a NodeDef's input string. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.compat.v1.graph_util.tensor_shape_from_node_def_name`", "desc": "Convenience function to get a shape from a NodeDef's input string. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.GraphDef", "docs": "A protobuf containing the graph of operations.\n\n@compatibility(TF2)\nThis API is not available in TensorFlow 2.x.\n\nYou should not need to use `GraphDef`s directly in TF2. To load `GraphDef`s in\nTF2, use SavedModel. The SavedModel contains the `GraphDef`.\n\nBefore:\n\n```python\nwith tf.io.gfile.GFile('/tmp/graph.pb', 'rb') as f:\n graph_def = tf.compat.v1.GraphDef()\n graph_def.ParseFromString(f.read())\n```\n\nAfter:\n\n```python\ntf.saved_model.load('/tmp/saved_model')\n```\n\nIf you would like to create a `GraphDef` in TF2, use `tf.function` and\n`get_concrete_function`.\n\n>>> @tf.function\n>>> def f(x):\n>>> return x\n>>>\n>>> graph_def = f.get_concrete_function(1.).graph.as_graph_def()\n>>> print(graph_def)\n\n@end_compatibility\n\n", "desc": "A protobuf containing the graph of operations.", "type": "API"}, {"name": "tf.compat.v1.GraphKeys", "docs": "Standard names to use for graph collections.\n\n The standard library uses various well-known names to collect and\n retrieve values associated with a graph. For example, the\n `tf.Optimizer` subclasses default to optimizing the variables\n collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is\n specified, but it is also possible to pass an explicit list of\n variables.\n\n The following standard keys are defined:\n\n * `GLOBAL_VARIABLES`: the default collection of `Variable` objects, shared\n across distributed environment (model variables are subset of these). See\n `tf.compat.v1.global_variables`\n for more details.\n Commonly, all `TRAINABLE_VARIABLES` variables will be in `MODEL_VARIABLES`,\n and all `MODEL_VARIABLES` variables will be in `GLOBAL_VARIABLES`.\n * `LOCAL_VARIABLES`: the subset of `Variable` objects that are local to each\n machine. Usually used for temporarily variables, like counters.\n Note: use `tf.contrib.framework.local_variable` to add to this collection.\n * `MODEL_VARIABLES`: the subset of `Variable` objects that are used in the\n model for inference (feed forward). Note: use\n `tf.contrib.framework.model_variable` to add to this collection.\n * `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will\n be trained by an optimizer. See\n `tf.compat.v1.trainable_variables`\n for more details.\n * `SUMMARIES`: the summary `Tensor` objects that have been created in the\n graph. See\n `tf.compat.v1.summary.merge_all`\n for more details.\n * `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to\n produce input for a computation. See\n `tf.compat.v1.train.start_queue_runners`\n for more details.\n * `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also\n keep moving averages. See\n `tf.compat.v1.moving_average_variables`\n for more details.\n * `REGULARIZATION_LOSSES`: regularization losses collected during graph\n construction.\n\n The following standard keys are _defined_, but their collections are **not**\n automatically populated as many of the others are:\n\n * `WEIGHTS`\n * `BIASES`\n * `ACTIVATIONS`\n ", "desc": "Standard names to use for graph collections.", "type": "API"}, {"name": "tf.compat.v1.GraphOptions", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.greater", "docs": "Returns the truth value of (x > y) element-wise.\n\n *NOTE*: `math.greater` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 2, 5])\n tf.math.greater(x, y) ==> [False, True, True]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.greater(x, y) ==> [False, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x > y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.greater_equal", "docs": "Returns the truth value of (x >= y) element-wise.\n\n *NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5, 2, 5, 10])\n tf.math.greater_equal(x, y) ==> [True, True, True, False]\n\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5])\n tf.math.greater_equal(x, y) ==> [True, False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x >= y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.group", "docs": "Create an op that groups multiple operations.\n\n When this op finishes, all ops in `inputs` have finished. This op has no\n output.\n\n Note: *In TensorFlow 2 with eager and/or Autograph, you should not require\n this method, as ops execute in the expected order thanks to automatic control\n dependencies.* Only use `tf.group` when working with v1\n `tf.Graph` code.\n\n When operating in a v1-style graph context, ops are not executed in the same\n order as specified in the code; TensorFlow will attempt to execute ops in\n parallel or in an order convenient to the result it is computing. `tf.group`\n allows you to request that one or more results finish before execution\n continues.\n\n `tf.group` creates a single op (of type `NoOp`), and then adds appropriate\n control dependencies. Thus, `c = tf.group(a, b)` will compute the same graph\n as this:\n\n with tf.control_dependencies([a, b]):\n c = tf.no_op()\n\n See also `tf.tuple` and\n `tf.control_dependencies`.\n\n Args:\n *inputs: Zero or more tensors to group.\n name: A name for this operation (optional).\n\n Returns:\n An Operation that executes all its inputs.\n\n Raises:\n ValueError: If an unknown keyword argument is provided.\n ", "desc": "Create an op that groups multiple operations.", "type": "API"}, {"name": "tf.compat.v1.guarantee_const", "docs": "Promise to the TF runtime that the input tensor is a constant. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nNot for public use.\n\nThe runtime is then free to make optimizations based on this.\n\nReturns the input tensor without modification.\n\nArgs:\n input: A `Tensor`.\n name: A name for this operation.\n\nReturns:\n A `Tensor`. Has the same dtype as `input`.", "desc": "Promise to the TF runtime that the input tensor is a constant. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.hessians", "docs": "Constructs the Hessian of sum of `ys` with respect to `x` in `xs`.\n\n `hessians()` adds ops to the graph to output the Hessian matrix of `ys`\n with respect to `xs`. It returns a list of `Tensor` of length `len(xs)`\n where each tensor is the Hessian of `sum(ys)`.\n\n The Hessian is a matrix of second-order partial derivatives of a scalar\n tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).\n\n Args:\n ys: A `Tensor` or list of tensors to be differentiated.\n xs: A `Tensor` or list of tensors to be used for differentiation.\n name: Optional name to use for grouping all the gradient ops together.\n defaults to 'hessians'.\n colocate_gradients_with_ops: See `gradients()` documentation for details.\n gate_gradients: See `gradients()` documentation for details.\n aggregation_method: See `gradients()` documentation for details.\n\n Returns:\n A list of Hessian matrices of `sum(ys)` for each `x` in `xs`.\n\n Raises:\n LookupError: if one of the operations between `xs` and `ys` does not\n have a registered gradient function.\n ", "desc": "Constructs the Hessian of sum of `ys` with respect to `x` in `xs`.", "type": "API"}, {"name": "tf.compat.v1.histogram_fixed_width", "docs": "Return histogram of values.\n\n Given the tensor `values`, this operation returns a rank 1 histogram counting\n the number of entries in `values` that fell into every bin. The bins are\n equal width and determined by the arguments `value_range` and `nbins`.\n\n Args:\n values: Numeric `Tensor`.\n value_range: Shape [2] `Tensor` of same `dtype` as `values`.\n values <= value_range[0] will be mapped to hist[0],\n values >= value_range[1] will be mapped to hist[-1].\n nbins: Scalar `int32 Tensor`. Number of histogram bins.\n dtype: dtype for returned histogram.\n name: A name for this operation (defaults to 'histogram_fixed_width').\n\n Returns:\n A 1-D `Tensor` holding histogram of values.\n\n Raises:\n TypeError: If any unsupported dtype is provided.\n tf.errors.InvalidArgumentError: If value_range does not\n satisfy value_range[0] < value_range[1].\n\n Examples:\n\n >>> # Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)\n ...\n >>> nbins = 5\n >>> value_range = [0.0, 5.0]\n >>> new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]\n >>> hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)\n >>> hist.numpy()\n array([2, 1, 1, 0, 2], dtype=int32)\n ", "desc": "Return histogram of values.", "type": "API"}, {"name": "tf.compat.v1.histogram_fixed_width_bins", "docs": "Bins the given values for use in a histogram.\n\n Given the tensor `values`, this operation returns a rank 1 `Tensor`\n representing the indices of a histogram into which each element\n of `values` would be binned. The bins are equal width and\n determined by the arguments `value_range` and `nbins`.\n\n Args:\n values: Numeric `Tensor`.\n value_range: Shape [2] `Tensor` of same `dtype` as `values`.\n values <= value_range[0] will be mapped to hist[0],\n values >= value_range[1] will be mapped to hist[-1].\n nbins: Scalar `int32 Tensor`. Number of histogram bins.\n dtype: dtype for returned histogram.\n name: A name for this operation (defaults to 'histogram_fixed_width').\n\n Returns:\n A `Tensor` holding the indices of the binned values whose shape matches\n `values`.\n\n Raises:\n TypeError: If any unsupported dtype is provided.\n tf.errors.InvalidArgumentError: If value_range does not\n satisfy value_range[0] < value_range[1].\n\n Examples:\n\n >>> # Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)\n ...\n >>> nbins = 5\n >>> value_range = [0.0, 5.0]\n >>> new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]\n >>> indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5)\n >>> indices.numpy()\n array([0, 0, 1, 2, 4, 4], dtype=int32)\n ", "desc": "Bins the given values for use in a histogram.", "type": "API"}, {"name": "tf.compat.v1.HistogramProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.identity", "docs": "Return a Tensor with the same shape and contents as input.\n\n The return value is not the same Tensor as the original, but contains the same\n values. This operation is fast when used on the same device.\n\n For example:\n\n >>> a = tf.constant([0.78])\n >>> a_identity = tf.identity(a)\n >>> a.numpy()\n array([0.78], dtype=float32)\n >>> a_identity.numpy()\n array([0.78], dtype=float32)\n\n Calling `tf.identity` on a variable will make a Tensor that represents the\n value of that variable at the time it is called. This is equivalent to calling\n `.read_value()`.\n\n >>> a = tf.Variable(5)\n >>> a_identity = tf.identity(a)\n >>> a.assign_add(1)\n \n >>> a.numpy()\n 6\n >>> a_identity.numpy()\n 5\n\n Args:\n input: A `Tensor`, a `Variable`, a `CompositeTensor` or anything that can be\n converted to a tensor using `tf.convert_to_tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or CompositeTensor. Has the same type and contents as `input`.\n ", "desc": "Return a Tensor with the same shape and contents as input.", "type": "API"}, {"name": "tf.compat.v1.identity_n", "docs": "Returns a list of tensors with the same shapes and contents as the input\n\n tensors.\n\n This op can be used to override the gradient for complicated functions. For\n example, suppose y = f(x) and we wish to apply a custom function g for backprop\n such that dx = g(dy). In Python,\n\n ```python\n with tf.get_default_graph().gradient_override_map(\n {'IdentityN': 'OverrideGradientWithG'}):\n y, _ = identity_n([f(x), x])\n\n @tf.RegisterGradient('OverrideGradientWithG')\n def ApplyG(op, dy, _):\n return [None, g(dy)] # Do not backprop to f(x).\n ```\n\n Args:\n input: A list of `Tensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "Returns a list of tensors with the same shapes and contents as the input", "type": "API"}, {"name": "tf.compat.v1.IdentityReader", "docs": "A Reader that outputs the queued work as both the key and value.\n\n To use, enqueue strings in a Queue. Read will take the front\n work string and output (work, work).\n\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs the queued work as both the key and value.", "type": "API"}, {"name": "tf.compat.v1.ifft", "docs": "Inverse fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform over the\n inner-most dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.ifft2d", "docs": "Inverse 2D fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform over the\n inner-most 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.ifft3d", "docs": "Inverse 3D fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform over the\n inner-most 3 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.igamma", "docs": "Compute the lower regularized incomplete Gamma function `P(a, x)`.\n\n The lower regularized incomplete Gamma function is defined as:\n\n\n \\\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\\\)\n\n where\n\n \\\\(gamma(a, x) = \\\\int_{0}^{x} t^{a-1} exp(-t) dt\\\\)\n\n is the lower incomplete Gamma function.\n\n Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the lower regularized incomplete Gamma function `P(a, x)`.", "type": "API"}, {"name": "tf.compat.v1.igammac", "docs": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.\n\n The upper regularized incomplete Gamma function is defined as:\n\n \\\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\\\)\n\n where\n\n \\\\(Gamma(a, x) = \\int_{x}^{\\infty} t^{a-1} exp(-t) dt\\\\)\n\n is the upper incomplete Gamma function.\n\n Note, above `P(a, x)` (`Igamma`) is the lower regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.", "type": "API"}, {"name": "tf.compat.v1.imag", "docs": "Returns the imaginary part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the imaginary part of each element in `input` considered as a complex\n number. If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.imag(x) # [4.75, 5.75]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the imaginary part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.image", "docs": "Image ops.\n\nThe `tf.image` module contains various functions for image\nprocessing and decoding-encoding Ops.\n\nMany of the encoding/decoding functions are also available in the\ncore `tf.io` module.\n\n## Image processing\n\n### Resizing\n\nThe resizing Ops accept input images as tensors of several types. They always\noutput resized images as float32 tensors.\n\nThe convenience function `tf.image.resize` supports both 4-D\nand 3-D tensors as input and output. 4-D tensors are for batches of images,\n3-D tensors for individual images.\n\nResized images will be distorted if their original aspect ratio is not the\nsame as size. To avoid distortions see tf.image.resize_with_pad.\n\n* `tf.image.resize`\n* `tf.image.resize_with_pad`\n* `tf.image.resize_with_crop_or_pad`\n\nThe Class `tf.image.ResizeMethod` provides various resize methods like\n`bilinear`, `nearest_neighbor`.\n\n### Converting Between Colorspaces\n\nImage ops work either on individual images or on batches of images, depending on\nthe shape of their input Tensor.\n\nIf 3-D, the shape is `[height, width, channels]`, and the Tensor represents one\nimage. If 4-D, the shape is `[batch_size, height, width, channels]`, and the\nTensor represents `batch_size` images.\n\nCurrently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are\ngrayscale, images with 3 channels are encoded as either RGB or HSV. Images\nwith 2 or 4 channels include an alpha channel, which has to be stripped from the\nimage before passing the image to most image processing functions (and can be\nre-attached later).\n\nInternally, images are either stored in as one `float32` per channel per pixel\n(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel\nper pixel (values are assumed to lie in `[0,255]`).\n\nTensorFlow can convert between images in RGB or HSV or YIQ.\n\n* `tf.image.rgb_to_grayscale`, `tf.image.grayscale_to_rgb`\n* `tf.image.rgb_to_hsv`, `tf.image.hsv_to_rgb`\n* `tf.image.rgb_to_yiq`, `tf.image.yiq_to_rgb`\n* `tf.image.rgb_to_yuv`, `tf.image.yuv_to_rgb`\n* `tf.image.image_gradients`\n* `tf.image.convert_image_dtype`\n\n### Image Adjustments\n\nTensorFlow provides functions to adjust images in various ways: brightness,\ncontrast, hue, and saturation. Each adjustment can be done with predefined\nparameters or with random parameters picked from predefined intervals. Random\nadjustments are often useful to expand a training set and reduce overfitting.\n\nIf several adjustments are chained it is advisable to minimize the number of\nredundant conversions by first converting the images to the most natural data\ntype and representation.\n\n* `tf.image.adjust_brightness`\n* `tf.image.adjust_contrast`\n* `tf.image.adjust_gamma`\n* `tf.image.adjust_hue`\n* `tf.image.adjust_jpeg_quality`\n* `tf.image.adjust_saturation`\n* `tf.image.random_brightness`\n* `tf.image.random_contrast`\n* `tf.image.random_hue`\n* `tf.image.random_saturation`\n* `tf.image.per_image_standardization`\n\n### Working with Bounding Boxes\n\n* `tf.image.draw_bounding_boxes`\n* `tf.image.combined_non_max_suppression`\n* `tf.image.generate_bounding_box_proposals`\n* `tf.image.non_max_suppression`\n* `tf.image.non_max_suppression_overlaps`\n* `tf.image.non_max_suppression_padded`\n* `tf.image.non_max_suppression_with_scores`\n* `tf.image.pad_to_bounding_box`\n* `tf.image.sample_distorted_bounding_box`\n\n### Cropping\n\n* `tf.image.central_crop`\n* `tf.image.crop_and_resize`\n* `tf.image.crop_to_bounding_box`\n* `tf.io.decode_and_crop_jpeg`\n* `tf.image.extract_glimpse`\n* `tf.image.random_crop`\n* `tf.image.resize_with_crop_or_pad`\n\n### Flipping, Rotating and Transposing\n\n* `tf.image.flip_left_right`\n* `tf.image.flip_up_down`\n* `tf.image.random_flip_left_right`\n* `tf.image.random_flip_up_down`\n* `tf.image.rot90`\n* `tf.image.transpose`\n\n## Image decoding and encoding\n\nTensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded\nimages are represented by scalar string Tensors, decoded images by 3-D uint8\ntensors of shape `[height, width, channels]`. (PNG also supports uint16.)\n\nNote: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`\n\nThe encode and decode Ops apply to one image at a time. Their input and output\nare all of variable size. If you need fixed size images, pass the output of\nthe decode Ops to one of the cropping and resizing Ops.\n\n* `tf.io.decode_bmp`\n* `tf.io.decode_gif`\n* `tf.io.decode_image`\n* `tf.io.decode_jpeg`\n* `tf.io.decode_and_crop_jpeg`\n* `tf.io.decode_png`\n* `tf.io.encode_jpeg`\n* `tf.io.encode_png`\n\n\n", "desc": "Image ops.", "type": "API"}, {"name": "tf.compat.v1.image.adjust_brightness", "docs": "Adjust the brightness of RGB or Grayscale images.\n\n This is a convenience method that converts RGB images to float\n representation, adjusts their brightness, and then converts them back to the\n original data type. If several adjustments are chained, it is advisable to\n minimize the number of redundant conversions.\n\n The value `delta` is added to all components of the tensor `image`. `image` is\n converted to `float` and scaled appropriately if it is in fixed-point\n representation, and `delta` is converted to the same data type. For regular\n images, `delta` should be in the range `(-1,1)`, as it is added to the image\n in floating point representation, where pixel values are in the `[0,1)` range.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_brightness(x, delta=0.1)\n \n\n Args:\n image: RGB image or images to adjust.\n delta: A scalar. Amount to add to the pixel values.\n\n Returns:\n A brightness-adjusted tensor of the same shape and type as `image`.\n ", "desc": "Adjust the brightness of RGB or Grayscale images.", "type": "API"}, {"name": "tf.compat.v1.image.adjust_contrast", "docs": "Adjust contrast of RGB or grayscale images.\n\n This is a convenience method that converts RGB images to float\n representation, adjusts their contrast, and then converts them back to the\n original data type. If several adjustments are chained, it is advisable to\n minimize the number of redundant conversions.\n\n `images` is a tensor of at least 3 dimensions. The last 3 dimensions are\n interpreted as `[height, width, channels]`. The other dimensions only\n represent a collection of images, such as `[batch, height, width, channels].`\n\n Contrast is adjusted independently for each channel of each image.\n\n For each channel, this Op computes the mean of the image pixels in the\n channel and then adjusts each component `x` of each pixel to\n `(x - mean) * contrast_factor + mean`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_contrast(x, 2.)\n \n\n Args:\n images: Images to adjust. At least 3-D.\n contrast_factor: A float multiplier for adjusting contrast.\n\n Returns:\n The contrast-adjusted image or images.\n ", "desc": "Adjust contrast of RGB or grayscale images.", "type": "API"}, {"name": "tf.compat.v1.image.adjust_gamma", "docs": "Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction).\n\n on the input image.\n\n Also known as Power Law Transform. This function converts the\n input images at first to float representation, then transforms them\n pixelwise according to the equation `Out = gain * In**gamma`,\n and then converts the back to the original data type.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_gamma(x, 0.2)\n \n\n Args:\n image : RGB image or images to adjust.\n gamma : A scalar or tensor. Non-negative real number.\n gain : A scalar or tensor. The constant multiplier.\n\n Returns:\n A Tensor. A Gamma-adjusted tensor of the same shape and type as `image`.\n\n Raises:\n ValueError: If gamma is negative.\n Notes:\n For gamma greater than 1, the histogram will shift towards left and\n the output image will be darker than the input image.\n For gamma less than 1, the histogram will shift towards right and\n the output image will be brighter than the input image.\n References:\n [Wikipedia](http://en.wikipedia.org/wiki/Gamma_correction)\n ", "desc": "Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction).", "type": "API"}, {"name": "tf.compat.v1.image.adjust_hue", "docs": "Adjust hue of RGB images.\n\n This is a convenience method that converts an RGB image to float\n representation, converts it to HSV, adds an offset to the\n hue channel, converts back to RGB and then back to the original\n data type. If several adjustments are chained it is advisable to minimize\n the number of redundant conversions.\n\n `image` is an RGB image. The image hue is adjusted by converting the\n image(s) to HSV and rotating the hue channel (H) by\n `delta`. The image is then converted back to RGB.\n\n `delta` must be in the interval `[-1, 1]`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_hue(x, 0.2)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n delta: float. How much to add to the hue channel.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: image must have at least 3 dimensions.\n InvalidArgumentError: The size of the last dimension must be 3.\n ValueError: if `delta` is not in the interval of `[-1, 1]`.\n\n Usage Example:\n\n >>> image = [[[1, 2, 3], [4, 5, 6]],\n ... [[7, 8, 9], [10, 11, 12]],\n ... [[13, 14, 15], [16, 17, 18]]]\n >>> image = tf.constant(image)\n >>> tf.image.adjust_hue(image, 0.2)\n \n ", "desc": "Adjust hue of RGB images.", "type": "API"}, {"name": "tf.compat.v1.image.adjust_jpeg_quality", "docs": "Adjust jpeg encoding quality of an image.\n\n This is a convenience method that converts an image to uint8 representation,\n encodes it to jpeg with `jpeg_quality`, decodes it, and then converts back\n to the original data type.\n\n `jpeg_quality` must be in the interval `[0, 100]`.\n\n Usage Examples:\n\n >>> x = [[[0.01, 0.02, 0.03],\n ... [0.04, 0.05, 0.06]],\n ... [[0.07, 0.08, 0.09],\n ... [0.10, 0.11, 0.12]]]\n >>> x_jpeg = tf.image.adjust_jpeg_quality(x, 75)\n >>> x_jpeg.numpy()\n array([[[0.00392157, 0.01960784, 0.03137255],\n [0.02745098, 0.04313726, 0.05490196]],\n [[0.05882353, 0.07450981, 0.08627451],\n [0.08235294, 0.09803922, 0.10980393]]], dtype=float32)\n\n Note that floating point values are expected to have values in the range\n [0,1) and values outside this range are clipped.\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_jpeg_quality(x, 75)\n \n\n Note that `jpeg_quality` 100 is still lossy compresson.\n\n >>> x = tf.constant([[[1, 2, 3],\n ... [4, 5, 6]],\n ... [[7, 8, 9],\n ... [10, 11, 12]]], dtype=tf.uint8)\n >>> tf.image.adjust_jpeg_quality(x, 100)\n \n\n Args:\n image: 3D image. The size of the last dimension must be None, 1 or 3.\n jpeg_quality: Python int or Tensor of type int32. jpeg encoding quality.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image, same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: quality must be in [0,100]\n InvalidArgumentError: image must have 1 or 3 channels\n ", "desc": "Adjust jpeg encoding quality of an image.", "type": "API"}, {"name": "tf.compat.v1.image.adjust_saturation", "docs": "Adjust saturation of RGB images.\n\n This is a convenience method that converts RGB images to float\n representation, converts them to HSV, adds an offset to the\n saturation channel, converts back to RGB and then back to the original\n data type. If several adjustments are chained it is advisable to minimize\n the number of redundant conversions.\n\n `image` is an RGB image or images. The image saturation is adjusted by\n converting the images to HSV and multiplying the saturation (S) channel by\n `saturation_factor` and clipping. The images are then converted back to RGB.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_saturation(x, 0.5)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n saturation_factor: float. Factor to multiply the saturation by.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: input must have 3 channels\n ", "desc": "Adjust saturation of RGB images.", "type": "API"}, {"name": "tf.compat.v1.image.central_crop", "docs": "Crop the central region of the image(s).\n\n Remove the outer parts of an image but retain the central region of the image\n along each dimension. If we specify central_fraction = 0.5, this function\n returns the region marked with \"X\" in the below diagram.\n\n --------\n | |\n | XXXX |\n | XXXX |\n | | where \"X\" is the central 50% of the image.\n --------\n\n This function works on either a single image (`image` is a 3-D Tensor), or a\n batch of images (`image` is a 4-D Tensor).\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0],\n ... [7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]],\n ... [[13.0, 14.0, 15.0],\n ... [16.0, 17.0, 18.0],\n ... [19.0, 20.0, 21.0],\n ... [22.0, 23.0, 24.0]],\n ... [[25.0, 26.0, 27.0],\n ... [28.0, 29.0, 30.0],\n ... [31.0, 32.0, 33.0],\n ... [34.0, 35.0, 36.0]],\n ... [[37.0, 38.0, 39.0],\n ... [40.0, 41.0, 42.0],\n ... [43.0, 44.0, 45.0],\n ... [46.0, 47.0, 48.0]]]\n >>> tf.image.central_crop(x, 0.5)\n \n\n Args:\n image: Either a 3-D float Tensor of shape [height, width, depth], or a 4-D\n Tensor of shape [batch_size, height, width, depth].\n central_fraction: float (0, 1], fraction of size to crop\n\n Raises:\n ValueError: if central_crop_fraction is not within (0, 1].\n\n Returns:\n 3-D / 4-D float Tensor, as per the input.\n ", "desc": "Crop the central region of the image(s).", "type": "API"}, {"name": "tf.compat.v1.image.combined_non_max_suppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n This operation performs non_max_suppression on the inputs per batch, across\n all classes.\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Also note that\n this algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is the final boxes, scores and classes tensor\n returned after performing non_max_suppression.\n\n Args:\n boxes: A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q`\n is 1 then same boxes are used for all classes otherwise, if `q` is equal\n to number of classes, class-specific boxes are used.\n scores: A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]`\n representing a single score corresponding to each box (each row of boxes).\n max_output_size_per_class: A scalar integer `Tensor` representing the\n maximum number of boxes to be selected by non-max suppression per class\n max_total_size: A int32 scalar representing maximum number of boxes retained\n over all classes. Note that setting this value to a large number may\n result in OOM error depending on the system workload.\n iou_threshold: A float representing the threshold for deciding whether boxes\n overlap too much with respect to IOU.\n score_threshold: A float representing the threshold for deciding when to\n remove boxes based on score.\n pad_per_class: If false, the output nmsed boxes, scores and classes are\n padded/clipped to `max_total_size`. If true, the output nmsed boxes,\n scores and classes are padded to be of length\n `max_size_per_class`*`num_classes`, unless it exceeds `max_total_size` in\n which case it is clipped to `max_total_size`. Defaults to false.\n clip_boxes: If true, the coordinates of output nmsed boxes will be clipped\n to [0, 1]. If false, output the box coordinates as it is. Defaults to\n true.\n name: A name for the operation (optional).\n\n Returns:\n 'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor\n containing the non-max suppressed boxes.\n 'nmsed_scores': A [batch_size, max_detections] float32 tensor containing\n the scores for the boxes.\n 'nmsed_classes': A [batch_size, max_detections] float32 tensor\n containing the class for boxes.\n 'valid_detections': A [batch_size] int32 tensor indicating the number of\n valid detections per batch item. Only the top valid_detections[i] entries\n in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the\n entries are zero paddings.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.compat.v1.image.convert_image_dtype", "docs": "Convert `image` to `dtype`, scaling its values if needed.\n\n The operation supports data types (for `image` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `bfloat16`.\n\n Images that are represented using floating point values are expected to have\n values in the range [0,1). Image data stored in integer data types are\n expected to have values in the range `[0,MAX]`, where `MAX` is the largest\n positive representable number for the data type.\n\n This op converts between data types, scaling the values appropriately before\n casting.\n\n Usage Example:\n\n >>> x = [[[1, 2, 3], [4, 5, 6]],\n ... [[7, 8, 9], [10, 11, 12]]]\n >>> x_int8 = tf.convert_to_tensor(x, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(x_int8, dtype=tf.float16, saturate=False)\n \n\n Converting integer types to floating point types returns normalized floating\n point values in the range [0, 1); the values are normalized by the `MAX` value\n of the input dtype. Consider the following two examples:\n\n >>> a = [[[1], [2]], [[3], [4]]]\n >>> a_int8 = tf.convert_to_tensor(a, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(a_int8, dtype=tf.float32)\n \n\n >>> a_int32 = tf.convert_to_tensor(a, dtype=tf.int32)\n >>> tf.image.convert_image_dtype(a_int32, dtype=tf.float32)\n \n\n Despite having identical values of `a` and output dtype of `float32`, the\n outputs differ due to the different input dtypes (`int8` vs. `int32`). This\n is, again, because the values are normalized by the `MAX` value of the input\n dtype.\n\n Note that converting floating point values to integer type may lose precision.\n In the example below, an image tensor `b` of dtype `float32` is converted to\n `int8` and back to `float32`. The final output, however, is different from\n the original input `b` due to precision loss.\n\n >>> b = [[[0.12], [0.34]], [[0.56], [0.78]]]\n >>> b_float32 = tf.convert_to_tensor(b, dtype=tf.float32)\n >>> b_int8 = tf.image.convert_image_dtype(b_float32, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(b_int8, dtype=tf.float32)\n \n\n Scaling up from an integer type (input dtype) to another integer type (output\n dtype) will not map input dtype's `MAX` to output dtype's `MAX` but converting\n back and forth should result in no change. For example, as shown below, the\n `MAX` value of int8 (=127) is not mapped to the `MAX` value of int16 (=32,767)\n but, when scaled back, we get the same, original values of `c`.\n\n >>> c = [[[1], [2]], [[127], [127]]]\n >>> c_int8 = tf.convert_to_tensor(c, dtype=tf.int8)\n >>> c_int16 = tf.image.convert_image_dtype(c_int8, dtype=tf.int16)\n >>> print(c_int16)\n tf.Tensor(\n [[[ 256]\n [ 512]]\n [[32512]\n [32512]]], shape=(2, 2, 1), dtype=int16)\n >>> c_int8_back = tf.image.convert_image_dtype(c_int16, dtype=tf.int8)\n >>> print(c_int8_back)\n tf.Tensor(\n [[[ 1]\n [ 2]]\n [[127]\n [127]]], shape=(2, 2, 1), dtype=int8)\n\n Scaling down from an integer type to another integer type can be a lossy\n conversion. Notice in the example below that converting `int16` to `uint8` and\n back to `int16` has lost precision.\n\n >>> d = [[[1000], [2000]], [[3000], [4000]]]\n >>> d_int16 = tf.convert_to_tensor(d, dtype=tf.int16)\n >>> d_uint8 = tf.image.convert_image_dtype(d_int16, dtype=tf.uint8)\n >>> d_int16_back = tf.image.convert_image_dtype(d_uint8, dtype=tf.int16)\n >>> print(d_int16_back)\n tf.Tensor(\n [[[ 896]\n [1920]]\n [[2944]\n [3968]]], shape=(2, 2, 1), dtype=int16)\n\n Note that converting from floating point inputs to integer types may lead to\n over/underflow problems. Set saturate to `True` to avoid such problem in\n problematic conversions. If enabled, saturation will clip the output into the\n allowed range before performing a potentially dangerous cast (and only before\n performing such a cast, i.e., when casting from a floating point to an integer\n type, and when casting from a signed to an unsigned type; `saturate` has no\n effect on casts between floats, or on casts that increase the type's range).\n\n Args:\n image: An image.\n dtype: A `DType` to convert `image` to.\n saturate: If `True`, clip the input before casting (if necessary).\n name: A name for this operation (optional).\n\n Returns:\n `image`, converted to `dtype`.\n\n Raises:\n AttributeError: Raises an attribute error when dtype is neither\n float nor integer\n ", "desc": "Convert `image` to `dtype`, scaling its values if needed.", "type": "API"}, {"name": "tf.compat.v1.image.crop_and_resize", "docs": "Extracts crops from the input image tensor and resizes them.\n\n Extracts crops from the input image tensor and resizes them using bilinear\n sampling or nearest neighbor sampling (possibly with aspect ratio change) to a\n common output size specified by `crop_size`. This is more general than the\n `crop_to_bounding_box` op which extracts a fixed size slice from the input image\n and does not allow resizing or aspect ratio change.\n\n Returns a tensor with `crops` from the input `image` at positions defined at the\n bounding box locations in `boxes`. The cropped boxes are all resized (with\n bilinear or nearest neighbor interpolation) to a fixed\n `size = [crop_height, crop_width]`. The result is a 4-D tensor\n `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned.\n In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical\n results to using `tf.image.resize_bilinear()` or\n `tf.image.resize_nearest_neighbor()`(depends on the `method` argument) with\n `align_corners=True`.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.\n A 4-D tensor of shape `[batch, image_height, image_width, depth]`.\n Both `image_height` and `image_width` need to be positive.\n boxes: A `Tensor` of type `float32`.\n A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor\n specifies the coordinates of a box in the `box_ind[i]` image and is specified\n in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of\n `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the\n `[0, 1]` interval of normalized image height is mapped to\n `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in\n which case the sampled crop is an up-down flipped version of the original\n image. The width dimension is treated similarly. Normalized coordinates\n outside the `[0, 1]` range are allowed, in which case we use\n `extrapolation_value` to extrapolate the input image values.\n box_ind: A `Tensor` of type `int32`.\n A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.\n The value of `box_ind[i]` specifies the image that the `i`-th box refers to.\n crop_size: A `Tensor` of type `int32`.\n A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All\n cropped image patches are resized to this size. The aspect ratio of the image\n content is not preserved. Both `crop_height` and `crop_width` need to be\n positive.\n method: An optional `string` from: `\"bilinear\", \"nearest\"`. Defaults to `\"bilinear\"`.\n A string specifying the sampling method for resizing. It can be either\n `\"bilinear\"` or `\"nearest\"` and default to `\"bilinear\"`. Currently two sampling\n methods are supported: Bilinear and Nearest Neighbor.\n extrapolation_value: An optional `float`. Defaults to `0`.\n Value used for extrapolation, when applicable.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts crops from the input image tensor and resizes them.", "type": "API"}, {"name": "tf.compat.v1.image.crop_to_bounding_box", "docs": "Crops an `image` to a specified bounding box.\n\n This op cuts a rectangular bounding box out of `image`. The top-left corner\n of the bounding box is at `offset_height, offset_width` in `image`, and the\n lower-right corner is at\n `offset_height + target_height, offset_width + target_width`.\n\n Example Usage:\n\n >>> image = tf.constant(np.arange(1, 28, dtype=np.float32), shape=[3, 3, 3])\n >>> image[:,:,0] # print the first channel of the 3-D tensor\n \n >>> cropped_image = tf.image.crop_to_bounding_box(image, 0, 0, 2, 2)\n >>> cropped_image[:,:,0] # print the first channel of the cropped 3-D tensor\n \n\n Args:\n image: 4-D `Tensor` of shape `[batch, height, width, channels]` or 3-D\n `Tensor` of shape `[height, width, channels]`.\n offset_height: Vertical coordinate of the top-left corner of the bounding\n box in `image`.\n offset_width: Horizontal coordinate of the top-left corner of the bounding\n box in `image`.\n target_height: Height of the bounding box.\n target_width: Width of the bounding box.\n\n Returns:\n If `image` was 4-D, a 4-D `Tensor` of shape\n `[batch, target_height, target_width, channels]`.\n If `image` was 3-D, a 3-D `Tensor` of shape\n `[target_height, target_width, channels]`.\n It has the same dtype with `image`.\n\n Raises:\n ValueError: `image` is not a 3-D or 4-D `Tensor`.\n ValueError: `offset_width < 0` or `offset_height < 0`.\n ValueError: `target_width <= 0` or `target_width <= 0`.\n ValueError: `width < offset_width + target_width` or\n `height < offset_height + target_height`.\n ", "desc": "Crops an `image` to a specified bounding box.", "type": "API"}, {"name": "tf.compat.v1.image.decode_and_crop_jpeg", "docs": "Decode and Crop a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n It is equivalent to a combination of decode and crop, but much faster by only\n decoding partial jpeg image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n crop_window: A `Tensor` of type `int32`.\n 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode and Crop a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.image.decode_bmp", "docs": "Decode the first frame of a BMP-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the BMP-encoded image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The BMP-encoded image.\n channels: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the first frame of a BMP-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.image.decode_gif", "docs": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.\n\n GIF images with frame or transparency compression are not supported.\n On Linux and MacOS systems, convert animated GIFs from compressed to\n uncompressed by running:\n\n convert $src.gif -coalesce $dst.gif\n\n This op also supports decoding JPEGs and PNGs, though it is cleaner to use\n `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The GIF-encoded image.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.image.decode_image", "docs": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.\n\n Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the\n appropriate operation to convert the input bytes `string` into a `Tensor`\n of type `dtype`.\n\n Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as\n opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D\n arrays `[height, width, num_channels]`. Make sure to take this into account\n when constructing your graph if you are intermixing GIF files with BMP, JPEG,\n and/or PNG files. Alternately, set the `expand_animations` argument of this\n function to `False`, in which case the op will return 3-dimensional tensors\n and will truncate animated GIF files to the first frame.\n\n NOTE: If the first frame of an animated GIF does not occupy the entire\n canvas (maximum frame width x maximum frame height), then it fills the\n unoccupied areas (in the first frame) with zeros (black). For frames after the\n first frame that does not occupy the entire canvas, it uses the previous\n frame to fill the unoccupied areas.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The encoded image bytes.\n channels: An optional `int`. Defaults to `0`. Number of color channels for\n the decoded image.\n dtype: The desired DType of the returned `Tensor`.\n name: A name for the operation (optional)\n expand_animations: An optional `bool`. Defaults to `True`. Controls the\n shape of the returned op's output. If `True`, the returned op will produce\n a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all GIFs,\n whether animated or not. If, `False`, the returned op will produce a 3-D\n tensor for all file types and will truncate animated GIFs to the first\n frame.\n\n Returns:\n `Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on\n the file type and the value of the `expand_animations` parameter.\n\n Raises:\n ValueError: On incorrect number of channels.\n ", "desc": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.", "type": "API"}, {"name": "tf.compat.v1.image.decode_jpeg", "docs": "Decode a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n This op also supports decoding PNGs and non-animated GIFs since the interface is\n the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.image.decode_png", "docs": "Decode a PNG-encoded image to a uint8 or uint16 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the PNG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n If needed, the PNG-encoded image is transformed to match the requested number\n of color channels.\n\n This op also supports decoding JPEGs and non-animated GIFs since the interface\n is the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The PNG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Decode a PNG-encoded image to a uint8 or uint16 tensor.", "type": "API"}, {"name": "tf.compat.v1.image.draw_bounding_boxes", "docs": "Draw bounding boxes on a batch of images.\n\n Outputs a copy of `images` but draws on top of the pixels zero or more\n bounding boxes specified by the locations in `boxes`. The coordinates of the\n each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`.\n The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width\n and the height of the underlying image.\n\n For example, if an image is 100 x 200 pixels (height x width) and the bounding\n box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of\n the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).\n\n Parts of the bounding box may fall outside the image.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `float32`, `half`.\n 4-D with shape `[batch, height, width, depth]`. A batch of images.\n boxes: A `Tensor` of type `float32`. 3-D with shape `[batch,\n num_bounding_boxes, 4]` containing bounding boxes.\n name: A name for the operation (optional).\n colors: A `Tensor` of type `float32`. 2-D. A list of RGBA colors to cycle\n through for the boxes.\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n\n Usage Example:\n\n >>> # create an empty image\n >>> img = tf.zeros([1, 3, 3, 3])\n >>> # draw a box around the image\n >>> box = np.array([0, 0, 1, 1])\n >>> boxes = box.reshape([1, 1, 4])\n >>> # alternate between red and blue\n >>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])\n >>> tf.image.draw_bounding_boxes(img, boxes, colors)\n \n ", "desc": "Draw bounding boxes on a batch of images.", "type": "API"}, {"name": "tf.compat.v1.image.encode_jpeg", "docs": "JPEG-encode an image.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n\n The attr `format` can be used to override the color format of the encoded\n output. Values can be:\n\n * `''`: Use a default format based on the number of channels in the image.\n * `grayscale`: Output a grayscale JPEG image. The `channels` dimension\n of `image` must be 1.\n * `rgb`: Output an RGB JPEG image. The `channels` dimension\n of `image` must be 3.\n\n If `format` is not specified or is the empty string, a default format is picked\n in function of the number of channels in `image`:\n\n * 1: Output a grayscale image.\n * 3: Output an RGB image.\n\n Args:\n image: A `Tensor` of type `uint8`.\n 3-D with shape `[height, width, channels]`.\n format: An optional `string` from: `\"\", \"grayscale\", \"rgb\"`. Defaults to `\"\"`.\n Per pixel image format.\n quality: An optional `int`. Defaults to `95`.\n Quality of the compression from 0 to 100 (higher is better and slower).\n progressive: An optional `bool`. Defaults to `False`.\n If True, create a JPEG that loads progressively (coarse to fine).\n optimize_size: An optional `bool`. Defaults to `False`.\n If True, spend CPU/RAM to reduce size with no quality change.\n chroma_downsampling: An optional `bool`. Defaults to `True`.\n See http://en.wikipedia.org/wiki/Chroma_subsampling.\n density_unit: An optional `string` from: `\"in\", \"cm\"`. Defaults to `\"in\"`.\n Unit used to specify `x_density` and `y_density`:\n pixels per inch (`'in'`) or centimeter (`'cm'`).\n x_density: An optional `int`. Defaults to `300`.\n Horizontal pixels per density unit.\n y_density: An optional `int`. Defaults to `300`.\n Vertical pixels per density unit.\n xmp_metadata: An optional `string`. Defaults to `\"\"`.\n If not empty, embed this XMP metadata in the image header.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG-encode an image.", "type": "API"}, {"name": "tf.compat.v1.image.encode_png", "docs": "PNG-encode an image.\n\n `image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`\n where `channels` is:\n\n * 1: for grayscale.\n * 2: for grayscale + alpha.\n * 3: for RGB.\n * 4: for RGBA.\n\n The ZLIB compression level, `compression`, can be -1 for the PNG-encoder\n default or a value from 0 to 9. 9 is the highest compression level,\n generating the smallest output, but is slower.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.\n 3-D with shape `[height, width, channels]`.\n compression: An optional `int`. Defaults to `-1`. Compression level.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "PNG-encode an image.", "type": "API"}, {"name": "tf.compat.v1.image.extract_glimpse", "docs": "Extracts a glimpse from the input tensor.\n\n Returns a set of windows called glimpses extracted at location\n `offsets` from the input tensor. If the windows only partially\n overlaps the inputs, the non-overlapping areas will be filled with\n random noise.\n\n The result is a 4-D tensor of shape `[batch_size, glimpse_height,\n glimpse_width, channels]`. The channels and batch dimensions are the\n same as that of the input tensor. The height and width of the output\n windows are specified in the `size` parameter.\n\n The argument `normalized` and `centered` controls how the windows are built:\n\n * If the coordinates are normalized but not centered, 0.0 and 1.0\n correspond to the minimum and maximum of each height and width\n dimension.\n * If the coordinates are both normalized and centered, they range from\n -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper\n left corner, the lower right corner is located at (1.0, 1.0) and the\n center is at (0, 0).\n * If the coordinates are not normalized they are interpreted as\n numbers of pixels.\n\n Usage Example:\n\n >>> x = [[[[0.0],\n ... [1.0],\n ... [2.0]],\n ... [[3.0],\n ... [4.0],\n ... [5.0]],\n ... [[6.0],\n ... [7.0],\n ... [8.0]]]]\n >>> tf.compat.v1.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]],\n ... centered=False, normalized=False)\n \n\n Args:\n input: A `Tensor` of type `float32`. A 4-D float tensor of shape\n `[batch_size, height, width, channels]`.\n size: A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the\n size of the glimpses to extract. The glimpse height must be specified\n first, following by the glimpse width.\n offsets: A `Tensor` of type `float32`. A 2-D integer tensor of shape\n `[batch_size, 2]` containing the y, x locations of the center of each\n window.\n centered: An optional `bool`. Defaults to `True`. indicates if the offset\n coordinates are centered relative to the image, in which case the (0, 0)\n offset is relative to the center of the input images. If false, the (0,0)\n offset corresponds to the upper left corner of the input images.\n normalized: An optional `bool`. Defaults to `True`. indicates if the offset\n coordinates are normalized.\n uniform_noise: An optional `bool`. Defaults to `True`. indicates if the\n noise should be generated using a uniform distribution or a Gaussian\n distribution.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts a glimpse from the input tensor.", "type": "API"}, {"name": "tf.compat.v1.image.extract_image_patches", "docs": "Extract `patches` from `images` and put them in the \"depth\" output dimension.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`.\n 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 4`.\n The size of the sliding window for each dimension of `images`.\n strides: A list of `ints` that has length `>= 4`.\n How far the centers of two consecutive patches are in\n the images. Must be: `[1, stride_rows, stride_cols, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n Must be: `[1, rate_rows, rate_cols, 1]`. This is the\n input stride, specifying how far two consecutive patch samples are in the\n input. Equivalent to extracting patches with\n `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by\n subsampling them spatially by a factor of `rates`. This is equivalent to\n `rate` in dilated (a.k.a. Atrous) convolutions.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Extract `patches` from `images` and put them in the \"depth\" output dimension.", "type": "API"}, {"name": "tf.compat.v1.image.extract_jpeg_shape", "docs": "Extract the shape information of a JPEG-encoded image.\n\n This op only parses the image header, so it is much faster than DecodeJpeg.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n (Optional) The output type of the operation (int32 or int64).\n Defaults to int32.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Extract the shape information of a JPEG-encoded image.", "type": "API"}, {"name": "tf.compat.v1.image.extract_patches", "docs": "Extract `patches` from `images`.\n\n This op collects patches from the input image, as if applying a\n convolution. All extracted patches are stacked in the depth (last) dimension\n of the output.\n\n Specifically, the op extracts patches of shape `sizes` which are `strides`\n apart in the input image. The output is subsampled using the `rates` argument,\n in the same manner as \"atrous\" or \"dilated\" convolutions.\n\n The result is a 4D tensor which is indexed by batch, row, and column.\n `output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]`\n which is taken from the input starting at\n `images[i, x*strides[1], y*strides[2]]`.\n\n Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where\n `depth` is `images.shape[3]`.\n\n The output elements are taken from the input at intervals given by the `rate`\n argument, as in dilated convolutions.\n\n The `padding` argument has no effect on the size of each patch, it determines\n how many patches are extracted. If `VALID`, only patches which are fully\n contained in the input image are included. If `SAME`, all patches whose\n starting point is inside the input are included, and areas outside the input\n default to zero.\n\n Example:\n\n ```\n n = 10\n # images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100\n images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]\n\n # We generate two outputs as follows:\n # 1. 3x3 patches with stride length 5\n # 2. Same as above, but the rate is increased to 2\n tf.image.extract_patches(images=images,\n sizes=[1, 3, 3, 1],\n strides=[1, 5, 5, 1],\n rates=[1, 1, 1, 1],\n padding='VALID')\n\n # Yields:\n [[[[ 1 2 3 11 12 13 21 22 23]\n [ 6 7 8 16 17 18 26 27 28]]\n [[51 52 53 61 62 63 71 72 73]\n [56 57 58 66 67 68 76 77 78]]]]\n ```\n\n If we mark the pixels in the input image which are taken for the output with\n `*`, we see the pattern:\n\n ```\n * * * 4 5 * * * 9 10\n * * * 14 15 * * * 19 20\n * * * 24 25 * * * 29 30\n 31 32 33 34 35 36 37 38 39 40\n 41 42 43 44 45 46 47 48 49 50\n * * * 54 55 * * * 59 60\n * * * 64 65 * * * 69 70\n * * * 74 75 * * * 79 80\n 81 82 83 84 85 86 87 88 89 90\n 91 92 93 94 95 96 97 98 99 100\n ```\n\n ```\n tf.image.extract_patches(images=images,\n sizes=[1, 3, 3, 1],\n strides=[1, 5, 5, 1],\n rates=[1, 2, 2, 1],\n padding='VALID')\n\n # Yields:\n [[[[ 1 3 5 21 23 25 41 43 45]\n [ 6 8 10 26 28 30 46 48 50]]\n\n [[ 51 53 55 71 73 75 91 93 95]\n [ 56 58 60 76 78 80 96 98 100]]]]\n ```\n\n We can again draw the effect, this time using the symbols `*`, `x`, `+` and\n `o` to distinguish the patches:\n\n ```\n * 2 * 4 * x 7 x 9 x\n 11 12 13 14 15 16 17 18 19 20\n * 22 * 24 * x 27 x 29 x\n 31 32 33 34 35 36 37 38 39 40\n * 42 * 44 * x 47 x 49 x\n + 52 + 54 + o 57 o 59 o\n 61 62 63 64 65 66 67 68 69 70\n + 72 + 74 + o 77 o 79 o\n 81 82 83 84 85 86 87 88 89 90\n + 92 + 94 + o 97 o 99 o\n ```\n\n Args:\n images: A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.\n sizes: The size of the extracted patches. Must be\n `[1, size_rows, size_cols, 1]`.\n strides: A 1-D Tensor of length 4. How far the centers of two consecutive\n patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.\n rates: A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`.\n This is the input stride, specifying how far two consecutive patch samples\n are in the input. Equivalent to extracting patches with `patch_sizes_eff =\n patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling\n them spatially by a factor of `rates`. This is equivalent to `rate` in\n dilated (a.k.a. Atrous) convolutions.\n padding: The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A 4-D Tensor of the same type as the input.\n ", "desc": "Extract `patches` from `images`.", "type": "API"}, {"name": "tf.compat.v1.image.flip_left_right", "docs": "Flip an image horizontally (left to right).\n\n Outputs the contents of `image` flipped along the width dimension.\n\n See also `tf.reverse`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.flip_left_right(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n\n Returns:\n A tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Flip an image horizontally (left to right).", "type": "API"}, {"name": "tf.compat.v1.image.flip_up_down", "docs": "Flip an image vertically (upside down).\n\n Outputs the contents of `image` flipped along the height dimension.\n\n See also `reverse()`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.flip_up_down(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n\n Returns:\n A `Tensor` of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Flip an image vertically (upside down).", "type": "API"}, {"name": "tf.compat.v1.image.generate_bounding_box_proposals", "docs": "Generate bounding box proposals from encoded bounding boxes.\n\n Args:\n scores: A 4-D float `Tensor` of shape\n `[num_images, height, width, num_achors]` containing scores of\n the boxes for given anchors, can be unsorted.\n bbox_deltas: A 4-D float `Tensor` of shape\n `[num_images, height, width, 4 x num_anchors]` encoding boxes\n with respect to each anchor. Coordinates are given\n in the form `[dy, dx, dh, dw]`.\n image_info: A 2-D float `Tensor` of shape `[num_images, 5]`\n containing image information Height, Width, Scale.\n anchors: A 2-D float `Tensor` of shape `[num_anchors, 4]`\n describing the anchor boxes.\n Boxes are formatted in the form `[y1, x1, y2, x2]`.\n nms_threshold: A scalar float `Tensor` for non-maximal-suppression\n threshold. Defaults to 0.7.\n pre_nms_topn: A scalar int `Tensor` for the number of\n top scoring boxes to be used as input. Defaults to 6000.\n min_size: A scalar float `Tensor`. Any box that has a smaller size\n than min_size will be discarded. Defaults to 16.\n post_nms_topn: An integer. Maximum number of rois in the output.\n name: A name for this operation (optional).\n\n Returns:\n rois: Region of interest boxes sorted by their scores.\n roi_probabilities: scores of the ROI boxes in the ROIs' `Tensor`.\n ", "desc": "Generate bounding box proposals from encoded bounding boxes.", "type": "API"}, {"name": "tf.compat.v1.image.grayscale_to_rgb", "docs": "Converts one or more images from Grayscale to RGB.\n\n Outputs a tensor of the same `DType` and rank as `images`. The size of the\n last dimension of the output is 3, containing the RGB value of the pixels.\n The input images' last dimension must be size 1.\n\n >>> original = tf.constant([[[1.0], [2.0], [3.0]]])\n >>> converted = tf.image.grayscale_to_rgb(original)\n >>> print(converted.numpy())\n [[[1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 3.]]]\n\n Args:\n images: The Grayscale tensor to convert. The last dimension must be size 1.\n name: A name for the operation (optional).\n\n Returns:\n The converted grayscale image(s).\n ", "desc": "Converts one or more images from Grayscale to RGB.", "type": "API"}, {"name": "tf.compat.v1.image.hsv_to_rgb", "docs": "Convert one or more images from HSV to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n See `rgb_to_hsv` for a description of the HSV encoding.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. HSV data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Convert one or more images from HSV to RGB.", "type": "API"}, {"name": "tf.compat.v1.image.image_gradients", "docs": "Returns image gradients (dy, dx) for each color channel.\n\n Both output tensors have the same shape as the input: [batch_size, h, w,\n d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in\n location (x, y). That means that dy will always have zeros in the last row,\n and dx will always have zeros in the last column.\n\n Usage Example:\n ```python\n BATCH_SIZE = 1\n IMAGE_HEIGHT = 5\n IMAGE_WIDTH = 5\n CHANNELS = 1\n image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS,\n delta=1, dtype=tf.float32),\n shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))\n dy, dx = tf.image.image_gradients(image)\n print(image[0, :,:,0])\n tf.Tensor(\n [[ 0. 1. 2. 3. 4.]\n [ 5. 6. 7. 8. 9.]\n [10. 11. 12. 13. 14.]\n [15. 16. 17. 18. 19.]\n [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32)\n print(dy[0, :,:,0])\n tf.Tensor(\n [[5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)\n print(dx[0, :,:,0])\n tf.Tensor(\n [[1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32)\n ```\n\n Args:\n image: Tensor with shape [batch_size, h, w, d].\n\n Returns:\n Pair of tensors (dy, dx) holding the vertical and horizontal image\n gradients (1-step finite difference).\n\n Raises:\n ValueError: If `image` is not a 4D tensor.\n ", "desc": "Returns image gradients (dy, dx) for each color channel.", "type": "API"}, {"name": "tf.compat.v1.image.is_jpeg", "docs": "Convenience function to check if the 'contents' encodes a JPEG image.\n\n Args:\n contents: 0-D `string`. The encoded image bytes.\n name: A name for the operation (optional)\n\n Returns:\n A scalar boolean tensor indicating if 'contents' may be a JPEG image.\n is_jpeg is susceptible to false positives.\n ", "desc": "Convenience function to check if the 'contents' encodes a JPEG image.", "type": "API"}, {"name": "tf.compat.v1.image.non_max_suppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices = tf.image.non_max_suppression(\n boxes, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n boxes: A 2-D float `Tensor` of shape `[num_boxes, 4]`.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n iou_threshold: A 0-D float tensor representing the threshold for deciding\n whether boxes overlap too much with respect to IOU.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the boxes tensor, where `M <= max_output_size`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.compat.v1.image.non_max_suppression_overlaps", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high overlap with previously selected boxes.\n N-by-n overlap values are supplied as square matrix.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices = tf.image.non_max_suppression_overlaps(\n overlaps, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n overlaps: A 2-D float `Tensor` of shape `[num_boxes, num_boxes]`\n representing the n-by-n box overlap values.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n overlap_threshold: A 0-D float tensor representing the threshold for\n deciding whether boxes overlap too much with respect to the provided\n overlap values.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the overlaps tensor, where `M <= max_output_size`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.compat.v1.image.non_max_suppression_padded", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Performs algorithmically equivalent operation to tf.image.non_max_suppression,\n with the addition of an optional parameter which zero-pads the output to\n be of size `max_output_size`.\n The output of this operation is a tuple containing the set of integers\n indexing into the input collection of bounding boxes representing the selected\n boxes and the number of valid indices in the index set. The bounding box\n coordinates corresponding to the selected indices can then be obtained using\n the `tf.slice` and `tf.gather` operations. For example:\n ```python\n selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(\n boxes, scores, max_output_size, iou_threshold,\n score_threshold, pad_to_max_output_size=True)\n selected_indices = tf.slice(\n selected_indices_padded, tf.constant([0]), num_valid)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n boxes: a tensor of rank 2 or higher with a shape of [..., num_boxes, 4].\n Dimensions except the last two are batch dimensions.\n scores: a tensor of rank 1 or higher with a shape of [..., num_boxes].\n max_output_size: a scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non max suppression. Note that setting this\n value to a large number may result in OOM error depending on the system\n workload.\n iou_threshold: a float representing the threshold for deciding whether boxes\n overlap too much with respect to IoU (intersection over union).\n score_threshold: a float representing the threshold for box scores. Boxes\n with a score that is not larger than this threshold will be suppressed.\n pad_to_max_output_size: whether to pad the output idx to max_output_size.\n Must be set to True when the input is a batch of images.\n name: name of operation.\n sorted_input: a boolean indicating whether the input boxes and scores\n are sorted in descending order by the score.\n canonicalized_coordinates: if box coordinates are given as\n `[y_min, x_min, y_max, x_max]`, setting to True eliminate redundant\n computation to canonicalize box coordinates.\n tile_size: an integer representing the number of boxes in a tile, i.e.,\n the maximum number of boxes per image that can be used to suppress other\n boxes in parallel; larger tile_size means larger parallelism and\n potentially more redundant work.\n Returns:\n idx: a tensor with a shape of [..., num_boxes] representing the\n indices selected by non-max suppression. The leading dimensions\n are the batch dimensions of the input boxes. All numbers are within\n [0, num_boxes). For each image (i.e., idx[i]), only the first num_valid[i]\n indices (i.e., idx[i][:num_valid[i]]) are valid.\n num_valid: a tensor of rank 0 or higher with a shape of [...]\n representing the number of valid indices in idx. Its dimensions are the\n batch dimensions of the input boxes.\n Raises:\n ValueError: When set pad_to_max_output_size to False for batched input.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.compat.v1.image.non_max_suppression_with_scores", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices, selected_scores = tf.image.non_max_suppression_padded(\n boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,\n soft_nms_sigma=0.5)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n This function generalizes the `tf.image.non_max_suppression` op by also\n supporting a Soft-NMS (with Gaussian weighting) mode (c.f.\n Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score\n of other overlapping boxes instead of directly causing them to be pruned.\n Consequently, in contrast to `tf.image.non_max_suppression`,\n `tf.image.non_max_suppression_with_scores` returns the new scores of each\n input box in the second output, `selected_scores`.\n\n To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be\n larger than 0. When `soft_nms_sigma` equals 0, the behavior of\n `tf.image.non_max_suppression_with_scores` is identical to that of\n `tf.image.non_max_suppression` (except for the extra output) both in function\n and in running time.\n\n Note that when `soft_nms_sigma` > 0, Soft-NMS is performed and `iou_threshold`\n is ignored. `iou_threshold` is only used for standard NMS.\n\n Args:\n boxes: A 2-D float `Tensor` of shape `[num_boxes, 4]`.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n iou_threshold: A 0-D float tensor representing the threshold for deciding\n whether boxes overlap too much with respect to IOU.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n soft_nms_sigma: A 0-D float tensor representing the sigma parameter for Soft\n NMS; see Bodla et al (c.f. https://arxiv.org/abs/1704.04503). When\n `soft_nms_sigma=0.0` (which is default), we fall back to standard (hard)\n NMS.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the boxes tensor, where `M <= max_output_size`.\n selected_scores: A 1-D float tensor of shape `[M]` representing the\n corresponding scores for each selected box, where `M <= max_output_size`.\n Scores only differ from corresponding input scores when using Soft NMS\n (i.e. when `soft_nms_sigma>0`)\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.compat.v1.image.pad_to_bounding_box", "docs": "Pad `image` with zeros to the specified `height` and `width`.\n\n Adds `offset_height` rows of zeros on top, `offset_width` columns of\n zeros on the left, and then pads the image on the bottom and right\n with zeros until it has dimensions `target_height`, `target_width`.\n\n This op does nothing if `offset_*` is zero and the image already has size\n `target_height` by `target_width`.\n\n Usage Example:\n\n >>> x = [[[1., 2., 3.],\n ... [4., 5., 6.]],\n ... [[7., 8., 9.],\n ... [10., 11., 12.]]]\n >>> padded_image = tf.image.pad_to_bounding_box(x, 1, 1, 4, 4)\n >>> padded_image\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n offset_height: Number of rows of zeros to add on top.\n offset_width: Number of columns of zeros to add on the left.\n target_height: Height of output image.\n target_width: Width of output image.\n\n Returns:\n If `image` was 4-D, a 4-D float Tensor of shape\n `[batch, target_height, target_width, channels]`\n If `image` was 3-D, a 3-D float Tensor of shape\n `[target_height, target_width, channels]`\n\n Raises:\n ValueError: If the shape of `image` is incompatible with the `offset_*` or\n `target_*` arguments, or either `offset_height` or `offset_width` is\n negative.\n ", "desc": "Pad `image` with zeros to the specified `height` and `width`.", "type": "API"}, {"name": "tf.compat.v1.image.per_image_standardization", "docs": "Linearly scales each image in `image` to have mean 0 and variance 1.\n\n For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`,\n where\n\n - `mean` is the average of all values in `x`\n - `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to\n protect against division by 0 when handling uniform images\n - `N` is the number of elements in `x`\n - `stddev` is the standard deviation of all values in `x`\n\n Example Usage:\n\n >>> image = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> image # 3-D tensor\n \n >>> new_image = tf.image.per_image_standardization(image)\n >>> new_image # 3-D tensor with mean ~= 0 and variance ~= 1\n \n\n Args:\n image: An n-D `Tensor` with at least 3 dimensions, the last 3 of which are\n the dimensions of each image.\n\n Returns:\n A `Tensor` with the same shape as `image` and its dtype is `float32`.\n\n Raises:\n ValueError: The shape of `image` has fewer than 3 dimensions.\n ", "desc": "Linearly scales each image in `image` to have mean 0 and variance 1.", "type": "API"}, {"name": "tf.compat.v1.image.psnr", "docs": "Returns the Peak Signal-to-Noise Ratio between a and b.\n\n This is intended to be used on signals (or images). Produces a PSNR value for\n each image in batch.\n\n The last three dimensions of input are expected to be [height, width, depth].\n\n Example:\n\n ```python\n # Read images from file.\n im1 = tf.decode_png('path/to/im1.png')\n im2 = tf.decode_png('path/to/im2.png')\n # Compute PSNR over tf.uint8 Tensors.\n psnr1 = tf.image.psnr(im1, im2, max_val=255)\n\n # Compute PSNR over tf.float32 Tensors.\n im1 = tf.image.convert_image_dtype(im1, tf.float32)\n im2 = tf.image.convert_image_dtype(im2, tf.float32)\n psnr2 = tf.image.psnr(im1, im2, max_val=1.0)\n # psnr1 and psnr2 both have type tf.float32 and are almost equal.\n ```\n\n Args:\n a: First set of images.\n b: Second set of images.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n name: Namespace to embed the computation in.\n\n Returns:\n The scalar PSNR between a and b. The returned tensor has type `tf.float32`\n and shape [batch_size, 1].\n ", "desc": "Returns the Peak Signal-to-Noise Ratio between a and b.", "type": "API"}, {"name": "tf.compat.v1.image.random_brightness", "docs": "Adjust the brightness of images by a random factor.\n\n Equivalent to `adjust_brightness()` using a `delta` randomly picked in the\n interval `[-max_delta, max_delta)`.\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_brightness`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: An image or images to adjust.\n max_delta: float, must be non-negative.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_brightness(x, 0.2)\n \n\n Returns:\n The brightness-adjusted image(s).\n\n Raises:\n ValueError: if `max_delta` is negative.\n ", "desc": "Adjust the brightness of images by a random factor.", "type": "API"}, {"name": "tf.compat.v1.image.random_contrast", "docs": "Adjust the contrast of an image or images by a random factor.\n\n Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly\n picked in the interval `[lower, upper)`.\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_contrast`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: An image tensor with 3 or more dimensions.\n lower: float. Lower bound for the random contrast factor.\n upper: float. Upper bound for the random contrast factor.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_contrast(x, 0.2, 0.5)\n \n\n Returns:\n The contrast-adjusted image(s).\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the contrast of an image or images by a random factor.", "type": "API"}, {"name": "tf.compat.v1.image.random_crop", "docs": "Randomly crops a tensor to a given size.\n\n Slices a shape `size` portion out of `value` at a uniformly chosen offset.\n Requires `value.shape >= size`.\n\n If a dimension should not be cropped, pass the full size of that dimension.\n For example, RGB images can be cropped with\n `size = [crop_height, crop_width, 3]`.\n\n Example usage:\n\n >>> image = [[1, 2, 3], [4, 5, 6]]\n >>> result = tf.image.random_crop(value=image, size=(1, 3))\n >>> result.shape.as_list()\n [1, 3]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_crop`. Unlike using the `seed` param with\n `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same\n results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n value: Input tensor to crop.\n size: 1-D tensor with size the rank of `value`.\n seed: Python integer. Used to create a random seed. See\n `tf.random.set_seed`\n for behavior.\n name: A name for this operation (optional).\n\n Returns:\n A cropped tensor of the same rank as `value` and shape `size`.\n ", "desc": "Randomly crops a tensor to a given size.", "type": "API"}, {"name": "tf.compat.v1.image.random_flip_left_right", "docs": "Randomly flip an image horizontally (left to right).\n\n With a 1 in 2 chance, outputs the contents of `image` flipped along the\n second dimension, which is `width`. Otherwise output the image as-is.\n When passing a batch of images, each image will be randomly flipped\n independent of other images.\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> tf.image.random_flip_left_right(image, 5).numpy().tolist()\n [[[2], [1]], [[4], [3]]]\n\n Randomly flip multiple images.\n\n >>> images = np.array(\n ... [\n ... [[[1], [2]], [[3], [4]]],\n ... [[[5], [6]], [[7], [8]]]\n ... ])\n >>> tf.image.random_flip_left_right(images, 6).numpy().tolist()\n [[[[2], [1]], [[4], [3]]], [[[5], [6]], [[7], [8]]]]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_flip_left_right`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Returns:\n A tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Randomly flip an image horizontally (left to right).", "type": "API"}, {"name": "tf.compat.v1.image.random_flip_up_down", "docs": "Randomly flips an image vertically (upside down).\n\n With a 1 in 2 chance, outputs the contents of `image` flipped along the first\n dimension, which is `height`. Otherwise, output the image as-is.\n When passing a batch of images, each image will be randomly flipped\n independent of other images.\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> tf.image.random_flip_up_down(image, 3).numpy().tolist()\n [[[3], [4]], [[1], [2]]]\n\n Randomly flip multiple images.\n\n >>> images = np.array(\n ... [\n ... [[[1], [2]], [[3], [4]]],\n ... [[[5], [6]], [[7], [8]]]\n ... ])\n >>> tf.image.random_flip_up_down(images, 4).numpy().tolist()\n [[[[3], [4]], [[1], [2]]], [[[5], [6]], [[7], [8]]]]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_flip_up_down`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Returns:\n A tensor of the same type and shape as `image`.\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Randomly flips an image vertically (upside down).", "type": "API"}, {"name": "tf.compat.v1.image.random_hue", "docs": "Adjust the hue of RGB images by a random factor.\n\n Equivalent to `adjust_hue()` but uses a `delta` randomly\n picked in the interval `[-max_delta, max_delta)`.\n\n `max_delta` must be in the interval `[0, 0.5]`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_hue(x, 0.2)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_hue`. Unlike using the `seed` param with\n `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same\n results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n max_delta: float. The maximum value for the random delta.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `max_delta` is invalid.\n ", "desc": "Adjust the hue of RGB images by a random factor.", "type": "API"}, {"name": "tf.compat.v1.image.random_jpeg_quality", "docs": "Randomly changes jpeg encoding quality for inducing jpeg noise.\n\n `min_jpeg_quality` must be in the interval `[0, 100]` and less than\n `max_jpeg_quality`.\n `max_jpeg_quality` must be in the interval `[0, 100]`.\n\n Usage Example:\n\n >>> x = tf.constant([[[1, 2, 3],\n ... [4, 5, 6]],\n ... [[7, 8, 9],\n ... [10, 11, 12]]], dtype=tf.uint8)\n >>> tf.image.random_jpeg_quality(x, 75, 95)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_jpeg_quality`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 3D image. Size of the last dimension must be 1 or 3.\n min_jpeg_quality: Minimum jpeg encoding quality to use.\n max_jpeg_quality: Maximum jpeg encoding quality to use.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `min_jpeg_quality` or `max_jpeg_quality` is invalid.\n ", "desc": "Randomly changes jpeg encoding quality for inducing jpeg noise.", "type": "API"}, {"name": "tf.compat.v1.image.random_saturation", "docs": "Adjust the saturation of RGB images by a random factor.\n\n Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly\n picked in the interval `[lower, upper)`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_saturation(x, 5, 10)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_saturation`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n lower: float. Lower bound for the random saturation factor.\n upper: float. Upper bound for the random saturation factor.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the saturation of RGB images by a random factor.", "type": "API"}, {"name": "tf.compat.v1.image.resize", "docs": "Resize `images` to `size` using the specified `method`.\n\n Resized images will be distorted if their original aspect ratio is not\n the same as `size`. To avoid distortions see\n `tf.image.resize_with_pad` or `tf.image.resize_with_crop_or_pad`.\n\n The `method` can be one of:\n\n * `tf.image.ResizeMethod.BILINEAR`: [Bilinear interpolation.](\n https://en.wikipedia.org/wiki/Bilinear_interpolation)\n * `tf.image.ResizeMethod.NEAREST_NEIGHBOR`: [\n Nearest neighbor interpolation.](\n https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)\n * `tf.image.ResizeMethod.BICUBIC`: [Bicubic interpolation.](\n https://en.wikipedia.org/wiki/Bicubic_interpolation)\n * `tf.image.ResizeMethod.AREA`: Area interpolation.\n\n The return value has the same type as `images` if `method` is\n `tf.image.ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type\n as `images` if the size of `images` can be statically determined to be the\n same as `size`, because `images` is returned in this case. Otherwise, the\n return value has type `float32`.\n\n Args:\n images: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new\n size for the images.\n method: ResizeMethod. Defaults to `tf.image.ResizeMethod.BILINEAR`.\n align_corners: bool. If True, the centers of the 4 corner pixels of the\n input and output tensors are aligned, preserving the values at the corner\n pixels. Defaults to `False`.\n preserve_aspect_ratio: Whether to preserve the aspect ratio. If this is set,\n then `images` will be resized to a size that fits in `size` while\n preserving the aspect ratio of the original image. Scales up the image if\n `size` is bigger than the current size of the `image`. Defaults to False.\n name: A name for this operation (optional).\n\n Raises:\n ValueError: if the shape of `images` is incompatible with the\n shape arguments to this function\n ValueError: if `size` has invalid shape or type.\n ValueError: if an unsupported resize method is specified.\n\n Returns:\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Resize `images` to `size` using the specified `method`.", "type": "API"}, {"name": "tf.compat.v1.image.resize_area", "docs": "Resize `images` to `size` using area interpolation.\n\n Input images can be of different types but output images are always float.\n\n The range of pixel values for the output image might be slightly different\n from the range for the input image because of limited numerical precision.\n To guarantee an output range, for example `[0.0, 1.0]`, apply\n `tf.clip_by_value` to the output.\n\n Each output pixel is computed by first transforming the pixel's footprint into\n the input tensor and then averaging the pixels that intersect the footprint. An\n input pixel's contribution to the average is weighted by the fraction of its\n area that intersects the footprint. This is the same as OpenCV's INTER_AREA.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Resize `images` to `size` using area interpolation.", "type": "API"}, {"name": "tf.compat.v1.image.resize_bicubic", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.image.resize_bilinear", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.image.resize_image_with_crop_or_pad", "docs": "Crops and/or pads an image to a target width and height.\n\n Resizes an image to a target width and height by either centrally\n cropping the image or padding it evenly with zeros.\n\n If `width` or `height` is greater than the specified `target_width` or\n `target_height` respectively, this op centrally crops along that dimension.\n\n For example:\n\n >>> image = np.arange(75).reshape(5, 5, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 0, 3, 6, 9, 12],\n [15, 18, 21, 24, 27],\n [30, 33, 36, 39, 42],\n [45, 48, 51, 54, 57],\n [60, 63, 66, 69, 72]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 3, 3) # crop\n >>> # print first channel for demo purposes; centrally cropped output\n >>> image[:,:,0]\n \n\n If `width` or `height` is smaller than the specified `target_width` or\n `target_height` respectively, this op centrally pads with 0 along that\n dimension.\n\n For example:\n\n >>> image = np.arange(1, 28).reshape(3, 3, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 1, 4, 7],\n [10, 13, 16],\n [19, 22, 25]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 5, 5) # pad\n >>> # print first channel for demo purposes; we should see 0 paddings\n >>> image[:,:,0]\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n target_height: Target height.\n target_width: Target width.\n\n Raises:\n ValueError: if `target_height` or `target_width` are zero or negative.\n\n Returns:\n Cropped and/or padded image.\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Crops and/or pads an image to a target width and height.", "type": "API"}, {"name": "tf.compat.v1.image.resize_image_with_pad", "docs": "Resizes and pads an image to a target width and height.\n\n Resizes an image to a target width and height by keeping\n the aspect ratio the same without distortion. If the target\n dimensions don't match the image dimensions, the image\n is resized and then padded with zeroes to match requested\n dimensions.\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n target_height: Target height.\n target_width: Target width.\n method: Method to use for resizing image. See `resize_images()`\n align_corners: bool. If True, the centers of the 4 corner pixels of the\n input and output tensors are aligned, preserving the values at the corner\n pixels. Defaults to `False`.\n\n Raises:\n ValueError: if `target_height` or `target_width` are zero or negative.\n\n Returns:\n Resized and padded image.\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Resizes and pads an image to a target width and height.", "type": "API"}, {"name": "tf.compat.v1.image.resize_images", "docs": "Resize `images` to `size` using the specified `method`.\n\n Resized images will be distorted if their original aspect ratio is not\n the same as `size`. To avoid distortions see\n `tf.image.resize_with_pad` or `tf.image.resize_with_crop_or_pad`.\n\n The `method` can be one of:\n\n * `tf.image.ResizeMethod.BILINEAR`: [Bilinear interpolation.](\n https://en.wikipedia.org/wiki/Bilinear_interpolation)\n * `tf.image.ResizeMethod.NEAREST_NEIGHBOR`: [\n Nearest neighbor interpolation.](\n https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)\n * `tf.image.ResizeMethod.BICUBIC`: [Bicubic interpolation.](\n https://en.wikipedia.org/wiki/Bicubic_interpolation)\n * `tf.image.ResizeMethod.AREA`: Area interpolation.\n\n The return value has the same type as `images` if `method` is\n `tf.image.ResizeMethod.NEAREST_NEIGHBOR`. It will also have the same type\n as `images` if the size of `images` can be statically determined to be the\n same as `size`, because `images` is returned in this case. Otherwise, the\n return value has type `float32`.\n\n Args:\n images: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new\n size for the images.\n method: ResizeMethod. Defaults to `tf.image.ResizeMethod.BILINEAR`.\n align_corners: bool. If True, the centers of the 4 corner pixels of the\n input and output tensors are aligned, preserving the values at the corner\n pixels. Defaults to `False`.\n preserve_aspect_ratio: Whether to preserve the aspect ratio. If this is set,\n then `images` will be resized to a size that fits in `size` while\n preserving the aspect ratio of the original image. Scales up the image if\n `size` is bigger than the current size of the `image`. Defaults to False.\n name: A name for this operation (optional).\n\n Raises:\n ValueError: if the shape of `images` is incompatible with the\n shape arguments to this function\n ValueError: if `size` has invalid shape or type.\n ValueError: if an unsupported resize method is specified.\n\n Returns:\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Resize `images` to `size` using the specified `method`.", "type": "API"}, {"name": "tf.compat.v1.image.resize_nearest_neighbor", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.image.resize_with_crop_or_pad", "docs": "Crops and/or pads an image to a target width and height.\n\n Resizes an image to a target width and height by either centrally\n cropping the image or padding it evenly with zeros.\n\n If `width` or `height` is greater than the specified `target_width` or\n `target_height` respectively, this op centrally crops along that dimension.\n\n For example:\n\n >>> image = np.arange(75).reshape(5, 5, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 0, 3, 6, 9, 12],\n [15, 18, 21, 24, 27],\n [30, 33, 36, 39, 42],\n [45, 48, 51, 54, 57],\n [60, 63, 66, 69, 72]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 3, 3) # crop\n >>> # print first channel for demo purposes; centrally cropped output\n >>> image[:,:,0]\n \n\n If `width` or `height` is smaller than the specified `target_width` or\n `target_height` respectively, this op centrally pads with 0 along that\n dimension.\n\n For example:\n\n >>> image = np.arange(1, 28).reshape(3, 3, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 1, 4, 7],\n [10, 13, 16],\n [19, 22, 25]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 5, 5) # pad\n >>> # print first channel for demo purposes; we should see 0 paddings\n >>> image[:,:,0]\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n target_height: Target height.\n target_width: Target width.\n\n Raises:\n ValueError: if `target_height` or `target_width` are zero or negative.\n\n Returns:\n Cropped and/or padded image.\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Crops and/or pads an image to a target width and height.", "type": "API"}, {"name": "tf.compat.v1.image.ResizeMethod", "docs": "See `v1.image.resize` for details.", "desc": "See `v1.image.resize` for details.", "type": "API"}, {"name": "tf.compat.v1.image.rgb_to_grayscale", "docs": "Converts one or more images from RGB to Grayscale.\n\n Outputs a tensor of the same `DType` and rank as `images`. The size of the\n last dimension of the output is 1, containing the Grayscale value of the\n pixels.\n\n >>> original = tf.constant([[[1.0, 2.0, 3.0]]])\n >>> converted = tf.image.rgb_to_grayscale(original)\n >>> print(converted.numpy())\n [[[1.81...]]]\n\n Args:\n images: The RGB tensor to convert. The last dimension must have size 3 and\n should contain RGB values.\n name: A name for the operation (optional).\n\n Returns:\n The converted grayscale image(s).\n ", "desc": "Converts one or more images from RGB to Grayscale.", "type": "API"}, {"name": "tf.compat.v1.image.rgb_to_hsv", "docs": "Converts one or more images from RGB to HSV.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the HSV\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n `output[..., 0]` contains hue, `output[..., 1]` contains saturation, and\n `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0\n corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.\n\n Usage Example:\n\n >>> blue_image = tf.stack([\n ... tf.zeros([5,5]),\n ... tf.zeros([5,5]),\n ... tf.ones([5,5])],\n ... axis=-1)\n >>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image)\n >>> blue_hsv_image[0,0].numpy()\n array([0.6666667, 1. , 1. ], dtype=float32)\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. RGB data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Converts one or more images from RGB to HSV.", "type": "API"}, {"name": "tf.compat.v1.image.rgb_to_yiq", "docs": "Converts one or more images from RGB to YIQ.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the YIQ\n value of the pixels.\n The output is only well defined if the value in images are in [0,1].\n\n Usage Example:\n\n >>> x = tf.constant([[[1.0, 2.0, 3.0]]])\n >>> tf.image.rgb_to_yiq(x)\n \n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from RGB to YIQ.", "type": "API"}, {"name": "tf.compat.v1.image.rgb_to_yuv", "docs": "Converts one or more images from RGB to YUV.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the YUV\n value of the pixels.\n The output is only well defined if the value in images are in [0, 1].\n There are two ways of representing an image: [0, 255] pixel values range or\n [0, 1] (as float) pixel values range. Users need to convert the input image\n into a float [0, 1] range.\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from RGB to YUV.", "type": "API"}, {"name": "tf.compat.v1.image.rot90", "docs": "Rotate image(s) counter-clockwise by 90 degrees.\n\n\n For example:\n\n >>> a=tf.constant([[[1],[2]],\n ... [[3],[4]]])\n >>> # rotating `a` counter clockwise by 90 degrees\n >>> a_rot=tf.image.rot90(a)\n >>> print(a_rot[...,0].numpy())\n [[2 4]\n [1 3]]\n >>> # rotating `a` counter clockwise by 270 degrees\n >>> a_rot=tf.image.rot90(a, k=3)\n >>> print(a_rot[...,0].numpy())\n [[3 1]\n [4 2]]\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n k: A scalar integer tensor. The number of times the image(s) are\n rotated by 90 degrees.\n name: A name for this operation (optional).\n\n Returns:\n A rotated tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Rotate image(s) counter-clockwise by 90 degrees.", "type": "API"}, {"name": "tf.compat.v1.image.sample_distorted_bounding_box", "docs": "Generate a single randomly distorted bounding box for an image. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.\n\nBounding box annotations are often supplied in addition to ground-truth labels\nin image recognition or object localization tasks. A common technique for\ntraining such a system is to randomly distort an image while preserving\nits content, i.e. *data augmentation*. This Op outputs a randomly distorted\nlocalization of an object, i.e. bounding box, given an `image_size`,\n`bounding_boxes` and a series of constraints.\n\nThe output of this Op is a single bounding box that may be used to crop the\noriginal image. The output is returned as 3 tensors: `begin`, `size` and\n`bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\nimage. The latter may be supplied to `tf.image.draw_bounding_boxes` to\nvisualize what the bounding box looks like.\n\nBounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`.\nThe\nbounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\nheight of the underlying image.\n\nFor example,\n\n```python\n # Generate a single distorted bounding box.\n begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(\n tf.shape(image),\n bounding_boxes=bounding_boxes,\n min_object_covered=0.1)\n\n # Draw the bounding box in an image summary.\n image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),\n bbox_for_draw)\n tf.compat.v1.summary.image('images_with_box', image_with_box)\n\n # Employ the bounding box to distort the image.\n distorted_image = tf.slice(image, begin, size)\n```\n\nNote that if no bounding box information is available, setting\n`use_image_if_no_bounding_boxes = True` will assume there is a single implicit\nbounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\nfalse and no bounding boxes are supplied, an error is raised.\n\nArgs:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`,\n `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]`\n describing the N bounding boxes associated with the image.\n seed: An optional `int`. Defaults to `0`. If either `seed` or `seed2` are\n set to non-zero, the random number generator is seeded by the given\n `seed`. Otherwise, it is seeded by a random seed.\n seed2: An optional `int`. Defaults to `0`. A second seed to avoid seed\n collision.\n min_object_covered: A Tensor of type `float32`. Defaults to `0.1`. The\n cropped area of the image must contain at least this fraction of any\n bounding box supplied. The value of this parameter should be non-negative.\n In the case of 0, the cropped area does not need to overlap any of the\n bounding boxes supplied.\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75,\n 1.33]`. The cropped area of the image must have an aspect ratio = width /\n height within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`. The\n cropped area of the image must contain a fraction of the supplied image\n within this range.\n max_attempts: An optional `int`. Defaults to `100`. Number of attempts at\n generating a cropped region of the image of the specified constraints.\n After `max_attempts` failures, return the entire image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied. If true, assume an\n implicit bounding box covering the whole input. If false, raise an error.\n name: A name for the operation (optional).\n\nReturns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[offset_height, offset_width, 0]`. Provide as input to\n `tf.slice`.\n size: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[target_height, target_width, -1]`. Provide as input to\n `tf.slice`.\n bboxes: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing\n the distorted bounding box.\n Provide as input to `tf.image.draw_bounding_boxes`.\n\nRaises:\n ValueError: If no seed is specified and op determinism is enabled.", "desc": "Generate a single randomly distorted bounding box for an image. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.image.sobel_edges", "docs": "Returns a tensor holding Sobel edge maps.\n\n Example usage:\n\n For general usage, `image` would be loaded from a file as below:\n\n ```python\n image_bytes = tf.io.read_file(path_to_image_file)\n image = tf.image.decode_image(image_bytes)\n image = tf.cast(image, tf.float32)\n image = tf.expand_dims(image, 0)\n ```\n But for demo purposes, we are using randomly generated values for `image`:\n\n >>> image = tf.random.uniform(\n ... maxval=255, shape=[1, 28, 28, 3], dtype=tf.float32)\n >>> sobel = tf.image.sobel_edges(image)\n >>> sobel_y = np.asarray(sobel[0, :, :, :, 0]) # sobel in y-direction\n >>> sobel_x = np.asarray(sobel[0, :, :, :, 1]) # sobel in x-direction\n\n For displaying the sobel results, PIL's [Image Module](\n https://pillow.readthedocs.io/en/stable/reference/Image.html) can be used:\n\n ```python\n # Display edge maps for the first channel (at index 0)\n Image.fromarray(sobel_y[..., 0] / 4 + 0.5).show()\n Image.fromarray(sobel_x[..., 0] / 4 + 0.5).show()\n ```\n\n Args:\n image: Image tensor with shape [batch_size, h, w, d] and type float32 or\n float64. The image(s) must be 2x2 or larger.\n\n Returns:\n Tensor holding edge maps for each channel. Returns a tensor with shape\n [batch_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]],\n [dy[1], dx[1]], ..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter.\n ", "desc": "Returns a tensor holding Sobel edge maps.", "type": "API"}, {"name": "tf.compat.v1.image.ssim", "docs": "Computes SSIM index between img1 and img2.\n\n This function is based on the standard SSIM implementation from:\n Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image\n quality assessment: from error visibility to structural similarity. IEEE\n transactions on image processing.\n\n Note: The true SSIM is only defined on grayscale. This function does not\n perform any colorspace transform. (If the input is already YUV, then it will\n compute YUV SSIM average.)\n\n Details:\n - 11x11 Gaussian filter of width 1.5 is used.\n - k1 = 0.01, k2 = 0.03 as in the original paper.\n\n The image sizes must be at least 11x11 because of the filter size.\n\n Example:\n\n ```python\n # Read images (of size 255 x 255) from file.\n im1 = tf.image.decode_image(tf.io.read_file('path/to/im1.png'))\n im2 = tf.image.decode_image(tf.io.read_file('path/to/im2.png'))\n tf.shape(im1) # `img1.png` has 3 channels; shape is `(255, 255, 3)`\n tf.shape(im2) # `img2.png` has 3 channels; shape is `(255, 255, 3)`\n # Add an outer batch for each image.\n im1 = tf.expand_dims(im1, axis=0)\n im2 = tf.expand_dims(im2, axis=0)\n # Compute SSIM over tf.uint8 Tensors.\n ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,\n filter_sigma=1.5, k1=0.01, k2=0.03)\n\n # Compute SSIM over tf.float32 Tensors.\n im1 = tf.image.convert_image_dtype(im1, tf.float32)\n im2 = tf.image.convert_image_dtype(im2, tf.float32)\n ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,\n filter_sigma=1.5, k1=0.01, k2=0.03)\n # ssim1 and ssim2 both have type tf.float32 and are almost equal.\n ```\n\n Args:\n img1: First image batch. 4-D Tensor of shape `[batch, height, width,\n channels]` with only Positive Pixel Values.\n img2: Second image batch. 4-D Tensor of shape `[batch, height, width,\n channels]` with only Positive Pixel Values.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n filter_size: Default value 11 (size of gaussian filter).\n filter_sigma: Default value 1.5 (width of gaussian filter).\n k1: Default value 0.01\n k2: Default value 0.03 (SSIM is less sensitivity to K2 for lower values, so\n it would be better if we took the values in the range of 0 < K2 < 0.4).\n\n Returns:\n A tensor containing an SSIM value for each image in batch. Returned SSIM\n values are in range (-1, 1], when pixel values are non-negative. Returns\n a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).\n ", "desc": "Computes SSIM index between img1 and img2.", "type": "API"}, {"name": "tf.compat.v1.image.ssim_multiscale", "docs": "Computes the MS-SSIM between img1 and img2.\n\n This function assumes that `img1` and `img2` are image batches, i.e. the last\n three dimensions are [height, width, channels].\n\n Note: The true SSIM is only defined on grayscale. This function does not\n perform any colorspace transform. (If the input is already YUV, then it will\n compute YUV SSIM average.)\n\n Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. \"Multiscale\n structural similarity for image quality assessment.\" Signals, Systems and\n Computers, 2004.\n\n Args:\n img1: First image batch with only Positive Pixel Values.\n img2: Second image batch with only Positive Pixel Values. Must have the\n same rank as img1.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n power_factors: Iterable of weights for each of the scales. The number of\n scales used is the length of the list. Index 0 is the unscaled\n resolution's weight and each increasing scale corresponds to the image\n being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363,\n 0.1333), which are the values obtained in the original paper.\n filter_size: Default value 11 (size of gaussian filter).\n filter_sigma: Default value 1.5 (width of gaussian filter).\n k1: Default value 0.01\n k2: Default value 0.03 (SSIM is less sensitivity to K2 for lower values, so\n it would be better if we took the values in the range of 0 < K2 < 0.4).\n\n Returns:\n A tensor containing an MS-SSIM value for each image in batch. The values\n are in range [0, 1]. Returns a tensor with shape:\n broadcast(img1.shape[:-3], img2.shape[:-3]).\n ", "desc": "Computes the MS-SSIM between img1 and img2.", "type": "API"}, {"name": "tf.compat.v1.image.total_variation", "docs": "Calculate and return the total variation for one or more images.\n\n The total variation is the sum of the absolute differences for neighboring\n pixel-values in the input images. This measures how much noise is in the\n images.\n\n This can be used as a loss-function during optimization so as to suppress\n noise in images. If you have a batch of images, then you should calculate\n the scalar loss-value as the sum:\n `loss = tf.reduce_sum(tf.image.total_variation(images))`\n\n This implements the anisotropic 2-D version of the formula described here:\n\n https://en.wikipedia.org/wiki/Total_variation_denoising\n\n Args:\n images: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n name: A name for the operation (optional).\n\n Raises:\n ValueError: if images.shape is not a 3-D or 4-D vector.\n\n Returns:\n The total variation of `images`.\n\n If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the\n total variation for each image in the batch.\n If `images` was 3-D, return a scalar float with the total variation for\n that image.\n ", "desc": "Calculate and return the total variation for one or more images.", "type": "API"}, {"name": "tf.compat.v1.image.transpose", "docs": "Transpose image(s) by swapping the height and width dimension.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.transpose(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n name: A name for this operation (optional).\n\n Returns:\n If `image` was 4-D, a 4-D float Tensor of shape\n `[batch, width, height, channels]`\n If `image` was 3-D, a 3-D float Tensor of shape\n `[width, height, channels]`\n\n Raises:\n ValueError: if the shape of `image` not supported.\n\n Usage Example:\n\n >>> image = [[[1, 2], [3, 4]],\n ... [[5, 6], [7, 8]],\n ... [[9, 10], [11, 12]]]\n >>> image = tf.constant(image)\n >>> tf.image.transpose(image)\n \n ", "desc": "Transpose image(s) by swapping the height and width dimension.", "type": "API"}, {"name": "tf.compat.v1.image.transpose_image", "docs": "Transpose image(s) by swapping the height and width dimension.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.transpose(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n name: A name for this operation (optional).\n\n Returns:\n If `image` was 4-D, a 4-D float Tensor of shape\n `[batch, width, height, channels]`\n If `image` was 3-D, a 3-D float Tensor of shape\n `[width, height, channels]`\n\n Raises:\n ValueError: if the shape of `image` not supported.\n\n Usage Example:\n\n >>> image = [[[1, 2], [3, 4]],\n ... [[5, 6], [7, 8]],\n ... [[9, 10], [11, 12]]]\n >>> image = tf.constant(image)\n >>> tf.image.transpose(image)\n \n ", "desc": "Transpose image(s) by swapping the height and width dimension.", "type": "API"}, {"name": "tf.compat.v1.image.yiq_to_rgb", "docs": "Converts one or more images from YIQ to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels.\n The output is only well defined if the Y value in images are in [0,1],\n I value are in [-0.5957,0.5957] and Q value are in [-0.5226,0.5226].\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from YIQ to RGB.", "type": "API"}, {"name": "tf.compat.v1.image.yuv_to_rgb", "docs": "Converts one or more images from YUV to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels.\n The output is only well defined if the Y value in images are in [0,1],\n U and V value are in [-0.5,0.5].\n\n As per the above description, you need to scale your YUV images if their\n pixel values are not in the required range. Below given example illustrates\n preprocessing of each channel of images before feeding them to `yuv_to_rgb`.\n\n ```python\n yuv_images = tf.random.uniform(shape=[100, 64, 64, 3], maxval=255)\n last_dimension_axis = len(yuv_images.shape) - 1\n yuv_tensor_images = tf.truediv(\n tf.subtract(\n yuv_images,\n tf.reduce_min(yuv_images)\n ),\n tf.subtract(\n tf.reduce_max(yuv_images),\n tf.reduce_min(yuv_images)\n )\n )\n y, u, v = tf.split(yuv_tensor_images, 3, axis=last_dimension_axis)\n target_uv_min, target_uv_max = -0.5, 0.5\n u = u * (target_uv_max - target_uv_min) + target_uv_min\n v = v * (target_uv_max - target_uv_min) + target_uv_min\n preprocessed_yuv_images = tf.concat([y, u, v], axis=last_dimension_axis)\n rgb_tensor_images = tf.image.yuv_to_rgb(preprocessed_yuv_images)\n ```\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from YUV to RGB.", "type": "API"}, {"name": "tf.compat.v1.import_graph_def", "docs": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(op_dict)`. They will be removed in a future version.\nInstructions for updating:\nPlease file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.\n\nThis function provides a way to import a serialized TensorFlow\n[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)\nprotocol buffer, and extract individual objects in the `GraphDef` as\n`tf.Tensor` and `tf.Operation` objects. Once extracted,\nthese objects are placed into the current default `Graph`. See\n`tf.Graph.as_graph_def` for a way to create a `GraphDef`\nproto.\n\nArgs:\n graph_def: A `GraphDef` proto containing operations to be imported into\n the default graph.\n input_map: A dictionary mapping input names (as strings) in `graph_def`\n to `Tensor` objects. The values of the named input tensors in the\n imported graph will be re-mapped to the respective `Tensor` values.\n return_elements: A list of strings containing operation names in\n `graph_def` that will be returned as `Operation` objects; and/or\n tensor names in `graph_def` that will be returned as `Tensor` objects.\n name: (Optional.) A prefix that will be prepended to the names in\n `graph_def`. Note that this does not apply to imported function names.\n Defaults to `\"import\"`.\n op_dict: (Optional.) Deprecated, do not use.\n producer_op_list: (Optional.) An `OpList` proto with the (possibly stripped)\n list of `OpDef`s used by the producer of the graph. If provided,\n unrecognized attrs for ops in `graph_def` that have their default value\n according to `producer_op_list` will be removed. This will allow some more\n `GraphDef`s produced by later binaries to be accepted by earlier binaries.\n\nReturns:\n A list of `Operation` and/or `Tensor` objects from the imported graph,\n corresponding to the names in `return_elements`,\n and None if `returns_elements` is None.\n\nRaises:\n TypeError: If `graph_def` is not a `GraphDef` proto,\n `input_map` is not a dictionary mapping strings to `Tensor` objects,\n or `return_elements` is not a list of strings.\n ValueError: If `input_map`, or `return_elements` contains names that\n do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.\n it refers to an unknown tensor).", "desc": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.IndexedSlices", "docs": "A sparse representation of a set of tensor slices at given indices.\n\n This class is a simple wrapper for a pair of `Tensor` objects:\n\n * `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.\n * `indices`: A 1-D integer `Tensor` with shape `[D0]`.\n\n An `IndexedSlices` is typically used to represent a subset of a larger\n tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`.\n The values in `indices` are the indices in the first dimension of\n the slices that have been extracted from the larger tensor.\n\n The dense tensor `dense` represented by an `IndexedSlices` `slices` has\n\n ```python\n dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]\n ```\n\n The `IndexedSlices` class is used principally in the definition of\n gradients for operations that have sparse gradients\n (e.g. `tf.gather`).\n\n >>> v = tf.Variable([[0.,1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]])\n >>> with tf.GradientTape() as tape:\n ... r = tf.gather(v, [1,3])\n >>> index_slices = tape.gradient(r,v)\n >>> index_slices\n <...IndexedSlices object ...>\n >>> index_slices.indices.numpy()\n array([1, 3], dtype=int32)\n >>> index_slices.values.numpy()\n array([[1., 1., 1.],\n [1., 1., 1.]], dtype=float32)\n\n Contrast this representation with\n `tf.sparse.SparseTensor`,\n which uses multi-dimensional indices and scalar values.\n ", "desc": "A sparse representation of a set of tensor slices at given indices.", "type": "API"}, {"name": "tf.compat.v1.IndexedSlicesSpec", "docs": "Type specification for a `tf.IndexedSlices`.", "desc": "Type specification for a `tf.IndexedSlices`.", "type": "API"}, {"name": "tf.compat.v1.init_scope", "docs": "A context manager that lifts ops out of control-flow scopes and function-building graphs.\n\n There is often a need to lift variable initialization ops out of control-flow\n scopes, function-building graphs, and gradient tapes. Entering an\n `init_scope` is a mechanism for satisfying these desiderata. In particular,\n entering an `init_scope` has three effects:\n\n (1) All control dependencies are cleared the moment the scope is entered;\n this is equivalent to entering the context manager returned from\n `control_dependencies(None)`, which has the side-effect of exiting\n control-flow scopes like `tf.cond` and `tf.while_loop`.\n\n (2) All operations that are created while the scope is active are lifted\n into the lowest context on the `context_stack` that is not building a\n graph function. Here, a context is defined as either a graph or an eager\n context. Every context switch, i.e., every installation of a graph as\n the default graph and every switch into eager mode, is logged in a\n thread-local stack called `context_switches`; the log entry for a\n context switch is popped from the stack when the context is exited.\n Entering an `init_scope` is equivalent to crawling up\n `context_switches`, finding the first context that is not building a\n graph function, and entering it. A caveat is that if graph mode is\n enabled but the default graph stack is empty, then entering an\n `init_scope` will simply install a fresh graph as the default one.\n\n (3) The gradient tape is paused while the scope is active.\n\n When eager execution is enabled, code inside an init_scope block runs with\n eager execution enabled even when tracing a `tf.function`. For example:\n\n ```python\n tf.compat.v1.enable_eager_execution()\n\n @tf.function\n def func():\n # A function constructs TensorFlow graphs,\n # it does not execute eagerly.\n assert not tf.executing_eagerly()\n with tf.init_scope():\n # Initialization runs with eager execution enabled\n assert tf.executing_eagerly()\n ```\n\n Raises:\n RuntimeError: if graph state is incompatible with this initialization.\n ", "desc": "A context manager that lifts ops out of control-flow scopes and function-building graphs.", "type": "API"}, {"name": "tf.compat.v1.initialize_all_tables", "docs": "Returns an Op that initializes all tables of the default graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.tables_initializer` instead.\n\nArgs:\n name: Optional name for the initialization op.\n\nReturns:\n An Op that initializes all tables. Note that if there are\n not tables the returned Op is a NoOp.", "desc": "Returns an Op that initializes all tables of the default graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.initialize_all_variables", "docs": "See `tf.compat.v1.global_variables_initializer`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.\nInstructions for updating:\nUse `tf.global_variables_initializer` instead.\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "See `tf.compat.v1.global_variables_initializer`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.initialize_local_variables", "docs": "See `tf.compat.v1.local_variables_initializer`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.\nInstructions for updating:\nUse `tf.local_variables_initializer` instead.\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "See `tf.compat.v1.local_variables_initializer`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.initialize_variables", "docs": "See `tf.compat.v1.variables_initializer`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.\nInstructions for updating:\nUse `tf.variables_initializer` instead.\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "See `tf.compat.v1.variables_initializer`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.initializers", "docs": "Public API for tf.initializers namespace.\n", "desc": "Public API for tf.initializers namespace.", "type": "API"}, {"name": "tf.compat.v1.initializers.constant", "docs": "Initializer that generates tensors with constant values.\n\n The resulting tensor is populated with values of type `dtype`, as\n specified by arguments `value` following the desired `shape` of the\n new tensor (see examples below).\n\n The argument `value` can be a constant value, or a list of values of type\n `dtype`. If `value` is a list, then the length of the list must be less\n than or equal to the number of elements implied by the desired shape of the\n tensor. In the case where the total number of elements in `value` is less\n than the number of elements required by the tensor shape, the last element\n in `value` will be used to fill the remaining entries. If the total number of\n elements in `value` is greater than the number of elements required by the\n tensor shape, the initializer will raise a `ValueError`.\n\n Args:\n value: A Python scalar, list or tuple of values, or a N-dimensional numpy\n array. All elements of the initialized variable will be set to the\n corresponding value in the `value` argument.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n verify_shape: Boolean that enables verification of the shape of `value`. If\n `True`, the initializer will throw an error if the shape of `value` is not\n compatible with the shape of the initialized tensor.\n\n Raises:\n TypeError: If the input `value` is not one of the expected types.\n\n Examples:\n The following example can be rewritten using a numpy.ndarray instead\n of the `value` list, even reshaped, as shown in the two commented lines\n below the `value` list initialization.\n\n >>> value = [0, 1, 2, 3, 4, 5, 6, 7]\n >>> init = tf.compat.v1.constant_initializer(value)\n >>> # fitting shape\n >>> with tf.compat.v1.Session():\n ... x = tf.compat.v1.get_variable('x', shape=[2, 4], initializer=init)\n ... x.initializer.run()\n ... print(x.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]]\n >>> # Larger shape\n >>> with tf.compat.v1.Session():\n ... y = tf.compat.v1.get_variable('y', shape=[3, 4], initializer=init)\n ... y.initializer.run()\n ... print(y.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]\n [7. 7. 7. 7.]]\n >>> # Smaller shape\n >>> with tf.compat.v1.Session():\n ... z = tf.compat.v1.get_variable('z', shape=[2, 3], initializer=init)\n Traceback (most recent call last):\n ...\n ValueError: Too many elements provided. Needed at most 6, but received 8\n >>> # Shape verification\n >>> init_verify = tf.compat.v1.constant_initializer(value, verify_shape=True)\n >>> with tf.compat.v1.Session():\n ... u = tf.compat.v1.get_variable('u', shape=[3, 4],\n ... initializer=init_verify)\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (3, 4), got (8,).\n\n @compatibility(TF2)\n Although it is a legacy API endpoint, `tf.compat.v1.constant_initializer`\n is compatible with eager execution and `tf.function`.\n\n To migrate to a non-legacy TF2 API, please use `tf.constant_initializer`\n instead. The `dtype`\n argument in `tf.compat.v1.constant_initializer.__init__()` does not exist in\n `tf.constant_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n In the `compat.v1` symbol, if `verify_shape` is set to `True`, an exception\n is raised when initializing a variable with a different shape from\n `value`. If set to `False`, `value` is reshaped to initialize the variable\n if necessary. An exception would only be raised when the number of\n elements are different.\n\n The `verify_shape` argument is not supported in TF2. Using\n `tf.constant_initializer` is equivalent to setting `verify_shape` to `False`.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.compat.v1.constant_initializer(\n value=value,\n dtype=tf.float32,\n verify_shape=False)\n variable = tf.Variable(initializer(shape=[2, 4]))\n ```\n\n After:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.constant_initializer(value=value)\n tf.Variable(initializer(shape=[2, 4], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :--------------- | :-------------------------- |\n | `value` | `value` | In constructor |\n | `dtype` | `dtype` | In `__call__()` method |\n | `verify_shape` | Not Supported | Equivalent to set to `False`|\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=True)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (2, 2), got (4,).\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=False)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n After:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.constant_initializer(value=value)\n >>> tf.Variable(initializer(shape=[2, 2], dtype=tf.float32)).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.compat.v1.initializers.global_variables", "docs": "Returns an Op that initializes global variables.\n\n This is just a shortcut for `variables_initializer(global_variables())`\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Returns:\n An Op that initializes global variables in the graph.\n ", "desc": "Returns an Op that initializes global variables.", "type": "API"}, {"name": "tf.compat.v1.initializers.glorot_normal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n It draws samples from a truncated normal distribution centered on 0\n with standard deviation (after truncation) given by\n `stddev = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number\n of input units in the weight tensor and `fan_out` is the number of\n output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.glorot_uniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n It draws samples from a uniform distribution within [-limit, limit]\n where `limit` is `sqrt(6 / (fan_in + fan_out))`\n where `fan_in` is the number of input units in the weight tensor\n and `fan_out` is the number of output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.he_normal", "docs": "He normal initializer.\n\n It draws samples from a truncated normal distribution centered on 0\n with standard deviation (after truncation) given by\n `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of\n input units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to seed the random generator.\n\n Returns:\n An initializer.\n\n References:\n [He et al., 2015]\n (https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html)\n # pylint: disable=line-too-long\n ([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf))\n ", "desc": "He normal initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.he_uniform", "docs": "He uniform variance scaling initializer.\n\n It draws samples from a uniform distribution within [-limit, limit]\n where `limit` is `sqrt(6 / fan_in)`\n where `fan_in` is the number of input units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to seed the random generator.\n\n Returns:\n An initializer.\n\n References:\n [He et al., 2015]\n (https://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html)\n # pylint: disable=line-too-long\n ([pdf](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf))\n ", "desc": "He uniform variance scaling initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.identity", "docs": "Initializer that generates the identity matrix.\n\n Only use for 2D matrices.\n\n Args:\n gain: Multiplicative factor to apply to the identity matrix.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n ", "desc": "Initializer that generates the identity matrix.", "type": "API"}, {"name": "tf.compat.v1.initializers.lecun_normal", "docs": "LeCun normal initializer.\n\n It draws samples from a truncated normal distribution centered on 0\n with standard deviation (after truncation) given by\n `stddev = sqrt(1 / fan_in)` where `fan_in` is the number of\n input units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to seed the random generator.\n\n Returns:\n An initializer.\n\n References:\n - Self-Normalizing Neural Networks,\n [Klambauer et al.,\n 2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks)\n # pylint: disable=line-too-long\n ([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf))\n - Efficient Backprop,\n [Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)\n ", "desc": "LeCun normal initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.lecun_uniform", "docs": "LeCun uniform initializer.\n\n It draws samples from a uniform distribution within [-limit, limit]\n where `limit` is `sqrt(3 / fan_in)`\n where `fan_in` is the number of input units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to seed the random generator.\n\n Returns:\n An initializer.\n\n References:\n - Self-Normalizing Neural Networks,\n [Klambauer et al.,\n 2017](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks)\n # pylint: disable=line-too-long\n ([pdf](https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf))\n - Efficient Backprop,\n [Lecun et al., 1998](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)\n ", "desc": "LeCun uniform initializer.", "type": "API"}, {"name": "tf.compat.v1.initializers.local_variables", "docs": "Returns an Op that initializes all local variables.\n\n This is just a shortcut for `variables_initializer(local_variables())`\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Returns:\n An Op that initializes all local variables in the graph.\n ", "desc": "Returns an Op that initializes all local variables.", "type": "API"}, {"name": "tf.compat.v1.initializers.ones", "docs": "Initializer that generates tensors initialized to 1.\n\n @compatibility(TF2)\n This API is compatible with TF2 behavior and `tf.function`, and can be\n migrated immediately with `tf.keras.initializers.ones`.\n\n Before:\n >>> initializer = tf.compat.v1.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n After:\n >>> initializer = tf.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 1.", "type": "API"}, {"name": "tf.compat.v1.initializers.orthogonal", "docs": "Initializer that generates an orthogonal matrix.\n\n If the shape of the tensor to initialize is two-dimensional, it is initialized\n with an orthogonal matrix obtained from the QR decomposition of a matrix of\n random numbers drawn from a normal distribution.\n If the matrix has fewer rows than columns then the output will have orthogonal\n rows. Otherwise, the output will have orthogonal columns.\n\n If the shape of the tensor to initialize is more than two-dimensional,\n a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`\n is initialized, where `n` is the length of the shape vector.\n The matrix is subsequently reshaped to give a tensor of the desired shape.\n\n Args:\n gain: multiplicative factor to apply to the orthogonal matrix\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C)\n ([pdf](https://arxiv.org/pdf/1312.6120.pdf))\n ", "desc": "Initializer that generates an orthogonal matrix.", "type": "API"}, {"name": "tf.compat.v1.initializers.random_normal", "docs": "Initializer that generates tensors with a normal distribution.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.RandomNormal` or `tf.keras.initializers.RandomNormal`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default stddev and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.random_normal_initializer(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.RandomNormal(\n mean=mean,\n seed=seed,\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :----------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it as a |\n : : : `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported. |\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.initializers.random_uniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate.\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate. Defaults to 1 for float types.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.RandomUniform` or `tf.keras.initializers.RandomUniform`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default minval, maxval and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.random_uniform_initializer(\n minval=minval,\n maxval=maxval,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n seed=seed)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `minval` | `minval` | Default changes from 0 to -0.05 |\n | `maxval` | `maxval` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.initializers.tables_initializer", "docs": "Returns an Op that initializes all tables of the default graph.\n\n Args:\n name: Optional name for the initialization op.\n\n Returns:\n An Op that initializes all tables. Note that if there are\n not tables the returned Op is a NoOp.\n\n @compatibility(TF2)\n `tf.compat.v1.tables_initializer` is no longer needed with eager execution and\n `tf.function`. In TF2, when creating an initializable table like a\n `tf.lookup.StaticHashTable`, the table will automatically be initialized on\n creation.\n\n #### Before & After Usage Example\n\n Before:\n\n >>> with tf.compat.v1.Session():\n ... init = tf.compat.v1.lookup.KeyValueTensorInitializer(['a', 'b'], [1, 2])\n ... table = tf.compat.v1.lookup.StaticHashTable(init, default_value=-1)\n ... tf.compat.v1.tables_initializer().run()\n ... result = table.lookup(tf.constant(['a', 'c'])).eval()\n >>> result\n array([ 1, -1], dtype=int32)\n\n After:\n\n >>> init = tf.lookup.KeyValueTensorInitializer(['a', 'b'], [1, 2])\n >>> table = tf.lookup.StaticHashTable(init, default_value=-1)\n >>> table.lookup(tf.constant(['a', 'c'])).numpy()\n array([ 1, -1], dtype=int32)\n\n @end_compatibility\n ", "desc": "Returns an Op that initializes all tables of the default graph.", "type": "API"}, {"name": "tf.compat.v1.initializers.truncated_normal", "docs": "Initializer that generates a truncated normal distribution.\n\n These values are similar to values from a `random_normal_initializer`\n except that values more than two standard deviations from the mean\n are discarded and re-drawn. This is the recommended initializer for\n neural network weights and filters.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.truncated_normal` or `tf.keras.initializers.TruncatedNormal`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default stddev and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.truncated_normal_initializer(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.truncated_normal(\n mean=mean,\n seed=seed,\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.initializers.uniform_unit_scaling", "docs": "Initializer that generates tensors without scaling variance.\n\n When initializing a deep network, it is in principle advantageous to keep\n the scale of the input variance constant, so it does not explode or diminish\n by reaching the final layer. If the input is `x` and the operation `x * W`,\n and we want to initialize `W` uniformly at random, we need to pick `W` from\n\n [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]\n\n to keep the scale intact, where `dim = W.shape[0]` (the size of the input).\n A similar calculation for convolutional networks gives an analogous result\n with `dim` equal to the product of the first 3 dimensions. When\n nonlinearities are present, we need to multiply this by a constant `factor`.\n See (Sussillo et al., 2014) for deeper motivation, experiments\n and the calculation of constants. In section 2.3 there, the constants were\n numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.\n\n Args:\n factor: Float. A multiplicative factor by which the values will be scaled.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558)\n ([pdf](http://arxiv.org/pdf/1412.6558.pdf))\n ", "desc": "Initializer that generates tensors without scaling variance.", "type": "API"}, {"name": "tf.compat.v1.initializers.variables", "docs": "Returns an Op that initializes a list of variables.\n\n After you launch the graph in a session, you can run the returned Op to\n initialize all the variables in `var_list`. This Op runs all the\n initializers of the variables in `var_list` in parallel.\n\n Calling `initialize_variables()` is equivalent to passing the list of\n initializers to `Group()`.\n\n If `var_list` is empty, however, the function still returns an Op that can\n be run. That Op just has no effect.\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Args:\n var_list: List of `Variable` objects to initialize.\n name: Optional name for the returned operation.\n\n Returns:\n An Op that run the initializers of all the specified variables.\n ", "desc": "Returns an Op that initializes a list of variables.", "type": "API"}, {"name": "tf.compat.v1.initializers.variance_scaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2 APIs, move to using either\n `tf.initializers.variance_scaling` or `tf.keras.initializers.VarianceScaling`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.variance_scaling_initializer(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.VarianceScaling(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :----------------- | :-------------- | :------------------------- |\n | `scale` | `scale` | No change to defaults |\n | `mode` | `mode` | No change to defaults |\n | `distribution` | `distribution` | No change to defaults. |\n : : : 'normal' maps to 'truncated_normal' :\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`,\n samples are drawn from a truncated/untruncated normal\n distribution with a mean of zero and a standard deviation (after truncation,\n if used) `stddev = sqrt(scale / n)`\n where n is:\n - number of input units in the weight tensor, if mode = \"fan_in\"\n - number of output units, if mode = \"fan_out\"\n - average of the numbers of input and output units, if mode = \"fan_avg\"\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within [-limit, limit], with `limit = sqrt(3 * scale / n)`.\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"normal\", \"uniform\".\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n Raises:\n ValueError: In case of an invalid value for the \"scale\", mode\" or\n \"distribution\" arguments.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.compat.v1.initializers.zeros", "docs": "Initializer that generates tensors initialized to 0.\n\n @compatibility(TF2)\n `tf.compat.v1.zeros_initializer` is compatible with eager execution\n and `tf.function`.\n\n To migrate to TF2, please use `tf.zeros_initializer` instead. The `dtype`\n argument in `tf.compat.v1.zeros_initializer.__init__()` does not exist in\n `tf.zeros_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n variable = tf.Variable(initializer(shape=[3, 3]))\n ```\n\n After:\n\n ```python\n initializer = tf.zeros_initializer()\n variable = tf.Variable(initializer(shape=[3, 3], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------------- | :--------------- | :------------------------- |\n | `dtype` | `dtype` | In `__call__()` method |\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n >>> tf.Variable(initializer(shape=[3])).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3])).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n >>> initializer = tf.compat.v1.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n After:\n\n >>> initializer = tf.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 0.", "type": "API"}, {"name": "tf.compat.v1.InteractiveSession", "docs": "A TensorFlow `Session` for use in interactive contexts, such as a shell.\n\n The only difference with a regular `Session` is that an `InteractiveSession`\n installs itself as the default session on construction.\n The methods `tf.Tensor.eval`\n and `tf.Operation.run`\n will use that session to run ops.\n\n This is convenient in interactive shells and [IPython\n notebooks](http://ipython.org), as it avoids having to pass an explicit\n `Session` object to run ops.\n\n For example:\n\n ```python\n sess = tf.compat.v1.InteractiveSession()\n a = tf.constant(5.0)\n b = tf.constant(6.0)\n c = a * b\n # We can just use 'c.eval()' without passing 'sess'\n print(c.eval())\n sess.close()\n ```\n\n Note that a regular session installs itself as the default session when it\n is created in a `with` statement. The common usage in non-interactive\n programs is to follow that pattern:\n\n ```python\n a = tf.constant(5.0)\n b = tf.constant(6.0)\n c = a * b\n with tf.compat.v1.Session():\n # We can also use 'c.eval()' here.\n print(c.eval())\n ```\n ", "desc": "A TensorFlow `Session` for use in interactive contexts, such as a shell.", "type": "API"}, {"name": "tf.compat.v1.invert_permutation", "docs": "Computes the inverse permutation of a tensor.\n\n This operation computes the inverse of an index permutation. It takes a 1-D\n integer tensor `x`, which represents the indices of a zero-based array, and\n swaps each value with its index position. In other words, for an output tensor\n `y` and an input tensor `x`, this operation computes the following:\n\n `y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`\n\n The values must include 0. There can be no duplicate values or negative values.\n\n For example:\n\n ```\n # tensor `x` is [3, 4, 0, 2, 1]\n invert_permutation(x) ==> [2, 4, 3, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the inverse permutation of a tensor.", "type": "API"}, {"name": "tf.compat.v1.io", "docs": "Public API for tf.io namespace.\n", "desc": "Public API for tf.io namespace.", "type": "API"}, {"name": "tf.compat.v1.io.decode_and_crop_jpeg", "docs": "Decode and Crop a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n It is equivalent to a combination of decode and crop, but much faster by only\n decoding partial jpeg image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n crop_window: A `Tensor` of type `int32`.\n 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode and Crop a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_base64", "docs": "Decode web-safe base64-encoded strings.\n\n Input may or may not have padding at the end. See\n [EncodeBase64](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64)\n for padding. Web-safe means that input must use - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Base64 strings to decode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decode web-safe base64-encoded strings.", "type": "API"}, {"name": "tf.compat.v1.io.decode_bmp", "docs": "Decode the first frame of a BMP-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the BMP-encoded image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The BMP-encoded image.\n channels: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the first frame of a BMP-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_compressed", "docs": "Decompress strings.\n\n This op decompresses each element of the `bytes` input `Tensor`, which\n is assumed to be compressed using the given `compression_type`.\n\n The `output` is a string `Tensor` of the same shape as `bytes`,\n each element containing the decompressed data from the corresponding\n element in `bytes`.\n\n Args:\n bytes: A `Tensor` of type `string`.\n A Tensor of string which is compressed.\n compression_type: An optional `string`. Defaults to `\"\"`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decompress strings.", "type": "API"}, {"name": "tf.compat.v1.io.decode_csv", "docs": "Convert CSV records to tensors. Each column maps to one tensor.\n\n RFC 4180 format is expected for the CSV records.\n (https://tools.ietf.org/html/rfc4180)\n Note that we allow leading and trailing spaces with int or float field.\n\n Args:\n records: A `Tensor` of type `string`.\n Each string is a record/row in the csv and all records should have\n the same format.\n record_defaults: A list of `Tensor` objects with specific types.\n Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`.\n One tensor per column of the input record, with either a\n scalar default value for that column or an empty vector if the column is\n required.\n field_delim: An optional `string`. Defaults to `\",\"`.\n char delimiter to separate fields in a record.\n use_quote_delim: An optional `bool`. Defaults to `True`.\n If false, treats double quotation marks as regular\n characters inside of the string fields (ignoring RFC 4180, Section 2,\n Bullet 5).\n name: A name for the operation (optional).\n na_value: Additional string to recognize as NA/NaN.\n select_cols: Optional sorted list of column indices to select. If specified,\n only this subset of columns will be parsed and returned.\n\n Returns:\n A list of `Tensor` objects. Has the same type as `record_defaults`.\n Each tensor will have the same shape as records.\n\n Raises:\n ValueError: If any of the arguments is malformed.\n ", "desc": "Convert CSV records to tensors. Each column maps to one tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_gif", "docs": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.\n\n GIF images with frame or transparency compression are not supported.\n On Linux and MacOS systems, convert animated GIFs from compressed to\n uncompressed by running:\n\n convert $src.gif -coalesce $dst.gif\n\n This op also supports decoding JPEGs and PNGs, though it is cleaner to use\n `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The GIF-encoded image.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_image", "docs": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.\n\n Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the\n appropriate operation to convert the input bytes `string` into a `Tensor`\n of type `dtype`.\n\n Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as\n opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D\n arrays `[height, width, num_channels]`. Make sure to take this into account\n when constructing your graph if you are intermixing GIF files with BMP, JPEG,\n and/or PNG files. Alternately, set the `expand_animations` argument of this\n function to `False`, in which case the op will return 3-dimensional tensors\n and will truncate animated GIF files to the first frame.\n\n NOTE: If the first frame of an animated GIF does not occupy the entire\n canvas (maximum frame width x maximum frame height), then it fills the\n unoccupied areas (in the first frame) with zeros (black). For frames after the\n first frame that does not occupy the entire canvas, it uses the previous\n frame to fill the unoccupied areas.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The encoded image bytes.\n channels: An optional `int`. Defaults to `0`. Number of color channels for\n the decoded image.\n dtype: The desired DType of the returned `Tensor`.\n name: A name for the operation (optional)\n expand_animations: An optional `bool`. Defaults to `True`. Controls the\n shape of the returned op's output. If `True`, the returned op will produce\n a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all GIFs,\n whether animated or not. If, `False`, the returned op will produce a 3-D\n tensor for all file types and will truncate animated GIFs to the first\n frame.\n\n Returns:\n `Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on\n the file type and the value of the `expand_animations` parameter.\n\n Raises:\n ValueError: On incorrect number of channels.\n ", "desc": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.", "type": "API"}, {"name": "tf.compat.v1.io.decode_jpeg", "docs": "Decode a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n This op also supports decoding PNGs and non-animated GIFs since the interface is\n the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_json_example", "docs": "Convert JSON-encoded Example records to binary protocol buffer strings.\n\n Note: This is **not** a general purpose JSON parsing op.\n\n This op converts JSON-serialized `tf.train.Example` (maybe created with\n `json_format.MessageToJson`, following the\n [standard JSON mapping](\n https://developers.google.com/protocol-buffers/docs/proto3#json))\n to a binary-serialized `tf.train.Example` (equivalent to\n `Example.SerializeToString()`) suitable for conversion to tensors with\n `tf.io.parse_example`.\n\n Here is a `tf.train.Example` proto:\n\n >>> example = tf.train.Example(\n ... features=tf.train.Features(\n ... feature={\n ... \"a\": tf.train.Feature(\n ... int64_list=tf.train.Int64List(\n ... value=[1, 1, 3]))}))\n\n Here it is converted to JSON:\n\n >>> from google.protobuf import json_format\n >>> example_json = json_format.MessageToJson(example)\n >>> print(example_json)\n {\n \"features\": {\n \"feature\": {\n \"a\": {\n \"int64List\": {\n \"value\": [\n \"1\",\n \"1\",\n \"3\"\n ]\n }\n }\n }\n }\n }\n\n This op converts the above json string to a binary proto:\n\n >>> example_binary = tf.io.decode_json_example(example_json)\n >>> example_binary.numpy()\n b'\\n\\x0f\\n\\r\\n\\x01a\\x12\\x08\\x1a\\x06\\x08\\x01\\x08\\x01\\x08\\x03'\n\n The OP works on string tensors of andy shape:\n\n >>> tf.io.decode_json_example([\n ... [example_json, example_json],\n ... [example_json, example_json]]).shape.as_list()\n [2, 2]\n\n This resulting binary-string is equivalent to `Example.SerializeToString()`,\n and can be converted to Tensors using `tf.io.parse_example` and related\n functions:\n\n >>> tf.io.parse_example(\n ... serialized=[example_binary.numpy(),\n ... example.SerializeToString()],\n ... features = {'a': tf.io.FixedLenFeature(shape=[3], dtype=tf.int64)})\n {'a': }\n\n Args:\n json_examples: A string tensor containing json-serialized `tf.Example`\n protos.\n name: A name for the op.\n\n Returns:\n A string Tensor containing the binary-serialized `tf.Example` protos.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If the JSON could not be converted to a\n `tf.Example`\n ", "desc": "Convert JSON-encoded Example records to binary protocol buffer strings.", "type": "API"}, {"name": "tf.compat.v1.io.decode_png", "docs": "Decode a PNG-encoded image to a uint8 or uint16 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the PNG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n If needed, the PNG-encoded image is transformed to match the requested number\n of color channels.\n\n This op also supports decoding JPEGs and non-animated GIFs since the interface\n is the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The PNG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Decode a PNG-encoded image to a uint8 or uint16 tensor.", "type": "API"}, {"name": "tf.compat.v1.io.decode_proto", "docs": "The op extracts fields from a serialized protocol buffers message into tensors.\n\n Note: This API is designed for orthogonality rather than human-friendliness. It\n can be used to parse input protos by hand, but it is intended for use in\n generated code.\n\n The `decode_proto` op extracts fields from a serialized protocol buffers\n message into tensors. The fields in `field_names` are decoded and converted\n to the corresponding `output_types` if possible.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n Each output tensor is a dense tensor. This means that it is padded to hold\n the largest number of repeated elements seen in the input minibatch. (The\n shape is also padded by one to prevent zero-sized dimensions). The actual\n repeat counts for each example in the minibatch can be found in the `sizes`\n output. In many cases the output of `decode_proto` is fed immediately into\n tf.squeeze if missing values are not a concern. When using tf.squeeze, always\n pass the squeeze dimension explicitly to avoid surprises.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n - `map` fields are not directly decoded. They are treated as `repeated` fields,\n of the appropriate entry type. The proto-compiler defines entry types for each\n map field. The type-name is the field name, converted to \"CamelCase\" with\n \"Entry\" appended. The `tf.train.Features.FeatureEntry` message is an example of\n one of these implicit `Entry` types.\n\n - `enum` fields should be read as int32.\n\n Both binary and text proto serializations are supported, and can be\n chosen using the `format` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Here is an example:\n\n The, internal, `Summary.Value` proto contains a\n `oneof {float simple_value; Image image; ...}`\n\n >>> from google.protobuf import text_format\n >>>\n >>> # A Summary.Value contains: oneof {float simple_value; Image image}\n >>> values = [\n ... \"simple_value: 2.2\",\n ... \"simple_value: 1.2\",\n ... \"image { height: 128 width: 512 }\",\n ... \"image { height: 256 width: 256 }\",]\n >>> values = [\n ... text_format.Parse(v, tf.compat.v1.Summary.Value()).SerializeToString()\n ... for v in values]\n\n The following can decode both fields from the serialized strings:\n\n >>> sizes, [simple_value, image] = tf.io.decode_proto(\n ... values,\n ... tf.compat.v1.Summary.Value.DESCRIPTOR.full_name,\n ... field_names=['simple_value', 'image'],\n ... output_types=[tf.float32, tf.string])\n\n The `sizes` has the same shape as the input, with an additional axis across the\n fields that were decoded. Here the first column of `sizes` is the size of the\n decoded `simple_value` field:\n\n >>> print(sizes)\n tf.Tensor(\n [[1 0]\n [1 0]\n [0 1]\n [0 1]], shape=(4, 2), dtype=int32)\n\n The result tensors each have one more index than the input byte-strings.\n The valid elements of each result tensor are indicated by\n the appropriate column of `sizes`. The invalid elements are padded with a\n default value:\n\n >>> print(simple_value)\n tf.Tensor(\n [[2.2]\n [1.2]\n [0. ]\n [0. ]], shape=(4, 1), dtype=float32)\n\n Nested protos are extracted as string tensors:\n\n >>> print(image.dtype)\n \n >>> print(image.shape.as_list())\n [4, 1]\n\n To convert to a `tf.RaggedTensor` representation use:\n\n >>> tf.RaggedTensor.from_tensor(simple_value, lengths=sizes[:, 0]).to_list()\n [[2.2], [1.2], [], []]\n\n Args:\n bytes: A `Tensor` of type `string`.\n Tensor of serialized protos with shape `batch_shape`.\n message_type: A `string`. Name of the proto message type to decode.\n field_names: A list of `strings`.\n List of strings containing proto field names. An extension field can be decoded\n by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME.\n output_types: A list of `tf.DTypes`.\n List of TF types to use for the respective field in field_names.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n Either the special value `local://` or a path to a file containing\n a serialized `FileDescriptorSet`.\n message_format: An optional `string`. Defaults to `\"binary\"`.\n Either `binary` or `text`.\n sanitize: An optional `bool`. Defaults to `False`.\n Whether to sanitize the result or not.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sizes, values).\n\n sizes: A `Tensor` of type `int32`.\n values: A list of `Tensor` objects of type `output_types`.\n ", "desc": "The op extracts fields from a serialized protocol buffers message into tensors.", "type": "API"}, {"name": "tf.compat.v1.io.decode_raw", "docs": "Convert raw byte strings into tensors. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(bytes)`. They will be removed in a future version.\nInstructions for updating:\nbytes is deprecated, use input_bytes instead\n\nArgs:\n input_bytes:\n Each element of the input Tensor is converted to an array of bytes.\n out_type:\n `DType` of the output. Acceptable types are `half`, `float`, `double`,\n `int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`.\n little_endian:\n Whether the `input_bytes` data is in little-endian format. Data will be\n converted into host byte order if necessary.\n name: A name for the operation (optional).\n bytes: Deprecated parameter. Use `input_bytes` instead.\n\nReturns:\n A `Tensor` object storing the decoded bytes.", "desc": "Convert raw byte strings into tensors. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.io.deserialize_many_sparse", "docs": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.\n\n The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where\n `N` is the minibatch size and the rows correspond to packed outputs of\n `serialize_sparse`. The ranks of the original `SparseTensor` objects\n must all match. When the final `SparseTensor` is created, it has rank one\n higher than the ranks of the incoming `SparseTensor` objects (they have been\n concatenated along a new row dimension).\n\n The output `SparseTensor` object's shape values for all dimensions but the\n first are the max across the input `SparseTensor` objects' shape values\n for the corresponding dimensions. Its first shape value is `N`, the minibatch\n size.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `sparse.reorder` to restore index ordering.\n\n For example, if the serialized input is a `[2, 3]` matrix representing two\n original `SparseTensor` objects:\n\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n\n and\n\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n\n then the final deserialized `SparseTensor` will be:\n\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n\n Args:\n serialized_sparse: 2-D `Tensor` of type `string` of shape `[N, 3]`.\n The serialized and packed `SparseTensor` objects.\n dtype: The `dtype` of the serialized `SparseTensor` objects.\n rank: (optional) Python int, the rank of the `SparseTensor` objects.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` representing the deserialized `SparseTensor`s,\n concatenated along the `SparseTensor`s' first dimension.\n\n All of the serialized `SparseTensor`s must have had the same rank and type.\n ", "desc": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.", "type": "API"}, {"name": "tf.compat.v1.io.encode_base64", "docs": "Encode strings into web-safe base64 format.\n\n Refer to [this article](https://en.wikipedia.org/wiki/Base64) for more information on\n base64 format. Base64 strings may have padding with '=' at the\n end so that the encoded has length multiple of 4. See Padding section of the\n link above.\n\n Web-safe means that the encoder uses - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Strings to be encoded.\n pad: An optional `bool`. Defaults to `False`.\n Bool whether padding is applied at the ends.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode strings into web-safe base64 format.", "type": "API"}, {"name": "tf.compat.v1.io.encode_jpeg", "docs": "JPEG-encode an image.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n\n The attr `format` can be used to override the color format of the encoded\n output. Values can be:\n\n * `''`: Use a default format based on the number of channels in the image.\n * `grayscale`: Output a grayscale JPEG image. The `channels` dimension\n of `image` must be 1.\n * `rgb`: Output an RGB JPEG image. The `channels` dimension\n of `image` must be 3.\n\n If `format` is not specified or is the empty string, a default format is picked\n in function of the number of channels in `image`:\n\n * 1: Output a grayscale image.\n * 3: Output an RGB image.\n\n Args:\n image: A `Tensor` of type `uint8`.\n 3-D with shape `[height, width, channels]`.\n format: An optional `string` from: `\"\", \"grayscale\", \"rgb\"`. Defaults to `\"\"`.\n Per pixel image format.\n quality: An optional `int`. Defaults to `95`.\n Quality of the compression from 0 to 100 (higher is better and slower).\n progressive: An optional `bool`. Defaults to `False`.\n If True, create a JPEG that loads progressively (coarse to fine).\n optimize_size: An optional `bool`. Defaults to `False`.\n If True, spend CPU/RAM to reduce size with no quality change.\n chroma_downsampling: An optional `bool`. Defaults to `True`.\n See http://en.wikipedia.org/wiki/Chroma_subsampling.\n density_unit: An optional `string` from: `\"in\", \"cm\"`. Defaults to `\"in\"`.\n Unit used to specify `x_density` and `y_density`:\n pixels per inch (`'in'`) or centimeter (`'cm'`).\n x_density: An optional `int`. Defaults to `300`.\n Horizontal pixels per density unit.\n y_density: An optional `int`. Defaults to `300`.\n Vertical pixels per density unit.\n xmp_metadata: An optional `string`. Defaults to `\"\"`.\n If not empty, embed this XMP metadata in the image header.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG-encode an image.", "type": "API"}, {"name": "tf.compat.v1.io.encode_png", "docs": "PNG-encode an image.\n\n `image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`\n where `channels` is:\n\n * 1: for grayscale.\n * 2: for grayscale + alpha.\n * 3: for RGB.\n * 4: for RGBA.\n\n The ZLIB compression level, `compression`, can be -1 for the PNG-encoder\n default or a value from 0 to 9. 9 is the highest compression level,\n generating the smallest output, but is slower.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.\n 3-D with shape `[height, width, channels]`.\n compression: An optional `int`. Defaults to `-1`. Compression level.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "PNG-encode an image.", "type": "API"}, {"name": "tf.compat.v1.io.encode_proto", "docs": "The op serializes protobuf messages provided in the input tensors.\n\n The types of the tensors in `values` must match the schema for the fields\n specified in `field_names`. All the tensors in `values` must have a common\n shape prefix, *batch_shape*.\n\n The `sizes` tensor specifies repeat counts for each field. The repeat count\n (last dimension) of a each tensor in `values` must be greater than or equal\n to corresponding repeat count in `sizes`.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Args:\n sizes: A `Tensor` of type `int32`.\n Tensor of int32 with shape `[batch_shape, len(field_names)]`.\n values: A list of `Tensor` objects.\n List of tensors containing values for the corresponding field.\n field_names: A list of `strings`.\n List of strings containing proto field names.\n message_type: A `string`. Name of the proto message type to decode.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "The op serializes protobuf messages provided in the input tensors.", "type": "API"}, {"name": "tf.compat.v1.io.extract_jpeg_shape", "docs": "Extract the shape information of a JPEG-encoded image.\n\n This op only parses the image header, so it is much faster than DecodeJpeg.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n (Optional) The output type of the operation (int32 or int64).\n Defaults to int32.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Extract the shape information of a JPEG-encoded image.", "type": "API"}, {"name": "tf.compat.v1.io.FixedLenFeature", "docs": "Configuration for parsing a fixed-length input feature.\n\n To treat sparse input as dense, provide a `default_value`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data.\n dtype: Data type of input.\n default_value: Value to be used if an example is missing this feature. It\n must be compatible with `dtype` and of the specified `shape`.\n ", "desc": "Configuration for parsing a fixed-length input feature.", "type": "API"}, {"name": "tf.compat.v1.io.FixedLenSequenceFeature", "docs": "Configuration for parsing a variable-length input feature into a `Tensor`.\n\n The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has\n a static `shape` of `[None] + shape` and the specified `dtype`.\n The resulting `Tensor` of parsing a `batch_size` many `Example`s has\n a static `shape` of `[batch_size, None] + shape` and the specified `dtype`.\n The entries in the `batch` from different `Examples` will be padded with\n `default_value` to the maximum length present in the `batch`.\n\n To treat a sparse input as dense, provide `allow_missing=True`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data for dimension 2 and higher. First dimension is\n of variable length `None`.\n dtype: Data type of input.\n allow_missing: Whether to allow this feature to be missing from a feature\n list item. Is available only for parsing `SequenceExample` not for\n parsing `Examples`.\n default_value: Scalar value to be used to pad multiple `Example`s to their\n maximum length. Irrelevant for parsing a single `Example` or\n `SequenceExample`. Defaults to \"\" for dtype string and 0 otherwise\n (optional).\n ", "desc": "Configuration for parsing a variable-length input feature into a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.io.gfile", "docs": "Public API for tf.io.gfile namespace.\n", "desc": "Public API for tf.io.gfile namespace.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.copy", "docs": "Copies data from `src` to `dst`.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that you need to always specify a file name, even if moving into a new\n directory. This is because some cloud filesystems don't have the concept of a\n directory.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.mkdir(\"/tmp/new_dir\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/new_dir/y\")\n >>> tf.io.gfile.exists(\"/tmp/new_dir/y\")\n True\n >>> tf.io.gfile.rmtree(\"/tmp/new_dir\")\n\n If you want to prevent errors if the path already exists, you can use\n `overwrite` argument:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\", overwrite=True)\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that the above will still result in an error if you try to overwrite a\n directory with a file.\n\n Note that you cannot copy a directory, only file arguments are supported.\n\n Args:\n src: string, name of the file whose contents need to be copied\n dst: string, name of the file to which to copy to\n overwrite: boolean, if false it's an error for `dst` to be occupied by an\n existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Copies data from `src` to `dst`.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.exists", "docs": "Determines whether a path exists or not.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> # for a GCS filesystem path:\n >>> # tf.io.gfile.exists(\"gs://bucket/file\")\n >>> # for a local filesystem:\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"file:///tmp/x\")\n True\n\n This currently returns `True` for existing directories but don't rely on this\n behavior, especially if you are using cloud filesystems (e.g., GCS, S3,\n Hadoop):\n\n >>> tf.io.gfile.exists(\"/tmp\")\n True\n\n Args:\n path: string, a path\n\n Returns:\n True if the path exists, whether it's a file or a directory.\n False if the path does not exist and there are no filesystem errors.\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API.\n ", "desc": "Determines whether a path exists or not.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.GFile", "docs": "File I/O wrappers without thread locking.\n\n The main roles of the `tf.io.gfile` module are:\n\n 1. To provide an API that is close to Python's file I/O objects, and\n 2. To provide an implementation based on TensorFlow's C++ FileSystem API.\n\n The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a `gs://` prefix, and\n HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`,\n so that you can use these implementations for saving and loading checkpoints,\n writing to TensorBoard logs, and accessing training data (among other uses).\n However, if all your files are local, you can use the regular Python file\n API without any problem.\n\n *Note*: though similar to Python's I/O implementation, there are semantic\n differences to make `tf.io.gfile` more efficient for backing filesystems. For\n example, a write mode file will not be opened until the first write call to\n minimize RPC invocations in network filesystems.\n\n Once you obtain a `GFile` object, you can use it in most ways as you would any\n Python's file object:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n 4\n >>> with tf.io.gfile.GFile(\"/tmp/x\") as f:\n ... f.read()\n 'asdf'\n\n The difference is that you can specify URI schemes to use other filesystems\n (e.g., `gs://` for GCS, `s3://` for S3, etc.), if they are supported. Using\n `file://` as an example, we have:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"qwert\")\n ... f.write(\"asdf\")\n >>> tf.io.gfile.GFile(\"file:///tmp/x\").read()\n 'qwertasdf'\n\n You can also read all lines of a file directly:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> tf.io.gfile.GFile(\"/tmp/x\").readlines()\n ['asdf\\n', 'qwer\\n']\n\n You can iterate over the lines:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> for line in tf.io.gfile.GFile(\"/tmp/x\"):\n ... print(line[:-1]) # removes the end of line character\n asdf\n qwer\n\n Random access read is possible if the underlying filesystem supports it:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdfqwer\")\n >>> f = tf.io.gfile.GFile(\"/tmp/x\")\n >>> f.read(3)\n 'asd'\n >>> f.seek(4)\n >>> f.tell()\n 4\n >>> f.read(3)\n 'qwe'\n >>> f.tell()\n 7\n >>> f.close()\n ", "desc": "File I/O wrappers without thread locking.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.glob", "docs": "Returns a list of files that match the given pattern(s).\n\n The patterns are defined as strings. Supported patterns are defined\n here. Note that the pattern can be a Python iteratable of string patterns.\n\n The format definition of the pattern is:\n\n **pattern**: `{ term }`\n\n **term**:\n * `'*'`: matches any sequence of non-'/' characters\n * `'?'`: matches a single non-'/' character\n * `'[' [ '^' ] { match-list } ']'`: matches any single\n character (not) on the list\n * `c`: matches character `c` where `c != '*', '?', '\\\\', '['`\n * `'\\\\' c`: matches character `c`\n\n **character range**:\n * `c`: matches character `c` while `c != '\\\\', '-', ']'`\n * `'\\\\' c`: matches character `c`\n * `lo '-' hi`: matches character `c` for `lo <= c <= hi`\n\n Examples:\n\n >>> tf.io.gfile.glob(\"*.py\")\n ... # For example, ['__init__.py']\n\n >>> tf.io.gfile.glob(\"__init__.??\")\n ... # As above\n\n >>> files = {\"*.py\"}\n >>> the_iterator = iter(files)\n >>> tf.io.gfile.glob(the_iterator)\n ... # As above\n\n See the C++ function `GetMatchingPaths` in\n [`core/platform/file_system.h`]\n (../../../core/platform/file_system.h)\n for implementation details.\n\n Args:\n pattern: string or iterable of strings. The glob pattern(s).\n\n Returns:\n A list of strings containing filenames that match the given pattern(s).\n\n Raises:\n errors.OpError: If there are filesystem / directory listing errors.\n errors.NotFoundError: If pattern to be matched is an invalid directory.\n ", "desc": "Returns a list of files that match the given pattern(s).", "type": "API"}, {"name": "tf.compat.v1.io.gfile.isdir", "docs": "Returns whether the path is a directory or not.\n\n Args:\n path: string, path to a potential directory\n\n Returns:\n True, if the path is a directory; False otherwise\n ", "desc": "Returns whether the path is a directory or not.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.listdir", "docs": "Returns a list of entries contained within a directory.\n\n The list is in arbitrary order. It does not contain the special entries \".\"\n and \"..\".\n\n Args:\n path: string, path to a directory\n\n Returns:\n [filename1, filename2, ... filenameN] as strings\n\n Raises:\n errors.NotFoundError if directory doesn't exist\n ", "desc": "Returns a list of entries contained within a directory.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.makedirs", "docs": "Creates a directory and all parent/intermediate directories.\n\n It succeeds if path already exists and is writable.\n\n Args:\n path: string, name of the directory to be created\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory and all parent/intermediate directories.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.mkdir", "docs": "Creates a directory with the name given by `path`.\n\n Args:\n path: string, name of the directory to be created\n\n Notes: The parent directories need to exist. Use `tf.io.gfile.makedirs`\n instead if there is the possibility that the parent dirs don't exist.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory with the name given by `path`.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.remove", "docs": "Deletes the path located at 'path'.\n\n Args:\n path: string, a path\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API. E.g.,\n `NotFoundError` if the path does not exist.\n ", "desc": "Deletes the path located at 'path'.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.rename", "docs": "Rename or move a file / directory.\n\n Args:\n src: string, pathname for a file\n dst: string, pathname to which the file needs to be moved\n overwrite: boolean, if false it's an error for `dst` to be occupied by an\n existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Rename or move a file / directory.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.rmtree", "docs": "Deletes everything under path recursively.\n\n Args:\n path: string, a path\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Deletes everything under path recursively.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.stat", "docs": "Returns file statistics for a given path.\n\n Args:\n path: string, path to a file\n\n Returns:\n FileStatistics struct that contains information about the path\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Returns file statistics for a given path.", "type": "API"}, {"name": "tf.compat.v1.io.gfile.walk", "docs": "Recursive directory tree generator for directories.\n\n Args:\n top: string, a Directory name\n topdown: bool, Traverse pre order if True, post order if False.\n onerror: optional handler for errors. Should be a function, it will be\n called with the error as argument. Rethrowing the error aborts the walk.\n Errors that happen while listing directories are ignored.\n\n Yields:\n Each yield is a 3-tuple: the pathname of a directory, followed by lists of\n all its subdirectories and leaf files. That is, each yield looks like:\n `(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`.\n Each item is a string.\n ", "desc": "Recursive directory tree generator for directories.", "type": "API"}, {"name": "tf.compat.v1.io.is_jpeg", "docs": "Convenience function to check if the 'contents' encodes a JPEG image.\n\n Args:\n contents: 0-D `string`. The encoded image bytes.\n name: A name for the operation (optional)\n\n Returns:\n A scalar boolean tensor indicating if 'contents' may be a JPEG image.\n is_jpeg is susceptible to false positives.\n ", "desc": "Convenience function to check if the 'contents' encodes a JPEG image.", "type": "API"}, {"name": "tf.compat.v1.io.match_filenames_once", "docs": "Save the list of files matching pattern, so it is only computed once.\n\n NOTE: The order of the files returned is deterministic.\n\n Args:\n pattern: A file pattern (glob), or 1D tensor of file patterns.\n name: A name for the operations (optional).\n\n Returns:\n A variable that is initialized to the list of files matching the pattern(s).\n ", "desc": "Save the list of files matching pattern, so it is only computed once.", "type": "API"}, {"name": "tf.compat.v1.io.matching_files", "docs": "Returns the set of files matching one or more glob patterns.\n\n Note that this routine only supports wildcard characters in the\n basename portion of the pattern, not in the directory portion.\n Note also that the order of filenames returned is deterministic.\n\n Args:\n pattern: A `Tensor` of type `string`.\n Shell wildcard pattern(s). Scalar or vector of type string.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the set of files matching one or more glob patterns.", "type": "API"}, {"name": "tf.compat.v1.io.PaddingFIFOQueue", "docs": "A FIFOQueue that supports batching variable-sized tensors by padding.\n\n A `PaddingFIFOQueue` may contain components with dynamic shape, while also\n supporting `dequeue_many`. See the constructor for more details.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A FIFOQueue that supports batching variable-sized tensors by padding.", "type": "API"}, {"name": "tf.compat.v1.io.parse_example", "docs": "Parses `Example` protos into a `dict` of tensors.\n\n Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n protos given in `serialized`. We refer to `serialized` as a batch with\n `batch_size` many entries of individual `Example` protos.\n\n `example_names` may contain descriptive names for the corresponding serialized\n protos. These may be useful for debugging purposes, but they have no effect on\n the output. If not `None`, `example_names` must be the same length as\n `serialized`.\n\n This op parses serialized examples into a dictionary mapping keys to `Tensor`\n `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to\n `VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature`\n objects. Each `VarLenFeature` and `SparseFeature` is mapped to a\n `SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each\n `RaggedFeature` is mapped to a `RaggedTensor`.\n\n Each `VarLenFeature` maps to a `SparseTensor` of the specified type\n representing a ragged matrix. Its indices are `[batch, index]` where `batch`\n identifies the example in `serialized`, and `index` is the value's index in\n the list of values associated with that feature and example.\n\n Each `SparseFeature` maps to a `SparseTensor` of the specified type\n representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`.\n Its `values` come from the feature in the examples with key `value_key`.\n A `values[i]` comes from a position `k` in the feature of an example at batch\n entry `batch`. This positional information is recorded in `indices[i]` as\n `[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of\n the feature in the example at with key `SparseFeature.index_key[j]`.\n In other words, we split the indices (except the first index indicating the\n batch entry) of a `SparseTensor` by dimension into different features of the\n `Example`. Due to its complexity a `VarLenFeature` should be preferred over a\n `SparseFeature` whenever possible.\n\n Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or\n `tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.\n\n `FixedLenFeature` entries with a `default_value` are optional. With no default\n value, we will fail if that `Feature` is missing from any example in\n `serialized`.\n\n Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type\n (or `tf.float32` if not specified) and shape\n `(serialized.size(), None) + df.shape`.\n All examples in `serialized` will be padded with `default_value` along the\n second dimension.\n\n Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It\n is formed by stacking the `RaggedTensor` for each example, where the\n `RaggedTensor` for each individual example is constructed using the tensors\n specified by `RaggedTensor.values_key` and `RaggedTensor.partition`. See\n the `tf.io.RaggedFeature` documentation for details and examples.\n\n Examples:\n\n For example, if one expects a `tf.float32` `VarLenFeature` `ft` and three\n serialized `Example`s are provided:\n\n ```\n serialized = [\n features\n { feature { key: \"ft\" value { float_list { value: [1.0, 2.0] } } } },\n features\n { feature []},\n features\n { feature { key: \"ft\" value { float_list { value: [3.0] } } }\n ]\n ```\n\n then the output will look like:\n\n ```python\n {\"ft\": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],\n values=[1.0, 2.0, 3.0],\n dense_shape=(3, 2)) }\n ```\n\n If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and\n `shape=[]` is used then the output will look like:\n\n ```python\n {\"ft\": [[1.0, 2.0], [3.0, -1.0]]}\n ```\n\n Given two `Example` input protos in `serialized`:\n\n ```\n [\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"knit\", \"big\" ] } } }\n feature { key: \"gps\" value { float_list { value: [] } } }\n },\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"emmy\" ] } } }\n feature { key: \"dank\" value { int64_list { value: [ 42 ] } } }\n feature { key: \"gps\" value { } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"kw\": VarLenFeature(tf.string),\n \"dank\": VarLenFeature(tf.int64),\n \"gps\": VarLenFeature(tf.float32),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"kw\": SparseTensor(\n indices=[[0, 0], [0, 1], [1, 0]],\n values=[\"knit\", \"big\", \"emmy\"]\n dense_shape=[2, 2]),\n \"dank\": SparseTensor(\n indices=[[1, 0]],\n values=[42],\n dense_shape=[2, 1]),\n \"gps\": SparseTensor(\n indices=[],\n values=[],\n dense_shape=[2, 0]),\n }\n ```\n\n For dense results in two serialized `Example`s:\n\n ```\n [\n features {\n feature { key: \"age\" value { int64_list { value: [ 0 ] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n },\n features {\n feature { key: \"age\" value { int64_list { value: [] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n }\n ]\n ```\n\n We can use arguments:\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"age\": FixedLenFeature([], dtype=tf.int64, default_value=-1),\n \"gender\": FixedLenFeature([], dtype=tf.string),\n }\n ```\n\n And the expected output is:\n\n ```python\n {\n \"age\": [[0], [-1]],\n \"gender\": [[\"f\"], [\"f\"]],\n }\n ```\n\n An alternative to `VarLenFeature` to obtain a `SparseTensor` is\n `SparseFeature`. For example, given two `Example` input protos in\n `serialized`:\n\n ```\n [\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 3, 20 ] } } }\n },\n features {\n feature { key: \"val\" value { float_list { value: [ 0.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 42 ] } } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"sparse\": SparseFeature(\n index_key=\"ix\", value_key=\"val\", dtype=tf.float32, size=100),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"sparse\": SparseTensor(\n indices=[[0, 3], [0, 20], [1, 42]],\n values=[0.5, -1.0, 0.0]\n dense_shape=[2, 100]),\n }\n ```\n\n See the `tf.io.RaggedFeature` documentation for examples showing how\n `RaggedFeature` can be used to obtain `RaggedTensor`s.\n\n Args:\n serialized: A vector (1-D Tensor) of strings, a batch of binary\n serialized `Example` protos.\n features: A `dict` mapping feature keys to `FixedLenFeature`,\n `VarLenFeature`, `SparseFeature`, and `RaggedFeature` values.\n example_names: A vector (1-D Tensor) of strings (optional), the names of\n the serialized protos in the batch.\n name: A name for this operation (optional).\n\n Returns:\n A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and\n `RaggedTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses `Example` protos into a `dict` of tensors.", "type": "API"}, {"name": "tf.compat.v1.io.parse_sequence_example", "docs": "Parses a batch of `SequenceExample` protos.\n\n Parses a vector of serialized\n [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n protos given in `serialized`.\n\n This op parses serialized sequence examples into a tuple of dictionaries,\n each mapping keys to `Tensor` and `SparseTensor` objects.\n The first dictionary contains mappings for keys appearing in\n `context_features`, and the second dictionary contains mappings for keys\n appearing in `sequence_features`.\n\n At least one of `context_features` and `sequence_features` must be provided\n and non-empty.\n\n The `context_features` keys are associated with a `SequenceExample` as a\n whole, independent of time / frame. In contrast, the `sequence_features` keys\n provide a way to access variable-length data within the `FeatureList` section\n of the `SequenceExample` proto. While the shapes of `context_features` values\n are fixed with respect to frame, the frame dimension (the first dimension)\n of `sequence_features` values may vary between `SequenceExample` protos,\n and even between `feature_list` keys within the same `SequenceExample`.\n\n `context_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and\n default value.\n\n `sequence_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and\n each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified\n type. The shape will be `(B,T,) + df.dense_shape` for\n `FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the\n length of the associated `FeatureList` in the `SequenceExample`. For instance,\n `FixedLenSequenceFeature([])` yields a scalar 2-D `Tensor` of static shape\n `[None, None]` and dynamic shape `[B, T]`, while\n `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D matrix `Tensor`\n of static shape `[None, None, k]` and dynamic shape `[B, T, k]`.\n\n Like the input, the resulting output tensors have a batch dimension. This\n means that the original per-example shapes of `VarLenFeature`s and\n `FixedLenSequenceFeature`s can be lost. To handle that situation, this op also\n provides dicts of shape tensors as part of the output. There is one dict for\n the context features, and one for the feature_list features. Context features\n of type `FixedLenFeature`s will not be present, since their shapes are already\n known by the caller. In situations where the input `FixedLenSequenceFeature`s\n are of different sequence lengths across examples, the shorter examples will\n be padded with default datatype values: 0 for numeric types, and the empty\n string for string types.\n\n Each `SparseTensor` corresponding to `sequence_features` represents a ragged\n vector. Its indices are `[time, index]`, where `time` is the `FeatureList`\n entry and `index` is the value's index in the list of values associated with\n that time.\n\n `FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`\n entries with `allow_missing=True` are optional; otherwise, we will fail if\n that `Feature` or `FeatureList` is missing from any example in `serialized`.\n\n `example_name` may contain a descriptive name for the corresponding serialized\n proto. This may be useful for debugging purposes, but it has no effect on the\n output. If not `None`, `example_name` must be a scalar.\n\n Args:\n serialized: A vector (1-D Tensor) of type string containing binary\n serialized `SequenceExample` protos.\n context_features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` or `RaggedFeature` values. These features are associated\n with a `SequenceExample` as a whole.\n sequence_features: A `dict` mapping feature keys to\n `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values.\n These features are associated with data within the `FeatureList` section\n of the `SequenceExample` proto.\n example_names: A vector (1-D Tensor) of strings (optional), the name of the\n serialized protos.\n name: A name for this operation (optional).\n\n Returns:\n A tuple of three `dict`s, each mapping keys to `Tensor`s,\n `SparseTensor`s, and `RaggedTensor`. The first dict contains the context\n key/values, the second dict contains the feature_list key/values, and the\n final dict contains the lengths of any dense feature_list features.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a batch of `SequenceExample` protos.", "type": "API"}, {"name": "tf.compat.v1.io.parse_single_example", "docs": "Parses a single `Example` proto.\n\n Similar to `parse_example`, except:\n\n For dense tensors, the returned `Tensor` is identical to the output of\n `parse_example`, except there is no batch dimension, the output shape is the\n same as the shape given in `dense_shape`.\n\n For `SparseTensor`s, the first (batch) column of the indices matrix is removed\n (the indices matrix is a column vector), the values vector is unchanged, and\n the first (`batch_size`) entry of the shape vector is removed (it is now a\n single element vector).\n\n One might see performance advantages by batching `Example` protos with\n `parse_example` instead of using this function directly.\n\n Args:\n serialized: A scalar string Tensor, a single serialized Example.\n features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` values.\n name: A name for this operation (optional).\n example_names: (Optional) A scalar string Tensor, the associated name.\n\n Returns:\n A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `Example` proto.", "type": "API"}, {"name": "tf.compat.v1.io.parse_single_sequence_example", "docs": "Parses a single `SequenceExample` proto.\n\n Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n proto given in `serialized`.\n\n This op parses a serialized sequence example into a tuple of dictionaries,\n each mapping keys to `Tensor` and `SparseTensor` objects.\n The first dictionary contains mappings for keys appearing in\n `context_features`, and the second dictionary contains mappings for keys\n appearing in `sequence_features`.\n\n At least one of `context_features` and `sequence_features` must be provided\n and non-empty.\n\n The `context_features` keys are associated with a `SequenceExample` as a\n whole, independent of time / frame. In contrast, the `sequence_features` keys\n provide a way to access variable-length data within the `FeatureList` section\n of the `SequenceExample` proto. While the shapes of `context_features` values\n are fixed with respect to frame, the frame dimension (the first dimension)\n of `sequence_features` values may vary between `SequenceExample` protos,\n and even between `feature_list` keys within the same `SequenceExample`.\n\n `context_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`;\n each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature`\n is mapped to a `Tensor`, of the specified type, shape, and default value.\n\n `sequence_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type.\n The shape will be `(T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`,\n where `T` is the length of the associated `FeatureList` in the\n `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar\n 1-D `Tensor` of static shape `[None]` and dynamic shape `[T]`, while\n `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor`\n of static shape `[None, k]` and dynamic shape `[T, k]`.\n\n Each `SparseTensor` corresponding to `sequence_features` represents a ragged\n vector. Its indices are `[time, index]`, where `time` is the `FeatureList`\n entry and `index` is the value's index in the list of values associated with\n that time.\n\n `FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`\n entries with `allow_missing=True` are optional; otherwise, we will fail if\n that `Feature` or `FeatureList` is missing from any example in `serialized`.\n\n `example_name` may contain a descriptive name for the corresponding serialized\n proto. This may be useful for debugging purposes, but it has no effect on the\n output. If not `None`, `example_name` must be a scalar.\n\n Note that the batch version of this function, `tf.parse_sequence_example`,\n is written for better memory efficiency and will be faster on large\n `SequenceExample`s.\n\n Args:\n serialized: A scalar (0-D Tensor) of type string, a single binary\n serialized `SequenceExample` proto.\n context_features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` or `RaggedFeature` values. These features are associated\n with a `SequenceExample` as a whole.\n sequence_features: A `dict` mapping feature keys to\n `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values.\n These features are associated with data within the `FeatureList` section\n of the `SequenceExample` proto.\n example_name: A scalar (0-D Tensor) of strings (optional), the name of\n the serialized proto.\n name: A name for this operation (optional).\n\n Returns:\n A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s\n and `RaggedTensor`s.\n\n * The first dict contains the context key/values.\n * The second dict contains the feature_list key/values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `SequenceExample` proto.", "type": "API"}, {"name": "tf.compat.v1.io.parse_tensor", "docs": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar string containing a serialized TensorProto proto.\n out_type: A `tf.DType`.\n The type of the serialized tensor. The provided type must match the\n type of the serialized tensor and no implicit conversion will take place.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.", "type": "API"}, {"name": "tf.compat.v1.io.PriorityQueue", "docs": "A queue implementation that dequeues elements in prioritized order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in prioritized order.", "type": "API"}, {"name": "tf.compat.v1.io.QueueBase", "docs": "Base class for queue implementations.\n\n A queue is a TensorFlow data structure that stores tensors across\n multiple steps, and exposes operations that enqueue and dequeue\n tensors.\n\n Each queue element is a tuple of one or more tensors, where each\n tuple component has a static dtype, and may have a static shape. The\n queue implementations support versions of enqueue and dequeue that\n handle single elements, versions that support enqueuing and\n dequeuing a batch of elements at once.\n\n See `tf.queue.FIFOQueue` and\n `tf.queue.RandomShuffleQueue` for concrete\n implementations of this class, and instructions on how to create\n them.\n ", "desc": "Base class for queue implementations.", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature", "docs": "Configuration for passing a RaggedTensor input feature.\n\n `value_key` specifies the feature key for a variable-length list of values;\n and `partitions` specifies zero or more feature keys for partitioning those\n values into higher dimensions. Each element of `partitions` must be one of\n the following:\n\n * `tf.io.RaggedFeature.RowSplits(key: string)`\n * `tf.io.RaggedFeature.RowLengths(key: string)`\n * `tf.io.RaggedFeature.RowStarts(key: string)`\n * `tf.io.RaggedFeature.RowLimits(key: string)`\n * `tf.io.RaggedFeature.ValueRowIds(key: string)`\n * `tf.io.RaggedFeature.UniformRowLength(length: int)`.\n\n Where `key` is a feature key whose values are used to partition the values.\n Partitions are listed from outermost to innermost.\n\n * If `len(partitions) == 0` (the default), then:\n\n * A feature from a single `tf.Example` is parsed into a 1D `tf.Tensor`.\n * A feature from a batch of `tf.Example`s is parsed into a 2D\n `tf.RaggedTensor`, where the outer dimension is the batch dimension, and\n the inner (ragged) dimension is the feature length in each example.\n\n * If `len(partitions) == 1`, then:\n\n * A feature from a single `tf.Example` is parsed into a 2D\n `tf.RaggedTensor`, where the values taken from the `value_key` are\n separated into rows using the partition key.\n * A feature from a batch of `tf.Example`s is parsed into a 3D\n `tf.RaggedTensor`, where the outer dimension is the batch dimension,\n the two inner dimensions are formed by separating the `value_key` values\n from each example into rows using that example's partition key.\n\n * If `len(partitions) > 1`, then:\n\n * A feature from a single `tf.Example` is parsed into a `tf.RaggedTensor`\n whose rank is `len(partitions)+1`, and whose ragged_rank is\n `len(partitions)`.\n\n * A feature from a batch of `tf.Example`s is parsed into a `tf.RaggedTensor`\n whose rank is `len(partitions)+2` and whose ragged_rank is\n `len(partitions)+1`, where the outer dimension is the batch dimension.\n\n There is one exception: if the final (i.e., innermost) element(s) of\n `partitions` are `UniformRowLength`s, then the values are simply reshaped (as\n a higher-dimensional `tf.Tensor`), rather than being wrapped in a\n `tf.RaggedTensor`.\n\n #### Examples\n\n >>> import google.protobuf.text_format as pbtext\n >>> example_batch = [\n ... pbtext.Merge(r'''\n ... features {\n ... feature {key: \"v\" value {int64_list {value: [3, 1, 4, 1, 5, 9]}}}\n ... feature {key: \"s1\" value {int64_list {value: [0, 2, 3, 3, 6]}}}\n ... feature {key: \"s2\" value {int64_list {value: [0, 2, 3, 4]}}}\n ... }''', tf.train.Example()).SerializeToString(),\n ... pbtext.Merge(r'''\n ... features {\n ... feature {key: \"v\" value {int64_list {value: [2, 7, 1, 8, 2, 8, 1]}}}\n ... feature {key: \"s1\" value {int64_list {value: [0, 3, 4, 5, 7]}}}\n ... feature {key: \"s2\" value {int64_list {value: [0, 1, 1, 4]}}}\n ... }''', tf.train.Example()).SerializeToString()]\n\n >>> features = {\n ... # Zero partitions: returns 1D tf.Tensor for each Example.\n ... 'f1': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64),\n ... # One partition: returns 2D tf.RaggedTensor for each Example.\n ... 'f2': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64, partitions=[\n ... tf.io.RaggedFeature.RowSplits(\"s1\")]),\n ... # Two partitions: returns 3D tf.RaggedTensor for each Example.\n ... 'f3': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64, partitions=[\n ... tf.io.RaggedFeature.RowSplits(\"s2\"),\n ... tf.io.RaggedFeature.RowSplits(\"s1\")])\n ... }\n\n >>> feature_dict = tf.io.parse_single_example(example_batch[0], features)\n >>> for (name, val) in sorted(feature_dict.items()):\n ... print('%s: %s' % (name, val))\n f1: tf.Tensor([3 1 4 1 5 9], shape=(6,), dtype=int64)\n f2: \n f3: \n\n >>> feature_dict = tf.io.parse_example(example_batch, features)\n >>> for (name, val) in sorted(feature_dict.items()):\n ... print('%s: %s' % (name, val))\n f1: \n f2: \n f3: \n\n Fields:\n dtype: Data type of the `RaggedTensor`. Must be one of:\n `tf.dtypes.int64`, `tf.dtypes.float32`, `tf.dtypes.string`.\n value_key: (Optional.) Key for a `Feature` in the input `Example`, whose\n parsed `Tensor` will be the resulting `RaggedTensor.flat_values`. If\n not specified, then it defaults to the key for this `RaggedFeature`.\n partitions: (Optional.) A list of objects specifying the row-partitioning\n tensors (from outermost to innermost). Each entry in this list must be\n one of:\n * `tf.io.RaggedFeature.RowSplits(key: string)`\n * `tf.io.RaggedFeature.RowLengths(key: string)`\n * `tf.io.RaggedFeature.RowStarts(key: string)`\n * `tf.io.RaggedFeature.RowLimits(key: string)`\n * `tf.io.RaggedFeature.ValueRowIds(key: string)`\n * `tf.io.RaggedFeature.UniformRowLength(length: int)`.\n Where `key` is a key for a `Feature` in the input `Example`, whose parsed\n `Tensor` will be the resulting row-partitioning tensor.\n row_splits_dtype: (Optional.) Data type for the row-partitioning tensor(s).\n One of `int32` or `int64`. Defaults to `int32`.\n validate: (Optional.) Boolean indicating whether or not to validate that\n the input values form a valid RaggedTensor. Defaults to `False`.\n ", "desc": "Configuration for passing a RaggedTensor input feature.", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.RowLengths", "docs": "RowLengths(key,)", "desc": "RowLengths(key,)", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.RowLimits", "docs": "RowLimits(key,)", "desc": "RowLimits(key,)", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.RowSplits", "docs": "RowSplits(key,)", "desc": "RowSplits(key,)", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.RowStarts", "docs": "RowStarts(key,)", "desc": "RowStarts(key,)", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.UniformRowLength", "docs": "UniformRowLength(length,)", "desc": "UniformRowLength(length,)", "type": "API"}, {"name": "tf.compat.v1.io.RaggedFeature.ValueRowIds", "docs": "ValueRowIds(key,)", "desc": "ValueRowIds(key,)", "type": "API"}, {"name": "tf.compat.v1.io.RandomShuffleQueue", "docs": "A queue implementation that dequeues elements in a random order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in a random order.", "type": "API"}, {"name": "tf.compat.v1.io.read_file", "docs": "Reads the contents of file.\n\n This operation returns a tensor with the entire contents of the input\n filename. It does not do any parsing, it just returns the contents as\n they are. Usually, this is the first step in the input pipeline.\n\n Example:\n\n >>> with open(\"/tmp/file.txt\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.read_file(\"/tmp/file.txt\")\n \n\n Example of using the op in a function to read an image, decode it and reshape\n the tensor containing the pixel data:\n\n >>> @tf.function\n ... def load_image(filename):\n ... raw = tf.io.read_file(filename)\n ... image = tf.image.decode_png(raw, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... image.set_shape([28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n Args:\n filename: string. filename to read from.\n name: string. Optional name for the op.\n\n Returns:\n A tensor of dtype \"string\", with the file contents.\n ", "desc": "Reads the contents of file.", "type": "API"}, {"name": "tf.compat.v1.io.serialize_many_sparse", "docs": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.\n\n The `SparseTensor` must have rank `R` greater than 1, and the first dimension\n is treated as the minibatch dimension. Elements of the `SparseTensor`\n must be sorted in increasing order of this first dimension. The serialized\n `SparseTensor` objects going into each row of the output `Tensor` will have\n rank `R-1`.\n\n The minibatch size `N` is extracted from `sparse_shape[0]`.\n\n Args:\n sp_input: The input rank `R` `SparseTensor`.\n name: A name prefix for the returned tensors (optional).\n out_type: The `dtype` to use for serialization.\n\n Returns:\n A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column\n represents serialized `SparseTensor`'s indices, values, and shape\n (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.io.serialize_sparse", "docs": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.\n\n Args:\n sp_input: The input `SparseTensor`.\n name: A name prefix for the returned tensors (optional).\n out_type: The `dtype` to use for serialization.\n\n Returns:\n A 3-vector (1-D `Tensor`), with each column representing the serialized\n `SparseTensor`'s indices, values, and shape (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.", "type": "API"}, {"name": "tf.compat.v1.io.serialize_tensor", "docs": "Transforms a Tensor into a serialized TensorProto proto.\n\n This operation transforms data in a `tf.Tensor` into a `tf.Tensor` of type\n `tf.string` containing the data in a binary string format. This operation can\n transform scalar data and linear arrays, but it is most useful in converting\n multidimensional arrays into a format accepted by binary storage formats such\n as a `TFRecord` or `tf.train.Example`.\n\n See also:\n - `tf.io.parse_tensor`: inverse operation of `tf.io.serialize_tensor` that\n transforms a scalar string containing a serialized Tensor into a Tensor of a\n specified type.\n - `tf.ensure_shape`: `parse_tensor` cannot statically determine the shape of\n the parsed tensor. Use `tf.ensure_shape` to set the static shape when running\n under a `tf.function`\n - `.SerializeToString`, serializes a proto to a binary-string\n\n Example of serializing scalar data:\n\n >>> t = tf.constant(1)\n >>> tf.io.serialize_tensor(t)\n \n\n Example of storing non-scalar data into a `tf.train.Example`:\n\n >>> t1 = [[1, 2]]\n >>> t2 = [[7, 8]]\n >>> nonscalar = tf.concat([t1, t2], 0)\n >>> nonscalar\n \n\n Serialize the data using `tf.io.serialize_tensor`.\n\n >>> serialized_nonscalar = tf.io.serialize_tensor(nonscalar)\n >>> serialized_nonscalar\n \n\n Store the data in a `tf.train.Feature`.\n\n >>> feature_of_bytes = tf.train.Feature(\n ... bytes_list=tf.train.BytesList(value=[serialized_nonscalar.numpy()]))\n >>> feature_of_bytes\n bytes_list {\n value: \"\\010...\\000\"\n }\n\n Put the `tf.train.Feature` message into a `tf.train.Example`.\n\n >>> features_for_example = {\n ... 'feature0': feature_of_bytes\n ... }\n >>> example_proto = tf.train.Example(\n ... features=tf.train.Features(feature=features_for_example))\n >>> example_proto\n features {\n feature {\n key: \"feature0\"\n value {\n bytes_list {\n value: \"\\010...\\000\"\n }\n }\n }\n }\n\n Args:\n tensor: A `tf.Tensor`.\n name: string. Optional name for the op.\n\n Returns:\n A Tensor of dtype string.\n ", "desc": "Transforms a Tensor into a serialized TensorProto proto.", "type": "API"}, {"name": "tf.compat.v1.io.SparseFeature", "docs": "Configuration for parsing a sparse input feature from an `Example`.\n\n Note, preferably use `VarLenFeature` (possibly in combination with a\n `SequenceExample`) in order to parse out `SparseTensor`s instead of\n `SparseFeature` due to its simplicity.\n\n Closely mimicking the `SparseTensor` that will be obtained by parsing an\n `Example` with a `SparseFeature` config, a `SparseFeature` contains a\n\n * `value_key`: The name of key for a `Feature` in the `Example` whose parsed\n `Tensor` will be the resulting `SparseTensor.values`.\n\n * `index_key`: A list of names - one for each dimension in the resulting\n `SparseTensor` whose `indices[i][dim]` indicating the position of\n the `i`-th value in the `dim` dimension will be equal to the `i`-th value in\n the Feature with key named `index_key[dim]` in the `Example`.\n\n * `size`: A list of ints for the resulting `SparseTensor.dense_shape`.\n\n For example, we can represent the following 2D `SparseTensor`\n\n ```python\n SparseTensor(indices=[[3, 1], [20, 0]],\n values=[0.5, -1.0]\n dense_shape=[100, 3])\n ```\n\n with an `Example` input proto\n\n ```python\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix0\" value { int64_list { value: [ 3, 20 ] } } }\n feature { key: \"ix1\" value { int64_list { value: [ 1, 0 ] } } }\n }\n ```\n\n and `SparseFeature` config with 2 `index_key`s\n\n ```python\n SparseFeature(index_key=[\"ix0\", \"ix1\"],\n value_key=\"val\",\n dtype=tf.float32,\n size=[100, 3])\n ```\n\n Fields:\n index_key: A single string name or a list of string names of index features.\n For each key the underlying feature's type must be `int64` and its length\n must always match that of the `value_key` feature.\n To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1\n a list of length `rank` should be used.\n value_key: Name of value feature. The underlying feature's type must\n be `dtype` and its length must always match that of all the `index_key`s'\n features.\n dtype: Data type of the `value_key` feature.\n size: A Python int or list thereof specifying the dense shape. Should be a\n list if and only if `index_key` is a list. In that case the list must be\n equal to the length of `index_key`. Each for each entry `i` all values in\n the `index_key`[i] feature must be in `[0, size[i])`.\n already_sorted: A Python boolean to specify whether the values in\n `value_key` are already sorted by their index position. If so skip\n sorting. False by default (optional).\n ", "desc": "Configuration for parsing a sparse input feature from an `Example`.", "type": "API"}, {"name": "tf.compat.v1.io.tf_record_iterator", "docs": "An iterator that read the records from a TFRecords file. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse eager execution and: \n`tf.data.TFRecordDataset(path)`\n\nArgs:\n path: The path to the TFRecords file.\n options: (optional) A TFRecordOptions object.\n\nReturns:\n An iterator of serialized TFRecords.\n\nRaises:\n IOError: If `path` cannot be opened for reading.", "desc": "An iterator that read the records from a TFRecords file. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.io.TFRecordCompressionType", "docs": "The type of compression for the record.", "desc": "The type of compression for the record.", "type": "API"}, {"name": "tf.compat.v1.io.TFRecordOptions", "docs": "Options used for manipulating TFRecord files.", "desc": "Options used for manipulating TFRecord files.", "type": "API"}, {"name": "tf.compat.v1.io.TFRecordWriter", "docs": "A class to write records to a TFRecords file.\n\n [TFRecords tutorial](https://www.tensorflow.org/tutorials/load_data/tfrecord)\n\n TFRecords is a binary format which is optimized for high throughput data\n retrieval, generally in conjunction with `tf.data`. `TFRecordWriter` is used\n to write serialized examples to a file for later consumption. The key steps\n are:\n\n Ahead of time:\n\n - [Convert data into a serialized format](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfexample)\n - [Write the serialized data to one or more files](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfrecord_files_in_python)\n\n During training or evaluation:\n\n - [Read serialized examples into memory](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n - [Parse (deserialize) examples](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n\n A minimal example is given below:\n\n >>> import tempfile\n >>> example_path = os.path.join(tempfile.gettempdir(), \"example.tfrecords\")\n >>> np.random.seed(0)\n\n >>> # Write the records to a file.\n ... with tf.io.TFRecordWriter(example_path) as file_writer:\n ... for _ in range(4):\n ... x, y = np.random.random(), np.random.random()\n ...\n ... record_bytes = tf.train.Example(features=tf.train.Features(feature={\n ... \"x\": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),\n ... \"y\": tf.train.Feature(float_list=tf.train.FloatList(value=[y])),\n ... })).SerializeToString()\n ... file_writer.write(record_bytes)\n\n >>> # Read the data back out.\n >>> def decode_fn(record_bytes):\n ... return tf.io.parse_single_example(\n ... # Data\n ... record_bytes,\n ...\n ... # Schema\n ... {\"x\": tf.io.FixedLenFeature([], dtype=tf.float32),\n ... \"y\": tf.io.FixedLenFeature([], dtype=tf.float32)}\n ... )\n\n >>> for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn):\n ... print(\"x = {x:.4f}, y = {y:.4f}\".format(**batch))\n x = 0.5488, y = 0.7152\n x = 0.6028, y = 0.5449\n x = 0.4237, y = 0.6459\n x = 0.4376, y = 0.8918\n\n This class implements `__enter__` and `__exit__`, and can be used\n in `with` blocks like a normal file. (See the usage example above.)\n ", "desc": "A class to write records to a TFRecords file.", "type": "API"}, {"name": "tf.compat.v1.io.VarLenFeature", "docs": "Configuration for parsing a variable-length input feature.\n\n Fields:\n dtype: Data type of input.\n ", "desc": "Configuration for parsing a variable-length input feature.", "type": "API"}, {"name": "tf.compat.v1.io.write_file", "docs": "Writes `contents` to the file at input `filename`.\n\n Creates the file and recursively creates directory if it does not exist.\n\n Args:\n filename: A `Tensor` of type `string`.\n scalar. The name of the file to which we write the contents.\n contents: A `Tensor` of type `string`.\n scalar. The content to be written to the output file.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes `contents` to the file at input `filename`.", "type": "API"}, {"name": "tf.compat.v1.io.write_graph", "docs": "Writes a graph proto to a file.\n\n The graph is written as a text proto unless `as_text` is `False`.\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')\n ```\n\n or\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')\n ```\n\n Args:\n graph_or_graph_def: A `Graph` or a `GraphDef` protocol buffer.\n logdir: Directory where to write the graph. This can refer to remote\n filesystems, such as Google Cloud Storage (GCS).\n name: Filename for the graph.\n as_text: If `True`, writes the graph as an ASCII proto.\n\n Returns:\n The path of the output proto file.\n ", "desc": "Writes a graph proto to a file.", "type": "API"}, {"name": "tf.compat.v1.is_finite", "docs": "Returns which elements of x are finite.\n\n @compatibility(numpy)\n Equivalent to np.isfinite\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])\n tf.math.is_finite(x) ==> [True, True, True, False, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are finite.", "type": "API"}, {"name": "tf.compat.v1.is_inf", "docs": "Returns which elements of x are Inf.\n\n @compatibility(numpy)\n Equivalent to np.isinf\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.inf, 6.8, np.inf])\n tf.math.is_inf(x) ==> [False, True, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are Inf.", "type": "API"}, {"name": "tf.compat.v1.is_nan", "docs": "Returns which elements of x are NaN.\n\n @compatibility(numpy)\n Equivalent to np.isnan\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])\n tf.math.is_nan(x) ==> [False, True, False, True, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are NaN.", "type": "API"}, {"name": "tf.compat.v1.is_non_decreasing", "docs": "Returns `True` if `x` is non-decreasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.\n If `x` has less than two elements, it is trivially non-decreasing.\n\n See also: `is_strictly_increasing`\n\n >>> x1 = tf.constant([1.0, 1.0, 3.0])\n >>> tf.math.is_non_decreasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_non_decreasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional). Defaults to \"is_non_decreasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is non-decreasing.", "type": "API"}, {"name": "tf.compat.v1.is_numeric_tensor", "docs": "Returns `True` if the elements of `tensor` are numbers.\n\n Specifically, returns `True` if the dtype of `tensor` is one of the following:\n\n * `tf.float16`\n * `tf.float32`\n * `tf.float64`\n * `tf.int8`\n * `tf.int16`\n * `tf.int32`\n * `tf.int64`\n * `tf.uint8`\n * `tf.uint16`\n * `tf.uint32`\n * `tf.uint64`\n * `tf.qint8`\n * `tf.qint16`\n * `tf.qint32`\n * `tf.quint8`\n * `tf.quint16`\n * `tf.complex64`\n * `tf.complex128`\n * `tf.bfloat16`\n\n Returns `False` if `tensor` is of a non-numeric type or if `tensor` is not\n a `tf.Tensor` object.\n ", "desc": "Returns `True` if the elements of `tensor` are numbers.", "type": "API"}, {"name": "tf.compat.v1.is_strictly_increasing", "docs": "Returns `True` if `x` is strictly increasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.\n If `x` has less than two elements, it is trivially strictly increasing.\n\n See also: `is_non_decreasing`\n\n >>> x1 = tf.constant([1.0, 2.0, 3.0])\n >>> tf.math.is_strictly_increasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_strictly_increasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional).\n Defaults to \"is_strictly_increasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is strictly increasing.", "type": "API"}, {"name": "tf.compat.v1.is_tensor", "docs": "Checks whether `x` is a TF-native type that can be passed to many TF ops.\n\n Use `is_tensor` to differentiate types that can ingested by TensorFlow ops\n without any conversion (e.g., `tf.Tensor`, `tf.SparseTensor`, and\n `tf.RaggedTensor`) from types that need to be converted into tensors before\n they are ingested (e.g., numpy `ndarray` and Python scalars).\n\n For example, in the following code block:\n\n ```python\n if not tf.is_tensor(t):\n t = tf.convert_to_tensor(t)\n return t.shape, t.dtype\n ```\n\n we check to make sure that `t` is a tensor (and convert it if not) before\n accessing its `shape` and `dtype`. (But note that not all TensorFlow native\n types have shapes or dtypes; `tf.data.Dataset` is an example of a TensorFlow\n native type that has neither shape nor dtype.)\n\n Args:\n x: A python object to check.\n\n Returns:\n `True` if `x` is a TensorFlow-native type.\n ", "desc": "Checks whether `x` is a TF-native type that can be passed to many TF ops.", "type": "API"}, {"name": "tf.compat.v1.is_variable_initialized", "docs": "Tests if a variable has been initialized.\n\nArgs:\n variable: A `Variable`.\n\nReturns:\n Returns a scalar boolean Tensor, `True` if the variable has been\n initialized, `False` otherwise.\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Tests if a variable has been initialized.", "type": "API"}, {"name": "tf.compat.v1.keras", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.activations", "docs": "Built-in activation functions.\n", "desc": "Built-in activation functions.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.deserialize", "docs": "Returns activation function given a string identifier.\n\n Args:\n name: The name of the activation function.\n custom_objects: Optional `{function_name: function_obj}`\n dictionary listing user-provided activation functions.\n\n Returns:\n Corresponding activation function.\n\n For example:\n\n >>> tf.keras.activations.deserialize('linear')\n \n >>> tf.keras.activations.deserialize('sigmoid')\n \n >>> tf.keras.activations.deserialize('abcd')\n Traceback (most recent call last):\n ...\n ValueError: Unknown activation function:abcd\n\n Raises:\n ValueError: `Unknown activation function` if the input string does not\n denote any defined Tensorflow activation function.\n ", "desc": "Returns activation function given a string identifier.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.elu", "docs": "Exponential Linear Unit.\n\n The exponential linear unit (ELU) with `alpha > 0` is:\n `x` if `x > 0` and\n `alpha * (exp(x) - 1)` if `x < 0`\n The ELU hyperparameter `alpha` controls the value to which an\n ELU saturates for negative net inputs. ELUs diminish the\n vanishing gradient effect.\n\n ELUs have negative values which pushes the mean of the activations\n closer to zero.\n Mean activations that are closer to zero enable faster learning as they\n bring the gradient closer to the natural gradient.\n ELUs saturate to a negative value when the argument gets smaller.\n Saturation means a small derivative which decreases the variation\n and the information that is propagated to the next layer.\n\n Example Usage:\n\n >>> import tensorflow as tf\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu',\n ... input_shape=(28, 28, 1)))\n >>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))\n >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))\n >>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))\n >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))\n\n \n\n Args:\n x: Input tensor.\n alpha: A scalar, slope of negative section. `alpha` controls the value to\n which an ELU saturates for negative net inputs.\n\n Returns:\n The exponential linear unit (ELU) activation function: `x` if `x > 0` and\n `alpha * (exp(x) - 1)` if `x < 0`.\n\n\n Reference:\n [Fast and Accurate Deep Network Learning by Exponential Linear Units\n (ELUs) (Clevert et al, 2016)](https://arxiv.org/abs/1511.07289)\n ", "desc": "Exponential Linear Unit.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.exponential", "docs": "Exponential activation function.\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.exponential(a)\n >>> b.numpy()\n array([0.04978707, 0.36787945, 1., 2.7182817 , 20.085537], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor with exponential activation: `exp(x)`.\n ", "desc": "Exponential activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.get", "docs": "Returns function.\n\n Args:\n identifier: Function or string\n\n Returns:\n Function corresponding to the input string or input function.\n\n For example:\n\n >>> tf.keras.activations.get('softmax')\n \n >>> tf.keras.activations.get(tf.keras.activations.softmax)\n \n >>> tf.keras.activations.get(None)\n \n >>> tf.keras.activations.get(abs)\n \n >>> tf.keras.activations.get('abcd')\n Traceback (most recent call last):\n ...\n ValueError: Unknown activation function:abcd\n\n Raises:\n ValueError: Input is an unknown function or string, i.e., the input does\n not denote any defined function.\n ", "desc": "Returns function.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.hard_sigmoid", "docs": "Hard sigmoid activation function.\n\n A faster approximation of the sigmoid activation.\n Piecewise linear approximation of the sigmoid function.\n Ref: 'https://en.wikipedia.org/wiki/Hard_sigmoid'\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.hard_sigmoid(a)\n >>> b.numpy()\n array([0. , 0.3, 0.5, 0.7, 1. ], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The hard sigmoid activation, defined as:\n\n - `if x < -2.5: return 0`\n - `if x > 2.5: return 1`\n - `if -2.5 <= x <= 2.5: return 0.2 * x + 0.5`\n ", "desc": "Hard sigmoid activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.linear", "docs": "Linear activation function (pass-through).\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.linear(a)\n >>> b.numpy()\n array([-3., -1., 0., 1., 3.], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The input, unmodified.\n ", "desc": "Linear activation function (pass-through).", "type": "API"}, {"name": "tf.compat.v1.keras.activations.relu", "docs": "Applies the rectified linear unit activation function.\n\n With default values, this returns the standard ReLU activation:\n `max(x, 0)`, the element-wise maximum of 0 and the input tensor.\n\n Modifying default parameters allows you to use non-zero thresholds,\n change the max value of the activation,\n and to use a non-zero multiple of the input for values below the threshold.\n\n For example:\n\n >>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype = tf.float32)\n >>> tf.keras.activations.relu(foo).numpy()\n array([ 0., 0., 0., 5., 10.], dtype=float32)\n >>> tf.keras.activations.relu(foo, alpha=0.5).numpy()\n array([-5. , -2.5, 0. , 5. , 10. ], dtype=float32)\n >>> tf.keras.activations.relu(foo, max_value=5.).numpy()\n array([0., 0., 0., 5., 5.], dtype=float32)\n >>> tf.keras.activations.relu(foo, threshold=5.).numpy()\n array([-0., -0., 0., 0., 10.], dtype=float32)\n\n Args:\n x: Input `tensor` or `variable`.\n alpha: A `float` that governs the slope for values lower than the\n threshold.\n max_value: A `float` that sets the saturation threshold (the largest value\n the function will return).\n threshold: A `float` giving the threshold value of the activation function\n below which values will be damped or set to zero.\n\n Returns:\n A `Tensor` representing the input tensor,\n transformed by the relu activation function.\n Tensor will be of the same shape and dtype of input `x`.\n ", "desc": "Applies the rectified linear unit activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.selu", "docs": "Scaled Exponential Linear Unit (SELU).\n\n The Scaled Exponential Linear Unit (SELU) activation function is defined as:\n\n - `if x > 0: return scale * x`\n - `if x < 0: return scale * alpha * (exp(x) - 1)`\n\n where `alpha` and `scale` are pre-defined constants\n (`alpha=1.67326324` and `scale=1.05070098`).\n\n Basically, the SELU activation function multiplies `scale` (> 1) with the\n output of the `tf.keras.activations.elu` function to ensure a slope larger\n than one for positive inputs.\n\n The values of `alpha` and `scale` are\n chosen so that the mean and variance of the inputs are preserved\n between two consecutive layers as long as the weights are initialized\n correctly (see `tf.keras.initializers.LecunNormal` initializer)\n and the number of input units is \"large enough\"\n (see reference paper for more information).\n\n Example Usage:\n\n >>> num_classes = 10 # 10-class problem\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))\n\n Args:\n x: A tensor or variable to compute the activation function for.\n\n Returns:\n The scaled exponential unit activation: `scale * elu(x, alpha)`.\n\n Notes:\n - To be used together with the\n `tf.keras.initializers.LecunNormal` initializer.\n - To be used together with the dropout variant\n `tf.keras.layers.AlphaDropout` (not regular dropout).\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Scaled Exponential Linear Unit (SELU).", "type": "API"}, {"name": "tf.compat.v1.keras.activations.serialize", "docs": "Returns the string identifier of an activation function.\n\n Args:\n activation : Function object.\n\n Returns:\n String denoting the name attribute of the input function\n\n For example:\n\n >>> tf.keras.activations.serialize(tf.keras.activations.tanh)\n 'tanh'\n >>> tf.keras.activations.serialize(tf.keras.activations.sigmoid)\n 'sigmoid'\n >>> tf.keras.activations.serialize('abcd')\n Traceback (most recent call last):\n ...\n ValueError: ('Cannot serialize', 'abcd')\n\n Raises:\n ValueError: The input function is not a valid one.\n ", "desc": "Returns the string identifier of an activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.sigmoid", "docs": "Sigmoid activation function, `sigmoid(x) = 1 / (1 + exp(-x))`.\n\n Applies the sigmoid activation function. For small values (<-5),\n `sigmoid` returns a value close to zero, and for large values (>5)\n the result of the function gets close to 1.\n\n Sigmoid is equivalent to a 2-element Softmax, where the second element is\n assumed to be zero. The sigmoid function always returns a value between\n 0 and 1.\n\n For example:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.sigmoid(a)\n >>> b.numpy()\n array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01,\n 1.0000000e+00], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor with the sigmoid activation: `1 / (1 + exp(-x))`.\n ", "desc": "Sigmoid activation function, `sigmoid(x) = 1 / (1 + exp(-x))`.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.softmax", "docs": "Softmax converts a vector of values to a probability distribution.\n\n The elements of the output vector are in range (0, 1) and sum to 1.\n\n Each vector is handled independently. The `axis` argument sets which axis\n of the input the function is applied along.\n\n Softmax is often used as the activation for the last\n layer of a classification network because the result could be interpreted as\n a probability distribution.\n\n The softmax of each vector x is computed as\n `exp(x) / tf.reduce_sum(exp(x))`.\n\n The input values in are the log-odds of the resulting probability.\n\n Args:\n x : Input tensor.\n axis: Integer, axis along which the softmax normalization is applied.\n\n Returns:\n Tensor, output of softmax transformation (all values are non-negative\n and sum to 1).\n\n Examples:\n\n **Example 1: standalone usage**\n\n >>> inputs = tf.random.normal(shape=(32, 10))\n >>> outputs = tf.keras.activations.softmax(inputs)\n >>> tf.reduce_sum(outputs[0, :]) # Each sample in the batch now sums to 1\n \n\n **Example 2: usage in a `Dense` layer**\n\n >>> layer = tf.keras.layers.Dense(32, activation=tf.keras.activations.softmax)\n ", "desc": "Softmax converts a vector of values to a probability distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.softplus", "docs": "Softplus activation function, `softplus(x) = log(exp(x) + 1)`.\n\n Example Usage:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.softplus(a)\n >>> b.numpy()\n array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00,\n 2.0000000e+01], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The softplus activation: `log(exp(x) + 1)`.\n ", "desc": "Softplus activation function, `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.softsign", "docs": "Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.\n\n Example Usage:\n\n >>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)\n >>> b = tf.keras.activations.softsign(a)\n >>> b.numpy()\n array([-0.5, 0. , 0.5], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The softsign activation: `x / (abs(x) + 1)`.\n ", "desc": "Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.swish", "docs": "Swish activation function, `swish(x) = x * sigmoid(x)`.\n\n Swish activation function which returns `x*sigmoid(x)`.\n It is a smooth, non-monotonic function that consistently matches\n or outperforms ReLU on deep networks, it is unbounded above and\n bounded below.\n\n\n Example Usage:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.swish(a)\n >>> b.numpy()\n array([-4.1223075e-08, -2.6894143e-01, 0.0000000e+00, 7.3105860e-01,\n 2.0000000e+01], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The swish activation applied to `x` (see reference paper for details).\n\n Reference:\n - [Ramachandran et al., 2017](https://arxiv.org/abs/1710.05941)\n ", "desc": "Swish activation function, `swish(x) = x * sigmoid(x)`.", "type": "API"}, {"name": "tf.compat.v1.keras.activations.tanh", "docs": "Hyperbolic tangent activation function.\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.tanh(a)\n >>> b.numpy()\n array([-0.9950547, -0.7615942, 0., 0.7615942, 0.9950547], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor of same shape and dtype of input `x`, with tanh activation:\n `tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x)))`.\n ", "desc": "Hyperbolic tangent activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.applications", "docs": "Keras Applications are premade architectures with pre-trained weights.\n", "desc": "Keras Applications are premade architectures with pre-trained weights.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet", "docs": "DenseNet models for Keras.\n\nReference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n", "desc": "DenseNet models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet.DenseNet121", "docs": "Instantiates the Densenet121 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet121 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet.DenseNet169", "docs": "Instantiates the Densenet169 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet169 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet.DenseNet201", "docs": "Instantiates the Densenet201 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet201 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.densenet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The input pixels values are scaled between 0 and 1 and each channel is\n normalized with respect to the ImageNet dataset.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.DenseNet121", "docs": "Instantiates the Densenet121 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet121 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.DenseNet169", "docs": "Instantiates the Densenet169 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet169 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.DenseNet201", "docs": "Instantiates the Densenet201 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet201 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet", "docs": "EfficientNet models for Keras.\n\nReference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n", "desc": "EfficientNet models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB0", "docs": "Instantiates the EfficientNetB0 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB0 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB1", "docs": "Instantiates the EfficientNetB1 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB1 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB2", "docs": "Instantiates the EfficientNetB2 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB3", "docs": "Instantiates the EfficientNetB3 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB3 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB4", "docs": "Instantiates the EfficientNetB4 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB4 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB5", "docs": "Instantiates the EfficientNetB5 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB5 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB6", "docs": "Instantiates the EfficientNetB6 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB6 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.EfficientNetB7", "docs": "Instantiates the EfficientNetB7 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB7 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.efficientnet.preprocess_input", "docs": "A placeholder method for backward compatibility.\n\n The preprocessing logic has been included in the efficientnet model\n implementation. Users are no longer required to call this method to normalize\n the input data. This method does nothing and only kept as a placeholder to\n align the API surface between old and new version of model.\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").{mode}\n\n Returns:\n Unchanged `numpy.array` or `tf.Tensor`.\n ", "desc": "A placeholder method for backward compatibility.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB0", "docs": "Instantiates the EfficientNetB0 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB0 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB1", "docs": "Instantiates the EfficientNetB1 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB1 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB2", "docs": "Instantiates the EfficientNetB2 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB3", "docs": "Instantiates the EfficientNetB3 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB3 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB4", "docs": "Instantiates the EfficientNetB4 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB4 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB5", "docs": "Instantiates the EfficientNetB5 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB5 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB6", "docs": "Instantiates the EfficientNetB6 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB6 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.EfficientNetB7", "docs": "Instantiates the EfficientNetB7 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB7 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.imagenet_utils", "docs": "Utilities for ImageNet data preprocessing & prediction decoding.\n", "desc": "Utilities for ImageNet data preprocessing & prediction decoding.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.imagenet_utils.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.imagenet_utils.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n mode: One of \"caffe\", \"tf\" or \"torch\". Defaults to \"caffe\".\n - caffe: will convert the images from RGB to BGR,\n then will zero-center each color channel with\n respect to the ImageNet dataset,\n without scaling.\n - tf: will scale pixels between -1 and 1,\n sample-wise.\n - torch: will scale pixels between 0 and 1 and then\n will normalize each channel with respect to the\n ImageNet dataset.\n \n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n\n Raises:\n \n ValueError: In case of unknown `mode` or `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_resnet_v2", "docs": "Inception-ResNet V2 model for Keras.\n\nReference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n", "desc": "Inception-ResNet V2 model for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_resnet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_resnet_v2.InceptionResNetV2", "docs": "Instantiates the Inception-ResNet v2 architecture.\n\n Reference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For InceptionResNetV2, call\n `tf.keras.applications.inception_resnet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `inception_resnet_v2.preprocess_input`\n will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is `False` (otherwise the input shape\n has to be `(299, 299, 3)` (with `'channels_last'` data format)\n or `(3, 299, 299)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `'avg'` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `'max'` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is `True`, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception-ResNet v2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_resnet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_v3", "docs": "Inception V3 model for Keras.\n\nReference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n", "desc": "Inception V3 model for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_v3.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_v3.InceptionV3", "docs": "Instantiates the Inception v3 architecture.\n\n Reference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For `InceptionV3`, call `tf.keras.applications.inception_v3.preprocess_input`\n on your inputs before passing them to the model.\n `inception_v3.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: Boolean, whether to include the fully-connected\n layer at the top, as the last layer of the network. Default to `True`.\n weights: One of `None` (random initialization),\n `imagenet` (pre-training on ImageNet),\n or the path to the weights file to be loaded. Default to `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)` (with `channels_last` data format)\n or `(3, 299, 299)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n `input_shape` will be ignored if the `input_tensor` is provided.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Default to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception v3 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.inception_v3.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.InceptionResNetV2", "docs": "Instantiates the Inception-ResNet v2 architecture.\n\n Reference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For InceptionResNetV2, call\n `tf.keras.applications.inception_resnet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `inception_resnet_v2.preprocess_input`\n will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is `False` (otherwise the input shape\n has to be `(299, 299, 3)` (with `'channels_last'` data format)\n or `(3, 299, 299)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `'avg'` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `'max'` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is `True`, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception-ResNet v2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.InceptionV3", "docs": "Instantiates the Inception v3 architecture.\n\n Reference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For `InceptionV3`, call `tf.keras.applications.inception_v3.preprocess_input`\n on your inputs before passing them to the model.\n `inception_v3.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: Boolean, whether to include the fully-connected\n layer at the top, as the last layer of the network. Default to `True`.\n weights: One of `None` (random initialization),\n `imagenet` (pre-training on ImageNet),\n or the path to the weights file to be loaded. Default to `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)` (with `channels_last` data format)\n or `(3, 299, 299)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n `input_shape` will be ignored if the `input_tensor` is provided.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Default to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception v3 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.MobileNet", "docs": "Instantiates the MobileNet architecture.\n\n Reference:\n - [MobileNets: Efficient Convolutional Neural Networks\n for Mobile Vision Applications](\n https://arxiv.org/abs/1704.04861)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNet, call `tf.keras.applications.mobilenet.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, only to be specified if `include_top`\n is False (otherwise the input shape has to be `(224, 224, 3)` (with\n `channels_last` data format) or (3, 224, 224) (with `channels_first`\n data format). It should have exactly 3 inputs channels, and width and\n height should be no smaller than 32. E.g. `(200, 200, 3)` would be one\n valid value. Default to `None`.\n `input_shape` will be ignored if the `input_tensor` is provided.\n alpha: Controls the width of the network. This is known as the width\n multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally\n decreases the number of filters in each layer. - If `alpha` > 1.0,\n proportionally increases the number of filters in each layer. - If\n `alpha` = 1, default number of filters from the paper are used at each\n layer. Default to 1.0.\n depth_multiplier: Depth multiplier for depthwise convolution. This is\n called the resolution multiplier in the MobileNet paper. Default to 1.0.\n dropout: Dropout rate. Default to 0.001.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Default to `True`.\n weights: One of `None` (random initialization), 'imagenet' (pre-training\n on ImageNet), or the path to the weights file to be loaded. Default to\n `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`) to\n use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n pooling: Optional pooling mode for feature extraction when `include_top`\n is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: Optional number of classes to classify images into, only to be\n specified if `include_top` is True, and if no `weights` argument is\n specified. Defaults to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNet architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet.MobileNet", "docs": "Instantiates the MobileNet architecture.\n\n Reference:\n - [MobileNets: Efficient Convolutional Neural Networks\n for Mobile Vision Applications](\n https://arxiv.org/abs/1704.04861)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNet, call `tf.keras.applications.mobilenet.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, only to be specified if `include_top`\n is False (otherwise the input shape has to be `(224, 224, 3)` (with\n `channels_last` data format) or (3, 224, 224) (with `channels_first`\n data format). It should have exactly 3 inputs channels, and width and\n height should be no smaller than 32. E.g. `(200, 200, 3)` would be one\n valid value. Default to `None`.\n `input_shape` will be ignored if the `input_tensor` is provided.\n alpha: Controls the width of the network. This is known as the width\n multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally\n decreases the number of filters in each layer. - If `alpha` > 1.0,\n proportionally increases the number of filters in each layer. - If\n `alpha` = 1, default number of filters from the paper are used at each\n layer. Default to 1.0.\n depth_multiplier: Depth multiplier for depthwise convolution. This is\n called the resolution multiplier in the MobileNet paper. Default to 1.0.\n dropout: Dropout rate. Default to 0.001.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Default to `True`.\n weights: One of `None` (random initialization), 'imagenet' (pre-training\n on ImageNet), or the path to the weights file to be loaded. Default to\n `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`) to\n use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n pooling: Optional pooling mode for feature extraction when `include_top`\n is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: Optional number of classes to classify images into, only to be\n specified if `include_top` is True, and if no `weights` argument is\n specified. Defaults to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNet architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v2", "docs": "MobileNet v2 models for Keras.\n\nMobileNetV2 is a general architecture and can be used for multiple use cases.\nDepending on the use case, it can use different input layer size and\ndifferent width factors. This allows different width models to reduce\nthe number of multiply-adds and thereby\nreduce inference cost on mobile devices.\n\nMobileNetV2 is very similar to the original MobileNet,\nexcept that it uses inverted residual blocks with\nbottlenecking features. It has a drastically lower\nparameter count than the original MobileNet.\nMobileNets support any input size greater\nthan 32 x 32, with larger image sizes\noffering better performance.\n\nThe number of parameters and number of multiply-adds\ncan be modified by using the `alpha` parameter,\nwhich increases/decreases the number of filters in each layer.\nBy altering the image size and `alpha` parameter,\nall 22 models from the paper can be built, with ImageNet weights provided.\n\nThe paper demonstrates the performance of MobileNets using `alpha` values of\n1.0 (also called 100 % MobileNet), 0.35, 0.5, 0.75, 1.0, 1.3, and 1.4\nFor each of these `alpha` values, weights for 5 different input image sizes\nare provided (224, 192, 160, 128, and 96).\n\nThe following table describes the performance of\nMobileNet on various input sizes:\n------------------------------------------------------------------------\nMACs stands for Multiply Adds\n Classification Checkpoint|MACs (M)|Parameters (M)|Top 1 Accuracy|Top 5 Accuracy\n--------------------------|------------|---------------|---------|----|---------\n| [mobilenet_v2_1.4_224] | 582 | 6.06 | 75.0 | 92.5 |\n| [mobilenet_v2_1.3_224] | 509 | 5.34 | 74.4 | 92.1 |\n| [mobilenet_v2_1.0_224] | 300 | 3.47 | 71.8 | 91.0 |\n| [mobilenet_v2_1.0_192] | 221 | 3.47 | 70.7 | 90.1 |\n| [mobilenet_v2_1.0_160] | 154 | 3.47 | 68.8 | 89.0 |\n| [mobilenet_v2_1.0_128] | 99 | 3.47 | 65.3 | 86.9 |\n| [mobilenet_v2_1.0_96] | 56 | 3.47 | 60.3 | 83.2 |\n| [mobilenet_v2_0.75_224] | 209 | 2.61 | 69.8 | 89.6 |\n| [mobilenet_v2_0.75_192] | 153 | 2.61 | 68.7 | 88.9 |\n| [mobilenet_v2_0.75_160] | 107 | 2.61 | 66.4 | 87.3 |\n| [mobilenet_v2_0.75_128] | 69 | 2.61 | 63.2 | 85.3 |\n| [mobilenet_v2_0.75_96] | 39 | 2.61 | 58.8 | 81.6 |\n| [mobilenet_v2_0.5_224] | 97 | 1.95 | 65.4 | 86.4 |\n| [mobilenet_v2_0.5_192] | 71 | 1.95 | 63.9 | 85.4 |\n| [mobilenet_v2_0.5_160] | 50 | 1.95 | 61.0 | 83.2 |\n| [mobilenet_v2_0.5_128] | 32 | 1.95 | 57.7 | 80.8 |\n| [mobilenet_v2_0.5_96] | 18 | 1.95 | 51.2 | 75.8 |\n| [mobilenet_v2_0.35_224] | 59 | 1.66 | 60.3 | 82.9 |\n| [mobilenet_v2_0.35_192] | 43 | 1.66 | 58.2 | 81.2 |\n| [mobilenet_v2_0.35_160] | 30 | 1.66 | 55.7 | 79.1 |\n| [mobilenet_v2_0.35_128] | 20 | 1.66 | 50.8 | 75.0 |\n| [mobilenet_v2_0.35_96] | 11 | 1.66 | 45.5 | 70.4 |\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n", "desc": "MobileNet v2 models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v2.MobileNetV2", "docs": "Instantiates the MobileNetV2 architecture.\n\n MobileNetV2 is very similar to the original MobileNet,\n except that it uses inverted residual blocks with\n bottlenecking features. It has a drastically lower\n parameter count than the original MobileNet.\n MobileNets support any input size greater\n than 32 x 32, with larger image sizes\n offering better performance.\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV2, call `tf.keras.applications.mobilenet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: Float, larger than zero, controls the width of the network. This is\n known as the width multiplier in the MobileNetV2 paper, but the name is\n kept for consistency with `applications.MobileNetV1` model in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1.0, default number of filters from the paper\n are used at each layer.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization), 'imagenet'\n (pre-training on ImageNet), or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction when\n `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional integer number of classes to classify images into, only to\n be specified if `include_top` is True, and if no `weights` argument is\n specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNetV2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v3", "docs": "MobileNet v3 models for Keras.\n", "desc": "MobileNet v3 models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v3.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.mobilenet_v3.preprocess_input", "docs": "A placeholder method for backward compatibility.\n\n The preprocessing logic has been included in the mobilenet_v3 model\n implementation. Users are no longer required to call this method to normalize\n the input data. This method does nothing and only kept as a placeholder to\n align the API surface between old and new version of model.\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").{mode}\n\n Returns:\n Unchanged `numpy.array` or `tf.Tensor`.\n ", "desc": "A placeholder method for backward compatibility.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.MobileNetV2", "docs": "Instantiates the MobileNetV2 architecture.\n\n MobileNetV2 is very similar to the original MobileNet,\n except that it uses inverted residual blocks with\n bottlenecking features. It has a drastically lower\n parameter count than the original MobileNet.\n MobileNets support any input size greater\n than 32 x 32, with larger image sizes\n offering better performance.\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV2, call `tf.keras.applications.mobilenet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: Float, larger than zero, controls the width of the network. This is\n known as the width multiplier in the MobileNetV2 paper, but the name is\n kept for consistency with `applications.MobileNetV1` model in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1.0, default number of filters from the paper\n are used at each layer.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization), 'imagenet'\n (pre-training on ImageNet), or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction when\n `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional integer number of classes to classify images into, only to\n be specified if `include_top` is True, and if no `weights` argument is\n specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNetV2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.MobileNetV3Large", "docs": "Instantiates the MobileNetV3Large architecture.\n\n Reference:\n - [Searching for MobileNetV3](\n https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)\n\n The following table describes the performance of MobileNets v3:\n ------------------------------------------------------------------------\n MACs stands for Multiply Adds\n\n |Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|\n |---|---|---|---|---|\n | mobilenet_v3_large_1.0_224 | 217 | 5.4 | 75.6 | 51.2 |\n | mobilenet_v3_large_0.75_224 | 155 | 4.0 | 73.3 | 39.8 |\n | mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 | 72.3 | 44.1 |\n | mobilenet_v3_small_1.0_224 | 66 | 2.9 | 68.1 | 15.8 |\n | mobilenet_v3_small_0.75_224 | 44 | 2.4 | 65.4 | 12.8 |\n | mobilenet_v3_small_minimalistic_1.0_224 | 65 | 2.0 | 61.9 | 12.2 |\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV3, by default input preprocessing is included as a part of the\n model (as a `Rescaling` layer), and thus\n `tf.keras.applications.mobilenet_v3.preprocess_input` is actually a\n pass-through function. In this use case, MobileNetV3 models expect their inputs\n to be float tensors of pixels with values in the [0-255] range.\n At the same time, preprocessing as a part of the model (i.e. `Rescaling`\n layer) can be disabled by setting `include_preprocessing` argument to False.\n With preprocessing disabled MobileNetV3 models expect their inputs to be float\n tensors of pixels with values in the [-1, 1] range.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: controls the width of the network. This is known as the\n depth multiplier in the MobileNetV3 paper, but the name is kept for\n consistency with MobileNetV1 in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1, default number of filters from the paper\n are used at each layer.\n minimalistic: In addition to large and small models this module also\n contains so-called minimalistic models, these models have the same\n per-layer dimensions characteristic as MobilenetV3 however, they don't\n utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,\n and 5x5 convolutions). While these models are less efficient on CPU, they\n are much more performant on GPU/DSP.\n include_top: Boolean, whether to include the fully-connected\n layer at the top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Integer, optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n dropout_rate: fraction of the input units to drop on the last layer.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n include_preprocessing: Boolean, whether to include the preprocessing\n layer (`Rescaling`) at the bottom of the network. Defaults to `True`.\n\n Call arguments:\n inputs: A floating point `numpy.array` or a `tf.Tensor`, 4D with 3 color\n channels, with values in the range [0, 255] if `include_preprocessing`\n is True and in the range [-1, 1] otherwise.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the MobileNetV3Large architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.MobileNetV3Small", "docs": "Instantiates the MobileNetV3Small architecture.\n\n Reference:\n - [Searching for MobileNetV3](\n https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)\n\n The following table describes the performance of MobileNets v3:\n ------------------------------------------------------------------------\n MACs stands for Multiply Adds\n\n |Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|\n |---|---|---|---|---|\n | mobilenet_v3_large_1.0_224 | 217 | 5.4 | 75.6 | 51.2 |\n | mobilenet_v3_large_0.75_224 | 155 | 4.0 | 73.3 | 39.8 |\n | mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 | 72.3 | 44.1 |\n | mobilenet_v3_small_1.0_224 | 66 | 2.9 | 68.1 | 15.8 |\n | mobilenet_v3_small_0.75_224 | 44 | 2.4 | 65.4 | 12.8 |\n | mobilenet_v3_small_minimalistic_1.0_224 | 65 | 2.0 | 61.9 | 12.2 |\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV3, by default input preprocessing is included as a part of the\n model (as a `Rescaling` layer), and thus\n `tf.keras.applications.mobilenet_v3.preprocess_input` is actually a\n pass-through function. In this use case, MobileNetV3 models expect their inputs\n to be float tensors of pixels with values in the [0-255] range.\n At the same time, preprocessing as a part of the model (i.e. `Rescaling`\n layer) can be disabled by setting `include_preprocessing` argument to False.\n With preprocessing disabled MobileNetV3 models expect their inputs to be float\n tensors of pixels with values in the [-1, 1] range.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: controls the width of the network. This is known as the\n depth multiplier in the MobileNetV3 paper, but the name is kept for\n consistency with MobileNetV1 in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1, default number of filters from the paper\n are used at each layer.\n minimalistic: In addition to large and small models this module also\n contains so-called minimalistic models, these models have the same\n per-layer dimensions characteristic as MobilenetV3 however, they don't\n utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,\n and 5x5 convolutions). While these models are less efficient on CPU, they\n are much more performant on GPU/DSP.\n include_top: Boolean, whether to include the fully-connected\n layer at the top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Integer, optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n dropout_rate: fraction of the input units to drop on the last layer.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n include_preprocessing: Boolean, whether to include the preprocessing\n layer (`Rescaling`) at the bottom of the network. Defaults to `True`.\n\n Call arguments:\n inputs: A floating point `numpy.array` or a `tf.Tensor`, 4D with 3 color\n channels, with values in the range [0, 255] if `include_preprocessing`\n is True and in the range [-1, 1] otherwise.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the MobileNetV3Small architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.nasnet", "docs": "NASNet-A models for Keras.\n\nNASNet refers to Neural Architecture Search Network, a family of models\nthat were designed automatically by learning the model architectures\ndirectly on the dataset of interest.\n\nHere we consider NASNet-A, the highest performance model that was found\nfor the CIFAR-10 dataset, and then extended to ImageNet 2012 dataset,\nobtaining state of the art performance on CIFAR-10 and ImageNet 2012.\nOnly the NASNet-A models, and their respective weights, which are suited\nfor ImageNet 2012 are provided.\n\nThe below table describes the performance on ImageNet 2012:\n--------------------------------------------------------------------------------\n Architecture | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M)\n--------------------------------------------------------------------------------\n| NASNet-A (4 @ 1056) | 74.0 % | 91.6 % | 564 M | 5.3 |\n| NASNet-A (6 @ 4032) | 82.7 % | 96.2 % | 23.8 B | 88.9 |\n--------------------------------------------------------------------------------\n\nReference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n", "desc": "NASNet-A models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.nasnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.nasnet.NASNetLarge", "docs": "Instantiates a NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(331, 331, 3)` for NASNetLarge.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (331, 331, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: in case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.nasnet.NASNetMobile", "docs": "Instantiates a Mobile NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` for NASNetMobile\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (224, 224, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: In case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a Mobile NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.nasnet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.NASNetLarge", "docs": "Instantiates a NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(331, 331, 3)` for NASNetLarge.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (331, 331, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: in case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.NASNetMobile", "docs": "Instantiates a Mobile NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` for NASNetMobile\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (224, 224, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: In case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a Mobile NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet", "docs": "ResNet models for Keras.\n\nReference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n", "desc": "ResNet models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet.ResNet101", "docs": "Instantiates the ResNet101 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet101 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet.ResNet152", "docs": "Instantiates the ResNet152 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet152 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2", "docs": "ResNet v2 models for Keras.\n\nReference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n", "desc": "ResNet v2 models for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2.ResNet101V2", "docs": "Instantiates the ResNet101V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet101V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2.ResNet152V2", "docs": "Instantiates the ResNet152V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet152V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet_v2.ResNet50V2", "docs": "Instantiates the ResNet50V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet50V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet101", "docs": "Instantiates the ResNet101 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet101 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet101V2", "docs": "Instantiates the ResNet101V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet101V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet152", "docs": "Instantiates the ResNet152 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet152 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet152V2", "docs": "Instantiates the ResNet152V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet152V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet50.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet50.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.resnet50.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.ResNet50V2", "docs": "Instantiates the ResNet50V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet50V2 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.VGG16", "docs": "Instantiates the VGG16 model.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG16, call `tf.keras.applications.vgg16.preprocess_input` on your\n inputs before passing them to the model.\n `vgg16.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 input channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG16 model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg16.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg16.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg16.VGG16", "docs": "Instantiates the VGG16 model.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG16, call `tf.keras.applications.vgg16.preprocess_input` on your\n inputs before passing them to the model.\n `vgg16.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 input channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG16 model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.VGG19", "docs": "Instantiates the VGG19 architecture.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG19, call `tf.keras.applications.vgg19.preprocess_input` on your\n inputs before passing them to the model.\n `vgg19.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG19 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg19.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg19.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.vgg19.VGG19", "docs": "Instantiates the VGG19 architecture.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG19, call `tf.keras.applications.vgg19.preprocess_input` on your\n inputs before passing them to the model.\n `vgg19.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG19 architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.Xception", "docs": "Instantiates the Xception architecture.\n\n Reference:\n - [Xception: Deep Learning with Depthwise Separable Convolutions](\n https://arxiv.org/abs/1610.02357) (CVPR 2017)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input image size for this model is 299x299.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For Xception, call `tf.keras.applications.xception.preprocess_input` on your\n inputs before passing them to the model.\n `xception.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)`.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 71.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True,\n and if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Xception architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.xception.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.compat.v1.keras.applications.xception.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.applications.xception.Xception", "docs": "Instantiates the Xception architecture.\n\n Reference:\n - [Xception: Deep Learning with Depthwise Separable Convolutions](\n https://arxiv.org/abs/1610.02357) (CVPR 2017)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input image size for this model is 299x299.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For Xception, call `tf.keras.applications.xception.preprocess_input` on your\n inputs before passing them to the model.\n `xception.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)`.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 71.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True,\n and if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Xception architecture.", "type": "API"}, {"name": "tf.compat.v1.keras.backend", "docs": "Keras backend API.\n", "desc": "Keras backend API.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.clear_session", "docs": "Resets all state generated by Keras.\n\n Keras manages a global state, which it uses to implement the Functional\n model-building API and to uniquify autogenerated layer names.\n\n If you are creating many models in a loop, this global state will consume\n an increasing amount of memory over time, and you may want to clear it.\n Calling `clear_session()` releases the global state: this helps avoid clutter\n from old models and layers, especially when memory is limited.\n\n Example 1: calling `clear_session()` when creating models in a loop\n\n ```python\n for _ in range(100):\n # Without `clear_session()`, each iteration of this loop will\n # slightly increase the size of the global state managed by Keras\n model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])\n\n for _ in range(100):\n # With `clear_session()` called at the beginning,\n # Keras starts with a blank state at each iteration\n # and memory consumption is constant over time.\n tf.keras.backend.clear_session()\n model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])\n ```\n\n Example 2: resetting the layer name generation counter\n\n >>> import tensorflow as tf\n >>> layers = [tf.keras.layers.Dense(10) for _ in range(10)]\n >>> new_layer = tf.keras.layers.Dense(10)\n >>> print(new_layer.name)\n dense_10\n >>> tf.keras.backend.set_learning_phase(1)\n >>> print(tf.keras.backend.learning_phase())\n 1\n >>> tf.keras.backend.clear_session()\n >>> new_layer = tf.keras.layers.Dense(10)\n >>> print(new_layer.name)\n dense\n ", "desc": "Resets all state generated by Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.epsilon", "docs": "Returns the value of the fuzz factor used in numeric expressions.\n\n Returns:\n A float.\n\n Example:\n >>> tf.keras.backend.epsilon()\n 1e-07\n ", "desc": "Returns the value of the fuzz factor used in numeric expressions.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.floatx", "docs": "Returns the default float type, as a string.\n\n E.g. `'float16'`, `'float32'`, `'float64'`.\n\n Returns:\n String, the current default float type.\n\n Example:\n >>> tf.keras.backend.floatx()\n 'float32'\n ", "desc": "Returns the default float type, as a string.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.get_session", "docs": "Returns the TF session to be used by the backend.\n\n If a default TensorFlow session is available, we will return it.\n\n Else, we will return the global Keras session assuming it matches\n the current graph.\n\n If no global Keras session exists at this point:\n we will create a new global session.\n\n Note that you can manually set the global session\n via `K.set_session(sess)`.\n\n Args:\n op_input_list: An option sequence of tensors or ops, which will be used\n to determine the current graph. Otherwise the default graph will be\n used.\n\n Returns:\n A TensorFlow session.\n ", "desc": "Returns the TF session to be used by the backend.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.get_uid", "docs": "Associates a string prefix with an integer counter in a TensorFlow graph.\n\n Args:\n prefix: String prefix to index.\n\n Returns:\n Unique integer ID.\n\n Example:\n\n >>> get_uid('dense')\n 1\n >>> get_uid('dense')\n 2\n\n ", "desc": "Associates a string prefix with an integer counter in a TensorFlow graph.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.image_data_format", "docs": "Returns the default image data format convention.\n\n Returns:\n A string, either `'channels_first'` or `'channels_last'`\n\n Example:\n >>> tf.keras.backend.image_data_format()\n 'channels_last'\n ", "desc": "Returns the default image data format convention.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.is_keras_tensor", "docs": "Returns whether `x` is a Keras tensor.\n\n A \"Keras tensor\" is a tensor that was returned by a Keras layer,\n (`Layer` class) or by `Input`.\n\n Args:\n x: A candidate tensor.\n\n Returns:\n A boolean: Whether the argument is a Keras tensor.\n\n Raises:\n ValueError: In case `x` is not a symbolic tensor.\n\n Examples:\n\n >>> np_var = np.array([1, 2])\n >>> # A numpy array is not a symbolic tensor.\n >>> tf.keras.backend.is_keras_tensor(np_var)\n Traceback (most recent call last):\n ...\n ValueError: Unexpectedly found an instance of type ``.\n Expected a symbolic tensor instance.\n >>> keras_var = tf.keras.backend.variable(np_var)\n >>> # A variable created with the keras backend is not a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_var)\n False\n >>> keras_placeholder = tf.keras.backend.placeholder(shape=(2, 4, 5))\n >>> # A placeholder is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_placeholder)\n True\n >>> keras_input = tf.keras.layers.Input([10])\n >>> # An Input is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_input)\n True\n >>> keras_layer_output = tf.keras.layers.Dense(10)(keras_input)\n >>> # Any Keras layer output is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_layer_output)\n True\n\n ", "desc": "Returns whether `x` is a Keras tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.name_scope", "docs": "A context manager for use when defining a Python op.\n\n This context manager validates that the given `values` are from the\n same graph, makes that graph the default graph, and pushes a\n name scope in that graph (see\n `tf.Graph.name_scope`\n for more details on that).\n\n For example, to define a new Python op called `my_op`:\n\n ```python\n def my_op(a, b, c, name=None):\n with tf.name_scope(name, \"MyOp\", [a, b, c]) as scope:\n a = tf.convert_to_tensor(a, name=\"a\")\n b = tf.convert_to_tensor(b, name=\"b\")\n c = tf.convert_to_tensor(c, name=\"c\")\n # Define some computation that uses `a`, `b`, and `c`.\n return foo_op(..., name=scope)\n ```\n ", "desc": "A context manager for use when defining a Python op.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.reset_uids", "docs": "Resets graph identifiers.\n ", "desc": "Resets graph identifiers.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.rnn", "docs": "Iterates over the time dimension of a tensor.\n\n Args:\n step_function: RNN step function.\n Args;\n input; Tensor with shape `(samples, ...)` (no time dimension),\n representing input for the batch of samples at a certain\n time step.\n states; List of tensors.\n Returns;\n output; Tensor with shape `(samples, output_dim)`\n (no time dimension).\n new_states; List of tensors, same length and shapes\n as 'states'. The first state in the list must be the\n output tensor at the previous timestep.\n inputs: Tensor of temporal data of shape `(samples, time, ...)`\n (at least 3D), or nested tensors, and each of which has shape\n `(samples, time, ...)`.\n initial_states: Tensor with shape `(samples, state_size)`\n (no time dimension), containing the initial values for the states used\n in the step function. In the case that state_size is in a nested\n shape, the shape of initial_states will also follow the nested\n structure.\n go_backwards: Boolean. If True, do the iteration over the time\n dimension in reverse order and return the reversed sequence.\n mask: Binary tensor with shape `(samples, time, 1)`,\n with a zero for every element that is masked.\n constants: List of constant values passed at each step.\n unroll: Whether to unroll the RNN or to use a symbolic `while_loop`.\n input_length: An integer or a 1-D Tensor, depending on whether\n the time dimension is fixed-length or not. In case of variable length\n input, it is used for masking in case there's no mask specified.\n time_major: Boolean. If true, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n zero_output_for_mask: Boolean. If True, the output for masked timestep\n will be zeros, whereas in the False case, output from previous\n timestep is returned.\n return_all_outputs: Boolean. If True, return the recurrent outputs for all\n timesteps in the sequence. If False, only return the output for the\n last timestep (which consumes less memory).\n\n Returns:\n A tuple, `(last_output, outputs, new_states)`.\n last_output: the latest output of the rnn, of shape `(samples, ...)`\n outputs:\n - If `return_all_outputs=True`: a tensor with shape\n `(samples, time, ...)` where each entry `outputs[s, t]` is the\n output of the step function at time `t` for sample `s`\n - Else, a tensor equal to `last_output` with shape\n `(samples, 1, ...)`\n new_states: list of tensors, latest states returned by\n the step function, of shape `(samples, ...)`.\n\n Raises:\n ValueError: if input dimension is less than 3.\n ValueError: if `unroll` is `True` but input timestep is not a fixed\n number.\n ValueError: if `mask` is provided (not `None`) but states is not provided\n (`len(states)` == 0).\n ", "desc": "Iterates over the time dimension of a tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.set_epsilon", "docs": "Sets the value of the fuzz factor used in numeric expressions.\n\n Args:\n value: float. New value of epsilon.\n\n Example:\n >>> tf.keras.backend.epsilon()\n 1e-07\n >>> tf.keras.backend.set_epsilon(1e-5)\n >>> tf.keras.backend.epsilon()\n 1e-05\n >>> tf.keras.backend.set_epsilon(1e-7)\n ", "desc": "Sets the value of the fuzz factor used in numeric expressions.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.set_floatx", "docs": "Sets the default float type.\n\n Note: It is not recommended to set this to float16 for training, as this will\n likely cause numeric stability issues. Instead, mixed precision, which is\n using a mix of float16 and float32, can be used by calling\n `tf.keras.mixed_precision.set_global_policy('mixed_float16')`. See the\n [mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) for details.\n\n Args:\n value: String; `'float16'`, `'float32'`, or `'float64'`.\n\n Example:\n >>> tf.keras.backend.floatx()\n 'float32'\n >>> tf.keras.backend.set_floatx('float64')\n >>> tf.keras.backend.floatx()\n 'float64'\n >>> tf.keras.backend.set_floatx('float32')\n\n Raises:\n ValueError: In case of invalid value.\n ", "desc": "Sets the default float type.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.set_image_data_format", "docs": "Sets the value of the image data format convention.\n\n Args:\n data_format: string. `'channels_first'` or `'channels_last'`.\n\n Example:\n >>> tf.keras.backend.image_data_format()\n 'channels_last'\n >>> tf.keras.backend.set_image_data_format('channels_first')\n >>> tf.keras.backend.image_data_format()\n 'channels_first'\n >>> tf.keras.backend.set_image_data_format('channels_last')\n\n Raises:\n ValueError: In case of invalid `data_format` value.\n ", "desc": "Sets the value of the image data format convention.", "type": "API"}, {"name": "tf.compat.v1.keras.backend.set_session", "docs": "Sets the global TensorFlow session.\n\n Args:\n session: A TF Session.\n ", "desc": "Sets the global TensorFlow session.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks", "docs": "Callbacks: utilities called at certain points during model training.\n", "desc": "Callbacks: utilities called at certain points during model training.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.BaseLogger", "docs": "Callback that accumulates epoch averages of metrics.\n\n This callback is automatically applied to every Keras model.\n\n Args:\n stateful_metrics: Iterable of string names of metrics that\n should *not* be averaged over an epoch.\n Metrics in this list will be logged as-is in `on_epoch_end`.\n All others will be averaged in `on_epoch_end`.\n ", "desc": "Callback that accumulates epoch averages of metrics.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.Callback", "docs": "Abstract base class used to build new callbacks.\n\n Callbacks can be passed to keras methods such as `fit`, `evaluate`, and\n `predict` in order to hook into the various stages of the model training and\n inference lifecycle.\n\n To create a custom callback, subclass `keras.callbacks.Callback` and override\n the method associated with the stage of interest. See\n https://www.tensorflow.org/guide/keras/custom_callback for more information.\n\n Example:\n\n >>> training_finished = False\n >>> class MyCallback(tf.keras.callbacks.Callback):\n ... def on_train_end(self, logs=None):\n ... global training_finished\n ... training_finished = True\n >>> model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])\n >>> model.compile(loss='mean_squared_error')\n >>> model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]),\n ... callbacks=[MyCallback()])\n >>> assert training_finished == True\n\n If you want to use `Callback` objects in a custom training loop:\n\n 1. You should pack all your callbacks into a single `callbacks.CallbackList`\n so they can all be called together.\n 2. You will need to manually call all the `on_*` methods at the appropriate\n locations in your loop. Like this:\n\n ```\n callbacks = tf.keras.callbacks.CallbackList([...])\n callbacks.append(...)\n\n callbacks.on_train_begin(...)\n for epoch in range(EPOCHS):\n callbacks.on_epoch_begin(epoch)\n for i, data in dataset.enumerate():\n callbacks.on_train_batch_begin(i)\n batch_logs = model.train_step(data)\n callbacks.on_train_batch_end(i, batch_logs)\n epoch_logs = ...\n callbacks.on_epoch_end(epoch, epoch_logs)\n final_logs=...\n callbacks.on_train_end(final_logs)\n ```\n\n Attributes:\n params: Dict. Training parameters\n (eg. verbosity, batch size, number of epochs...).\n model: Instance of `keras.models.Model`.\n Reference of the model being trained.\n\n The `logs` dictionary that callback methods\n take as argument will contain keys for quantities relevant to\n the current batch or epoch (see method-specific docstrings).\n ", "desc": "Abstract base class used to build new callbacks.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.CallbackList", "docs": "Container abstracting a list of callbacks.", "desc": "Container abstracting a list of callbacks.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.CSVLogger", "docs": "Callback that streams epoch results to a CSV file.\n\n Supports all values that can be represented as a string,\n including 1D iterables such as `np.ndarray`.\n\n Example:\n\n ```python\n csv_logger = CSVLogger('training.log')\n model.fit(X_train, Y_train, callbacks=[csv_logger])\n ```\n\n Args:\n filename: Filename of the CSV file, e.g. `'run/log.csv'`.\n separator: String used to separate elements in the CSV file.\n append: Boolean. True: append if file exists (useful for continuing\n training). False: overwrite existing file.\n ", "desc": "Callback that streams epoch results to a CSV file.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.EarlyStopping", "docs": "Stop training when a monitored metric has stopped improving.\n\n Assuming the goal of a training is to minimize the loss. With this, the\n metric to be monitored would be `'loss'`, and mode would be `'min'`. A\n `model.fit()` training loop will check at end of every epoch whether\n the loss is no longer decreasing, considering the `min_delta` and\n `patience` if applicable. Once it's found no longer decreasing,\n `model.stop_training` is marked True and the training terminates.\n\n The quantity to be monitored needs to be available in `logs` dict.\n To make it so, pass the loss or metrics at `model.compile()`.\n\n Args:\n monitor: Quantity to be monitored.\n min_delta: Minimum change in the monitored quantity\n to qualify as an improvement, i.e. an absolute\n change of less than min_delta, will count as no\n improvement.\n patience: Number of epochs with no improvement\n after which training will be stopped.\n verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1\n displays messages when the callback takes an action.\n mode: One of `{\"auto\", \"min\", \"max\"}`. In `min` mode,\n training will stop when the quantity\n monitored has stopped decreasing; in `\"max\"`\n mode it will stop when the quantity\n monitored has stopped increasing; in `\"auto\"`\n mode, the direction is automatically inferred\n from the name of the monitored quantity.\n baseline: Baseline value for the monitored quantity.\n Training will stop if the model doesn't show improvement over the\n baseline.\n restore_best_weights: Whether to restore model weights from\n the epoch with the best value of the monitored quantity.\n If False, the model weights obtained at the last step of\n training are used. An epoch will be restored regardless\n of the performance relative to the `baseline`. If no epoch\n improves on `baseline`, training will run for `patience`\n epochs and restore weights from the best epoch in that set.\n\n Example:\n\n >>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)\n >>> # This callback will stop the training when there is no improvement in\n >>> # the loss for three consecutive epochs.\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=10, batch_size=1, callbacks=[callback],\n ... verbose=0)\n >>> len(history.history['loss']) # Only 4 epochs are run.\n 4\n ", "desc": "Stop training when a monitored metric has stopped improving.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.History", "docs": "Callback that records events into a `History` object.\n\n This callback is automatically applied to\n every Keras model. The `History` object\n gets returned by the `fit` method of models.\n\n Example:\n\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=10, verbose=1)\n >>> print(history.params)\n {'verbose': 1, 'epochs': 10, 'steps': 1}\n >>> # check the keys of history object\n >>> print(history.history.keys())\n dict_keys(['loss'])\n\n ", "desc": "Callback that records events into a `History` object.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.LambdaCallback", "docs": "Callback for creating simple, custom callbacks on-the-fly.\n\n This callback is constructed with anonymous functions that will be called\n at the appropriate time (during `Model.{fit | evaluate | predict}`).\n Note that the callbacks expects positional arguments, as:\n\n - `on_epoch_begin` and `on_epoch_end` expect two positional arguments:\n `epoch`, `logs`\n - `on_batch_begin` and `on_batch_end` expect two positional arguments:\n `batch`, `logs`\n - `on_train_begin` and `on_train_end` expect one positional argument:\n `logs`\n\n Args:\n on_epoch_begin: called at the beginning of every epoch.\n on_epoch_end: called at the end of every epoch.\n on_batch_begin: called at the beginning of every batch.\n on_batch_end: called at the end of every batch.\n on_train_begin: called at the beginning of model training.\n on_train_end: called at the end of model training.\n\n Example:\n\n ```python\n # Print the batch number at the beginning of every batch.\n batch_print_callback = LambdaCallback(\n on_batch_begin=lambda batch,logs: print(batch))\n\n # Stream the epoch loss to a file in JSON format. The file content\n # is not well-formed JSON but rather has a JSON object per line.\n import json\n json_log = open('loss_log.json', mode='wt', buffering=1)\n json_logging_callback = LambdaCallback(\n on_epoch_end=lambda epoch, logs: json_log.write(\n json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\\n'),\n on_train_end=lambda logs: json_log.close()\n )\n\n # Terminate some processes after having finished model training.\n processes = ...\n cleanup_callback = LambdaCallback(\n on_train_end=lambda logs: [\n p.terminate() for p in processes if p.is_alive()])\n\n model.fit(...,\n callbacks=[batch_print_callback,\n json_logging_callback,\n cleanup_callback])\n ```\n ", "desc": "Callback for creating simple, custom callbacks on-the-fly.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.LearningRateScheduler", "docs": "Learning rate scheduler.\n\n At the beginning of every epoch, this callback gets the updated learning rate\n value from `schedule` function provided at `__init__`, with the current epoch\n and current learning rate, and applies the updated learning rate\n on the optimizer.\n\n Args:\n schedule: a function that takes an epoch index (integer, indexed from 0)\n and current learning rate (float) as inputs and returns a new\n learning rate as output (float).\n verbose: int. 0: quiet, 1: update messages.\n\n Example:\n\n >>> # This function keeps the initial learning rate for the first ten epochs\n >>> # and decreases it exponentially after that.\n >>> def scheduler(epoch, lr):\n ... if epoch < 10:\n ... return lr\n ... else:\n ... return lr * tf.math.exp(-0.1)\n >>>\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> round(model.optimizer.lr.numpy(), 5)\n 0.01\n\n >>> callback = tf.keras.callbacks.LearningRateScheduler(scheduler)\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=15, callbacks=[callback], verbose=0)\n >>> round(model.optimizer.lr.numpy(), 5)\n 0.00607\n\n ", "desc": "Learning rate scheduler.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.ModelCheckpoint", "docs": "Callback to save the Keras model or model weights at some frequency.\n\n `ModelCheckpoint` callback is used in conjunction with training using\n `model.fit()` to save a model or weights (in a checkpoint file) at some\n interval, so the model or weights can be loaded later to continue the training\n from the state saved.\n\n A few options this callback provides include:\n\n - Whether to only keep the model that has achieved the \"best performance\" so\n far, or whether to save the model at the end of every epoch regardless of\n performance.\n - Definition of 'best'; which quantity to monitor and whether it should be\n maximized or minimized.\n - The frequency it should save at. Currently, the callback supports saving at\n the end of every epoch, or after a fixed number of training batches.\n - Whether only weights are saved, or the whole model is saved.\n\n Note: If you get `WARNING:tensorflow:Can save best model only with \n available, skipping` see the description of the `monitor` argument for\n details on how to get this right.\n\n Example:\n\n ```python\n model.compile(loss=..., optimizer=...,\n metrics=['accuracy'])\n\n EPOCHS = 10\n checkpoint_filepath = '/tmp/checkpoint'\n model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_filepath,\n save_weights_only=True,\n monitor='val_accuracy',\n mode='max',\n save_best_only=True)\n\n # Model weights are saved at the end of every epoch, if it's the best seen\n # so far.\n model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback])\n\n # The model weights (that are considered the best) are loaded into the model.\n model.load_weights(checkpoint_filepath)\n ```\n\n Args:\n filepath: string or `PathLike`, path to save the model file. e.g.\n filepath = os.path.join(working_dir, 'ckpt', file_name). `filepath`\n can contain named formatting options, which will be filled the value of\n `epoch` and keys in `logs` (passed in `on_epoch_end`). For example: if\n `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`, then the model\n checkpoints will be saved with the epoch number and the validation loss\n in the filename. The directory of the filepath should not be reused by\n any other callbacks to avoid conflicts.\n monitor: The metric name to monitor. Typically the metrics are set by the\n `Model.compile` method. Note:\n\n * Prefix the name with `\"val_`\" to monitor validation metrics.\n * Use `\"loss\"` or \"`val_loss`\" to monitor the model's total loss.\n * If you specify metrics as strings, like `\"accuracy\"`, pass the same\n string (with or without the `\"val_\"` prefix).\n * If you pass `metrics.Metric` objects, `monitor` should be set to\n `metric.name`\n * If you're not sure about the metric names you can check the contents\n of the `history.history` dictionary returned by\n `history = model.fit()`\n * Multi-output models set additional prefixes on the metric names.\n\n verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1\n displays messages when the callback takes an action.\n save_best_only: if `save_best_only=True`, it only saves when the model\n is considered the \"best\" and the latest best model according to the\n quantity monitored will not be overwritten. If `filepath` doesn't\n contain formatting options like `{epoch}` then `filepath` will be\n overwritten by each new better model.\n mode: one of {'auto', 'min', 'max'}. If `save_best_only=True`, the\n decision to overwrite the current save file is made based on either\n the maximization or the minimization of the monitored quantity.\n For `val_acc`, this should be `max`, for `val_loss` this should be\n `min`, etc. In `auto` mode, the mode is set to `max` if the quantities\n monitored are 'acc' or start with 'fmeasure' and are set to `min` for\n the rest of the quantities.\n save_weights_only: if True, then only the model's weights will be saved\n (`model.save_weights(filepath)`), else the full model is saved\n (`model.save(filepath)`).\n save_freq: `'epoch'` or integer. When using `'epoch'`, the callback saves\n the model after each epoch. When using integer, the callback saves the\n model at end of this many batches. If the `Model` is compiled with\n `steps_per_execution=N`, then the saving criteria will be\n checked every Nth batch. Note that if the saving isn't aligned to\n epochs, the monitored metric may potentially be less reliable (it\n could reflect as little as 1 batch, since the metrics get reset every\n epoch). Defaults to `'epoch'`.\n options: Optional `tf.train.CheckpointOptions` object if\n `save_weights_only` is true or optional `tf.saved_model.SaveOptions`\n object if `save_weights_only` is false.\n initial_value_threshold: Floating point initial \"best\" value of the metric\n to be monitored. Only applies if `save_best_value=True`. Only overwrites\n the model weights already saved if the performance of current\n model is better than this value.\n **kwargs: Additional arguments for backwards compatibility. Possible key\n is `period`.\n ", "desc": "Callback to save the Keras model or model weights at some frequency.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.ProgbarLogger", "docs": "Callback that prints metrics to stdout.\n\n Args:\n count_mode: One of `\"steps\"` or `\"samples\"`.\n Whether the progress bar should\n count samples seen or steps (batches) seen.\n stateful_metrics: Iterable of string names of metrics that\n should *not* be averaged over an epoch.\n Metrics in this list will be logged as-is.\n All others will be averaged over time (e.g. loss, etc).\n If not provided, defaults to the `Model`'s metrics.\n\n Raises:\n ValueError: In case of invalid `count_mode`.\n ", "desc": "Callback that prints metrics to stdout.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.ReduceLROnPlateau", "docs": "Reduce learning rate when a metric has stopped improving.\n\n Models often benefit from reducing the learning rate by a factor\n of 2-10 once learning stagnates. This callback monitors a\n quantity and if no improvement is seen for a 'patience' number\n of epochs, the learning rate is reduced.\n\n Example:\n\n ```python\n reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,\n patience=5, min_lr=0.001)\n model.fit(X_train, Y_train, callbacks=[reduce_lr])\n ```\n\n Args:\n monitor: quantity to be monitored.\n factor: factor by which the learning rate will be reduced.\n `new_lr = lr * factor`.\n patience: number of epochs with no improvement after which learning rate\n will be reduced.\n verbose: int. 0: quiet, 1: update messages.\n mode: one of `{'auto', 'min', 'max'}`. In `'min'` mode,\n the learning rate will be reduced when the\n quantity monitored has stopped decreasing; in `'max'` mode it will be\n reduced when the quantity monitored has stopped increasing; in `'auto'`\n mode, the direction is automatically inferred from the name of the\n monitored quantity.\n min_delta: threshold for measuring the new optimum, to only focus on\n significant changes.\n cooldown: number of epochs to wait before resuming normal operation after\n lr has been reduced.\n min_lr: lower bound on the learning rate.\n ", "desc": "Reduce learning rate when a metric has stopped improving.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.RemoteMonitor", "docs": "Callback used to stream events to a server.\n\n Requires the `requests` library.\n Events are sent to `root + '/publish/epoch/end/'` by default. Calls are\n HTTP POST, with a `data` argument which is a\n JSON-encoded dictionary of event data.\n If `send_as_json=True`, the content type of the request will be\n `\"application/json\"`.\n Otherwise the serialized JSON will be sent within a form.\n\n Args:\n root: String; root url of the target server.\n path: String; path relative to `root` to which the events will be sent.\n field: String; JSON field under which the data will be stored.\n The field is used only if the payload is sent within a form\n (i.e. send_as_json is set to False).\n headers: Dictionary; optional custom HTTP headers.\n send_as_json: Boolean; whether the request should be\n sent as `\"application/json\"`.\n ", "desc": "Callback used to stream events to a server.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.TensorBoard", "docs": "Enable visualizations for TensorBoard.\n\n TensorBoard is a visualization tool provided with TensorFlow.\n\n This callback logs events for TensorBoard, including:\n * Metrics summary plots\n * Training graph visualization\n * Activation histograms\n * Sampled profiling\n\n If you have installed TensorFlow with pip, you should be able\n to launch TensorBoard from the command line:\n\n ```sh\n tensorboard --logdir=path_to_your_logs\n ```\n\n You can find more information about TensorBoard\n [here](https://www.tensorflow.org/get_started/summaries_and_tensorboard).\n\n Args:\n log_dir: the path of the directory where to save the log files to be\n parsed by TensorBoard.\n histogram_freq: frequency (in epochs) at which to compute activation and\n weight histograms for the layers of the model. If set to 0, histograms\n won't be computed. Validation data (or split) must be specified for\n histogram visualizations.\n write_graph: whether to visualize the graph in TensorBoard. The log file\n can become quite large when write_graph is set to True.\n write_grads: whether to visualize gradient histograms in TensorBoard.\n `histogram_freq` must be greater than 0.\n batch_size: size of batch of inputs to feed to the network for histograms\n computation.\n write_images: whether to write model weights to visualize as image in\n TensorBoard.\n embeddings_freq: frequency (in epochs) at which selected embedding layers\n will be saved. If set to 0, embeddings won't be computed. Data to be\n visualized in TensorBoard's Embedding tab must be passed as\n `embeddings_data`.\n embeddings_layer_names: a list of names of layers to keep eye on. If None\n or empty list all the embedding layer will be watched.\n embeddings_metadata: a dictionary which maps layer name to a file name in\n which metadata for this embedding layer is saved.\n [Here are details](\n https://www.tensorflow.org/how_tos/embedding_viz/#metadata_optional)\n about metadata files format. In case if the same metadata file is\n used for all embedding layers, string can be passed.\n embeddings_data: data to be embedded at layers specified in\n `embeddings_layer_names`. Numpy array (if the model has a single input)\n or list of Numpy arrays (if the model has multiple inputs). Learn more\n about embeddings [in this guide](\n https://www.tensorflow.org/programmers_guide/embedding).\n update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`,\n writes the losses and metrics to TensorBoard after each batch. The same\n applies for `'epoch'`. If using an integer, let's say `1000`, the\n callback will write the metrics and losses to TensorBoard every 1000\n samples. Note that writing too frequently to TensorBoard can slow down\n your training.\n profile_batch: Profile the batch to sample compute characteristics. By\n default, it will profile the second batch. Set profile_batch=0 to\n disable profiling.\n\n Raises:\n ValueError: If histogram_freq is set and no validation data is provided.\n\n @compatibility(eager)\n Using the `TensorBoard` callback will work when eager execution is enabled,\n with the restriction that outputting histogram summaries of weights and\n gradients is not supported. Consequently, `histogram_freq` will be ignored.\n @end_compatibility\n ", "desc": "Enable visualizations for TensorBoard.", "type": "API"}, {"name": "tf.compat.v1.keras.callbacks.TerminateOnNaN", "docs": "Callback that terminates training when a NaN loss is encountered.\n ", "desc": "Callback that terminates training when a NaN loss is encountered.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints", "docs": "Constraints: functions that impose constraints on weight values.\n", "desc": "Constraints: functions that impose constraints on weight values.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.Constraint", "docs": "Base class for weight constraints.\n\n A `Constraint` instance works like a stateless function.\n Users who subclass this\n class should override the `__call__` method, which takes a single\n weight parameter and return a projected version of that parameter\n (e.g. normalized or clipped). Constraints can be used with various Keras\n layers via the `kernel_constraint` or `bias_constraint` arguments.\n\n Here's a simple example of a non-negative weight constraint:\n\n >>> class NonNegative(tf.keras.constraints.Constraint):\n ...\n ... def __call__(self, w):\n ... return w * tf.cast(tf.math.greater_equal(w, 0.), w.dtype)\n\n >>> weight = tf.constant((-1.0, 1.0))\n >>> NonNegative()(weight)\n \n\n >>> tf.keras.layers.Dense(4, kernel_constraint=NonNegative())\n ", "desc": "Base class for weight constraints.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.deserialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.get", "docs": "Retrieves a Keras constraint function.", "desc": "Retrieves a Keras constraint function.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.max_norm", "docs": "MaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have a norm less than or equal to a desired value.\n\n Also available via the shortcut function `tf.keras.constraints.max_norm`.\n\n Args:\n max_value: the maximum norm value for the incoming weights.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n\n ", "desc": "MaxNorm weight constraint.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.MaxNorm", "docs": "MaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have a norm less than or equal to a desired value.\n\n Also available via the shortcut function `tf.keras.constraints.max_norm`.\n\n Args:\n max_value: the maximum norm value for the incoming weights.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n\n ", "desc": "MaxNorm weight constraint.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.min_max_norm", "docs": "MinMaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have the norm between a lower bound and an upper bound.\n\n Also available via the shortcut function `tf.keras.constraints.min_max_norm`.\n\n Args:\n min_value: the minimum norm for the incoming weights.\n max_value: the maximum norm for the incoming weights.\n rate: rate for enforcing the constraint: weights will be\n rescaled to yield\n `(1 - rate) * norm + rate * norm.clip(min_value, max_value)`.\n Effectively, this means that rate=1.0 stands for strict\n enforcement of the constraint, while rate<1.0 means that\n weights will be rescaled at each step to slowly move\n towards a value inside the desired interval.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "MinMaxNorm weight constraint.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.MinMaxNorm", "docs": "MinMaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have the norm between a lower bound and an upper bound.\n\n Also available via the shortcut function `tf.keras.constraints.min_max_norm`.\n\n Args:\n min_value: the minimum norm for the incoming weights.\n max_value: the maximum norm for the incoming weights.\n rate: rate for enforcing the constraint: weights will be\n rescaled to yield\n `(1 - rate) * norm + rate * norm.clip(min_value, max_value)`.\n Effectively, this means that rate=1.0 stands for strict\n enforcement of the constraint, while rate<1.0 means that\n weights will be rescaled at each step to slowly move\n towards a value inside the desired interval.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "MinMaxNorm weight constraint.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.non_neg", "docs": "Constrains the weights to be non-negative.\n\n Also available via the shortcut function `tf.keras.constraints.non_neg`.\n ", "desc": "Constrains the weights to be non-negative.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.NonNeg", "docs": "Constrains the weights to be non-negative.\n\n Also available via the shortcut function `tf.keras.constraints.non_neg`.\n ", "desc": "Constrains the weights to be non-negative.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.radial_constraint", "docs": "Constrains `Conv2D` kernel weights to be the same for each radius.\n\n Also available via the shortcut function\n `tf.keras.constraints.radial_constraint`.\n\n For example, the desired output for the following 4-by-4 kernel:\n\n ```\n kernel = [[v_00, v_01, v_02, v_03],\n [v_10, v_11, v_12, v_13],\n [v_20, v_21, v_22, v_23],\n [v_30, v_31, v_32, v_33]]\n ```\n\n is this::\n\n ```\n kernel = [[v_11, v_11, v_11, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_11, v_11, v_11]]\n ```\n\n This constraint can be applied to any `Conv2D` layer version, including\n `Conv2DTranspose` and `SeparableConv2D`, and with either `\"channels_last\"` or\n `\"channels_first\"` data format. The method assumes the weight tensor is of\n shape `(rows, cols, input_depth, output_depth)`.\n ", "desc": "Constrains `Conv2D` kernel weights to be the same for each radius.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.RadialConstraint", "docs": "Constrains `Conv2D` kernel weights to be the same for each radius.\n\n Also available via the shortcut function\n `tf.keras.constraints.radial_constraint`.\n\n For example, the desired output for the following 4-by-4 kernel:\n\n ```\n kernel = [[v_00, v_01, v_02, v_03],\n [v_10, v_11, v_12, v_13],\n [v_20, v_21, v_22, v_23],\n [v_30, v_31, v_32, v_33]]\n ```\n\n is this::\n\n ```\n kernel = [[v_11, v_11, v_11, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_11, v_11, v_11]]\n ```\n\n This constraint can be applied to any `Conv2D` layer version, including\n `Conv2DTranspose` and `SeparableConv2D`, and with either `\"channels_last\"` or\n `\"channels_first\"` data format. The method assumes the weight tensor is of\n shape `(rows, cols, input_depth, output_depth)`.\n ", "desc": "Constrains `Conv2D` kernel weights to be the same for each radius.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.unit_norm", "docs": "Constrains the weights incident to each hidden unit to have unit norm.\n\n Also available via the shortcut function `tf.keras.constraints.unit_norm`.\n\n Args:\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "Constrains the weights incident to each hidden unit to have unit norm.", "type": "API"}, {"name": "tf.compat.v1.keras.constraints.UnitNorm", "docs": "Constrains the weights incident to each hidden unit to have unit norm.\n\n Also available via the shortcut function `tf.keras.constraints.unit_norm`.\n\n Args:\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "Constrains the weights incident to each hidden unit to have unit norm.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets", "docs": "Small NumPy datasets for debugging/testing.\n", "desc": "Small NumPy datasets for debugging/testing.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.boston_housing", "docs": "Boston housing price regression dataset.\n", "desc": "Boston housing price regression dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.boston_housing.load_data", "docs": "Loads the Boston Housing dataset.\n\n This is a dataset taken from the StatLib library which is maintained at\n Carnegie Mellon University.\n\n Samples contain 13 attributes of houses at different locations around the\n Boston suburbs in the late 1970s. Targets are the median values of\n the houses at a location (in k$).\n\n The attributes themselves are defined in the\n [StatLib website](http://lib.stat.cmu.edu/datasets/boston).\n\n Args:\n path: path where to cache the dataset locally\n (relative to `~/.keras/datasets`).\n test_split: fraction of the data to reserve as test set.\n seed: Random seed for shuffling the data\n before computing the test split.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: numpy arrays with shape `(num_samples, 13)`\n containing either the training samples (for x_train),\n or test samples (for y_train).\n\n **y_train, y_test**: numpy arrays of shape `(num_samples,)` containing the\n target scalars. The targets are float scalars typically between 10 and\n 50 that represent the home prices in k$.\n ", "desc": "Loads the Boston Housing dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.cifar10", "docs": "CIFAR10 small images classification dataset.\n", "desc": "CIFAR10 small images classification dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.cifar10.load_data", "docs": "Loads the CIFAR10 dataset.\n\n This is a dataset of 50,000 32x32 color training images and 10,000 test\n images, labeled over 10 categories. See more info at the\n [CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).\n\n The classes are:\n\n | Label | Description |\n |:-----:|-------------|\n | 0 | airplane |\n | 1 | automobile |\n | 2 | bird |\n | 3 | cat |\n | 4 | deer |\n | 5 | dog |\n | 6 | frog |\n | 7 | horse |\n | 8 | ship |\n | 9 | truck |\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(50000, 32, 32, 3)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(50000, 1)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n `(10000, 32, 32, 3)`, containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(10000, 1)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()\n assert x_train.shape == (50000, 32, 32, 3)\n assert x_test.shape == (10000, 32, 32, 3)\n assert y_train.shape == (50000, 1)\n assert y_test.shape == (10000, 1)\n ```\n ", "desc": "Loads the CIFAR10 dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.cifar100", "docs": "CIFAR100 small images classification dataset.\n", "desc": "CIFAR100 small images classification dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.cifar100.load_data", "docs": "Loads the CIFAR100 dataset.\n\n This is a dataset of 50,000 32x32 color training images and\n 10,000 test images, labeled over 100 fine-grained classes that are\n grouped into 20 coarse-grained classes. See more info at the\n [CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).\n\n Args:\n label_mode: one of \"fine\", \"coarse\". If it is \"fine\" the category labels\n are the fine-grained labels, if it is \"coarse\" the output labels are the\n coarse-grained superclasses.\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(50000, 32, 32, 3)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-99)\n with shape `(50000, 1)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n `(10000, 32, 32, 3)`, containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-99)\n with shape `(10000, 1)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()\n assert x_train.shape == (50000, 32, 32, 3)\n assert x_test.shape == (10000, 32, 32, 3)\n assert y_train.shape == (50000, 1)\n assert y_test.shape == (10000, 1)\n ```\n ", "desc": "Loads the CIFAR100 dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.fashion_mnist", "docs": "Fashion-MNIST dataset.\n", "desc": "Fashion-MNIST dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.fashion_mnist.load_data", "docs": "Loads the Fashion-MNIST dataset.\n\n This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,\n along with a test set of 10,000 images. This dataset can be used as\n a drop-in replacement for MNIST.\n\n The classes are:\n\n | Label | Description |\n |:-----:|-------------|\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(60000, 28, 28)`, containing the training data.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(60000,)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n (10000, 28, 28), containing the test data.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(10000,)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n assert x_train.shape == (60000, 28, 28)\n assert x_test.shape == (10000, 28, 28)\n assert y_train.shape == (60000,)\n assert y_test.shape == (10000,)\n ```\n\n License:\n The copyright for Fashion-MNIST is held by Zalando SE.\n Fashion-MNIST is licensed under the [MIT license](\n https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE).\n\n ", "desc": "Loads the Fashion-MNIST dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.imdb", "docs": "IMDB sentiment classification dataset.\n", "desc": "IMDB sentiment classification dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.imdb.get_word_index", "docs": "Retrieves a dict mapping words to their index in the IMDB dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. Keys are word strings, values are their index.\n\n Example:\n\n ```python\n # Retrieve the training sequences.\n (x_train, _), _ = keras.datasets.imdb.load_data()\n # Retrieve the word index file mapping words to indices\n word_index = keras.datasets.imdb.get_word_index()\n # Reverse the word index to obtain a dict mapping indices to words\n inverted_word_index = dict((i, word) for (word, i) in word_index.items())\n # Decode the first sequence in the dataset\n decoded_sequence = \" \".join(inverted_word_index[i] for i in x_train[0])\n ```\n ", "desc": "Retrieves a dict mapping words to their index in the IMDB dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.imdb.load_data", "docs": "Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).\n\n This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment\n (positive/negative). Reviews have been preprocessed, and each review is\n encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode any unknown word.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to None, so all words are kept.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. Defaults to 0, so no words are\n skipped.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. Defaults to None, which\n means no truncation.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. Defaults to 1 because 0 is usually the padding character.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. Index actual words with this index and higher.\n **kwargs: Used for backwards compatibility.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **y_train, y_test**: lists of integer labels (1 or 0).\n\n Raises:\n ValueError: in case `maxlen` is so low\n that no input sequence could be kept.\n\n Note that the 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n ", "desc": "Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.mnist", "docs": "MNIST handwritten digits dataset.\n", "desc": "MNIST handwritten digits dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.mnist.load_data", "docs": "Loads the MNIST dataset.\n\n This is a dataset of 60,000 28x28 grayscale images of the 10 digits,\n along with a test set of 10,000 images.\n More info can be found at the\n [MNIST homepage](http://yann.lecun.com/exdb/mnist/).\n\n Args:\n path: path where to cache the dataset locally\n (relative to `~/.keras/datasets`).\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(60000, 28, 28)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of digit labels (integers in range 0-9)\n with shape `(60000,)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n (10000, 28, 28), containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of digit labels (integers in range 0-9)\n with shape `(10000,)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n assert x_train.shape == (60000, 28, 28)\n assert x_test.shape == (10000, 28, 28)\n assert y_train.shape == (60000,)\n assert y_test.shape == (10000,)\n ```\n\n License:\n Yann LeCun and Corinna Cortes hold the copyright of MNIST dataset,\n which is a derivative work from original NIST datasets.\n MNIST dataset is made available under the terms of the\n [Creative Commons Attribution-Share Alike 3.0 license.](\n https://creativecommons.org/licenses/by-sa/3.0/)\n ", "desc": "Loads the MNIST dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.reuters", "docs": "Reuters topic classification dataset.\n", "desc": "Reuters topic classification dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.reuters.get_word_index", "docs": "Retrieves a dict mapping words to their index in the Reuters dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. Keys are word strings, values are their index.\n ", "desc": "Retrieves a dict mapping words to their index in the Reuters dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.datasets.reuters.load_data", "docs": "Loads the Reuters newswire classification dataset.\n\n This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics.\n\n This was originally generated by parsing and preprocessing the classic\n Reuters-21578 dataset, but the preprocessing code is no longer packaged\n with Keras. See this\n [github discussion](https://github.com/keras-team/keras/issues/12072)\n for more info.\n\n Each newswire is encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode any unknown word.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to None, so all words are kept.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. Defaults to 0, so no words are\n skipped.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. Defaults to None, which\n means no truncation.\n test_split: Float between 0 and 1. Fraction of the dataset to be used\n as test data. Defaults to 0.2, meaning 20% of the dataset is used as\n test data.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. Defaults to 1 because 0 is usually the padding character.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. Index actual words with this index and higher.\n **kwargs: Used for backwards compatibility.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **y_train, y_test**: lists of integer labels (1 or 0).\n\n Note: The 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n ", "desc": "Loads the Reuters newswire classification dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.estimator", "docs": "Keras estimator API.\n", "desc": "Keras estimator API.", "type": "API"}, {"name": "tf.compat.v1.keras.estimator.model_to_estimator", "docs": "Constructs an `Estimator` instance from given keras model.\n\n If you use infrastructure or other tooling that relies on Estimators, you can\n still build a Keras model and use model_to_estimator to convert the Keras\n model to an Estimator for use with downstream systems.\n\n For usage example, please see:\n [Creating estimators from Keras Models](\n https://www.tensorflow.org/guide/estimator#create_an_estimator_from_a_keras_model).\n\n Sample Weights:\n Estimators returned by `model_to_estimator` are configured so that they can\n handle sample weights (similar to `keras_model.fit(x, y, sample_weights)`).\n\n To pass sample weights when training or evaluating the Estimator, the first\n item returned by the input function should be a dictionary with keys\n `features` and `sample_weights`. Example below:\n\n ```python\n keras_model = tf.keras.Model(...)\n keras_model.compile(...)\n\n estimator = tf.keras.estimator.model_to_estimator(keras_model)\n\n def input_fn():\n return dataset_ops.Dataset.from_tensors(\n ({'features': features, 'sample_weights': sample_weights},\n targets))\n\n estimator.train(input_fn, steps=1)\n ```\n\n Example with customized export signature:\n ```python\n inputs = {'a': tf.keras.Input(..., name='a'),\n 'b': tf.keras.Input(..., name='b')}\n outputs = {'c': tf.keras.layers.Dense(..., name='c')(inputs['a']),\n 'd': tf.keras.layers.Dense(..., name='d')(inputs['b'])}\n keras_model = tf.keras.Model(inputs, outputs)\n keras_model.compile(...)\n export_outputs = {'c': tf.estimator.export.RegressionOutput,\n 'd': tf.estimator.export.ClassificationOutput}\n\n estimator = tf.keras.estimator.model_to_estimator(\n keras_model, export_outputs=export_outputs)\n\n def input_fn():\n return dataset_ops.Dataset.from_tensors(\n ({'features': features, 'sample_weights': sample_weights},\n targets))\n\n estimator.train(input_fn, steps=1)\n ```\n\n Args:\n keras_model: A compiled Keras model object. This argument is mutually\n exclusive with `keras_model_path`. Estimator's `model_fn` uses the\n structure of the model to clone the model. Defaults to `None`.\n keras_model_path: Path to a compiled Keras model saved on disk, in HDF5\n format, which can be generated with the `save()` method of a Keras model.\n This argument is mutually exclusive with `keras_model`.\n Defaults to `None`.\n custom_objects: Dictionary for cloning customized objects. This is\n used with classes that is not part of this pip package. For example, if\n user maintains a `relu6` class that inherits from `tf.keras.layers.Layer`,\n then pass `custom_objects={'relu6': relu6}`. Defaults to `None`.\n model_dir: Directory to save `Estimator` model parameters, graph, summary\n files for TensorBoard, etc. If unset a directory will be created with\n `tempfile.mkdtemp`\n config: `RunConfig` to config `Estimator`. Allows setting up things in\n `model_fn` based on configuration such as `num_ps_replicas`, or\n `model_dir`. Defaults to `None`. If both `config.model_dir` and the\n `model_dir` argument (above) are specified the `model_dir` **argument**\n takes precedence.\n checkpoint_format: Sets the format of the checkpoint saved by the estimator\n when training. May be `saver` or `checkpoint`, depending on whether to\n save checkpoints from `tf.train.Saver` or `tf.train.Checkpoint`. This\n argument currently defaults to `saver`. When 2.0 is released, the default\n will be `checkpoint`. Estimators use name-based `tf.train.Saver`\n checkpoints, while Keras models use object-based checkpoints from\n `tf.train.Checkpoint`. Currently, saving object-based checkpoints from\n `model_to_estimator` is only supported by Functional and Sequential\n models. Defaults to 'saver'.\n metric_names_map: Optional dictionary mapping Keras model output metric\n names to custom names. This can be used to override the default Keras\n model output metrics names in a multi IO model use case and provide custom\n names for the `eval_metric_ops` in Estimator.\n The Keras model metric names can be obtained using `model.metrics_names`\n excluding any loss metrics such as total loss and output losses.\n For example, if your Keras model has two outputs `out_1` and `out_2`,\n with `mse` loss and `acc` metric, then `model.metrics_names` will be\n `['loss', 'out_1_loss', 'out_2_loss', 'out_1_acc', 'out_2_acc']`.\n The model metric names excluding the loss metrics will be\n `['out_1_acc', 'out_2_acc']`.\n export_outputs: Optional dictionary. This can be used to override the\n default Keras model output exports in a multi IO model use case and\n provide custom names for the `export_outputs` in\n `tf.estimator.EstimatorSpec`. Default is None, which is equivalent to\n {'serving_default': `tf.estimator.export.PredictOutput`}. If not None,\n the keys must match the keys of `model.output_names`.\n A dict `{name: output}` where:\n * name: An arbitrary name for this output.\n * output: an `ExportOutput` class such as `ClassificationOutput`,\n `RegressionOutput`, or `PredictOutput`. Single-headed models only need\n to specify one entry in this dictionary. Multi-headed models should\n specify one entry for each head, one of which must be named using\n `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`\n If no entry is provided, a default `PredictOutput` mapping to\n `predictions` will be created.\n\n Returns:\n An Estimator from given keras model.\n\n Raises:\n ValueError: If neither keras_model nor keras_model_path was given.\n ValueError: If both keras_model and keras_model_path was given.\n ValueError: If the keras_model_path is a GCS URI.\n ValueError: If keras_model has not been compiled.\n ValueError: If an invalid checkpoint_format was given.\n ", "desc": "Constructs an `Estimator` instance from given keras model.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental", "docs": "Public API for tf.keras.experimental namespace.\n", "desc": "Public API for tf.keras.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.CosineDecay", "docs": "A LearningRateSchedule that uses a cosine decay schedule.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n return initial_learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(\n initial_learning_rate, decay_steps)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.CosineDecayRestarts", "docs": "A LearningRateSchedule that uses a cosine decay schedule with restarts.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function with\n restarts to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n\n The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more\n steps and with `m_mul` times initial learning rate as the new learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed_fn = (\n tf.keras.optimizers.schedules.CosineDecayRestarts(\n initial_learning_rate,\n first_decay_steps))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule with restarts.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.export_saved_model", "docs": "Exports a `tf.keras.Model` as a Tensorflow SavedModel.\n\n Note that at this time, subclassed models can only be saved using\n `serving_only=True`.\n\n The exported `SavedModel` is a standalone serialization of Tensorflow objects,\n and is supported by TF language APIs and the Tensorflow Serving system.\n To load the model, use the function\n `tf.keras.experimental.load_from_saved_model`.\n\n The `SavedModel` contains:\n\n 1. a checkpoint containing the model weights.\n 2. a `SavedModel` proto containing the Tensorflow backend graph. Separate\n graphs are saved for prediction (serving), train, and evaluation. If\n the model has not been compiled, then only the graph computing predictions\n will be exported.\n 3. the model's json config. If the model is subclassed, this will only be\n included if the model's `get_config()` method is overwritten.\n\n Example:\n\n ```python\n import tensorflow as tf\n\n # Create a tf.keras model.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(1, input_shape=[10]))\n model.summary()\n\n # Save the tf.keras model in the SavedModel format.\n path = '/tmp/simple_keras_model'\n tf.keras.experimental.export_saved_model(model, path)\n\n # Load the saved keras model back.\n new_model = tf.keras.experimental.load_from_saved_model(path)\n new_model.summary()\n ```\n\n Args:\n model: A `tf.keras.Model` to be saved. If the model is subclassed, the flag\n `serving_only` must be set to True.\n saved_model_path: a string specifying the path to the SavedModel directory.\n custom_objects: Optional dictionary mapping string names to custom classes\n or functions (e.g. custom loss functions).\n as_text: bool, `False` by default. Whether to write the `SavedModel` proto\n in text format. Currently unavailable in serving-only mode.\n input_signature: A possibly nested sequence of `tf.TensorSpec` objects, used\n to specify the expected model inputs. See `tf.function` for more details.\n serving_only: bool, `False` by default. When this is true, only the\n prediction graph is saved.\n\n Raises:\n NotImplementedError: If the model is a subclassed model, and serving_only is\n False.\n ValueError: If the input signature cannot be inferred from the model.\n AssertionError: If the SavedModel directory already exists and isn't empty.\n ", "desc": "Exports a `tf.keras.Model` as a Tensorflow SavedModel.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.LinearModel", "docs": "Linear Model for regression and classification problems.\n\n This model approximates the following function:\n $$y = \\beta + \\sum_{i=1}^{N} w_{i} * x_{i}$$\n where $$\\beta$$ is the bias and $$w_{i}$$ is the weight for each feature.\n\n Example:\n\n ```python\n model = LinearModel()\n model.compile(optimizer='sgd', loss='mse')\n model.fit(x, y, epochs=epochs)\n ```\n\n This model accepts sparse float inputs as well:\n\n Example:\n ```python\n model = LinearModel()\n opt = tf.keras.optimizers.Adam()\n loss_fn = tf.keras.losses.MeanSquaredError()\n with tf.GradientTape() as tape:\n output = model(sparse_input)\n loss = tf.reduce_mean(loss_fn(target, output))\n grads = tape.gradient(loss, model.weights)\n opt.apply_gradients(zip(grads, model.weights))\n ```\n\n ", "desc": "Linear Model for regression and classification problems.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.load_from_saved_model", "docs": "Loads a keras Model from a SavedModel created by `export_saved_model()`.\n\n This function reinstantiates model state by:\n 1) loading model topology from json (this will eventually come\n from metagraph).\n 2) loading model weights from checkpoint.\n\n Example:\n\n ```python\n import tensorflow as tf\n\n # Create a tf.keras model.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(1, input_shape=[10]))\n model.summary()\n\n # Save the tf.keras model in the SavedModel format.\n path = '/tmp/simple_keras_model'\n tf.keras.experimental.export_saved_model(model, path)\n\n # Load the saved keras model back.\n new_model = tf.keras.experimental.load_from_saved_model(path)\n new_model.summary()\n ```\n\n Args:\n saved_model_path: a string specifying the path to an existing SavedModel.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n a keras.Model instance.\n ", "desc": "Loads a keras Model from a SavedModel created by `export_saved_model()`.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.SequenceFeatures", "docs": "A layer for sequence input.\n\n All `feature_columns` must be sequence dense columns with the same\n `sequence_length`. The output of this method can be fed into sequence\n networks, such as RNN.\n\n The output of this method is a 3D `Tensor` of shape `[batch_size, T, D]`.\n `T` is the maximum sequence length for this batch, which could differ from\n batch to batch.\n\n If multiple `feature_columns` are given with `Di` `num_elements` each, their\n outputs are concatenated. So, the final `Tensor` has shape\n `[batch_size, T, D0 + D1 + ... + Dn]`.\n\n Example:\n\n ```python\n\n import tensorflow as tf\n\n # Behavior of some cells or feature columns may depend on whether we are in\n # training or inference mode, e.g. applying dropout.\n training = True\n rating = tf.feature_column.sequence_numeric_column('rating')\n watches = tf.feature_column.sequence_categorical_column_with_identity(\n 'watches', num_buckets=1000)\n watches_embedding = tf.feature_column.embedding_column(watches,\n dimension=10)\n columns = [rating, watches_embedding]\n\n features = {\n 'rating': tf.sparse.from_dense([[1.0,1.1, 0, 0, 0],\n [2.0,2.1,2.2, 2.3, 2.5]]),\n 'watches': tf.sparse.from_dense([[2, 85, 0, 0, 0],[33,78, 2, 73, 1]])\n }\n\n sequence_input_layer = tf.keras.experimental.SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_input_layer(\n features, training=training)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n hidden_size = 32\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n ", "desc": "A layer for sequence input.", "type": "API"}, {"name": "tf.compat.v1.keras.experimental.WideDeepModel", "docs": "Wide & Deep Model for regression and classification problems.\n\n This model jointly train a linear and a dnn model.\n\n Example:\n\n ```python\n linear_model = LinearModel()\n dnn_model = keras.Sequential([keras.layers.Dense(units=64),\n keras.layers.Dense(units=1)])\n combined_model = WideDeepModel(linear_model, dnn_model)\n combined_model.compile(optimizer=['sgd', 'adam'], 'mse', ['mse'])\n # define dnn_inputs and linear_inputs as separate numpy arrays or\n # a single numpy array if dnn_inputs is same as linear_inputs.\n combined_model.fit([linear_inputs, dnn_inputs], y, epochs)\n # or define a single `tf.data.Dataset` that contains a single tensor or\n # separate tensors for dnn_inputs and linear_inputs.\n dataset = tf.data.Dataset.from_tensors(([linear_inputs, dnn_inputs], y))\n combined_model.fit(dataset, epochs)\n ```\n\n Both linear and dnn model can be pre-compiled and trained separately\n before jointly training:\n\n Example:\n ```python\n linear_model = LinearModel()\n linear_model.compile('adagrad', 'mse')\n linear_model.fit(linear_inputs, y, epochs)\n dnn_model = keras.Sequential([keras.layers.Dense(units=1)])\n dnn_model.compile('rmsprop', 'mse')\n dnn_model.fit(dnn_inputs, y, epochs)\n combined_model = WideDeepModel(linear_model, dnn_model)\n combined_model.compile(optimizer=['sgd', 'adam'], 'mse', ['mse'])\n combined_model.fit([linear_inputs, dnn_inputs], y, epochs)\n ```\n\n ", "desc": "Wide & Deep Model for regression and classification problems.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers", "docs": "Keras initializer serialization / deserialization.\n", "desc": "Keras initializer serialization / deserialization.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Constant", "docs": "Initializer that generates tensors with constant values.\n\n The resulting tensor is populated with values of type `dtype`, as\n specified by arguments `value` following the desired `shape` of the\n new tensor (see examples below).\n\n The argument `value` can be a constant value, or a list of values of type\n `dtype`. If `value` is a list, then the length of the list must be less\n than or equal to the number of elements implied by the desired shape of the\n tensor. In the case where the total number of elements in `value` is less\n than the number of elements required by the tensor shape, the last element\n in `value` will be used to fill the remaining entries. If the total number of\n elements in `value` is greater than the number of elements required by the\n tensor shape, the initializer will raise a `ValueError`.\n\n Args:\n value: A Python scalar, list or tuple of values, or a N-dimensional numpy\n array. All elements of the initialized variable will be set to the\n corresponding value in the `value` argument.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n verify_shape: Boolean that enables verification of the shape of `value`. If\n `True`, the initializer will throw an error if the shape of `value` is not\n compatible with the shape of the initialized tensor.\n\n Raises:\n TypeError: If the input `value` is not one of the expected types.\n\n Examples:\n The following example can be rewritten using a numpy.ndarray instead\n of the `value` list, even reshaped, as shown in the two commented lines\n below the `value` list initialization.\n\n >>> value = [0, 1, 2, 3, 4, 5, 6, 7]\n >>> init = tf.compat.v1.constant_initializer(value)\n >>> # fitting shape\n >>> with tf.compat.v1.Session():\n ... x = tf.compat.v1.get_variable('x', shape=[2, 4], initializer=init)\n ... x.initializer.run()\n ... print(x.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]]\n >>> # Larger shape\n >>> with tf.compat.v1.Session():\n ... y = tf.compat.v1.get_variable('y', shape=[3, 4], initializer=init)\n ... y.initializer.run()\n ... print(y.eval())\n [[0. 1. 2. 3.]\n [4. 5. 6. 7.]\n [7. 7. 7. 7.]]\n >>> # Smaller shape\n >>> with tf.compat.v1.Session():\n ... z = tf.compat.v1.get_variable('z', shape=[2, 3], initializer=init)\n Traceback (most recent call last):\n ...\n ValueError: Too many elements provided. Needed at most 6, but received 8\n >>> # Shape verification\n >>> init_verify = tf.compat.v1.constant_initializer(value, verify_shape=True)\n >>> with tf.compat.v1.Session():\n ... u = tf.compat.v1.get_variable('u', shape=[3, 4],\n ... initializer=init_verify)\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (3, 4), got (8,).\n\n @compatibility(TF2)\n Although it is a legacy API endpoint, `tf.compat.v1.constant_initializer`\n is compatible with eager execution and `tf.function`.\n\n To migrate to a non-legacy TF2 API, please use `tf.constant_initializer`\n instead. The `dtype`\n argument in `tf.compat.v1.constant_initializer.__init__()` does not exist in\n `tf.constant_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n In the `compat.v1` symbol, if `verify_shape` is set to `True`, an exception\n is raised when initializing a variable with a different shape from\n `value`. If set to `False`, `value` is reshaped to initialize the variable\n if necessary. An exception would only be raised when the number of\n elements are different.\n\n The `verify_shape` argument is not supported in TF2. Using\n `tf.constant_initializer` is equivalent to setting `verify_shape` to `False`.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.compat.v1.constant_initializer(\n value=value,\n dtype=tf.float32,\n verify_shape=False)\n variable = tf.Variable(initializer(shape=[2, 4]))\n ```\n\n After:\n\n ```python\n value = [0, 1, 2, 3, 4, 5, 6, 7]\n initializer = tf.constant_initializer(value=value)\n tf.Variable(initializer(shape=[2, 4], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :--------------- | :-------------------------- |\n | `value` | `value` | In constructor |\n | `dtype` | `dtype` | In `__call__()` method |\n | `verify_shape` | Not Supported | Equivalent to set to `False`|\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=True)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n Traceback (most recent call last):\n ...\n TypeError: Expected Tensor's shape: (2, 2), got (4,).\n >>> initializer = tf.compat.v1.constant_initializer(\n ... value=value, dtype=tf.float32, verify_shape=False)\n >>> tf.Variable(initializer(shape=[2, 2])).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n After:\n\n >>> value = [1., 2., 3., 4.]\n >>> initializer = tf.constant_initializer(value=value)\n >>> tf.Variable(initializer(shape=[2, 2], dtype=tf.float32)).numpy()\n array([[1., 2.],\n [3., 4.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.deserialize", "docs": "Return an `Initializer` object from its config.", "desc": "Return an `Initializer` object from its config.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.get", "docs": "Retrieve a Keras initializer by the identifier.\n\n The `identifier` may be the string name of a initializers function or class (\n case-sensitively).\n\n >>> identifier = 'Ones'\n >>> tf.keras.initializers.deserialize(identifier)\n <...keras.initializers.initializers_v2.Ones...>\n\n You can also specify `config` of the initializer to this function by passing\n dict containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Initializer` class.\n\n >>> cfg = {'class_name': 'Ones', 'config': {}}\n >>> tf.keras.initializers.deserialize(cfg)\n <...keras.initializers.initializers_v2.Ones...>\n\n In the case that the `identifier` is a class, this method will return a new\n instance of the class by its constructor.\n\n Args:\n identifier: String or dict that contains the initializer name or\n configurations.\n\n Returns:\n Initializer instance base on the input identifier.\n\n Raises:\n ValueError: If the input identifier is not a supported type or in a bad\n format.\n ", "desc": "Retrieve a Keras initializer by the identifier.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.glorot_normal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n It draws samples from a truncated normal distribution centered on 0\n with standard deviation (after truncation) given by\n `stddev = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number\n of input units in the weight tensor and `fan_out` is the number of\n output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.glorot_uniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n It draws samples from a uniform distribution within [-limit, limit]\n where `limit` is `sqrt(6 / (fan_in + fan_out))`\n where `fan_in` is the number of input units in the weight tensor\n and `fan_out` is the number of output units in the weight tensor.\n\n Args:\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ([pdf](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf))\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.he_normal", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.he_uniform", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Identity", "docs": "Initializer that generates the identity matrix.\n\n Only use for 2D matrices.\n\n Args:\n gain: Multiplicative factor to apply to the identity matrix.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n ", "desc": "Initializer that generates the identity matrix.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Initializer", "docs": "Initializer base class: all Keras initializers inherit from this class.\n\n Initializers should implement a `__call__` method with the following\n signature:\n\n ```python\n def __call__(self, shape, dtype=None, **kwargs):\n # returns a tensor of shape `shape` and dtype `dtype`\n # containing values drawn from a distribution of your choice.\n ```\n\n Optionally, you an also implement the method `get_config` and the class\n method `from_config` in order to support serialization -- just like with\n any Keras object.\n\n Here's a simple example: a random normal initializer.\n\n ```python\n import tensorflow as tf\n\n class ExampleRandomNormal(tf.keras.initializers.Initializer):\n\n def __init__(self, mean, stddev):\n self.mean = mean\n self.stddev = stddev\n\n def __call__(self, shape, dtype=None, **kwargs):\n return tf.random.normal(\n shape, mean=self.mean, stddev=self.stddev, dtype=dtype)\n\n def get_config(self): # To support serialization\n return {\"mean\": self.mean, \"stddev\": self.stddev}\n ```\n\n Note that we don't have to implement `from_config` in the example above since\n the constructor arguments of the class the keys in the config returned by\n `get_config` are the same. In this case, the default `from_config`\n works fine.\n ", "desc": "Initializer base class: all Keras initializers inherit from this class.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.lecun_normal", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.lecun_uniform", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.normal", "docs": "Initializer that generates a normal distribution.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 api,\n `tf.compat.v1.keras.initializers.RandomNormal` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomNormal` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.keras.initializers.RandomNormal(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomNormal(\n mean=mean,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with :\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight:\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Ones", "docs": "Initializer that generates tensors initialized to 1.\n\n @compatibility(TF2)\n This API is compatible with TF2 behavior and `tf.function`, and can be\n migrated immediately with `tf.keras.initializers.ones`.\n\n Before:\n >>> initializer = tf.compat.v1.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n After:\n >>> initializer = tf.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 1.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Orthogonal", "docs": "Initializer that generates an orthogonal matrix.\n\n If the shape of the tensor to initialize is two-dimensional, it is initialized\n with an orthogonal matrix obtained from the QR decomposition of a matrix of\n random numbers drawn from a normal distribution.\n If the matrix has fewer rows than columns then the output will have orthogonal\n rows. Otherwise, the output will have orthogonal columns.\n\n If the shape of the tensor to initialize is more than two-dimensional,\n a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`\n is initialized, where `n` is the length of the shape vector.\n The matrix is subsequently reshaped to give a tensor of the desired shape.\n\n Args:\n gain: multiplicative factor to apply to the orthogonal matrix\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C)\n ([pdf](https://arxiv.org/pdf/1312.6120.pdf))\n ", "desc": "Initializer that generates an orthogonal matrix.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.random_normal", "docs": "Initializer that generates a normal distribution.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 api,\n `tf.compat.v1.keras.initializers.RandomNormal` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomNormal` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.keras.initializers.RandomNormal(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomNormal(\n mean=mean,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with :\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight:\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.random_uniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate.\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate. Defaults to 1 for float types.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` api,\n `tf.compat.v1.keras.initializers.RandomUniform` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomUniform` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n\n initializer = tf.compat.v1.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n )\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `minval` | `minval` | No change to defaults |\n | `maxval` | `maxval` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight :\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.RandomNormal", "docs": "Initializer that generates a normal distribution.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 api,\n `tf.compat.v1.keras.initializers.RandomNormal` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomNormal` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.keras.initializers.RandomNormal(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomNormal(\n mean=mean,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with :\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight:\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.RandomUniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate.\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate. Defaults to 1 for float types.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` api,\n `tf.compat.v1.keras.initializers.RandomUniform` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomUniform` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n\n initializer = tf.compat.v1.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n )\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `minval` | `minval` | No change to defaults |\n | `maxval` | `maxval` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight :\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.truncated_normal", "docs": "Initializer that generates a truncated normal distribution.\n\n These values are similar to values from a `random_normal_initializer`\n except that values more than two standard deviations from the mean\n are discarded and re-drawn. This is the recommended initializer for\n neural network weights and filters.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 api,\n `tf.compat.v1.keras.initializers.TruncatedNormal` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.TruncatedNormal` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.keras.initializers.TruncatedNormal(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.TruncatedNormal(\n mean=mean,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight :\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.TruncatedNormal", "docs": "Initializer that generates a truncated normal distribution.\n\n These values are similar to values from a `random_normal_initializer`\n except that values more than two standard deviations from the mean\n are discarded and re-drawn. This is the recommended initializer for\n neural network weights and filters.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 api,\n `tf.compat.v1.keras.initializers.TruncatedNormal` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.TruncatedNormal` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.keras.initializers.TruncatedNormal(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.TruncatedNormal(\n mean=mean,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight :\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.TruncatedNormal(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.uniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate.\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate. Defaults to 1 for float types.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` api,\n `tf.compat.v1.keras.initializers.RandomUniform` is compatible with eager\n execution and `tf.function`.\n\n To switch to native TF2, switch to using\n `tf.keras.initializers.RandomUniform` (not from `compat.v1`) and\n if you need to change the default dtype use\n `tf.keras.backend.set_floatx(float_dtype)`\n or pass the dtype when calling the initializer, rather than passing it\n when constructing the initializer.\n\n Random seed behavior:\n\n Also be aware that if you pass a seed to the TF2 initializer\n API it will reuse that same seed for every single initialization\n (unlike the TF1 initializer)\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n\n initializer = tf.compat.v1.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n # seed=seed, # Setting a seed in the native TF2 API\n # causes it to produce the same initializations\n # across multiple calls of the same initializer.\n )\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :-------------- | :------------------------- |\n | `minval` | `minval` | No change to defaults |\n | `maxval` | `maxval` | No change to defaults |\n | `seed` | `seed` | Different random number generation |\n : : : semantics (to change in a :\n : : : future version). If set, the TF2 version :\n : : : will use stateless random number :\n : : : generation which will produce the exact :\n : : : same initialization even across multiple :\n : : : calls of the initializer instance. the :\n : : : `compat.v1` version will generate new :\n : : : initializations each time. Do not set :\n : : : a seed if you need different :\n : : : initializations each time. Instead :\n : : : either set a global tf seed with\n : : : `tf.random.set_seed` if you need :\n : : : determinism, or initialize each weight :\n : : : with a separate initializer instance :\n : : : and a different seed. :\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n #### Example of fixed-seed behavior differences\n\n `compat.v1` Fixed seed behavior:\n\n >>> initializer = tf.compat.v1.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n After:\n\n >>> initializer = tf.keras.initializers.RandomUniform(seed=10)\n >>> a = initializer(shape=(2, 2))\n >>> b = initializer(shape=(2, 2))\n >>> tf.reduce_sum(a - b) == 0\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.VarianceScaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2 APIs, move to using either\n `tf.initializers.variance_scaling` or `tf.keras.initializers.VarianceScaling`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.variance_scaling_initializer(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.VarianceScaling(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :----------------- | :-------------- | :------------------------- |\n | `scale` | `scale` | No change to defaults |\n | `mode` | `mode` | No change to defaults |\n | `distribution` | `distribution` | No change to defaults. |\n : : : 'normal' maps to 'truncated_normal' :\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`,\n samples are drawn from a truncated/untruncated normal\n distribution with a mean of zero and a standard deviation (after truncation,\n if used) `stddev = sqrt(scale / n)`\n where n is:\n - number of input units in the weight tensor, if mode = \"fan_in\"\n - number of output units, if mode = \"fan_out\"\n - average of the numbers of input and output units, if mode = \"fan_avg\"\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within [-limit, limit], with `limit = sqrt(3 * scale / n)`.\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"normal\", \"uniform\".\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n Raises:\n ValueError: In case of an invalid value for the \"scale\", mode\" or\n \"distribution\" arguments.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.compat.v1.keras.initializers.Zeros", "docs": "Initializer that generates tensors initialized to 0.\n\n @compatibility(TF2)\n `tf.compat.v1.zeros_initializer` is compatible with eager execution\n and `tf.function`.\n\n To migrate to TF2, please use `tf.zeros_initializer` instead. The `dtype`\n argument in `tf.compat.v1.zeros_initializer.__init__()` does not exist in\n `tf.zeros_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n variable = tf.Variable(initializer(shape=[3, 3]))\n ```\n\n After:\n\n ```python\n initializer = tf.zeros_initializer()\n variable = tf.Variable(initializer(shape=[3, 3], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------------- | :--------------- | :------------------------- |\n | `dtype` | `dtype` | In `__call__()` method |\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n >>> tf.Variable(initializer(shape=[3])).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3])).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n >>> initializer = tf.compat.v1.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n After:\n\n >>> initializer = tf.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 0.", "type": "API"}, {"name": "tf.compat.v1.keras.Input", "docs": "`Input()` is used to instantiate a Keras tensor.\n\n A Keras tensor is a symbolic tensor-like object,\n which we augment with certain attributes that allow us to build a Keras model\n just by knowing the inputs and outputs of the model.\n\n For instance, if `a`, `b` and `c` are Keras tensors,\n it becomes possible to do:\n `model = Model(input=[a, b], output=c)`\n\n Args:\n shape: A shape tuple (integers), not including the batch size.\n For instance, `shape=(32,)` indicates that the expected input\n will be batches of 32-dimensional vectors. Elements of this tuple\n can be None; 'None' elements represent dimensions where the shape is\n not known.\n batch_size: optional static batch size (integer).\n name: An optional name string for the layer.\n Should be unique in a model (do not reuse the same name twice).\n It will be autogenerated if it isn't provided.\n dtype: The data type expected by the input, as a string\n (`float32`, `float64`, `int32`...)\n sparse: A boolean specifying whether the placeholder to be created is\n sparse. Only one of 'ragged' and 'sparse' can be True. Note that,\n if `sparse` is False, sparse tensors can still be passed into the\n input - they will be densified with a default value of 0.\n tensor: Optional existing tensor to wrap into the `Input` layer.\n If set, the layer will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n ragged: A boolean specifying whether the placeholder to be created is\n ragged. Only one of 'ragged' and 'sparse' can be True. In this case,\n values of 'None' in the 'shape' argument represent ragged dimensions.\n For more information about RaggedTensors, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensors).\n type_spec: A `tf.TypeSpec` object to create the input placeholder from.\n When provided, all other args except name must be None.\n **kwargs: deprecated arguments support. Supports `batch_shape` and\n `batch_input_shape`.\n\n Returns:\n A `tensor`.\n\n Example:\n\n ```python\n # this is a logistic regression in Keras\n x = Input(shape=(32,))\n y = Dense(16, activation='softmax')(x)\n model = Model(x, y)\n ```\n\n Note that even if eager execution is enabled,\n `Input` produces a symbolic tensor-like object (i.e. a placeholder).\n This symbolic tensor-like object can be used with lower-level\n TensorFlow ops that take tensors as inputs, as such:\n\n ```python\n x = Input(shape=(32,))\n y = tf.square(x) # This op will be treated like a layer\n model = Model(x, y)\n ```\n\n (This behavior does not work for higher-order TensorFlow APIs such as\n control flow and being directly watched by a `tf.GradientTape`).\n\n However, the resulting model will not track any variables that were\n used as inputs to TensorFlow ops. All variable usages must happen within\n Keras layers to make sure they will be tracked by the model's weights.\n\n The Keras Input can also create a placeholder from an arbitrary `tf.TypeSpec`,\n e.g:\n\n ```python\n x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],\n dtype=tf.float32, ragged_rank=1))\n y = x.values\n model = Model(x, y)\n ```\n When passing an arbitrary `tf.TypeSpec`, it must represent the signature of an\n entire batch instead of just one example.\n\n Raises:\n ValueError: If both `sparse` and `ragged` are provided.\n ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`) are\n provided.\n ValueError: If `shape`, `tensor` and `type_spec` are None.\n ValueError: If arguments besides `type_spec` are non-None while `type_spec`\n is passed.\n ValueError: if any unrecognized parameters are provided.\n ", "desc": "`Input()` is used to instantiate a Keras tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.layers", "docs": "Keras layers API.\n", "desc": "Keras layers API.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AbstractRNNCell", "docs": "Abstract object representing an RNN cell.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This is the base class for implementing RNN cells with custom behavior.\n\n Every `RNNCell` must have the properties below and implement `call` with\n the signature `(output, next_state) = call(input, state)`.\n\n Examples:\n\n ```python\n class MinimalRNNCell(AbstractRNNCell):\n\n def __init__(self, units, **kwargs):\n self.units = units\n super(MinimalRNNCell, self).__init__(**kwargs)\n\n @property\n def state_size(self):\n return self.units\n\n def build(self, input_shape):\n self.kernel = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='uniform',\n name='kernel')\n self.recurrent_kernel = self.add_weight(\n shape=(self.units, self.units),\n initializer='uniform',\n name='recurrent_kernel')\n self.built = True\n\n def call(self, inputs, states):\n prev_output = states[0]\n h = backend.dot(inputs, self.kernel)\n output = h + backend.dot(prev_output, self.recurrent_kernel)\n return output, output\n ```\n\n This definition of cell differs from the definition used in the literature.\n In the literature, 'cell' refers to an object with a single scalar output.\n This definition refers to a horizontal array of such units.\n\n An RNN cell, in the most abstract setting, is anything that has\n a state and performs some operation that takes a matrix of inputs.\n This operation results in an output matrix with `self.output_size` columns.\n If `self.state_size` is an integer, this operation also results in a new\n state matrix with `self.state_size` columns. If `self.state_size` is a\n (possibly nested tuple of) TensorShape object(s), then it should return a\n matching structure of Tensors having shape `[batch_size].concatenate(s)`\n for each `s` in `self.batch_size`.\n ", "desc": "Abstract object representing an RNN cell.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Activation", "docs": "Applies an activation function to an output.\n\n Args:\n activation: Activation function, such as `tf.nn.relu`, or string name of\n built-in activation function, such as \"relu\".\n\n Usage:\n\n >>> layer = tf.keras.layers.Activation('relu')\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.Activation(tf.nn.relu)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Applies an activation function to an output.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ActivityRegularization", "docs": "Layer that applies an update to the cost function based input activity.\n\n Args:\n l1: L1 regularization factor (positive float).\n l2: L2 regularization factor (positive float).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Layer that applies an update to the cost function based input activity.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Add", "docs": "Layer that adds a list of inputs.\n\n It takes as input a list of tensors,\n all of the same shape, and returns\n a single tensor (also of the same shape).\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x1 = tf.random.normal(input_shape)\n >>> x2 = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Add()([x1, x2])\n >>> print(y.shape)\n (2, 3, 4)\n\n Used in a functional model:\n\n >>> input1 = tf.keras.layers.Input(shape=(16,))\n >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)\n >>> input2 = tf.keras.layers.Input(shape=(32,))\n >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)\n >>> # equivalent to `added = tf.keras.layers.add([x1, x2])`\n >>> added = tf.keras.layers.Add()([x1, x2])\n >>> out = tf.keras.layers.Dense(4)(added)\n >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)\n\n ", "desc": "Layer that adds a list of inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AdditiveAttention", "docs": "Additive attention layer, a.k.a. Bahdanau-style attention.\n\n Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of\n shape `[batch_size, Tv, dim]` and `key` tensor of shape\n `[batch_size, Tv, dim]`. The calculation follows the steps:\n\n 1. Reshape `query` and `key` into shapes `[batch_size, Tq, 1, dim]`\n and `[batch_size, 1, Tv, dim]` respectively.\n 2. Calculate scores with shape `[batch_size, Tq, Tv]` as a non-linear\n sum: `scores = tf.reduce_sum(tf.tanh(query + key), axis=-1)`\n 3. Use scores to calculate a distribution with shape\n `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`.\n 4. Use `distribution` to create a linear combination of `value` with\n shape `[batch_size, Tq, dim]`:\n `return tf.matmul(distribution, value)`.\n\n Args:\n use_scale: If `True`, will create a variable to scale the attention scores.\n causal: Boolean. Set to `True` for decoder self-attention. Adds a mask such\n that position `i` cannot attend to positions `j > i`. This prevents the\n flow of information from the future towards the past.\n Defaults to `False`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the\n attention scores. Defaults to 0.0.\n\n Call Args:\n\n inputs: List of the following tensors:\n * query: Query `Tensor` of shape `[batch_size, Tq, dim]`.\n * value: Value `Tensor` of shape `[batch_size, Tv, dim]`.\n * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not\n given, will use `value` for both `key` and `value`, which is the\n most common case.\n mask: List of the following tensors:\n * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`.\n If given, the output will be zero at the positions where\n `mask==False`.\n * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`.\n If given, will apply the mask such that values at positions where\n `mask==False` do not contribute to the result.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n return_attention_scores: bool, it `True`, returns the attention scores\n (after masking and softmax) as an additional output argument.\n\n Output:\n\n Attention outputs of shape `[batch_size, Tq, dim]`.\n [Optional] Attention scores after masking and softmax with shape\n `[batch_size, Tq, Tv]`.\n\n The meaning of `query`, `value` and `key` depend on the application. In the\n case of text similarity, for example, `query` is the sequence embeddings of\n the first piece of text and `value` is the sequence embeddings of the second\n piece of text. `key` is usually the same tensor as `value`.\n\n Here is a code example for using `AdditiveAttention` in a CNN+Attention\n network:\n\n ```python\n # Variable-length int sequences.\n query_input = tf.keras.Input(shape=(None,), dtype='int32')\n value_input = tf.keras.Input(shape=(None,), dtype='int32')\n\n # Embedding lookup.\n token_embedding = tf.keras.layers.Embedding(max_tokens, dimension)\n # Query embeddings of shape [batch_size, Tq, dimension].\n query_embeddings = token_embedding(query_input)\n # Value embeddings of shape [batch_size, Tv, dimension].\n value_embeddings = token_embedding(value_input)\n\n # CNN layer.\n cnn_layer = tf.keras.layers.Conv1D(\n filters=100,\n kernel_size=4,\n # Use 'same' padding so outputs have the same shape as inputs.\n padding='same')\n # Query encoding of shape [batch_size, Tq, filters].\n query_seq_encoding = cnn_layer(query_embeddings)\n # Value encoding of shape [batch_size, Tv, filters].\n value_seq_encoding = cnn_layer(value_embeddings)\n\n # Query-value attention of shape [batch_size, Tq, filters].\n query_value_attention_seq = tf.keras.layers.AdditiveAttention()(\n [query_seq_encoding, value_seq_encoding])\n\n # Reduce over the sequence axis to produce encodings of shape\n # [batch_size, filters].\n query_encoding = tf.keras.layers.GlobalAveragePooling1D()(\n query_seq_encoding)\n query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(\n query_value_attention_seq)\n\n # Concatenate query and document encodings to produce a DNN input layer.\n input_layer = tf.keras.layers.Concatenate()(\n [query_encoding, query_value_attention])\n\n # Add DNN layers, and create Model.\n # ...\n ```\n ", "desc": "Additive attention layer, a.k.a. Bahdanau-style attention.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AlphaDropout", "docs": "Applies Alpha Dropout to the input.\n\n Alpha Dropout is a `Dropout` that keeps mean and variance of inputs\n to their original values, in order to ensure the self-normalizing property\n even after this dropout.\n Alpha Dropout fits well to Scaled Exponential Linear Units\n by randomly setting activations to the negative saturation value.\n\n Args:\n rate: float, drop probability (as with `Dropout`).\n The multiplicative noise will have\n standard deviation `sqrt(rate / (1 - rate))`.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Applies Alpha Dropout to the input.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Attention", "docs": "Dot-product attention layer, a.k.a. Luong-style attention.\n\n Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of\n shape `[batch_size, Tv, dim]` and `key` tensor of shape\n `[batch_size, Tv, dim]`. The calculation follows the steps:\n\n 1. Calculate scores with shape `[batch_size, Tq, Tv]` as a `query`-`key` dot\n product: `scores = tf.matmul(query, key, transpose_b=True)`.\n 2. Use scores to calculate a distribution with shape\n `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`.\n 3. Use `distribution` to create a linear combination of `value` with\n shape `[batch_size, Tq, dim]`:\n `return tf.matmul(distribution, value)`.\n\n Args:\n use_scale: If `True`, will create a scalar variable to scale the attention\n scores.\n causal: Boolean. Set to `True` for decoder self-attention. Adds a mask such\n that position `i` cannot attend to positions `j > i`. This prevents the\n flow of information from the future towards the past.\n Defaults to `False`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the\n attention scores. Defaults to 0.0.\n score_mode: Function to use to compute attention scores, one of\n `{\"dot\", \"concat\"}`. `\"dot\"` refers to the dot product between the query\n and key vectors. `\"concat\"` refers to the hyperbolic tangent of the\n concatenation of the query and key vectors.\n\n Call Args:\n\n inputs: List of the following tensors:\n * query: Query `Tensor` of shape `[batch_size, Tq, dim]`.\n * value: Value `Tensor` of shape `[batch_size, Tv, dim]`.\n * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not\n given, will use `value` for both `key` and `value`, which is the\n most common case.\n mask: List of the following tensors:\n * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`.\n If given, the output will be zero at the positions where\n `mask==False`.\n * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`.\n If given, will apply the mask such that values at positions where\n `mask==False` do not contribute to the result.\n return_attention_scores: bool, it `True`, returns the attention scores\n (after masking and softmax) as an additional output argument.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n\n Output:\n\n Attention outputs of shape `[batch_size, Tq, dim]`.\n [Optional] Attention scores after masking and softmax with shape\n `[batch_size, Tq, Tv]`.\n\n The meaning of `query`, `value` and `key` depend on the application. In the\n case of text similarity, for example, `query` is the sequence embeddings of\n the first piece of text and `value` is the sequence embeddings of the second\n piece of text. `key` is usually the same tensor as `value`.\n\n Here is a code example for using `Attention` in a CNN+Attention network:\n\n ```python\n # Variable-length int sequences.\n query_input = tf.keras.Input(shape=(None,), dtype='int32')\n value_input = tf.keras.Input(shape=(None,), dtype='int32')\n\n # Embedding lookup.\n token_embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=64)\n # Query embeddings of shape [batch_size, Tq, dimension].\n query_embeddings = token_embedding(query_input)\n # Value embeddings of shape [batch_size, Tv, dimension].\n value_embeddings = token_embedding(value_input)\n\n # CNN layer.\n cnn_layer = tf.keras.layers.Conv1D(\n filters=100,\n kernel_size=4,\n # Use 'same' padding so outputs have the same shape as inputs.\n padding='same')\n # Query encoding of shape [batch_size, Tq, filters].\n query_seq_encoding = cnn_layer(query_embeddings)\n # Value encoding of shape [batch_size, Tv, filters].\n value_seq_encoding = cnn_layer(value_embeddings)\n\n # Query-value attention of shape [batch_size, Tq, filters].\n query_value_attention_seq = tf.keras.layers.Attention()(\n [query_seq_encoding, value_seq_encoding])\n\n # Reduce over the sequence axis to produce encodings of shape\n # [batch_size, filters].\n query_encoding = tf.keras.layers.GlobalAveragePooling1D()(\n query_seq_encoding)\n query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(\n query_value_attention_seq)\n\n # Concatenate query and document encodings to produce a DNN input layer.\n input_layer = tf.keras.layers.Concatenate()(\n [query_encoding, query_value_attention])\n\n # Add DNN layers, and create Model.\n # ...\n ```\n ", "desc": "Dot-product attention layer, a.k.a. Luong-style attention.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Average", "docs": "Layer that averages a list of inputs element-wise.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n Example:\n\n >>> x1 = np.ones((2, 2))\n >>> x2 = np.zeros((2, 2))\n >>> y = tf.keras.layers.Average()([x1, x2])\n >>> y.numpy().tolist()\n [[0.5, 0.5], [0.5, 0.5]]\n\n Usage in a functional model:\n\n >>> input1 = tf.keras.layers.Input(shape=(16,))\n >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)\n >>> input2 = tf.keras.layers.Input(shape=(32,))\n >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)\n >>> avg = tf.keras.layers.Average()([x1, x2])\n >>> out = tf.keras.layers.Dense(4)(avg)\n >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)\n\n Raises:\n ValueError: If there is a shape mismatch between the inputs and the shapes\n cannot be broadcasted to match.\n ", "desc": "Layer that averages a list of inputs element-wise.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AveragePooling1D", "docs": "Average pooling for temporal data.\n\n Downsamples the input representation by taking the average value over the\n window defined by `pool_size`. The window is shifted by `strides`. The\n resulting output when using \"valid\" padding option has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the \"same\" padding option is:\n `output_shape = input_shape / strides`\n\n For example, for strides=1 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=2 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=1 and padding=\"same\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> avg_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the average pooling windows.\n strides: Integer, or None. Factor by which to downscale.\n E.g. 2 will halve the input.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Average pooling for temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AveragePooling2D", "docs": "Average pooling operation for spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output when using `\"valid\"` padding option has a shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `stride=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `strides=(1, 1)` and `padding=\"same\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> avg_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n factors by which to downscale (vertical, horizontal).\n `(2, 2)` will halve the input in both spatial dimension.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n ", "desc": "Average pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AveragePooling3D", "docs": "Average pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.AveragePooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Average pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AvgPool1D", "docs": "Average pooling for temporal data.\n\n Downsamples the input representation by taking the average value over the\n window defined by `pool_size`. The window is shifted by `strides`. The\n resulting output when using \"valid\" padding option has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the \"same\" padding option is:\n `output_shape = input_shape / strides`\n\n For example, for strides=1 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=2 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=1 and padding=\"same\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> avg_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the average pooling windows.\n strides: Integer, or None. Factor by which to downscale.\n E.g. 2 will halve the input.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Average pooling for temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AvgPool2D", "docs": "Average pooling operation for spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output when using `\"valid\"` padding option has a shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `stride=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `strides=(1, 1)` and `padding=\"same\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> avg_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n factors by which to downscale (vertical, horizontal).\n `(2, 2)` will halve the input in both spatial dimension.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n ", "desc": "Average pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.AvgPool3D", "docs": "Average pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.AveragePooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Average pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.BatchNormalization", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Bidirectional", "docs": "Bidirectional wrapper for RNNs.\n\n Args:\n layer: `keras.layers.RNN` instance, such as `keras.layers.LSTM` or\n `keras.layers.GRU`. It could also be a `keras.layers.Layer` instance\n that meets the following criteria:\n 1. Be a sequence-processing layer (accepts 3D+ inputs).\n 2. Have a `go_backwards`, `return_sequences` and `return_state`\n attribute (with the same semantics as for the `RNN` class).\n 3. Have an `input_spec` attribute.\n 4. Implement serialization via `get_config()` and `from_config()`.\n Note that the recommended way to create new RNN layers is to write a\n custom RNN cell and use it with `keras.layers.RNN`, instead of\n subclassing `keras.layers.Layer` directly.\n - When the `returns_sequences` is true, the output of the masked timestep\n will be zero regardless of the layer's original `zero_output_for_mask`\n value.\n merge_mode: Mode by which outputs of the forward and backward RNNs will be\n combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the\n outputs will not be combined, they will be returned as a list. Default\n value is 'concat'.\n backward_layer: Optional `keras.layers.RNN`, or `keras.layers.Layer`\n instance to be used to handle backwards input processing.\n If `backward_layer` is not provided, the layer instance passed as the\n `layer` argument will be used to generate the backward layer\n automatically.\n Note that the provided `backward_layer` layer should have properties\n matching those of the `layer` argument, in particular it should have the\n same values for `stateful`, `return_states`, `return_sequences`, etc.\n In addition, `backward_layer` and `layer` should have different\n `go_backwards` argument values.\n A `ValueError` will be raised if these requirements are not met.\n\n Call arguments:\n The call arguments for this layer are the same as those of the wrapped RNN\n layer.\n Beware that when passing the `initial_state` argument during the call of\n this layer, the first half in the list of elements in the `initial_state`\n list will be passed to the forward RNN call and the last half in the list\n of elements will be passed to the backward RNN call.\n\n Raises:\n ValueError:\n 1. If `layer` or `backward_layer` is not a `Layer` instance.\n 2. In case of invalid `merge_mode` argument.\n 3. If `backward_layer` has mismatched properties compared to `layer`.\n\n Examples:\n\n ```python\n model = Sequential()\n model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5, 10)))\n model.add(Bidirectional(LSTM(10)))\n model.add(Dense(5))\n model.add(Activation('softmax'))\n model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\n # With custom backward layer\n model = Sequential()\n forward_layer = LSTM(10, return_sequences=True)\n backward_layer = LSTM(10, activation='relu', return_sequences=True,\n go_backwards=True)\n model.add(Bidirectional(forward_layer, backward_layer=backward_layer,\n input_shape=(5, 10)))\n model.add(Dense(5))\n model.add(Activation('softmax'))\n model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n ```\n ", "desc": "Bidirectional wrapper for RNNs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Concatenate", "docs": "Layer that concatenates a list of inputs.\n\n It takes as input a list of tensors, all of the same shape except\n for the concatenation axis, and returns a single tensor that is the\n concatenation of all inputs.\n\n >>> x = np.arange(20).reshape(2, 2, 5)\n >>> print(x)\n [[[ 0 1 2 3 4]\n [ 5 6 7 8 9]]\n [[10 11 12 13 14]\n [15 16 17 18 19]]]\n >>> y = np.arange(20, 30).reshape(2, 1, 5)\n >>> print(y)\n [[[20 21 22 23 24]]\n [[25 26 27 28 29]]]\n >>> tf.keras.layers.Concatenate(axis=1)([x, y])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> concatted = tf.keras.layers.Concatenate()([x1, x2])\n >>> concatted.shape\n TensorShape([5, 16])\n\n ", "desc": "Layer that concatenates a list of inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv1D", "docs": "1D convolution layer (e.g. temporal convolution).\n\n This layer creates a convolution kernel that is convolved\n with the layer input over a single spatial (or temporal) dimension\n to produce a tensor of outputs.\n If `use_bias` is True, a bias vector is created and added to the outputs.\n Finally, if `activation` is not `None`,\n it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide an `input_shape` argument\n (tuple of integers or `None`, e.g.\n `(10, 128)` for sequences of 10 vectors of 128-dimensional vectors,\n or `(None, 128)` for variable-length sequences of 128-dimensional vectors.\n\n Examples:\n\n >>> # The inputs are 128-length vectors with 10 timesteps, and the batch size\n >>> # is 4.\n >>> input_shape = (4, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu',input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 8, 32)\n\n >>> # With extended batch shape [4, 7] (e.g. weather data where batch\n >>> # dimensions correspond to spatial location and the third dimension\n >>> # corresponds to time.)\n >>> input_shape = (4, 7, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 8, 32)\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer,\n specifying the length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer,\n specifying the stride length of the convolution.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"` or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n `\"causal\"` results in causal (dilated) convolutions, e.g. `output[t]`\n does not depend on `input[t+1:]`. Useful when modeling temporal data\n where the model should not violate the temporal order.\n See [WaveNet: A Generative Model for Raw Audio, section\n 2.1](https://arxiv.org/abs/1609.03499).\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n dilation_rate: an integer or tuple/list of a single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any `strides` value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved\n separately with `filters / groups` filters. The output is the\n concatenation of all the `groups` results along the channel axis.\n Input channels and `filters` must both be divisible by `groups`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3+D tensor with shape: `batch_shape + (steps, input_dim)`\n\n Output shape:\n 3+D tensor with shape: `batch_shape + (new_steps, filters)`\n `steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "1D convolution layer (e.g. temporal convolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv1DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 3)` for data with 128 time steps and 3 channels.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer length of the 1D convolution window.\n strides: An integer specifying the stride of the convolution along the\n time dimension. Specifying a stride value != 1 is incompatible with\n specifying a `dilation_rate` value != 1. Defaults to 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer specifying the amount of padding along\n the time dimension of the output tensor.\n The amount of output padding must be lower than the stride.\n If set to `None` (default), the output shape is inferred.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: an integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying a `dilation_rate` value != 1 is\n incompatible with specifying a stride value != 1.\n Also dilation rate larger than 1 is not currently supported.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, steps, channels)`\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, new_steps, filters)`\n If `output_padding` is specified:\n ```\n new_timesteps = ((timesteps - 1) * strides + kernel_size -\n 2 * padding + output_padding)\n ```\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep learning](\n https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional Networks](\n https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv2D", "docs": "2D convolution layer (e.g. spatial convolution over images).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`. You can use `None` when\n a dimension has variable size.\n\n Examples:\n\n >>> # The inputs are 28x28 RGB images with `channels_last` and the batch\n >>> # size is 4.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 2)\n\n >>> # With `dilation_rate` as 2.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 24, 24, 2)\n\n >>> # With `padding` as \"same\".\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', padding=\"same\", input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 28, 28, 2)\n\n >>> # With extended batch shape [4, 7]:\n >>> input_shape = (4, 7, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 2)\n\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input. When `padding=\"same\"` and\n `strides=1`, the output has the same size as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be `channels_last`.\n dilation_rate: an integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4+D tensor with shape: `batch_shape + (channels, rows, cols)` if\n `data_format='channels_first'`\n or 4+D tensor with shape: `batch_shape + (rows, cols, channels)` if\n `data_format='channels_last'`.\n\n Output shape:\n 4+D tensor with shape: `batch_shape + (filters, new_rows, new_cols)` if\n `data_format='channels_first'` or 4+D tensor with shape: `batch_shape +\n (new_rows, new_cols, filters)` if `data_format='channels_last'`. `rows`\n and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4+ representing\n `activation(conv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is `\"causal\"`.\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "2D convolution layer (e.g. spatial convolution over images).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv2DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 2 integers,\n specifying the amount of padding along the height and width\n of the output tensor.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer, specifying the dilation rate for all spatial\n dimensions for dilated convolution. Specifying different dilation rates\n for different dimensions is not supported.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified:\n ```\n new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n ```\n\n Returns:\n A tensor of rank 4 representing\n `activation(conv2dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv3D", "docs": "3D convolution layer (e.g. spatial convolution over volumes).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes\n with a single channel,\n in `data_format=\"channels_last\"`.\n\n Examples:\n\n >>> # The inputs are 28x28x28 volumes with a single channel, and the\n >>> # batch size is 4\n >>> input_shape =(4, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 26, 2)\n\n >>> # With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames,\n >>> # with 7 frames per video.\n >>> input_shape = (4, 7, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 26, 2)\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the depth,\n height and width of the 3D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides of\n the convolution along each spatial dimension. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `batch_shape + (spatial_dim1, spatial_dim2,\n spatial_dim3, channels)` while `channels_first` corresponds to inputs with\n shape `batch_shape + (channels, spatial_dim1, spatial_dim2,\n spatial_dim3)`. It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`. If you never set it, then it\n will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 5+D tensor with shape: `batch_shape + (channels, conv_dim1, conv_dim2,\n conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (conv_dim1, conv_dim2, conv_dim3,\n channels)` if data_format='channels_last'.\n\n Output shape:\n 5+D tensor with shape: `batch_shape + (filters, new_conv_dim1,\n new_conv_dim2, new_conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (new_conv_dim1, new_conv_dim2,\n new_conv_dim3, filters)` if data_format='channels_last'. `new_conv_dim1`,\n `new_conv_dim2` and `new_conv_dim3` values might have changed due to\n padding.\n\n Returns:\n A tensor of rank 5+ representing\n `activation(conv3d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "3D convolution layer (e.g. spatial convolution over volumes).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Conv3DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels\n if `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the convolution along the depth, height\n and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 3 integers,\n specifying the amount of padding along the depth, height, and\n width.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, depth, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix\n (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 5D tensor with shape:\n `(batch_size, channels, depth, rows, cols)` if data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, depth, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 5D tensor with shape:\n `(batch_size, filters, new_depth, new_rows, new_cols)` if\n data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, new_depth, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n `depth` and `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified::\n ```\n new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] +\n output_padding[2])\n ```\n\n Returns:\n A tensor of rank 5 representing\n `activation(conv3dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ConvLSTM2D", "docs": "2D Convolutional LSTM.\n\n Similar to an LSTM layer, but the input transformations\n and recurrent transformations are both convolutional.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of n integers, specifying the\n dimensions of the convolution window.\n strides: An integer or tuple/list of n integers, specifying the strides of\n the convolution. Specifying any stride value != 1 is incompatible with\n specifying any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive). `\"valid\"` means no\n padding. `\"same\"` results in padding evenly to the left/right or up/down\n of the input such that output has the same height/width dimension as the\n input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch, time, ..., channels)` while `channels_first`\n corresponds to inputs with shape `(batch, time, channels, ...)`. It\n defaults to the `image_data_format` value found in your Keras config file\n at `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n dilation_rate: An integer or tuple/list of n integers, specifying the\n dilation rate to use for dilated convolution. Currently, specifying any\n `dilation_rate` value != 1 is incompatible with specifying any `strides`\n value != 1.\n activation: Activation function to use. By default hyperbolic tangent\n activation function is applied (`tanh(x)`).\n recurrent_activation: Activation function to use for the recurrent step.\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at\n initialization. Use in combination with `bias_initializer=\"zeros\"`. This\n is recommended in [Jozefowicz et al., 2015](\n http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n return_sequences: Boolean. Whether to return the last output in the output\n sequence, or the full sequence. (default False)\n return_state: Boolean Whether to return the last state in addition to the\n output. (default False)\n go_backwards: Boolean (default False). If True, process the input sequence\n backwards.\n stateful: Boolean (default False). If True, the last state for each sample\n at index i in a batch will be used as initial state for the sample of\n index i in the following batch.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state.\n Call arguments:\n inputs: A 5D tensor.\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether a\n given timestep should be masked.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or `recurrent_dropout`\n are set.\n initial_state: List of initial state tensors to be passed to the first call\n of the cell.\n Input shape: - If data_format='channels_first'\n 5D tensor with shape: `(samples, time, channels, rows, cols)` - If\n data_format='channels_last'\n 5D tensor with shape: `(samples, time, rows, cols, channels)`\n Output shape:\n - If `return_state`: a list of tensors. The first tensor is the output. The\n remaining tensors are the last states,\n each 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'. `rows` and `cols` values might have changed\n due to padding.\n - If `return_sequences`: 5D tensor with shape: `(samples, timesteps,\n filters, new_rows, new_cols)` if data_format='channels_first'\n or shape: `(samples, timesteps, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n - Else, 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n\n Raises:\n ValueError: in case of invalid constructor arguments.\n\n References:\n - [Shi et al., 2015](http://arxiv.org/abs/1506.04214v1)\n (the current implementation does not include the feedback loop on the\n cells output).\n ", "desc": "2D Convolutional LSTM.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution1D", "docs": "1D convolution layer (e.g. temporal convolution).\n\n This layer creates a convolution kernel that is convolved\n with the layer input over a single spatial (or temporal) dimension\n to produce a tensor of outputs.\n If `use_bias` is True, a bias vector is created and added to the outputs.\n Finally, if `activation` is not `None`,\n it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide an `input_shape` argument\n (tuple of integers or `None`, e.g.\n `(10, 128)` for sequences of 10 vectors of 128-dimensional vectors,\n or `(None, 128)` for variable-length sequences of 128-dimensional vectors.\n\n Examples:\n\n >>> # The inputs are 128-length vectors with 10 timesteps, and the batch size\n >>> # is 4.\n >>> input_shape = (4, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu',input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 8, 32)\n\n >>> # With extended batch shape [4, 7] (e.g. weather data where batch\n >>> # dimensions correspond to spatial location and the third dimension\n >>> # corresponds to time.)\n >>> input_shape = (4, 7, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 8, 32)\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer,\n specifying the length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer,\n specifying the stride length of the convolution.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"` or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n `\"causal\"` results in causal (dilated) convolutions, e.g. `output[t]`\n does not depend on `input[t+1:]`. Useful when modeling temporal data\n where the model should not violate the temporal order.\n See [WaveNet: A Generative Model for Raw Audio, section\n 2.1](https://arxiv.org/abs/1609.03499).\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n dilation_rate: an integer or tuple/list of a single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any `strides` value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved\n separately with `filters / groups` filters. The output is the\n concatenation of all the `groups` results along the channel axis.\n Input channels and `filters` must both be divisible by `groups`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3+D tensor with shape: `batch_shape + (steps, input_dim)`\n\n Output shape:\n 3+D tensor with shape: `batch_shape + (new_steps, filters)`\n `steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "1D convolution layer (e.g. temporal convolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution1DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 3)` for data with 128 time steps and 3 channels.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer length of the 1D convolution window.\n strides: An integer specifying the stride of the convolution along the\n time dimension. Specifying a stride value != 1 is incompatible with\n specifying a `dilation_rate` value != 1. Defaults to 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer specifying the amount of padding along\n the time dimension of the output tensor.\n The amount of output padding must be lower than the stride.\n If set to `None` (default), the output shape is inferred.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: an integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying a `dilation_rate` value != 1 is\n incompatible with specifying a stride value != 1.\n Also dilation rate larger than 1 is not currently supported.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, steps, channels)`\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, new_steps, filters)`\n If `output_padding` is specified:\n ```\n new_timesteps = ((timesteps - 1) * strides + kernel_size -\n 2 * padding + output_padding)\n ```\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep learning](\n https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional Networks](\n https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution2D", "docs": "2D convolution layer (e.g. spatial convolution over images).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`. You can use `None` when\n a dimension has variable size.\n\n Examples:\n\n >>> # The inputs are 28x28 RGB images with `channels_last` and the batch\n >>> # size is 4.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 2)\n\n >>> # With `dilation_rate` as 2.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 24, 24, 2)\n\n >>> # With `padding` as \"same\".\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', padding=\"same\", input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 28, 28, 2)\n\n >>> # With extended batch shape [4, 7]:\n >>> input_shape = (4, 7, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 2)\n\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input. When `padding=\"same\"` and\n `strides=1`, the output has the same size as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be `channels_last`.\n dilation_rate: an integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4+D tensor with shape: `batch_shape + (channels, rows, cols)` if\n `data_format='channels_first'`\n or 4+D tensor with shape: `batch_shape + (rows, cols, channels)` if\n `data_format='channels_last'`.\n\n Output shape:\n 4+D tensor with shape: `batch_shape + (filters, new_rows, new_cols)` if\n `data_format='channels_first'` or 4+D tensor with shape: `batch_shape +\n (new_rows, new_cols, filters)` if `data_format='channels_last'`. `rows`\n and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4+ representing\n `activation(conv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is `\"causal\"`.\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "2D convolution layer (e.g. spatial convolution over images).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution2DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 2 integers,\n specifying the amount of padding along the height and width\n of the output tensor.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer, specifying the dilation rate for all spatial\n dimensions for dilated convolution. Specifying different dilation rates\n for different dimensions is not supported.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified:\n ```\n new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n ```\n\n Returns:\n A tensor of rank 4 representing\n `activation(conv2dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution3D", "docs": "3D convolution layer (e.g. spatial convolution over volumes).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes\n with a single channel,\n in `data_format=\"channels_last\"`.\n\n Examples:\n\n >>> # The inputs are 28x28x28 volumes with a single channel, and the\n >>> # batch size is 4\n >>> input_shape =(4, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 26, 2)\n\n >>> # With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames,\n >>> # with 7 frames per video.\n >>> input_shape = (4, 7, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 26, 2)\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the depth,\n height and width of the 3D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides of\n the convolution along each spatial dimension. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `batch_shape + (spatial_dim1, spatial_dim2,\n spatial_dim3, channels)` while `channels_first` corresponds to inputs with\n shape `batch_shape + (channels, spatial_dim1, spatial_dim2,\n spatial_dim3)`. It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`. If you never set it, then it\n will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 5+D tensor with shape: `batch_shape + (channels, conv_dim1, conv_dim2,\n conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (conv_dim1, conv_dim2, conv_dim3,\n channels)` if data_format='channels_last'.\n\n Output shape:\n 5+D tensor with shape: `batch_shape + (filters, new_conv_dim1,\n new_conv_dim2, new_conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (new_conv_dim1, new_conv_dim2,\n new_conv_dim3, filters)` if data_format='channels_last'. `new_conv_dim1`,\n `new_conv_dim2` and `new_conv_dim3` values might have changed due to\n padding.\n\n Returns:\n A tensor of rank 5+ representing\n `activation(conv3d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "3D convolution layer (e.g. spatial convolution over volumes).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Convolution3DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels\n if `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the convolution along the depth, height\n and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 3 integers,\n specifying the amount of padding along the depth, height, and\n width.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, depth, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix\n (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 5D tensor with shape:\n `(batch_size, channels, depth, rows, cols)` if data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, depth, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 5D tensor with shape:\n `(batch_size, filters, new_depth, new_rows, new_cols)` if\n data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, new_depth, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n `depth` and `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified::\n ```\n new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] +\n output_padding[2])\n ```\n\n Returns:\n A tensor of rank 5 representing\n `activation(conv3dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Cropping1D", "docs": "Cropping layer for 1D input (e.g. temporal sequence).\n\n It crops along the time dimension (axis 1).\n\n Examples:\n\n >>> input_shape = (2, 3, 2)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1]\n [ 2 3]\n [ 4 5]]\n [[ 6 7]\n [ 8 9]\n [10 11]]]\n >>> y = tf.keras.layers.Cropping1D(cropping=1)(x)\n >>> print(y)\n tf.Tensor(\n [[[2 3]]\n [[8 9]]], shape=(2, 1, 2), dtype=int64)\n\n Args:\n cropping: Int or tuple of int (length 2)\n How many units should be trimmed off at the beginning and end of\n the cropping dimension (axis 1).\n If a single int is provided, the same value will be used for both.\n\n Input shape:\n 3D tensor with shape `(batch_size, axis_to_crop, features)`\n\n Output shape:\n 3D tensor with shape `(batch_size, cropped_axis, features)`\n ", "desc": "Cropping layer for 1D input (e.g. temporal sequence).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Cropping2D", "docs": "Cropping layer for 2D input (e.g. picture).\n\n It crops along spatial dimensions, i.e. height and width.\n\n Examples:\n\n >>> input_shape = (2, 28, 28, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.Cropping2D(cropping=((2, 2), (4, 4)))(x)\n >>> print(y.shape)\n (2, 24, 20, 3)\n\n Args:\n cropping: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.\n - If int: the same symmetric cropping\n is applied to height and width.\n - If tuple of 2 ints:\n interpreted as two different\n symmetric cropping values for height and width:\n `(symmetric_height_crop, symmetric_width_crop)`.\n - If tuple of 2 tuples of 2 ints:\n interpreted as\n `((top_crop, bottom_crop), (left_crop, right_crop))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, cropped_rows, cropped_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, cropped_rows, cropped_cols)`\n ", "desc": "Cropping layer for 2D input (e.g. picture).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Cropping3D", "docs": "Cropping layer for 3D data (e.g. spatial or spatio-temporal).\n\n Examples:\n\n >>> input_shape = (2, 28, 28, 10, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.Cropping3D(cropping=(2, 4, 2))(x)\n >>> print(y.shape)\n (2, 24, 20, 6, 3)\n\n Args:\n cropping: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.\n - If int: the same symmetric cropping\n is applied to depth, height, and width.\n - If tuple of 3 ints: interpreted as two different\n symmetric cropping values for depth, height, and width:\n `(symmetric_dim1_crop, symmetric_dim2_crop, symmetric_dim3_crop)`.\n - If tuple of 3 tuples of 2 ints: interpreted as\n `((left_dim1_crop, right_dim1_crop), (left_dim2_crop,\n right_dim2_crop), (left_dim3_crop, right_dim3_crop))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_axis_to_crop, second_axis_to_crop,\n third_axis_to_crop)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_cropped_axis, second_cropped_axis, third_cropped_axis,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_cropped_axis, second_cropped_axis,\n third_cropped_axis)`\n ", "desc": "Cropping layer for 3D data (e.g. spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.CuDNNGRU", "docs": "Fast GRU implementation backed by cuDNN.\n\n More information about cuDNN can be found on the [NVIDIA\n developer website](https://developer.nvidia.com/cudnn).\n Can only be run on GPU.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\").\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n recurrent_constraint: Constraint function applied to the\n `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n return_sequences: Boolean. Whether to return the last output in the output\n sequence, or the full sequence.\n return_state: Boolean. Whether to return the last state in addition to the\n output.\n go_backwards: Boolean (default False). If True, process the input sequence\n backwards and return the reversed sequence.\n stateful: Boolean (default False). If True, the last state for each sample\n at index i in a batch will be used as initial state for the sample of\n index i in the following batch.\n ", "desc": "Fast GRU implementation backed by cuDNN.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.CuDNNLSTM", "docs": "Fast LSTM implementation backed by cuDNN.\n\n More information about cuDNN can be found on the [NVIDIA\n developer website](https://developer.nvidia.com/cudnn).\n Can only be run on GPU.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs.\n unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate\n at initialization. Setting it to true will also force\n `bias_initializer=\"zeros\"`. This is recommended in [Jozefowicz et\n al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\").\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n recurrent_constraint: Constraint function applied to the\n `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n return_sequences: Boolean. Whether to return the last output. in the\n output sequence, or the full sequence.\n return_state: Boolean. Whether to return the last state in addition to the\n output.\n go_backwards: Boolean (default False). If True, process the input sequence\n backwards and return the reversed sequence.\n stateful: Boolean (default False). If True, the last state for each sample\n at index i in a batch will be used as initial state for the sample of\n index i in the following batch.\n ", "desc": "Fast LSTM implementation backed by cuDNN.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Dense", "docs": "Just your regular densely-connected NN layer.\n\n `Dense` implements the operation:\n `output = activation(dot(input, kernel) + bias)`\n where `activation` is the element-wise activation function\n passed as the `activation` argument, `kernel` is a weights matrix\n created by the layer, and `bias` is a bias vector created by the layer\n (only applicable if `use_bias` is `True`). These are all attributes of\n `Dense`.\n\n Note: If the input to the layer has a rank greater than 2, then `Dense`\n computes the dot product between the `inputs` and the `kernel` along the\n last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`).\n For example, if input has dimensions `(batch_size, d0, d1)`,\n then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates\n along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)`\n (there are `batch_size * d0` such sub-tensors).\n The output in this case will have shape `(batch_size, d0, units)`.\n\n Besides, layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n When a popular kwarg `input_shape` is passed, then keras will create\n an input layer to insert before the current layer. This can be treated\n equivalent to explicitly defining an `InputLayer`.\n\n Example:\n\n >>> # Create a `Sequential` model and add a Dense layer as the first layer.\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.Input(shape=(16,)))\n >>> model.add(tf.keras.layers.Dense(32, activation='relu'))\n >>> # Now the model will take as input arrays of shape (None, 16)\n >>> # and output arrays of shape (None, 32).\n >>> # Note that after the first layer, you don't need to specify\n >>> # the size of the input anymore:\n >>> model.add(tf.keras.layers.Dense(32))\n >>> model.output_shape\n (None, 32)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\").\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n\n Input shape:\n N-D tensor with shape: `(batch_size, ..., input_dim)`.\n The most common situation would be\n a 2D input with shape `(batch_size, input_dim)`.\n\n Output shape:\n N-D tensor with shape: `(batch_size, ..., units)`.\n For instance, for a 2D input with shape `(batch_size, input_dim)`,\n the output would have shape `(batch_size, units)`.\n ", "desc": "Just your regular densely-connected NN layer.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.DenseFeatures", "docs": "A layer that produces a dense `Tensor` based on given `feature_columns`.\n\n Generally a single example in training data is described with FeatureColumns.\n At the first layer of the model, this column-oriented data should be converted\n to a single `Tensor`.\n\n This layer can be called multiple times with different features.\n\n This is the V1 version of this layer that uses variable_scope's or partitioner\n to create variables which works well with PartitionedVariables. Variable\n scopes are deprecated in V2, so the V2 version uses name_scopes instead. But\n currently that lacks support for partitioned variables. Use this if you need\n partitioned variables. Use the partitioner argument if you have a Keras model\n and uses `tf.compat.v1.keras.estimator.model_to_estimator` for training.\n\n Example:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n keywords_embedded = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_hash_bucket(\"keywords\", 10K),\n dimension=16)\n columns = [price, keywords_embedded, ...]\n partitioner = tf.compat.v1.fixed_size_partitioner(num_shards=4)\n feature_layer = tf.compat.v1.keras.layers.DenseFeatures(\n feature_columns=columns, partitioner=partitioner)\n\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = feature_layer(features)\n for units in [128, 64, 32]:\n dense_tensor = tf.compat.v1.keras.layers.Dense(\n units, activation='relu')(dense_tensor)\n prediction = tf.compat.v1.keras.layers.Dense(1)(dense_tensor)\n ```\n ", "desc": "A layer that produces a dense `Tensor` based on given `feature_columns`.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.DepthwiseConv2D", "docs": "Depthwise 2D convolution.\n\n Depthwise convolution is a type of convolution in which each input channel is\n convolved with a different kernel (called a depthwise kernel). You\n can understand depthwise convolution as the first step in a depthwise\n separable convolution.\n\n It is implemented via the following steps:\n\n - Split the input into individual channels.\n - Convolve each channel with an individual depthwise kernel with\n `depth_multiplier` output channels.\n - Concatenate the convolved outputs along the channels axis.\n\n Unlike a regular 2D convolution, depthwise convolution does not mix\n information across different input channels.\n\n The `depth_multiplier` argument determines how many filter are applied to one\n input channel. As such, it controls the amount of output channels that are\n generated per input channel in the depthwise step.\n\n Args:\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `'valid'` or `'same'` (case-insensitive). `\"valid\"` means no\n padding. `\"same\"` results in padding with zeros evenly to the left/right\n or up/down of the input such that output has the same height/width\n dimension as the input.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be 'channels_last'.\n dilation_rate: An integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Currently, specifying any\n `dilation_rate` value != 1 is incompatible with specifying any `strides`\n value != 1.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: Initializer for the depthwise kernel matrix (see\n `keras.initializers`). If None, the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). If None, the default initializer ('zeros') will be\n used.\n depthwise_regularizer: Regularizer function applied to the depthwise kernel\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its 'activation') (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to the depthwise kernel\n matrix (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4D tensor with shape: `[batch_size, channels, rows, cols]` if\n data_format='channels_first'\n or 4D tensor with shape: `[batch_size, rows, cols, channels]` if\n data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape: `[batch_size, channels * depth_multiplier, new_rows,\n new_cols]` if `data_format='channels_first'`\n or 4D tensor with shape: `[batch_size,\n new_rows, new_cols, channels * depth_multiplier]` if\n `data_format='channels_last'`. `rows` and `cols` values might have changed\n due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(depthwiseconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n ", "desc": "Depthwise 2D convolution.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.deserialize", "docs": "Instantiates a layer from a config dictionary.\n\n Args:\n config: dict of the form {'class_name': str, 'config': dict}\n custom_objects: dict mapping class names (or function names) of custom\n (non-Keras) objects to class/functions\n\n Returns:\n Layer instance (may be Model, Sequential, Network, Layer...)\n\n Example:\n\n ```python\n # Configuration of Dense(32, activation='relu')\n config = {\n 'class_name': 'Dense',\n 'config': {\n 'activation': 'relu',\n 'activity_regularizer': None,\n 'bias_constraint': None,\n 'bias_initializer': {'class_name': 'Zeros', 'config': {}},\n 'bias_regularizer': None,\n 'dtype': 'float32',\n 'kernel_constraint': None,\n 'kernel_initializer': {'class_name': 'GlorotUniform',\n 'config': {'seed': None}},\n 'kernel_regularizer': None,\n 'name': 'dense',\n 'trainable': True,\n 'units': 32,\n 'use_bias': True\n }\n }\n dense_layer = tf.keras.layers.deserialize(config)\n ```\n ", "desc": "Instantiates a layer from a config dictionary.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.disable_v2_dtype_behavior", "docs": "Disables the V2 dtype behavior for Keras layers.\n\n See `tf.compat.v1.keras.layers.enable_v2_dtype_behavior`.\n ", "desc": "Disables the V2 dtype behavior for Keras layers.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Dot", "docs": "Layer that computes a dot product between samples in two tensors.\n\n E.g. if applied to a list of two tensors `a` and `b` of shape\n `(batch_size, n)`, the output will be a tensor of shape `(batch_size, 1)`\n where each entry `i` will be the dot product between\n `a[i]` and `b[i]`.\n\n >>> x = np.arange(10).reshape(1, 5, 2)\n >>> print(x)\n [[[0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]]]\n >>> y = np.arange(10, 20).reshape(1, 2, 5)\n >>> print(y)\n [[[10 11 12 13 14]\n [15 16 17 18 19]]]\n >>> tf.keras.layers.Dot(axes=(1, 2))([x, y])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> dotted = tf.keras.layers.Dot(axes=1)([x1, x2])\n >>> dotted.shape\n TensorShape([5, 1])\n\n\n ", "desc": "Layer that computes a dot product between samples in two tensors.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Dropout", "docs": "Applies Dropout to the input.\n\n The Dropout layer randomly sets input units to 0 with a frequency of `rate`\n at each step during training time, which helps prevent overfitting.\n Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over\n all inputs is unchanged.\n\n Note that the Dropout layer only applies when `training` is set to True\n such that no values are dropped during inference. When using `model.fit`,\n `training` will be appropriately set to True automatically, and in other\n contexts, you can set the kwarg explicitly to True when calling the layer.\n\n (This is in contrast to setting `trainable=False` for a Dropout layer.\n `trainable` does not affect the layer's behavior, as Dropout does\n not have any variables/weights that can be frozen during training.)\n\n >>> tf.random.set_seed(0)\n >>> layer = tf.keras.layers.Dropout(.2, input_shape=(2,))\n >>> data = np.arange(10).reshape(5, 2).astype(np.float32)\n >>> print(data)\n [[0. 1.]\n [2. 3.]\n [4. 5.]\n [6. 7.]\n [8. 9.]]\n >>> outputs = layer(data, training=True)\n >>> print(outputs)\n tf.Tensor(\n [[ 0. 1.25]\n [ 2.5 3.75]\n [ 5. 6.25]\n [ 7.5 8.75]\n [10. 0. ]], shape=(5, 2), dtype=float32)\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n noise_shape: 1D integer tensor representing the shape of the\n binary dropout mask that will be multiplied with the input.\n For instance, if your inputs have shape\n `(batch_size, timesteps, features)` and\n you want the dropout mask to be the same for all timesteps,\n you can use `noise_shape=(batch_size, 1, features)`.\n seed: A Python integer to use as random seed.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n ", "desc": "Applies Dropout to the input.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ELU", "docs": "Exponential Linear Unit.\n\n It follows:\n\n ```\n f(x) = alpha * (exp(x) - 1.) for x < 0\n f(x) = x for x >= 0\n ```\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha: Scale for the negative factor.\n ", "desc": "Exponential Linear Unit.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Embedding", "docs": "Turns positive integers (indexes) into dense vectors of fixed size.\n\n e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]`\n\n This layer can only be used on positive integer inputs of a fixed range. The\n `tf.keras.layers.TextVectorization`, `tf.keras.layers.StringLookup`,\n and `tf.keras.layers.IntegerLookup` preprocessing layers can help prepare\n inputs for an `Embedding` layer.\n\n This layer accepts `tf.Tensor` and `tf.RaggedTensor` inputs. It cannot be\n called with `tf.SparseTensor` input.\n\n Example:\n\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Embedding(1000, 64, input_length=10))\n >>> # The model will take as input an integer matrix of size (batch,\n >>> # input_length), and the largest integer (i.e. word index) in the input\n >>> # should be no larger than 999 (vocabulary size).\n >>> # Now model.output_shape is (None, 10, 64), where `None` is the batch\n >>> # dimension.\n >>> input_array = np.random.randint(1000, size=(32, 10))\n >>> model.compile('rmsprop', 'mse')\n >>> output_array = model.predict(input_array)\n >>> print(output_array.shape)\n (32, 10, 64)\n\n Args:\n input_dim: Integer. Size of the vocabulary,\n i.e. maximum integer index + 1.\n output_dim: Integer. Dimension of the dense embedding.\n embeddings_initializer: Initializer for the `embeddings`\n matrix (see `keras.initializers`).\n embeddings_regularizer: Regularizer function applied to\n the `embeddings` matrix (see `keras.regularizers`).\n embeddings_constraint: Constraint function applied to\n the `embeddings` matrix (see `keras.constraints`).\n mask_zero: Boolean, whether or not the input value 0 is a special \"padding\"\n value that should be masked out.\n This is useful when using recurrent layers\n which may take variable length input.\n If this is `True`, then all subsequent layers\n in the model need to support masking or an exception will be raised.\n If mask_zero is set to True, as a consequence, index 0 cannot be\n used in the vocabulary (input_dim should equal size of\n vocabulary + 1).\n input_length: Length of input sequences, when it is constant.\n This argument is required if you are going to connect\n `Flatten` then `Dense` layers upstream\n (without it, the shape of the dense outputs cannot be computed).\n\n Input shape:\n 2D tensor with shape: `(batch_size, input_length)`.\n\n Output shape:\n 3D tensor with shape: `(batch_size, input_length, output_dim)`.\n\n **Note on variable placement:**\n By default, if a GPU is available, the embedding matrix will be placed on\n the GPU. This achieves the best performance, but it might cause issues:\n\n - You may be using an optimizer that does not support sparse GPU kernels.\n In this case you will see an error upon training your model.\n - Your embedding matrix may be too large to fit on your GPU. In this case\n you will see an Out Of Memory (OOM) error.\n\n In such cases, you should place the embedding matrix on the CPU memory.\n You can do so with a device scope, as such:\n\n ```python\n with tf.device('cpu:0'):\n embedding_layer = Embedding(...)\n embedding_layer.build()\n ```\n\n The pre-built `embedding_layer` instance can then be added to a `Sequential`\n model (e.g. `model.add(embedding_layer)`), called in a Functional model\n (e.g. `x = embedding_layer(x)`), or used in a subclassed model.\n ", "desc": "Turns positive integers (indexes) into dense vectors of fixed size.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.enable_v2_dtype_behavior", "docs": "Enable the V2 dtype behavior for Keras layers.\n\n By default, the V2 dtype behavior is enabled in TensorFlow 2, so this function\n is only useful if `tf.compat.v1.disable_v2_behavior` has been called. Since\n mixed precision requires V2 dtype behavior to be enabled, this function allows\n you to use mixed precision in Keras layers if `disable_v2_behavior` has been\n called.\n\n When enabled, the dtype of Keras layers defaults to floatx (which is typically\n float32) instead of None. In addition, layers will automatically cast\n floating-point inputs to the layer's dtype.\n\n >>> x = tf.ones((4, 4, 4, 4), dtype='float64')\n >>> layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2)\n >>> print(layer.dtype) # float32 since V2 dtype behavior is enabled\n float32\n >>> y = layer(x) # Layer casts inputs since V2 dtype behavior is enabled\n >>> print(y.dtype.name)\n float32\n\n A layer author can opt-out their layer from the automatic input casting by\n passing `autocast=False` to the base Layer's constructor. This disables the\n autocasting part of the V2 behavior for that layer, but not the defaulting to\n floatx part of the V2 behavior.\n\n When a global `tf.keras.mixed_precision.Policy` is set, a Keras layer's dtype\n will default to the global policy instead of floatx. Layers will automatically\n cast inputs to the policy's compute_dtype.\n ", "desc": "Enable the V2 dtype behavior for Keras layers.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental", "docs": "Public API for tf.keras.layers.experimental namespace.\n", "desc": "Public API for tf.keras.layers.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.EinsumDense", "docs": "A layer that uses tf.einsum as the backing computation.\n\n This layer can perform einsum calculations of arbitrary dimensionality.\n\n Args:\n equation: An equation describing the einsum to perform. This equation must\n be a valid einsum string of the form `ab,bc->ac`, `...ab,bc->...ac`, or\n `ab...,bc->ac...` where 'ab', 'bc', and 'ac' can be any valid einsum axis\n expression sequence.\n output_shape: The expected shape of the output tensor (excluding the batch\n dimension and any dimensions represented by ellipses). You can specify\n None for any dimension that is unknown or can be inferred from the input\n shape.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (that is, a \"linear\" activation: `a(x) = x`).\n bias_axes: A string containing the output dimension(s) to apply a bias to.\n Each character in the `bias_axes` string should correspond to a character\n in the output portion of the `equation` string.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\")..\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n bias_constraint: Constraint function applied to the bias vector.\n\n Examples:\n\n **Biased dense layer with einsums**\n\n This example shows how to instantiate a standard Keras dense layer using\n einsum operations. This example is equivalent to\n `tf.keras.layers.Dense(64, use_bias=True)`.\n\n >>> layer = EinsumDense(\"ab,bc->ac\", output_shape=64, bias_axes=\"c\")\n >>> input_tensor = tf.keras.Input(shape=[32])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 64) dtype=...>\n\n **Applying a dense layer to a sequence**\n\n This example shows how to instantiate a layer that applies the same dense\n operation to every element in a sequence. Here, the 'output_shape' has two\n values (since there are two non-batch dimensions in the output); the first\n dimension in the output_shape is `None`, because the sequence dimension `b`\n has an unknown shape.\n\n >>> layer = EinsumDense(\"abc,cd->abd\",\n ... output_shape=(None, 64),\n ... bias_axes=\"d\")\n >>> input_tensor = tf.keras.Input(shape=[32, 128])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 32, 64) dtype=...>\n\n **Applying a dense layer to a sequence using ellipses**\n\n This example shows how to instantiate a layer that applies the same dense\n operation to every element in a sequence, but uses the ellipsis notation\n instead of specifying the batch and sequence dimensions.\n\n Because we are using ellipsis notation and have specified only one axis, the\n output_shape arg is a single value. When instantiated in this way, the layer\n can handle any number of sequence dimensions - including the case where no\n sequence dimension exists.\n\n >>> layer = EinsumDense(\"...x,xy->...y\", output_shape=64, bias_axes=\"y\")\n >>> input_tensor = tf.keras.Input(shape=[32, 128])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 32, 64) dtype=...>\n ", "desc": "A layer that uses tf.einsum as the backing computation.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing", "docs": "Public API for tf.keras.layers.experimental.preprocessing namespace.\n", "desc": "Public API for tf.keras.layers.experimental.preprocessing namespace.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.CategoryEncoding", "docs": "A preprocessing layer which encodes integer features.\n\n This layer provides options for condensing data into a categorical encoding\n when the total number of tokens are known in advance. It accepts integer\n values as inputs, and it outputs a dense or sparse representation of those\n inputs. For integer inputs where the total number of tokens is not known, use\n `tf.keras.layers.IntegerLookup` instead.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Examples:\n\n **One-hot encoding data**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"one_hot\")\n >>> layer([3, 2, 0, 1])\n \n\n **Multi-hot encoding data**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"multi_hot\")\n >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]])\n \n\n **Using weighted inputs in `\"count\"` mode**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"count\")\n >>> count_weights = np.array([[.1, .2], [.1, .1], [.2, .3], [.4, .2]])\n >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]], count_weights=count_weights)\n \n\n Args:\n num_tokens: The total number of tokens the layer should support. All inputs\n to the layer must integers in the range `0 <= value < num_tokens`, or an\n error will be thrown.\n output_mode: Specification for the output of the layer.\n Defaults to `\"multi_hot\"`. Values can be `\"one_hot\"`, `\"multi_hot\"` or\n `\"count\"`, configuring the layer as follows:\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array of `num_tokens` size, containing a 1 at the element index. If\n the last dimension is size 1, will encode on that dimension. If the\n last dimension is not size 1, will append a new dimension for the\n encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n of `num_tokens` size, containing a 1 for each vocabulary term present\n in the sample. Treats the last dimension as the sample dimension, if\n input shape is `(..., sample_length)`, output shape will be\n `(..., num_tokens)`.\n - `\"count\"`: Like `\"multi_hot\"`, but the int array contains a count of\n the number of times the token at that index appeared in the sample.\n For all output modes, currently only output up to rank 2 is supported.\n sparse: Boolean. If true, returns a `SparseTensor` instead of a dense\n `Tensor`. Defaults to `False`.\n\n Call arguments:\n inputs: A 1D or 2D tensor of integer inputs.\n count_weights: A tensor in the same shape as `inputs` indicating the\n weight for each sample value when summing up in `count` mode. Not used in\n `\"multi_hot\"` or `\"one_hot\"` modes.\n ", "desc": "A preprocessing layer which encodes integer features.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.CenterCrop", "docs": "A preprocessing layer which crops images.\n\n This layers crops the central portion of the images to a target size. If an\n image is smaller than the target size, it will be resized and cropped so as to\n return the largest possible window in the image that matches the target aspect\n ratio.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., target_height, target_width, channels)`.\n\n If the input height/width is even and the target height/width is odd (or\n inversely), the input image is left-padded by 1 pixel.\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n ", "desc": "A preprocessing layer which crops images.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.Discretization", "docs": "A preprocessing layer which buckets continuous features by ranges.\n\n This layer will place each element of its input data into one of several\n contiguous ranges and output an integer index indicating which range each\n element was placed in.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n Any `tf.Tensor` or `tf.RaggedTensor` of dimension 2 or higher.\n\n Output shape:\n Same as input shape.\n\n Arguments:\n bin_boundaries: A list of bin boundaries. The leftmost and rightmost bins\n will always extend to `-inf` and `inf`, so `bin_boundaries=[0., 1., 2.]`\n generates bins `(-inf, 0.)`, `[0., 1.)`, `[1., 2.)`, and `[2., +inf)`. If\n this option is set, `adapt()` should not be called.\n num_bins: The integer number of bins to compute. If this option is set,\n `adapt()` should be called to learn the bin boundaries.\n epsilon: Error tolerance, typically a small fraction close to zero (e.g.\n 0.01). Higher values of epsilon increase the quantile approximation, and\n hence result in more unequal buckets, but could improve performance\n and resource consumption.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, or `\"count\"`\n configuring the layer as follows:\n - `\"int\"`: Return the discritized bin indices directly.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as `num_bins`, containing a 1 at the input's bin\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as `num_bins`, containing a 1 for each bin index\n index present in the sample. Treats the last dimension as the sample\n dimension, if input shape is `(..., sample_length)`, output shape will\n be `(..., num_tokens)`.\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the bin index appeared in the sample.\n sparse: Boolean. Only applicable to `\"one_hot\"`, `\"multi_hot\"`,\n and `\"count\"` output modes. If True, returns a `SparseTensor` instead of\n a dense `Tensor`. Defaults to False.\n\n Examples:\n\n Bucketize float values based on provided buckets.\n >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])\n >>> layer = tf.keras.layers.Discretization(bin_boundaries=[0., 1., 2.])\n >>> layer(input)\n \n\n Bucketize float values based on a number of buckets to compute.\n >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])\n >>> layer = tf.keras.layers.Discretization(num_bins=4, epsilon=0.01)\n >>> layer.adapt(input)\n >>> layer(input)\n \n ", "desc": "A preprocessing layer which buckets continuous features by ranges.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.Hashing", "docs": "A preprocessing layer which hashes and bins categorical features.\n\n This layer transforms categorical inputs to hashed output. It element-wise\n converts a ints or strings to ints in a fixed range. The stable hash\n function uses `tensorflow::ops::Fingerprint` to produce the same output\n consistently across all platforms.\n\n This layer uses [FarmHash64](https://github.com/google/farmhash) by default,\n which provides a consistent hashed output across different platforms and is\n stable across invocations, regardless of device and context, by mixing the\n input bits thoroughly.\n\n If you want to obfuscate the hashed output, you can also pass a random `salt`\n argument in the constructor. In that case, the layer will use the\n [SipHash64](https://github.com/google/highwayhash) hash function, with\n the `salt` value serving as additional input to the hash function.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n **Example (FarmHash64)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3)\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n **Example (FarmHash64) with a mask value**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, mask_value='')\n >>> inp = [['A'], ['B'], [''], ['C'], ['D']]\n >>> layer(inp)\n \n\n **Example (SipHash64)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, salt=[133, 137])\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n **Example (Siphash64 with a single integer, same as `salt=[133, 133]`)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, salt=133)\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n Args:\n num_bins: Number of hash bins. Note that this includes the `mask_value` bin,\n so the effective number of bins is `(num_bins - 1)` if `mask_value` is\n set.\n mask_value: A value that represents masked inputs, which are mapped to\n index 0. Defaults to None, meaning no mask term will be added and the\n hashing will start at index 0.\n salt: A single unsigned integer or None.\n If passed, the hash function used will be SipHash64, with these values\n used as an additional input (known as a \"salt\" in cryptography).\n These should be non-zero. Defaults to `None` (in that\n case, the FarmHash64 hash function is used). It also supports\n tuple/list of 2 unsigned integer numbers, see reference paper for details.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, or `\"count\"`\n configuring the layer as follows:\n - `\"int\"`: Return the integer bin indices directly.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as `num_bins`, containing a 1 at the input's bin\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as `num_bins`, containing a 1 for each bin index\n index present in the sample. Treats the last dimension as the sample\n dimension, if input shape is `(..., sample_length)`, output shape will\n be `(..., num_tokens)`.\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the bin index appeared in the sample.\n sparse: Boolean. Only applicable to `\"one_hot\"`, `\"multi_hot\"`,\n and `\"count\"` output modes. If True, returns a `SparseTensor` instead of\n a dense `Tensor`. Defaults to False.\n **kwargs: Keyword arguments to construct a layer.\n\n Input shape:\n A single or list of string, int32 or int64 `Tensor`,\n `SparseTensor` or `RaggedTensor` of shape `(batch_size, ...,)`\n\n Output shape:\n An int64 `Tensor`, `SparseTensor` or `RaggedTensor` of shape\n `(batch_size, ...)`. If any input is `RaggedTensor` then output is\n `RaggedTensor`, otherwise if any input is `SparseTensor` then output is\n `SparseTensor`, otherwise the output is `Tensor`.\n\n Reference:\n - [SipHash with salt](https://www.131002.net/siphash/siphash.pdf)\n\n ", "desc": "A preprocessing layer which hashes and bins categorical features.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.Normalization", "docs": "A preprocessing layer which normalizes continuous features.\n\n This layer will shift and scale inputs into a distribution centered around\n 0 with standard deviation 1. It accomplishes this by precomputing the mean and\n variance of the data, and calling `(input - mean) / sqrt(var)` at runtime.\n\n The mean and variance values for the layer must be either supplied on\n construction or learned via `adapt()`. `adapt()` will compute the mean and\n variance of the data and store them as the layer's weights. `adapt()` should\n be called before `fit()`, `evaluate()`, or `predict()`.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n axis: Integer, tuple of integers, or None. The axis or axes that should\n have a separate mean and variance for each index in the shape. For\n example, if shape is `(None, 5)` and `axis=1`, the layer will track 5\n separate mean and variance values for the last axis. If `axis` is set to\n `None`, the layer will normalize all elements in the input by a scalar\n mean and variance. Defaults to -1, where the last axis of the input is\n assumed to be a feature dimension and is normalized per index. Note that\n in the specific case of batched scalar inputs where the only axis is the\n batch axis, the default will normalize each index in the batch\n separately. In this case, consider passing `axis=None`.\n mean: The mean value(s) to use during normalization. The passed value(s)\n will be broadcast to the shape of the kept axes above; if the value(s)\n cannot be broadcast, an error will be raised when this layer's `build()`\n method is called.\n variance: The variance value(s) to use during normalization. The passed\n value(s) will be broadcast to the shape of the kept axes above; if the\n value(s) cannot be broadcast, an error will be raised when this layer's\n `build()` method is called.\n\n Examples:\n\n Calculate a global mean and variance by analyzing the dataset in `adapt()`.\n\n >>> adapt_data = np.array([1., 2., 3., 4., 5.], dtype='float32')\n >>> input_data = np.array([1., 2., 3.], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(axis=None)\n >>> layer.adapt(adapt_data)\n >>> layer(input_data)\n \n\n Calculate a mean and variance for each index on the last axis.\n\n >>> adapt_data = np.array([[0., 7., 4.],\n ... [2., 9., 6.],\n ... [0., 7., 4.],\n ... [2., 9., 6.]], dtype='float32')\n >>> input_data = np.array([[0., 7., 4.]], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(axis=-1)\n >>> layer.adapt(adapt_data)\n >>> layer(input_data)\n \n\n Pass the mean and variance directly.\n\n >>> input_data = np.array([[1.], [2.], [3.]], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(mean=3., variance=2.)\n >>> layer(input_data)\n \n ", "desc": "A preprocessing layer which normalizes continuous features.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.PreprocessingLayer", "docs": "Base class for Preprocessing Layers.\n\n **Don't use this class directly: it's an abstract base class!** You may\n be looking for one of the many built-in\n [preprocessing layers](https://keras.io/guides/preprocessing_layers/)\n instead.\n\n Preprocessing layers are layers whose state gets computed before model\n training starts. They do not get updated during training.\n Most preprocessing layers implement an `adapt()` method for state computation.\n\n The `PreprocessingLayer` class is the base class you would subclass to\n implement your own preprocessing layers.\n ", "desc": "Base class for Preprocessing Layers.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.Rescaling", "docs": "A preprocessing layer which rescales input values to a new range.\n\n This layer rescales every value of an input (often an image) by multiplying by\n `scale` and adding `offset`.\n\n For instance:\n\n 1. To rescale an input in the ``[0, 255]`` range\n to be in the `[0, 1]` range, you would pass `scale=1./255`.\n\n 2. To rescale an input in the ``[0, 255]`` range to be in the `[-1, 1]` range,\n you would pass `scale=1./127.5, offset=-1`.\n\n The rescaling is applied both during training and inference. Inputs can be\n of integer or floating point dtype, and by default the layer will output\n floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n Arbitrary.\n\n Output shape:\n Same as input.\n\n Args:\n scale: Float, the scale to apply to the inputs.\n offset: Float, the offset to apply to the inputs.\n ", "desc": "A preprocessing layer which rescales input values to a new range.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.preprocessing.Resizing", "docs": "A preprocessing layer which resizes images.\n\n This layer resizes an image input to a target height and width. The input\n should be a 4D (batched) or 3D (unbatched) tensor in `\"channels_last\"` format.\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and of\n interger or floating point dtype. By default, the layer will output floats.\n\n This layer can be called on tf.RaggedTensor batches of input images of\n distinct sizes, and will resize the outputs to dense tensors of uniform size.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n interpolation: String, the interpolation method. Defaults to `\"bilinear\"`.\n Supports `\"bilinear\"`, `\"nearest\"`, `\"bicubic\"`, `\"area\"`, `\"lanczos3\"`,\n `\"lanczos5\"`, `\"gaussian\"`, `\"mitchellcubic\"`.\n crop_to_aspect_ratio: If True, resize the images without aspect\n ratio distortion. When the original aspect ratio differs from the target\n aspect ratio, the output image will be cropped so as to return the largest\n possible window in the image (of size `(height, width)`) that matches\n the target aspect ratio. By default (`crop_to_aspect_ratio=False`),\n aspect ratio may not be preserved.\n ", "desc": "A preprocessing layer which resizes images.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.experimental.RandomFourierFeatures", "docs": "Layer that projects its inputs into a random feature space.\n\n This layer implements a mapping from input space to a space with `output_dim`\n dimensions, which approximates shift-invariant kernels. A kernel function\n `K(x, y)` is shift-invariant if `K(x, y) == k(x - y)` for some function `k`.\n Many popular Radial Basis Functions (RBF), including Gaussian and\n Laplacian kernels, are shift-invariant.\n\n The implementation of this layer is based on the following paper:\n [\"Random Features for Large-Scale Kernel Machines\"](\n https://people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf)\n by Ali Rahimi and Ben Recht.\n\n The distribution from which the parameters of the random features map (layer)\n are sampled determines which shift-invariant kernel the layer approximates\n (see paper for more details). You can use the distribution of your\n choice. The layer supports out-of-the-box\n approximations of the following two RBF kernels:\n\n - Gaussian: `K(x, y) == exp(- square(x - y) / (2 * square(scale)))`\n - Laplacian: `K(x, y) = exp(-abs(x - y) / scale))`\n\n **Note:** Unlike what is described in the paper and unlike what is used in\n the Scikit-Learn implementation, the output of this layer does not apply\n the `sqrt(2 / D)` normalization factor.\n\n **Usage:** Typically, this layer is used to \"kernelize\" linear models by\n applying a non-linear transformation (this layer) to the input features and\n then training a linear model on top of the transformed features. Depending on\n the loss function of the linear model, the composition of this layer and the\n linear model results to models that are equivalent (up to approximation) to\n kernel SVMs (for hinge loss), kernel logistic regression (for logistic loss),\n kernel linear regression (for squared loss), etc.\n\n Examples:\n\n A kernel multinomial logistic regression model with Gaussian kernel for MNIST:\n\n ```python\n model = keras.Sequential([\n keras.Input(shape=(784,)),\n RandomFourierFeatures(\n output_dim=4096,\n scale=10.,\n kernel_initializer='gaussian'),\n layers.Dense(units=10, activation='softmax'),\n ])\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['categorical_accuracy']\n )\n ```\n\n A quasi-SVM classifier for MNIST:\n\n ```python\n model = keras.Sequential([\n keras.Input(shape=(784,)),\n RandomFourierFeatures(\n output_dim=4096,\n scale=10.,\n kernel_initializer='gaussian'),\n layers.Dense(units=10),\n ])\n model.compile(\n optimizer='adam',\n loss='hinge',\n metrics=['categorical_accuracy']\n )\n ```\n\n To use another kernel, just replace the layer creation line with:\n\n ```python\n random_features_layer = RandomFourierFeatures(\n output_dim=500,\n kernel_initializer=,\n scale=...,\n ...)\n ```\n\n Args:\n output_dim: Positive integer, the dimension of the layer's output, i.e., the\n number of random features used to approximate the kernel.\n kernel_initializer: Determines the distribution of the parameters of the\n random features map (and therefore the kernel approximated by the layer).\n It can be either a string identifier or a Keras `Initializer` instance.\n Currently only 'gaussian' and 'laplacian' are supported string\n identifiers (case insensitive). Note that the kernel matrix is not\n trainable.\n scale: For Gaussian and Laplacian kernels, this corresponds to a scaling\n factor of the corresponding kernel approximated by the layer (see concrete\n definitions above). When provided, it should be a positive float. If None,\n a default value is used: if the kernel initializer is set to \"gaussian\",\n `scale` defaults to `sqrt(input_dim / 2)`, otherwise, it defaults to 1.0.\n Both the approximation error of the kernel and the classification quality\n are sensitive to this parameter. If `trainable` is set to `True`, this\n parameter is learned end-to-end during training and the provided value\n serves as the initial value.\n **Note:** When features from this layer are fed to a linear model,\n by making `scale` trainable, the resulting optimization problem is\n no longer convex (even if the loss function used by the linear model\n is convex).\n trainable: Whether the scaling parameter of the layer should be trainable.\n Defaults to `False`.\n name: String, name to use for this layer.\n ", "desc": "Layer that projects its inputs into a random feature space.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Flatten", "docs": "Flattens the input. Does not affect the batch size.\n\n Note: If inputs are shaped `(batch,)` without a feature axis, then\n flattening adds an extra channel dimension and output shape is `(batch, 1)`.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, ..., channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, ...)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Example:\n\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Conv2D(64, 3, 3, input_shape=(3, 32, 32)))\n >>> model.output_shape\n (None, 1, 10, 64)\n\n >>> model.add(Flatten())\n >>> model.output_shape\n (None, 640)\n\n ", "desc": "Flattens the input. Does not affect the batch size.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GaussianDropout", "docs": "Apply multiplicative 1-centered Gaussian noise.\n\n As it is a regularization layer, it is only active at training time.\n\n Args:\n rate: Float, drop probability (as with `Dropout`).\n The multiplicative noise will have\n standard deviation `sqrt(rate / (1 - rate))`.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Apply multiplicative 1-centered Gaussian noise.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GaussianNoise", "docs": "Apply additive zero-centered Gaussian noise.\n\n This is useful to mitigate overfitting\n (you could see it as a form of random data augmentation).\n Gaussian Noise (GS) is a natural choice as corruption process\n for real valued inputs.\n\n As it is a regularization layer, it is only active at training time.\n\n Args:\n stddev: Float, standard deviation of the noise distribution.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding noise) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Apply additive zero-centered Gaussian noise.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAveragePooling1D", "docs": "Global average pooling operation for temporal data.\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling1D()(x)\n >>> print(y.shape)\n (2, 4)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(batch_size, steps)` indicating whether\n a given step should be masked (excluded from the average).\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global average pooling operation for temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAveragePooling2D", "docs": "Global average pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global average pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAveragePooling3D", "docs": "Global Average pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Average pooling operation for 3D data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAvgPool1D", "docs": "Global average pooling operation for temporal data.\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling1D()(x)\n >>> print(y.shape)\n (2, 4)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(batch_size, steps)` indicating whether\n a given step should be masked (excluded from the average).\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global average pooling operation for temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAvgPool2D", "docs": "Global average pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global average pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalAvgPool3D", "docs": "Global Average pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Average pooling operation for 3D data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPool1D", "docs": "Global max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over\n the time dimension.\n\n For example:\n\n >>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> x = tf.reshape(x, [3, 3, 1])\n >>> x\n \n >>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D()\n >>> max_pool_1d(x)\n \n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPool2D", "docs": "Global max pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalMaxPool2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global max pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPool3D", "docs": "Global Max pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Max pooling operation for 3D data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPooling1D", "docs": "Global max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over\n the time dimension.\n\n For example:\n\n >>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> x = tf.reshape(x, [3, 3, 1])\n >>> x\n \n >>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D()\n >>> max_pool_1d(x)\n \n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPooling2D", "docs": "Global max pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalMaxPool2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global max pooling operation for spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GlobalMaxPooling3D", "docs": "Global Max pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Max pooling operation for 3D data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GRU", "docs": "Gated Recurrent Unit - Cho et al. 2014.\n\n There are two variants. The default one is based on 1406.1078v3 and\n has reset gate applied to hidden state before matrix multiplication. The\n other one is based on original 1406.1078v1 and has the order reversed.\n\n The second variant is compatible with CuDNNGRU (GPU-only) and allows\n inference on CPU. Thus it has separate biases for `kernel` and\n `recurrent_kernel`. Use `'reset_after'=True` and\n `recurrent_activation='sigmoid'`.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use\n for the recurrent step.\n Default: hard sigmoid (`hard_sigmoid`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n recurrent_regularizer: Regularizer function applied to\n the `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")..\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n recurrent_constraint: Constraint function applied to\n the `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the inputs.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the recurrent state.\n return_sequences: Boolean. Whether to return the last output\n in the output sequence, or the full sequence.\n return_state: Boolean. Whether to return the last state\n in addition to the output.\n go_backwards: Boolean (default False).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default False). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default False).\n If True, the network will be unrolled,\n else a symbolic loop will be used.\n Unrolling can speed-up a RNN,\n although it tends to be more memory-intensive.\n Unrolling is only suitable for short sequences.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n reset_after: GRU convention (whether to apply reset gate after or\n before matrix multiplication). False = \"before\" (default),\n True = \"after\" (cuDNN compatible).\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False`\n entry indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n ", "desc": "Gated Recurrent Unit - Cho et al. 2014.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.GRUCell", "docs": "Cell class for the GRU layer.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass None, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use\n for the recurrent step.\n Default: hard sigmoid (`hard_sigmoid`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix,\n used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n recurrent_regularizer: Regularizer function applied to\n the `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n recurrent_constraint: Constraint function applied to\n the `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for the linear transformation of the inputs.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the recurrent state.\n reset_after: GRU convention (whether to apply reset gate after or\n before matrix multiplication). False = \"before\" (default),\n True = \"after\" (cuDNN compatible).\n\n Call arguments:\n inputs: A 2D tensor.\n states: List of state tensors corresponding to the previous timestep.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n ", "desc": "Cell class for the GRU layer.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Input", "docs": "`Input()` is used to instantiate a Keras tensor.\n\n A Keras tensor is a symbolic tensor-like object,\n which we augment with certain attributes that allow us to build a Keras model\n just by knowing the inputs and outputs of the model.\n\n For instance, if `a`, `b` and `c` are Keras tensors,\n it becomes possible to do:\n `model = Model(input=[a, b], output=c)`\n\n Args:\n shape: A shape tuple (integers), not including the batch size.\n For instance, `shape=(32,)` indicates that the expected input\n will be batches of 32-dimensional vectors. Elements of this tuple\n can be None; 'None' elements represent dimensions where the shape is\n not known.\n batch_size: optional static batch size (integer).\n name: An optional name string for the layer.\n Should be unique in a model (do not reuse the same name twice).\n It will be autogenerated if it isn't provided.\n dtype: The data type expected by the input, as a string\n (`float32`, `float64`, `int32`...)\n sparse: A boolean specifying whether the placeholder to be created is\n sparse. Only one of 'ragged' and 'sparse' can be True. Note that,\n if `sparse` is False, sparse tensors can still be passed into the\n input - they will be densified with a default value of 0.\n tensor: Optional existing tensor to wrap into the `Input` layer.\n If set, the layer will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n ragged: A boolean specifying whether the placeholder to be created is\n ragged. Only one of 'ragged' and 'sparse' can be True. In this case,\n values of 'None' in the 'shape' argument represent ragged dimensions.\n For more information about RaggedTensors, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensors).\n type_spec: A `tf.TypeSpec` object to create the input placeholder from.\n When provided, all other args except name must be None.\n **kwargs: deprecated arguments support. Supports `batch_shape` and\n `batch_input_shape`.\n\n Returns:\n A `tensor`.\n\n Example:\n\n ```python\n # this is a logistic regression in Keras\n x = Input(shape=(32,))\n y = Dense(16, activation='softmax')(x)\n model = Model(x, y)\n ```\n\n Note that even if eager execution is enabled,\n `Input` produces a symbolic tensor-like object (i.e. a placeholder).\n This symbolic tensor-like object can be used with lower-level\n TensorFlow ops that take tensors as inputs, as such:\n\n ```python\n x = Input(shape=(32,))\n y = tf.square(x) # This op will be treated like a layer\n model = Model(x, y)\n ```\n\n (This behavior does not work for higher-order TensorFlow APIs such as\n control flow and being directly watched by a `tf.GradientTape`).\n\n However, the resulting model will not track any variables that were\n used as inputs to TensorFlow ops. All variable usages must happen within\n Keras layers to make sure they will be tracked by the model's weights.\n\n The Keras Input can also create a placeholder from an arbitrary `tf.TypeSpec`,\n e.g:\n\n ```python\n x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],\n dtype=tf.float32, ragged_rank=1))\n y = x.values\n model = Model(x, y)\n ```\n When passing an arbitrary `tf.TypeSpec`, it must represent the signature of an\n entire batch instead of just one example.\n\n Raises:\n ValueError: If both `sparse` and `ragged` are provided.\n ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`) are\n provided.\n ValueError: If `shape`, `tensor` and `type_spec` are None.\n ValueError: If arguments besides `type_spec` are non-None while `type_spec`\n is passed.\n ValueError: if any unrecognized parameters are provided.\n ", "desc": "`Input()` is used to instantiate a Keras tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.InputLayer", "docs": "Layer to be used as an entry point into a Network (a graph of layers).\n\n It can either wrap an existing tensor (pass an `input_tensor` argument)\n or create a placeholder tensor (pass arguments `input_shape`, and\n optionally, `dtype`).\n\n It is generally recommend to use the Keras Functional model via `Input`,\n (which creates an `InputLayer`) without directly using `InputLayer`.\n\n When using `InputLayer` with the Keras Sequential model, it can be skipped by\n moving the `input_shape` parameter to the first layer after the `InputLayer`.\n\n This class can create placeholders for `tf.Tensors`, `tf.SparseTensors`, and\n `tf.RaggedTensors` by choosing `sparse=True` or `ragged=True`. Note that\n `sparse` and `ragged` can't be configured to `True` at the same time.\n Usage:\n\n ```python\n # With explicit InputLayer.\n model = tf.keras.Sequential([\n tf.keras.layers.InputLayer(input_shape=(4,)),\n tf.keras.layers.Dense(8)])\n model.compile(tf.optimizers.RMSprop(0.001), loss='mse')\n model.fit(np.zeros((10, 4)),\n np.ones((10, 8)))\n\n # Without InputLayer and let the first layer to have the input_shape.\n # Keras will add a input for the model behind the scene.\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(8, input_shape=(4,))])\n model.compile(tf.optimizers.RMSprop(0.001), loss='mse')\n model.fit(np.zeros((10, 4)),\n np.ones((10, 8)))\n ```\n\n Args:\n input_shape: Shape tuple (not including the batch axis), or `TensorShape`\n instance (not including the batch axis).\n batch_size: Optional input batch size (integer or `None`).\n dtype: Optional datatype of the input. When not provided, the Keras\n default `float` type will be used.\n input_tensor: Optional tensor to use as layer input. If set, the layer\n will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n sparse: Boolean, whether the placeholder created is meant to be sparse.\n Default to `False`.\n ragged: Boolean, whether the placeholder created is meant to be ragged.\n In this case, values of `None` in the `shape` argument represent\n ragged dimensions. For more information about `tf.RaggedTensor`, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensor).\n Default to `False`.\n type_spec: A `tf.TypeSpec` object to create Input from. This `tf.TypeSpec`\n represents the entire batch. When provided, all other args except\n name must be `None`.\n name: Optional name of the layer (string).\n ", "desc": "Layer to be used as an entry point into a Network (a graph of layers).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.InputSpec", "docs": "Specifies the rank, dtype and shape of every input to a layer.\n\n Layers can expose (if appropriate) an `input_spec` attribute:\n an instance of `InputSpec`, or a nested structure of `InputSpec` instances\n (one per input tensor). These objects enable the layer to run input\n compatibility checks for input structure, input rank, input shape, and\n input dtype.\n\n A None entry in a shape is compatible with any dimension,\n a None shape is compatible with any shape.\n\n Args:\n dtype: Expected DataType of the input.\n shape: Shape tuple, expected shape of the input\n (may include None for unchecked axes). Includes the batch size.\n ndim: Integer, expected rank of the input.\n max_ndim: Integer, maximum rank of the input.\n min_ndim: Integer, minimum rank of the input.\n axes: Dictionary mapping integer axes to\n a specific dimension value.\n allow_last_axis_squeeze: If True, then allow inputs of rank N+1 as long\n as the last axis of the input is 1, as well as inputs of rank N-1\n as long as the last axis of the spec is 1.\n name: Expected key corresponding to this input when passing data as\n a dictionary.\n\n Example:\n\n ```python\n class MyLayer(Layer):\n def __init__(self):\n super(MyLayer, self).__init__()\n # The layer will accept inputs with shape (?, 28, 28) & (?, 28, 28, 1)\n # and raise an appropriate error message otherwise.\n self.input_spec = InputSpec(\n shape=(None, 28, 28, 1),\n allow_last_axis_squeeze=True)\n ```\n ", "desc": "Specifies the rank, dtype and shape of every input to a layer.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Lambda", "docs": "Wraps arbitrary expressions as a `Layer` object.\n\n The `Lambda` layer exists so that arbitrary expressions can be used\n as a `Layer` when constructing `Sequential`\n and Functional API models. `Lambda` layers are best suited for simple\n operations or quick experimentation. For more advanced use cases, follow\n [this guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models)\n for subclassing `tf.keras.layers.Layer`.\n\n WARNING: `tf.keras.layers.Lambda` layers have (de)serialization limitations!\n\n The main reason to subclass `tf.keras.layers.Layer` instead of using a\n `Lambda` layer is saving and inspecting a Model. `Lambda` layers\n are saved by serializing the Python bytecode, which is fundamentally\n non-portable. They should only be loaded in the same environment where\n they were saved. Subclassed layers can be saved in a more portable way\n by overriding their `get_config` method. Models that rely on\n subclassed Layers are also often easier to visualize and reason about.\n\n Examples:\n\n ```python\n # add a x -> x^2 layer\n model.add(Lambda(lambda x: x ** 2))\n ```\n ```python\n # add a layer that returns the concatenation\n # of the positive part of the input and\n # the opposite of the negative part\n\n def antirectifier(x):\n x -= K.mean(x, axis=1, keepdims=True)\n x = K.l2_normalize(x, axis=1)\n pos = K.relu(x)\n neg = K.relu(-x)\n return K.concatenate([pos, neg], axis=1)\n\n model.add(Lambda(antirectifier))\n ```\n\n Variables:\n While it is possible to use Variables with Lambda layers, this practice is\n discouraged as it can easily lead to bugs. For instance, consider the\n following layer:\n\n ```python\n scale = tf.Variable(1.)\n scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)\n ```\n\n Because scale_layer does not directly track the `scale` variable, it will\n not appear in `scale_layer.trainable_weights` and will therefore not be\n trained if `scale_layer` is used in a Model.\n\n A better pattern is to write a subclassed Layer:\n\n ```python\n class ScaleLayer(tf.keras.layers.Layer):\n def __init__(self):\n super(ScaleLayer, self).__init__()\n self.scale = tf.Variable(1.)\n\n def call(self, inputs):\n return inputs * self.scale\n ```\n\n In general, Lambda layers can be convenient for simple stateless\n computation, but anything more complex should use a subclass Layer instead.\n\n Args:\n function: The function to be evaluated. Takes input tensor as first\n argument.\n output_shape: Expected output shape from function. This argument can be\n inferred if not explicitly provided. Can be a tuple or function. If a\n tuple, it only specifies the first dimension onward;\n sample dimension is assumed either the same as the input: `output_shape =\n (input_shape[0], ) + output_shape` or, the input is `None` and\n the sample dimension is also `None`: `output_shape = (None, ) +\n output_shape` If a function, it specifies the entire shape as a function\n of the\n input shape: `output_shape = f(input_shape)`\n mask: Either None (indicating no masking) or a callable with the same\n signature as the `compute_mask` layer method, or a tensor that will be\n returned as output mask regardless of what the input is.\n arguments: Optional dictionary of keyword arguments to be passed to the\n function.\n Input shape: Arbitrary. Use the keyword argument input_shape (tuple of\n integers, does not include the samples axis) when using this layer as the\n first layer in a model.\n Output shape: Specified by `output_shape` argument\n ", "desc": "Wraps arbitrary expressions as a `Layer` object.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Layer", "docs": "This is the class from which all layers inherit.\n\n A layer is a callable object that takes as input one or more tensors and\n that outputs one or more tensors. It involves *computation*, defined\n in the `call()` method, and a *state* (weight variables). State can be\n created in various places, at the convenience of the subclass implementer:\n\n * in `__init__()`;\n * in the optional `build()` method, which is invoked by the first\n `__call__()` to the layer, and supplies the shape(s) of the input(s),\n which may not have been known at initialization time;\n * in the first invocation of `call()`, with some caveats discussed\n below.\n\n Users will just instantiate a layer and then treat it as a callable.\n\n Args:\n trainable: Boolean, whether the layer's variables should be trainable.\n name: String name of the layer.\n dtype: The dtype of the layer's computations and weights. Can also be a\n `tf.keras.mixed_precision.Policy`, which allows the computation and weight\n dtype to differ. Default of `None` means to use\n `tf.keras.mixed_precision.global_policy()`, which is a float32 policy\n unless set to different value.\n dynamic: Set this to `True` if your layer should only be run eagerly, and\n should not be used to generate a static computation graph.\n This would be the case for a Tree-RNN or a recursive network,\n for example, or generally for any layer that manipulates tensors\n using Python control flow. If `False`, we assume that the layer can\n safely be used to generate a static computation graph.\n\n Attributes:\n name: The name of the layer (string).\n dtype: The dtype of the layer's weights.\n variable_dtype: Alias of `dtype`.\n compute_dtype: The dtype of the layer's computations. Layers automatically\n cast inputs to this dtype which causes the computations and output to also\n be in this dtype. When mixed precision is used with a\n `tf.keras.mixed_precision.Policy`, this will be different than\n `variable_dtype`.\n dtype_policy: The layer's dtype policy. See the\n `tf.keras.mixed_precision.Policy` documentation for details.\n trainable_weights: List of variables to be included in backprop.\n non_trainable_weights: List of variables that should not be\n included in backprop.\n weights: The concatenation of the lists trainable_weights and\n non_trainable_weights (in this order).\n trainable: Whether the layer should be trained (boolean), i.e. whether\n its potentially-trainable weights should be returned as part of\n `layer.trainable_weights`.\n input_spec: Optional (list of) `InputSpec` object(s) specifying the\n constraints on inputs that can be accepted by the layer.\n\n We recommend that descendants of `Layer` implement the following methods:\n\n * `__init__()`: Defines custom layer attributes, and creates layer weights\n that do not depend on input shapes, using `add_weight()`, or other state.\n * `build(self, input_shape)`: This method can be used to create weights that\n depend on the shape(s) of the input(s), using `add_weight()`, or other\n state. `__call__()` will automatically build the layer (if it has not been\n built yet) by calling `build()`.\n * `call(self, inputs, *args, **kwargs)`: Called in `__call__` after making\n sure `build()` has been called. `call()` performs the logic of applying the\n layer to the `inputs`. The first invocation may additionally create state\n that could not be conveniently created in `build()`; see its docstring\n for details.\n Two reserved keyword arguments you can optionally use in `call()` are:\n - `training` (boolean, whether the call is in inference mode or training\n mode). See more details in [the layer/model subclassing guide](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models#privileged_training_argument_in_the_call_method)\n - `mask` (boolean tensor encoding masked timesteps in the input, used\n in RNN layers). See more details in [the layer/model subclassing guide](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models#privileged_mask_argument_in_the_call_method)\n A typical signature for this method is `call(self, inputs)`, and user could\n optionally add `training` and `mask` if the layer need them. `*args` and\n `**kwargs` is only useful for future extension when more input parameters\n are planned to be added.\n * `get_config(self)`: Returns a dictionary containing the configuration used\n to initialize this layer. If the keys differ from the arguments\n in `__init__`, then override `from_config(self)` as well.\n This method is used when saving\n the layer or a model that contains this layer.\n\n Examples:\n\n Here's a basic example: a layer with two variables, `w` and `b`,\n that returns `y = w . x + b`.\n It shows how to implement `build()` and `call()`.\n Variables set as attributes of a layer are tracked as weights\n of the layers (in `layer.weights`).\n\n ```python\n class SimpleDense(Layer):\n\n def __init__(self, units=32):\n super(SimpleDense, self).__init__()\n self.units = units\n\n def build(self, input_shape): # Create the state of the layer (weights)\n w_init = tf.random_normal_initializer()\n self.w = tf.Variable(\n initial_value=w_init(shape=(input_shape[-1], self.units),\n dtype='float32'),\n trainable=True)\n b_init = tf.zeros_initializer()\n self.b = tf.Variable(\n initial_value=b_init(shape=(self.units,), dtype='float32'),\n trainable=True)\n\n def call(self, inputs): # Defines the computation from inputs to outputs\n return tf.matmul(inputs, self.w) + self.b\n\n # Instantiates the layer.\n linear_layer = SimpleDense(4)\n\n # This will also call `build(input_shape)` and create the weights.\n y = linear_layer(tf.ones((2, 2)))\n assert len(linear_layer.weights) == 2\n\n # These weights are trainable, so they're listed in `trainable_weights`:\n assert len(linear_layer.trainable_weights) == 2\n ```\n\n Note that the method `add_weight()` offers a shortcut to create weights:\n\n ```python\n class SimpleDense(Layer):\n\n def __init__(self, units=32):\n super(SimpleDense, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='random_normal',\n trainable=True)\n self.b = self.add_weight(shape=(self.units,),\n initializer='random_normal',\n trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n ```\n\n Besides trainable weights, updated via backpropagation during training,\n layers can also have non-trainable weights. These weights are meant to\n be updated manually during `call()`. Here's a example layer that computes\n the running sum of its inputs:\n\n ```python\n class ComputeSum(Layer):\n\n def __init__(self, input_dim):\n super(ComputeSum, self).__init__()\n # Create a non-trainable weight.\n self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),\n trainable=False)\n\n def call(self, inputs):\n self.total.assign_add(tf.reduce_sum(inputs, axis=0))\n return self.total\n\n my_sum = ComputeSum(2)\n x = tf.ones((2, 2))\n\n y = my_sum(x)\n print(y.numpy()) # [2. 2.]\n\n y = my_sum(x)\n print(y.numpy()) # [4. 4.]\n\n assert my_sum.weights == [my_sum.total]\n assert my_sum.non_trainable_weights == [my_sum.total]\n assert my_sum.trainable_weights == []\n ```\n\n For more information about creating layers, see the guide\n [Making new Layers and Models via subclassing](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models)\n ", "desc": "This is the class from which all layers inherit.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LayerNormalization", "docs": "Layer normalization layer (Ba et al., 2016).\n\n Normalize the activations of the previous layer for each given example in a\n batch independently, rather than across a batch like Batch Normalization.\n i.e. applies a transformation that maintains the mean activation within each\n example close to 0 and the activation standard deviation close to 1.\n\n Given a tensor `inputs`, moments are calculated and normalization\n is performed across the axes specified in `axis`.\n\n Example:\n\n >>> data = tf.constant(np.arange(10).reshape(5, 2) * 10, dtype=tf.float32)\n >>> print(data)\n tf.Tensor(\n [[ 0. 10.]\n [20. 30.]\n [40. 50.]\n [60. 70.]\n [80. 90.]], shape=(5, 2), dtype=float32)\n\n >>> layer = tf.keras.layers.LayerNormalization(axis=1)\n >>> output = layer(data)\n >>> print(output)\n tf.Tensor(\n [[-1. 1.]\n [-1. 1.]\n [-1. 1.]\n [-1. 1.]\n [-1. 1.]], shape=(5, 2), dtype=float32)\n\n Notice that with Layer Normalization the normalization happens across the\n axes *within* each example, rather than across different examples in the\n batch.\n\n If `scale` or `center` are enabled, the layer will scale the normalized\n outputs by broadcasting them with a trainable variable `gamma`, and center\n the outputs by broadcasting with a trainable variable `beta`. `gamma` will\n default to a ones tensor and `beta` will default to a zeros tensor, so that\n centering and scaling are no-ops before training has begun.\n\n So, with scaling and centering enabled the normalization equations\n are as follows:\n\n Let the intermediate activations for a mini-batch to be the `inputs`.\n\n For each sample `x_i` in `inputs` with `k` features, we compute the mean and\n variance of the sample:\n\n ```python\n mean_i = sum(x_i[j] for j in range(k)) / k\n var_i = sum((x_i[j] - mean_i) ** 2 for j in range(k)) / k\n ```\n\n and then compute a normalized `x_i_normalized`, including a small factor\n `epsilon` for numerical stability.\n\n ```python\n x_i_normalized = (x_i - mean_i) / sqrt(var_i + epsilon)\n ```\n\n And finally `x_i_normalized ` is linearly transformed by `gamma` and `beta`,\n which are learned parameters:\n\n ```python\n output_i = x_i_normalized * gamma + beta\n ```\n\n `gamma` and `beta` will span the axes of `inputs` specified in `axis`, and\n this part of the inputs' shape must be fully defined.\n\n For example:\n\n >>> layer = tf.keras.layers.LayerNormalization(axis=[1, 2, 3])\n >>> layer.build([5, 20, 30, 40])\n >>> print(layer.beta.shape)\n (20, 30, 40)\n >>> print(layer.gamma.shape)\n (20, 30, 40)\n\n Note that other implementations of layer normalization may choose to define\n `gamma` and `beta` over a separate set of axes from the axes being\n normalized across. For example, Group Normalization\n ([Wu et al. 2018](https://arxiv.org/abs/1803.08494)) with group size of 1\n corresponds to a Layer Normalization that normalizes across height, width,\n and channel and has `gamma` and `beta` span only the channel dimension.\n So, this Layer Normalization implementation will not match a Group\n Normalization layer with group size set to 1.\n\n Args:\n axis: Integer or List/Tuple. The axis or axes to normalize across. Typically\n this is the features axis/axes. The left-out axes are typically the batch\n axis/axes. This argument defaults to `-1`, the last dimension in the\n input.\n epsilon: Small float added to variance to avoid dividing by zero. Defaults\n to 1e-3\n center: If True, add offset of `beta` to normalized tensor. If False, `beta`\n is ignored. Defaults to True.\n scale: If True, multiply by `gamma`. If False, `gamma` is not used. Defaults\n to True. When the next layer is linear (also e.g. `nn.relu`), this can be\n disabled since the scaling will be done by the next layer.\n beta_initializer: Initializer for the beta weight. Defaults to zeros.\n gamma_initializer: Initializer for the gamma weight. Defaults to ones.\n beta_regularizer: Optional regularizer for the beta weight. None by default.\n gamma_regularizer: Optional regularizer for the gamma weight. None by\n default.\n beta_constraint: Optional constraint for the beta weight. None by default.\n gamma_constraint: Optional constraint for the gamma weight. None by default.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape` (tuple of\n integers, does not include the samples axis) when using this layer as the\n first layer in a model.\n\n Output shape:\n Same shape as input.\n\n Reference:\n - [Lei Ba et al., 2016](https://arxiv.org/abs/1607.06450).\n ", "desc": "Layer normalization layer (Ba et al., 2016).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LeakyReLU", "docs": "Leaky version of a Rectified Linear Unit.\n\n It allows a small gradient when the unit is not active:\n\n ```\n f(x) = alpha * x if x < 0\n f(x) = x if x >= 0\n ```\n\n Usage:\n\n >>> layer = tf.keras.layers.LeakyReLU()\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-0.9, -0.3, 0.0, 2.0]\n >>> layer = tf.keras.layers.LeakyReLU(alpha=0.1)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-0.3, -0.1, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha: Float >= 0. Negative slope coefficient. Default to 0.3.\n\n ", "desc": "Leaky version of a Rectified Linear Unit.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LocallyConnected1D", "docs": "Locally-connected layer for 1D inputs.\n\n The `LocallyConnected1D` layer works similarly to\n the `Conv1D` layer, except that weights are unshared,\n that is, a different set of filters is applied at each different patch\n of the input.\n\n Note: layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n\n Example:\n ```python\n # apply a unshared weight convolution 1d of length 3 to a sequence with\n # 10 timesteps, with 64 output filters\n model = Sequential()\n model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))\n # now model.output_shape == (None, 8, 64)\n # add a new conv1d on top\n model.add(LocallyConnected1D(32, 3))\n # now model.output_shape == (None, 6, 32)\n ```\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer, specifying the\n length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer, specifying the\n stride length of the convolution.\n padding: Currently only supports `\"valid\"` (case-insensitive). `\"same\"`\n may be supported in the future. `\"valid\"` means no padding.\n data_format: A string, one of `channels_last` (default) or\n `channels_first`. The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape `(batch, length,\n channels)` while `channels_first` corresponds to inputs with shape\n `(batch, channels, length)`. It defaults to the `image_data_format`\n value found in your Keras config file at `~/.keras/keras.json`. If you\n never set it, then it will be \"channels_last\".\n activation: Activation function to use. If you don't specify anything, no\n activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\")..\n kernel_constraint: Constraint function applied to the kernel matrix.\n bias_constraint: Constraint function applied to the bias vector.\n implementation: implementation mode, either `1`, `2`, or `3`. `1` loops\n over input spatial locations to perform the forward pass. It is\n memory-efficient but performs a lot of (small) ops. `2` stores layer\n weights in a dense but sparsely-populated 2D matrix and implements the\n forward pass as a single matrix-multiply. It uses a lot of RAM but\n performs few (large) ops. `3` stores layer weights in a sparse tensor\n and implements the forward pass as a single sparse matrix-multiply.\n How to choose:\n `1`: large, dense models,\n `2`: small models,\n `3`: large, sparse models, where \"large\" stands for large\n input/output activations (i.e. many `filters`, `input_filters`,\n large `input_size`, `output_size`), and \"sparse\" stands for few\n connections between inputs and outputs, i.e. small ratio `filters *\n input_filters * kernel_size / (input_size * strides)`, where inputs\n to and outputs of the layer are assumed to have shapes `(input_size,\n input_filters)`, `(output_size, filters)` respectively. It is\n recommended to benchmark each in the setting of interest to pick the\n most efficient one (in terms of speed and memory usage). Correct\n choice of implementation can lead to dramatic speed improvements\n (e.g. 50X), potentially at the expense of RAM. Also, only\n `padding=\"valid\"` is supported by `implementation=1`.\n Input shape:\n 3D tensor with shape: `(batch_size, steps, input_dim)`\n Output shape:\n 3D tensor with shape: `(batch_size, new_steps, filters)` `steps` value\n might have changed due to padding or strides.\n ", "desc": "Locally-connected layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LocallyConnected2D", "docs": "Locally-connected layer for 2D inputs.\n\n The `LocallyConnected2D` layer works similarly\n to the `Conv2D` layer, except that weights are unshared,\n that is, a different set of filters is applied at each\n different patch of the input.\n\n Note: layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n\n Examples:\n ```python\n # apply a 3x3 unshared weights convolution with 64 output filters on a\n 32x32 image\n # with `data_format=\"channels_last\"`:\n model = Sequential()\n model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))\n # now model.output_shape == (None, 30, 30, 64)\n # notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64\n parameters\n\n # add a 3x3 unshared weights convolution on top, with 32 output filters:\n model.add(LocallyConnected2D(32, (3, 3)))\n # now model.output_shape == (None, 28, 28, 32)\n ```\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the width\n and height of the 2D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the width and height. Can be a single integer to\n specify the same value for all spatial dimensions.\n padding: Currently only support `\"valid\"` (case-insensitive). `\"same\"`\n will be supported in future. `\"valid\"` means no padding.\n data_format: A string, one of `channels_last` (default) or\n `channels_first`. The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape `(batch, height, width,\n channels)` while `channels_first` corresponds to inputs with shape\n `(batch, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n activation: Activation function to use. If you don't specify anything, no\n activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\").\n kernel_constraint: Constraint function applied to the kernel matrix.\n bias_constraint: Constraint function applied to the bias vector.\n implementation: implementation mode, either `1`, `2`, or `3`. `1` loops\n over input spatial locations to perform the forward pass. It is\n memory-efficient but performs a lot of (small) ops. `2` stores layer\n weights in a dense but sparsely-populated 2D matrix and implements the\n forward pass as a single matrix-multiply. It uses a lot of RAM but\n performs few (large) ops. `3` stores layer weights in a sparse tensor\n and implements the forward pass as a single sparse matrix-multiply.\n How to choose:\n `1`: large, dense models,\n `2`: small models,\n `3`: large, sparse models, where \"large\" stands for large\n input/output activations (i.e. many `filters`, `input_filters`,\n large `np.prod(input_size)`, `np.prod(output_size)`), and \"sparse\"\n stands for few connections between inputs and outputs, i.e. small\n ratio `filters * input_filters * np.prod(kernel_size) /\n (np.prod(input_size) * np.prod(strides))`, where inputs to and\n outputs of the layer are assumed to have shapes `input_size +\n (input_filters,)`, `output_size + (filters,)` respectively. It is\n recommended to benchmark each in the setting of interest to pick the\n most efficient one (in terms of speed and memory usage). Correct\n choice of implementation can lead to dramatic speed improvements\n (e.g. 50X), potentially at the expense of RAM. Also, only\n `padding=\"valid\"` is supported by `implementation=1`.\n Input shape:\n 4D tensor with shape: `(samples, channels, rows, cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, rows, cols, channels)` if\n data_format='channels_last'.\n Output shape:\n 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'. `rows` and `cols` values might have changed\n due to padding.\n ", "desc": "Locally-connected layer for 2D inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LSTM", "docs": "Long Short-Term Memory layer - Hochreiter 1997.\n\n Note that this cell is not optimized for performance on GPU. Please use\n `tf.compat.v1.keras.layers.CuDNNLSTM` for better performance on GPU.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use\n for the recurrent step.\n Default: hard sigmoid (`hard_sigmoid`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs..\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix,\n used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n unit_forget_bias: Boolean.\n If True, add 1 to the bias of the forget gate at initialization.\n Setting it to true will also force `bias_initializer=\"zeros\"`.\n This is recommended in [Jozefowicz et al., 2015](\n http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n recurrent_regularizer: Regularizer function applied to\n the `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\").\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n recurrent_constraint: Constraint function applied to\n the `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the inputs.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the recurrent state.\n return_sequences: Boolean. Whether to return the last output\n in the output sequence, or the full sequence.\n return_state: Boolean. Whether to return the last state\n in addition to the output.\n go_backwards: Boolean (default False).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default False). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default False).\n If True, the network will be unrolled,\n else a symbolic loop will be used.\n Unrolling can speed-up a RNN,\n although it tends to be more memory-intensive.\n Unrolling is only suitable for short sequences.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False`\n entry indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n ", "desc": "Long Short-Term Memory layer - Hochreiter 1997.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.LSTMCell", "docs": "Cell class for the LSTM layer.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use\n for the recurrent step.\n Default: hard sigmoid (`hard_sigmoid`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix,\n used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n unit_forget_bias: Boolean.\n If True, add 1 to the bias of the forget gate at initialization.\n Setting it to true will also force `bias_initializer=\"zeros\"`.\n This is recommended in [Jozefowicz et al., 2015](\n http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n recurrent_regularizer: Regularizer function applied to\n the `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n recurrent_constraint: Constraint function applied to\n the `recurrent_kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the inputs.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for\n the linear transformation of the recurrent state.\n\n Call arguments:\n inputs: A 2D tensor.\n states: List of state tensors corresponding to the previous timestep.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n ", "desc": "Cell class for the LSTM layer.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Masking", "docs": "Masks a sequence by using a mask value to skip timesteps.\n\n For each timestep in the input tensor (dimension #1 in the tensor),\n if all values in the input tensor at that timestep\n are equal to `mask_value`, then the timestep will be masked (skipped)\n in all downstream layers (as long as they support masking).\n\n If any downstream layer does not support masking yet receives such\n an input mask, an exception will be raised.\n\n Example:\n\n Consider a Numpy data array `x` of shape `(samples, timesteps, features)`,\n to be fed to an LSTM layer. You want to mask timestep #3 and #5 because you\n lack data for these timesteps. You can:\n\n - Set `x[:, 3, :] = 0.` and `x[:, 5, :] = 0.`\n - Insert a `Masking` layer with `mask_value=0.` before the LSTM layer:\n\n ```python\n samples, timesteps, features = 32, 10, 8\n inputs = np.random.random([samples, timesteps, features]).astype(np.float32)\n inputs[:, 3, :] = 0.\n inputs[:, 5, :] = 0.\n\n model = tf.keras.models.Sequential()\n model.add(tf.keras.layers.Masking(mask_value=0.,\n input_shape=(timesteps, features)))\n model.add(tf.keras.layers.LSTM(32))\n\n output = model(inputs)\n # The time step 3 and 5 will be skipped from LSTM calculation.\n ```\n\n See [the masking and padding guide](\n https://www.tensorflow.org/guide/keras/masking_and_padding)\n for more details.\n ", "desc": "Masks a sequence by using a mask value to skip timesteps.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Maximum", "docs": "Layer that computes the maximum (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Maximum()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> maxed = tf.keras.layers.Maximum()([x1, x2])\n >>> maxed.shape\n TensorShape([5, 8])\n ", "desc": "Layer that computes the maximum (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPool1D", "docs": "Max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over a\n spatial window of size `pool_size`. The window is shifted by `strides`. The\n resulting output, when using the `\"valid\"` padding option, has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = input_shape / strides`\n\n For example, for `strides=1` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=2` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=1` and `padding=\"same\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> max_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the max pooling window.\n strides: Integer, or None. Specifies how much the pooling window moves\n for each pooling step.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPool2D", "docs": "Max pooling operation for 2D spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output,\n when using the `\"valid\"` padding option, has a spatial shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> max_pool_2d(x)\n \n\n For example, for `strides=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> max_pool_2d(x)\n \n\n Usage Example:\n\n >>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]],\n ... [[2.], [2.], [3.], [2.]],\n ... [[4.], [1.], [1.], [1.]],\n ... [[2.], [2.], [1.], [4.]]]])\n >>> output = tf.constant([[[[1], [0]],\n ... [[0], [1]]]])\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... input_shape=(4, 4, 1)))\n >>> model.compile('adam', 'mean_squared_error')\n >>> model.predict(input_image, steps=1)\n array([[[[2.],\n [4.]],\n [[4.],\n [4.]]]], dtype=float32)\n\n For example, for stride=(1, 1) and padding=\"same\":\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> max_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n window size over which to take the maximum.\n `(2, 2)` will take the max value over a 2x2 pooling window.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values. Specifies how far the pooling window moves\n for each pooling step. If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n\n Returns:\n A tensor of rank 4 representing the maximum pooled values. See above for\n output shape.\n ", "desc": "Max pooling operation for 2D spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPool3D", "docs": "Max pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: Tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.MaxPooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Max pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPooling1D", "docs": "Max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over a\n spatial window of size `pool_size`. The window is shifted by `strides`. The\n resulting output, when using the `\"valid\"` padding option, has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = input_shape / strides`\n\n For example, for `strides=1` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=2` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=1` and `padding=\"same\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> max_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the max pooling window.\n strides: Integer, or None. Specifies how much the pooling window moves\n for each pooling step.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPooling2D", "docs": "Max pooling operation for 2D spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output,\n when using the `\"valid\"` padding option, has a spatial shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> max_pool_2d(x)\n \n\n For example, for `strides=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> max_pool_2d(x)\n \n\n Usage Example:\n\n >>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]],\n ... [[2.], [2.], [3.], [2.]],\n ... [[4.], [1.], [1.], [1.]],\n ... [[2.], [2.], [1.], [4.]]]])\n >>> output = tf.constant([[[[1], [0]],\n ... [[0], [1]]]])\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... input_shape=(4, 4, 1)))\n >>> model.compile('adam', 'mean_squared_error')\n >>> model.predict(input_image, steps=1)\n array([[[[2.],\n [4.]],\n [[4.],\n [4.]]]], dtype=float32)\n\n For example, for stride=(1, 1) and padding=\"same\":\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> max_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n window size over which to take the maximum.\n `(2, 2)` will take the max value over a 2x2 pooling window.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values. Specifies how far the pooling window moves\n for each pooling step. If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n\n Returns:\n A tensor of rank 4 representing the maximum pooled values. See above for\n output shape.\n ", "desc": "Max pooling operation for 2D spatial data.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MaxPooling3D", "docs": "Max pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: Tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.MaxPooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Max pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Minimum", "docs": "Layer that computes the minimum (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Minimum()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> minned = tf.keras.layers.Minimum()([x1, x2])\n >>> minned.shape\n TensorShape([5, 8])\n ", "desc": "Layer that computes the minimum (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.MultiHeadAttention", "docs": "MultiHeadAttention layer.\n\n This is an implementation of multi-headed attention as described in the paper\n \"Attention is all you Need\" (Vaswani et al., 2017).\n If `query`, `key,` `value` are the same, then\n this is self-attention. Each timestep in `query` attends to the\n corresponding sequence in `key`, and returns a fixed-width vector.\n\n This layer first projects `query`, `key` and `value`. These are\n (effectively) a list of tensors of length `num_attention_heads`, where the\n corresponding shapes are `(batch_size, , key_dim)`,\n `(batch_size, , key_dim)`,\n `(batch_size, , value_dim)`.\n\n Then, the query and key tensors are dot-producted and scaled. These are\n softmaxed to obtain attention probabilities. The value tensors are then\n interpolated by these probabilities, then concatenated back to a single\n tensor.\n\n Finally, the result tensor with the last dimension as value_dim can take an\n linear projection and return.\n\n When using MultiHeadAttention inside a custom Layer, the custom Layer must\n implement `build()` and call MultiHeadAttention's `_build_from_signature()`.\n This enables weights to be restored correctly when the model is loaded.\n TODO(b/172609172): link to documentation about calling custom build functions\n when used in a custom Layer.\n\n Examples:\n\n Performs 1D cross-attention over two sequence inputs with an attention mask.\n Returns the additional attention weights over heads.\n\n >>> layer = MultiHeadAttention(num_heads=2, key_dim=2)\n >>> target = tf.keras.Input(shape=[8, 16])\n >>> source = tf.keras.Input(shape=[4, 16])\n >>> output_tensor, weights = layer(target, source,\n ... return_attention_scores=True)\n >>> print(output_tensor.shape)\n (None, 8, 16)\n >>> print(weights.shape)\n (None, 2, 8, 4)\n\n Performs 2D self-attention over a 5D input tensor on axes 2 and 3.\n\n >>> layer = MultiHeadAttention(num_heads=2, key_dim=2, attention_axes=(2, 3))\n >>> input_tensor = tf.keras.Input(shape=[5, 3, 4, 16])\n >>> output_tensor = layer(input_tensor, input_tensor)\n >>> print(output_tensor.shape)\n (None, 5, 3, 4, 16)\n\n Args:\n num_heads: Number of attention heads.\n key_dim: Size of each attention head for query and key.\n value_dim: Size of each attention head for value.\n dropout: Dropout probability.\n use_bias: Boolean, whether the dense layers use bias vectors/matrices.\n output_shape: The expected shape of an output tensor, besides the batch and\n sequence dims. If not specified, projects back to the key feature dim.\n attention_axes: axes over which the attention is applied. `None` means\n attention over all axes, but batch, heads, and features.\n kernel_initializer: Initializer for dense layer kernels.\n bias_initializer: Initializer for dense layer biases.\n kernel_regularizer: Regularizer for dense layer kernels.\n bias_regularizer: Regularizer for dense layer biases.\n activity_regularizer: Regularizer for dense layer activity.\n kernel_constraint: Constraint for dense layer kernels.\n bias_constraint: Constraint for dense layer kernels.\n\n Call arguments:\n query: Query `Tensor` of shape `(B, T, dim)`.\n value: Value `Tensor` of shape `(B, S, dim)`.\n key: Optional key `Tensor` of shape `(B, S, dim)`. If not given, will use\n `value` for both `key` and `value`, which is the most common case.\n attention_mask: a boolean mask of shape `(B, T, S)`, that prevents\n attention to certain positions. The boolean mask specifies which query\n elements can attend to which key elements, 1 indicates attention and 0\n indicates no attention. Broadcasting can happen for the missing batch\n dimensions and the head dimension.\n return_attention_scores: A boolean to indicate whether the output should\n be `(attention_output, attention_scores)` if `True`, or `attention_output`\n if `False`. Defaults to `False`.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n Defaults to either using the training mode of the parent layer/model,\n or False (inference) if there is no parent layer.\n\n Returns:\n attention_output: The result of the computation, of shape `(B, T, E)`,\n where `T` is for target sequence shapes and `E` is the query input last\n dimension if `output_shape` is `None`. Otherwise, the multi-head outputs\n are project to the shape specified by `output_shape`.\n attention_scores: [Optional] multi-head attention coefficients over\n attention axes.\n ", "desc": "MultiHeadAttention layer.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Multiply", "docs": "Layer that multiplies (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Multiply()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> multiplied = tf.keras.layers.Multiply()([x1, x2])\n >>> multiplied.shape\n TensorShape([5, 8])\n ", "desc": "Layer that multiplies (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Permute", "docs": "Permutes the dimensions of the input according to a given pattern.\n\n Useful e.g. connecting RNNs and convnets.\n\n Example:\n\n ```python\n model = Sequential()\n model.add(Permute((2, 1), input_shape=(10, 64)))\n # now: model.output_shape == (None, 64, 10)\n # note: `None` is the batch dimension\n ```\n\n Args:\n dims: Tuple of integers. Permutation pattern does not include the\n samples dimension. Indexing starts at 1.\n For instance, `(2, 1)` permutes the first and second dimensions\n of the input.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same as the input shape, but with the dimensions re-ordered according\n to the specified pattern.\n ", "desc": "Permutes the dimensions of the input according to a given pattern.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.PReLU", "docs": "Parametric Rectified Linear Unit.\n\n It follows:\n\n ```\n f(x) = alpha * x for x < 0\n f(x) = x for x >= 0\n ```\n\n where `alpha` is a learned array with the same shape as x.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha_initializer: Initializer function for the weights.\n alpha_regularizer: Regularizer for the weights.\n alpha_constraint: Constraint for the weights.\n shared_axes: The axes along which to share learnable\n parameters for the activation function.\n For example, if the incoming feature maps\n are from a 2D convolution\n with output shape `(batch, height, width, channels)`,\n and you wish to share parameters across space\n so that each filter only has one set of parameters,\n set `shared_axes=[1, 2]`.\n ", "desc": "Parametric Rectified Linear Unit.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ReLU", "docs": "Rectified Linear Unit activation function.\n\n With default values, it returns element-wise `max(x, 0)`.\n\n Otherwise, it follows:\n\n ```\n f(x) = max_value if x >= max_value\n f(x) = x if threshold <= x < max_value\n f(x) = negative_slope * (x - threshold) otherwise\n ```\n\n Usage:\n\n >>> layer = tf.keras.layers.ReLU()\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.ReLU(max_value=1.0)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 1.0]\n >>> layer = tf.keras.layers.ReLU(negative_slope=1.0)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-3.0, -1.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.ReLU(threshold=1.5)\n >>> output = layer([-3.0, -1.0, 1.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n max_value: Float >= 0. Maximum activation value. Default to None, which\n means unlimited.\n negative_slope: Float >= 0. Negative slope coefficient. Default to 0.\n threshold: Float >= 0. Threshold value for thresholded activation. Default\n to 0.\n ", "desc": "Rectified Linear Unit activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.RepeatVector", "docs": "Repeats the input n times.\n\n Example:\n\n ```python\n model = Sequential()\n model.add(Dense(32, input_dim=32))\n # now: model.output_shape == (None, 32)\n # note: `None` is the batch dimension\n\n model.add(RepeatVector(3))\n # now: model.output_shape == (None, 3, 32)\n ```\n\n Args:\n n: Integer, repetition factor.\n Input shape: 2D tensor of shape `(num_samples, features)`.\n Output shape: 3D tensor of shape `(num_samples, n, features)`.\n ", "desc": "Repeats the input n times.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Reshape", "docs": "Layer that reshapes inputs into the given shape.\n\n Input shape:\n Arbitrary, although all dimensions in the input shape must be known/fixed.\n Use the keyword argument `input_shape` (tuple of integers, does not include\n the samples/batch size axis) when using this layer as the first layer\n in a model.\n\n Output shape:\n `(batch_size,) + target_shape`\n\n Example:\n\n >>> # as first layer in a Sequential model\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Reshape((3, 4), input_shape=(12,)))\n >>> # model.output_shape == (None, 3, 4), `None` is the batch size.\n >>> model.output_shape\n (None, 3, 4)\n\n >>> # as intermediate layer in a Sequential model\n >>> model.add(tf.keras.layers.Reshape((6, 2)))\n >>> model.output_shape\n (None, 6, 2)\n\n >>> # also supports shape inference using `-1` as dimension\n >>> model.add(tf.keras.layers.Reshape((-1, 2, 2)))\n >>> model.output_shape\n (None, 3, 2, 2)\n ", "desc": "Layer that reshapes inputs into the given shape.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.RNN", "docs": "Base class for recurrent layers.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Args:\n cell: A RNN cell instance or a list of RNN cell instances.\n A RNN cell is a class that has:\n - A `call(input_at_t, states_at_t)` method, returning\n `(output_at_t, states_at_t_plus_1)`. The call method of the\n cell can also take the optional argument `constants`, see\n section \"Note on passing external constants\" below.\n - A `state_size` attribute. This can be a single integer\n (single state) in which case it is the size of the recurrent\n state. This can also be a list/tuple of integers (one size per state).\n The `state_size` can also be TensorShape or tuple/list of\n TensorShape, to represent high dimension state.\n - A `output_size` attribute. This can be a single integer or a\n TensorShape, which represent the shape of the output. For backward\n compatible reason, if this attribute is not available for the\n cell, the value will be inferred by the first element of the\n `state_size`.\n - A `get_initial_state(inputs=None, batch_size=None, dtype=None)`\n method that creates a tensor meant to be fed to `call()` as the\n initial state, if the user didn't specify any initial state via other\n means. The returned initial state should have a shape of\n [batch_size, cell.state_size]. The cell might choose to create a\n tensor full of zeros, or full of other values based on the cell's\n implementation.\n `inputs` is the input tensor to the RNN layer, which should\n contain the batch size as its shape[0], and also dtype. Note that\n the shape[0] might be `None` during the graph construction. Either\n the `inputs` or the pair of `batch_size` and `dtype` are provided.\n `batch_size` is a scalar tensor that represents the batch size\n of the inputs. `dtype` is `tf.DType` that represents the dtype of\n the inputs.\n For backward compatibility, if this method is not implemented\n by the cell, the RNN layer will create a zero filled tensor with the\n size of [batch_size, cell.state_size].\n In the case that `cell` is a list of RNN cell instances, the cells\n will be stacked on top of each other in the RNN, resulting in an\n efficient stacked RNN.\n return_sequences: Boolean (default `False`). Whether to return the last\n output in the output sequence, or the full sequence.\n return_state: Boolean (default `False`). Whether to return the last state\n in addition to the output.\n go_backwards: Boolean (default `False`).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default `False`). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default `False`).\n If True, the network will be unrolled, else a symbolic loop will be used.\n Unrolling can speed-up a RNN, although it tends to be more\n memory-intensive. Unrolling is only suitable for short sequences.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n zero_output_for_mask: Boolean (default `False`).\n Whether the output should use zeros for the masked timesteps. Note that\n this field is only used when `return_sequences` is True and mask is\n provided. It can useful if you want to reuse the raw output sequence of\n the RNN without interference from the masked timesteps, eg, merging\n bidirectional RNNs.\n\n Call arguments:\n inputs: Input tensor.\n mask: Binary tensor of shape `[batch_size, timesteps]` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False`\n entry indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is for use with cells that use dropout.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n constants: List of constant tensors to be passed to the cell at each\n timestep.\n\n Input shape:\n N-D tensor with shape `[batch_size, timesteps, ...]` or\n `[timesteps, batch_size, ...]` when time_major is True.\n\n Output shape:\n - If `return_state`: a list of tensors. The first tensor is\n the output. The remaining tensors are the last states,\n each with shape `[batch_size, state_size]`, where `state_size` could\n be a high dimension tensor shape.\n - If `return_sequences`: N-D tensor with shape\n `[batch_size, timesteps, output_size]`, where `output_size` could\n be a high dimension tensor shape, or\n `[timesteps, batch_size, output_size]` when `time_major` is True.\n - Else, N-D tensor with shape `[batch_size, output_size]`, where\n `output_size` could be a high dimension tensor shape.\n\n Masking:\n This layer supports masking for input data with a variable number\n of timesteps. To introduce masks to your data,\n use an [tf.keras.layers.Embedding] layer with the `mask_zero` parameter\n set to `True`.\n\n Note on using statefulness in RNNs:\n You can set RNN layers to be 'stateful', which means that the states\n computed for the samples in one batch will be reused as initial states\n for the samples in the next batch. This assumes a one-to-one mapping\n between samples in different successive batches.\n\n To enable statefulness:\n - Specify `stateful=True` in the layer constructor.\n - Specify a fixed batch size for your model, by passing\n If sequential model:\n `batch_input_shape=(...)` to the first layer in your model.\n Else for functional model with 1 or more Input layers:\n `batch_shape=(...)` to all the first layers in your model.\n This is the expected shape of your inputs\n *including the batch size*.\n It should be a tuple of integers, e.g. `(32, 10, 100)`.\n - Specify `shuffle=False` when calling `fit()`.\n\n To reset the states of your model, call `.reset_states()` on either\n a specific layer, or on your entire model.\n\n Note on specifying the initial state of RNNs:\n You can specify the initial state of RNN layers symbolically by\n calling them with the keyword argument `initial_state`. The value of\n `initial_state` should be a tensor or list of tensors representing\n the initial state of the RNN layer.\n\n You can specify the initial state of RNN layers numerically by\n calling `reset_states` with the keyword argument `states`. The value of\n `states` should be a numpy array or list of numpy arrays representing\n the initial state of the RNN layer.\n\n Note on passing external constants to RNNs:\n You can pass \"external\" constants to the cell using the `constants`\n keyword argument of `RNN.__call__` (as well as `RNN.call`) method. This\n requires that the `cell.call` method accepts the same keyword argument\n `constants`. Such constants can be used to condition the cell\n transformation on additional static inputs (not changing over time),\n a.k.a. an attention mechanism.\n\n Examples:\n\n ```python\n # First, let's define a RNN Cell, as a layer subclass.\n\n class MinimalRNNCell(keras.layers.Layer):\n\n def __init__(self, units, **kwargs):\n self.units = units\n self.state_size = units\n super(MinimalRNNCell, self).__init__(**kwargs)\n\n def build(self, input_shape):\n self.kernel = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='uniform',\n name='kernel')\n self.recurrent_kernel = self.add_weight(\n shape=(self.units, self.units),\n initializer='uniform',\n name='recurrent_kernel')\n self.built = True\n\n def call(self, inputs, states):\n prev_output = states[0]\n h = backend.dot(inputs, self.kernel)\n output = h + backend.dot(prev_output, self.recurrent_kernel)\n return output, [output]\n\n # Let's use this cell in a RNN layer:\n\n cell = MinimalRNNCell(32)\n x = keras.Input((None, 5))\n layer = RNN(cell)\n y = layer(x)\n\n # Here's how to use the cell to build a stacked RNN:\n\n cells = [MinimalRNNCell(32), MinimalRNNCell(64)]\n x = keras.Input((None, 5))\n layer = RNN(cells)\n y = layer(x)\n ```\n ", "desc": "Base class for recurrent layers.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SeparableConv1D", "docs": "Depthwise separable 1D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"`, or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input. `\"causal\"` results in causal\n (dilated) convolutions, e.g. `output[t]` does not depend on `input[t+1:]`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel (see `keras.regularizers`).\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel (see `keras.regularizers`).\n bias_regularizer: Optional regularizer for the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Optional regularizer function for the output\n (see `keras.regularizers`).\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training\n (see `keras.constraints`).\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`\n (see `keras.constraints`).\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`\n (see `keras.constraints`).\n trainable: Boolean, if `True` the weights of this layer will be marked as\n trainable (and listed in `layer.trainable_weights`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, channels, steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, steps, channels)` if data_format='channels_last'.\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, filters, new_steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, new_steps, filters)` if data_format='channels_last'.\n `new_steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(separableconv1d(inputs, kernel) + bias)`.\n ", "desc": "Depthwise separable 1D convolution.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SeparableConv2D", "docs": "Depthwise separable 2D convolution.\n\n Separable convolutions consist of first performing\n a depthwise spatial convolution\n (which acts on each input channel separately)\n followed by a pointwise convolution which mixes the resulting\n output channels. The `depth_multiplier` argument controls how many\n output channels are generated per input channel in the depthwise step.\n\n Intuitively, separable convolutions can be understood as\n a way to factorize a convolution kernel into two smaller kernels,\n or as an extreme version of an Inception block.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions. Current implementation only supports equal\n length strides in the row and column dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels\n for each input channel.\n The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Regularizer function applied to\n the depthwise kernel matrix (see `keras.regularizers`).\n pointwise_regularizer: Regularizer function applied to\n the pointwise kernel matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to\n the depthwise kernel matrix\n (see `keras.constraints`).\n pointwise_constraint: Constraint function applied to\n the pointwise kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(separableconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ", "desc": "Depthwise separable 2D convolution.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SeparableConvolution1D", "docs": "Depthwise separable 1D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"`, or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input. `\"causal\"` results in causal\n (dilated) convolutions, e.g. `output[t]` does not depend on `input[t+1:]`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel (see `keras.regularizers`).\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel (see `keras.regularizers`).\n bias_regularizer: Optional regularizer for the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Optional regularizer function for the output\n (see `keras.regularizers`).\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training\n (see `keras.constraints`).\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`\n (see `keras.constraints`).\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`\n (see `keras.constraints`).\n trainable: Boolean, if `True` the weights of this layer will be marked as\n trainable (and listed in `layer.trainable_weights`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, channels, steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, steps, channels)` if data_format='channels_last'.\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, filters, new_steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, new_steps, filters)` if data_format='channels_last'.\n `new_steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(separableconv1d(inputs, kernel) + bias)`.\n ", "desc": "Depthwise separable 1D convolution.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SeparableConvolution2D", "docs": "Depthwise separable 2D convolution.\n\n Separable convolutions consist of first performing\n a depthwise spatial convolution\n (which acts on each input channel separately)\n followed by a pointwise convolution which mixes the resulting\n output channels. The `depth_multiplier` argument controls how many\n output channels are generated per input channel in the depthwise step.\n\n Intuitively, separable convolutions can be understood as\n a way to factorize a convolution kernel into two smaller kernels,\n or as an extreme version of an Inception block.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions. Current implementation only supports equal\n length strides in the row and column dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels\n for each input channel.\n The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Regularizer function applied to\n the depthwise kernel matrix (see `keras.regularizers`).\n pointwise_regularizer: Regularizer function applied to\n the pointwise kernel matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to\n the depthwise kernel matrix\n (see `keras.constraints`).\n pointwise_constraint: Constraint function applied to\n the pointwise kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(separableconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ", "desc": "Depthwise separable 2D convolution.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.serialize", "docs": "Serializes a `Layer` object into a JSON-compatible representation.\n\n Args:\n layer: The `Layer` object to serialize.\n\n Returns:\n A JSON-serializable dict representing the object's config.\n\n Example:\n\n ```python\n from pprint import pprint\n model = tf.keras.models.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(32, activation='relu'))\n\n pprint(tf.keras.layers.serialize(model))\n # prints the configuration of the model, as a dict.\n ", "desc": "Serializes a `Layer` object into a JSON-compatible representation.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SimpleRNN", "docs": "Fully-connected RNN where the output is to be fed back to input.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass None, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\"). Default: `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for the linear transformation of the inputs.\n Default: 0.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for the linear transformation of the\n recurrent state. Default: 0.\n return_sequences: Boolean. Whether to return the last output\n in the output sequence, or the full sequence. Default: `False`.\n return_state: Boolean. Whether to return the last state\n in addition to the output. Default: `False`\n go_backwards: Boolean (default False).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default False). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default False).\n If True, the network will be unrolled,\n else a symbolic loop will be used.\n Unrolling can speed-up a RNN,\n although it tends to be more memory-intensive.\n Unrolling is only suitable for short sequences.\n\n Call arguments:\n inputs: A 3D tensor, with shape `[batch, timesteps, feature]`.\n mask: Binary tensor of shape `[batch, timesteps]` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False` entry\n indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n\n Examples:\n\n ```python\n inputs = np.random.random([32, 10, 8]).astype(np.float32)\n simple_rnn = tf.keras.layers.SimpleRNN(4)\n\n output = simple_rnn(inputs) # The output has shape `[32, 4]`.\n\n simple_rnn = tf.keras.layers.SimpleRNN(\n 4, return_sequences=True, return_state=True)\n\n # whole_sequence_output has shape `[32, 10, 4]`.\n # final_state has shape `[32, 4]`.\n whole_sequence_output, final_state = simple_rnn(inputs)\n ```\n ", "desc": "Fully-connected RNN where the output is to be fed back to input.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SimpleRNNCell", "docs": "Cell class for SimpleRNN.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This class processes one step within the whole time sequence input, whereas\n `tf.keras.layer.SimpleRNN` processes the whole sequence.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n\n Call arguments:\n inputs: A 2D tensor, with shape of `[batch, feature]`.\n states: A 2D tensor with shape of `[batch, units]`, which is the state from\n the previous time step. For timestep 0, the initial state provided by user\n will be feed to cell.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n\n Examples:\n\n ```python\n inputs = np.random.random([32, 10, 8]).astype(np.float32)\n rnn = tf.keras.layers.RNN(tf.keras.layers.SimpleRNNCell(4))\n\n output = rnn(inputs) # The output has shape `[32, 4]`.\n\n rnn = tf.keras.layers.RNN(\n tf.keras.layers.SimpleRNNCell(4),\n return_sequences=True,\n return_state=True)\n\n # whole_sequence_output has shape `[32, 10, 4]`.\n # final_state has shape `[32, 4]`.\n whole_sequence_output, final_state = rnn(inputs)\n ```\n ", "desc": "Cell class for SimpleRNN.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Softmax", "docs": "Softmax activation function.\n\n Example without mask:\n\n >>> inp = np.asarray([1., 2., 1.])\n >>> layer = tf.keras.layers.Softmax()\n >>> layer(inp).numpy()\n array([0.21194157, 0.5761169 , 0.21194157], dtype=float32)\n >>> mask = np.asarray([True, False, True], dtype=bool)\n >>> layer(inp, mask).numpy()\n array([0.5, 0. , 0.5], dtype=float32)\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n axis: Integer, or list of Integers, axis along which the softmax\n normalization is applied.\n Call arguments:\n inputs: The inputs, or logits to the softmax layer.\n mask: A boolean mask of the same shape as `inputs`. Defaults to `None`. The\n mask specifies 1 to keep and 0 to mask.\n\n Returns:\n softmaxed output with the same shape as `inputs`.\n ", "desc": "Softmax activation function.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SpatialDropout1D", "docs": "Spatial 1D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 1D feature maps instead of individual elements. If adjacent frames\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout1D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n Call arguments:\n inputs: A 3D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 3D tensor with shape: `(samples, timesteps, channels)`\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 1D version of Dropout.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SpatialDropout2D", "docs": "Spatial 2D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 2D feature maps instead of individual elements. If adjacent pixels\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout2D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode,\n the channels dimension (the depth) is at index 1, in 'channels_last' mode\n is it at index 3. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be \"channels_last\".\n Call arguments:\n inputs: A 4D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 4D tensor with shape: `(samples, channels, rows, cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, rows, cols, channels)` if\n data_format='channels_last'.\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 2D version of Dropout.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.SpatialDropout3D", "docs": "Spatial 3D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 3D feature maps instead of individual elements. If adjacent voxels\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout3D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode,\n the channels dimension (the depth) is at index 1, in 'channels_last' mode\n is it at index 4. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be \"channels_last\".\n Call arguments:\n inputs: A 5D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 5D tensor with shape: `(samples, channels, dim1, dim2, dim3)` if\n data_format='channels_first'\n or 5D tensor with shape: `(samples, dim1, dim2, dim3, channels)` if\n data_format='channels_last'.\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 3D version of Dropout.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.StackedRNNCells", "docs": "Wrapper allowing a stack of RNN cells to behave as a single cell.\n\n Used to implement efficient stacked RNNs.\n\n Args:\n cells: List of RNN cell instances.\n\n Examples:\n\n ```python\n batch_size = 3\n sentence_max_length = 5\n n_features = 2\n new_shape = (batch_size, sentence_max_length, n_features)\n x = tf.constant(np.reshape(np.arange(30), new_shape), dtype = tf.float32)\n\n rnn_cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)]\n stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells)\n lstm_layer = tf.keras.layers.RNN(stacked_lstm)\n\n result = lstm_layer(x)\n ```\n ", "desc": "Wrapper allowing a stack of RNN cells to behave as a single cell.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Subtract", "docs": "Layer that subtracts two inputs.\n\n It takes as input a list of tensors of size 2,\n both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]),\n also of the same shape.\n\n Examples:\n\n ```python\n import keras\n\n input1 = keras.layers.Input(shape=(16,))\n x1 = keras.layers.Dense(8, activation='relu')(input1)\n input2 = keras.layers.Input(shape=(32,))\n x2 = keras.layers.Dense(8, activation='relu')(input2)\n # Equivalent to subtracted = keras.layers.subtract([x1, x2])\n subtracted = keras.layers.Subtract()([x1, x2])\n\n out = keras.layers.Dense(4)(subtracted)\n model = keras.models.Model(inputs=[input1, input2], outputs=out)\n ```\n ", "desc": "Layer that subtracts two inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ThresholdedReLU", "docs": "Thresholded Rectified Linear Unit.\n\n It follows:\n\n ```\n f(x) = x for x > theta\n f(x) = 0 otherwise`\n ```\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n theta: Float >= 0. Threshold location of activation.\n ", "desc": "Thresholded Rectified Linear Unit.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.TimeDistributed", "docs": "This wrapper allows to apply a layer to every temporal slice of an input.\n\n Every input should be at least 3D, and the dimension of index one of the\n first input will be considered to be the temporal dimension.\n\n Consider a batch of 32 video samples, where each sample is a 128x128 RGB image\n with `channels_last` data format, across 10 timesteps.\n The batch input shape is `(32, 10, 128, 128, 3)`.\n\n You can then use `TimeDistributed` to apply the same `Conv2D` layer to each\n of the 10 timesteps, independently:\n\n >>> inputs = tf.keras.Input(shape=(10, 128, 128, 3))\n >>> conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3))\n >>> outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs)\n >>> outputs.shape\n TensorShape([None, 10, 126, 126, 64])\n\n Because `TimeDistributed` applies the same instance of `Conv2D` to each of the\n timestamps, the same set of weights are used at each timestamp.\n\n Args:\n layer: a `tf.keras.layers.Layer` instance.\n\n Call arguments:\n inputs: Input tensor of shape (batch, time, ...) or nested tensors,\n and each of which has shape (batch, time, ...).\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the\n wrapped layer (only if the layer supports this argument).\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether\n a given timestep should be masked. This argument is passed to the\n wrapped layer (only if the layer supports this argument).\n\n Raises:\n ValueError: If not initialized with a `tf.keras.layers.Layer` instance.\n ", "desc": "This wrapper allows to apply a layer to every temporal slice of an input.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.UpSampling1D", "docs": "Upsampling layer for 1D inputs.\n\n Repeats each temporal step `size` times along the time axis.\n\n Examples:\n\n >>> input_shape = (2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1 2]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 9 10 11]]]\n >>> y = tf.keras.layers.UpSampling1D(size=2)(x)\n >>> print(y)\n tf.Tensor(\n [[[ 0 1 2]\n [ 0 1 2]\n [ 3 4 5]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 6 7 8]\n [ 9 10 11]\n [ 9 10 11]]], shape=(2, 4, 3), dtype=int64)\n\n Args:\n size: Integer. Upsampling factor.\n\n Input shape:\n 3D tensor with shape: `(batch_size, steps, features)`.\n\n Output shape:\n 3D tensor with shape: `(batch_size, upsampled_steps, features)`.\n ", "desc": "Upsampling layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.UpSampling2D", "docs": "Upsampling layer for 2D inputs.\n\n Repeats the rows and columns of the data\n by `size[0]` and `size[1]` respectively.\n\n Examples:\n\n >>> input_shape = (2, 2, 1, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[[ 0 1 2]]\n [[ 3 4 5]]]\n [[[ 6 7 8]]\n [[ 9 10 11]]]]\n >>> y = tf.keras.layers.UpSampling2D(size=(1, 2))(x)\n >>> print(y)\n tf.Tensor(\n [[[[ 0 1 2]\n [ 0 1 2]]\n [[ 3 4 5]\n [ 3 4 5]]]\n [[[ 6 7 8]\n [ 6 7 8]]\n [[ 9 10 11]\n [ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64)\n\n Args:\n size: Int, or tuple of 2 integers.\n The upsampling factors for rows and columns.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n interpolation: A string, one of `\"area\"`, `\"bicubic\"`, `\"bilinear\"`,\n `\"gaussian\"`, `\"lanczos3\"`, `\"lanczos5\"`, `\"mitchellcubic\"`, `\"nearest\"`.\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, upsampled_rows, upsampled_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, upsampled_rows, upsampled_cols)`\n ", "desc": "Upsampling layer for 2D inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.UpSampling3D", "docs": "Upsampling layer for 3D inputs.\n\n Repeats the 1st, 2nd and 3rd dimensions\n of the data by `size[0]`, `size[1]` and `size[2]` respectively.\n\n Examples:\n\n >>> input_shape = (2, 1, 2, 1, 3)\n >>> x = tf.constant(1, shape=input_shape)\n >>> y = tf.keras.layers.UpSampling3D(size=2)(x)\n >>> print(y.shape)\n (2, 2, 4, 2, 3)\n\n Args:\n size: Int, or tuple of 3 integers.\n The upsampling factors for dim1, dim2 and dim3.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, dim1, dim2, dim3, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, dim1, dim2, dim3)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)`\n ", "desc": "Upsampling layer for 3D inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.Wrapper", "docs": "Abstract wrapper base class.\n\n Wrappers take another layer and augment it in various ways.\n Do not use this class as a layer, it is only an abstract base class.\n Two usable wrappers are the `TimeDistributed` and `Bidirectional` wrappers.\n\n Args:\n layer: The layer to be wrapped.\n ", "desc": "Abstract wrapper base class.", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ZeroPadding1D", "docs": "Zero-padding layer for 1D input (e.g. temporal sequence).\n\n Examples:\n\n >>> input_shape = (2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1 2]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 9 10 11]]]\n >>> y = tf.keras.layers.ZeroPadding1D(padding=2)(x)\n >>> print(y)\n tf.Tensor(\n [[[ 0 0 0]\n [ 0 0 0]\n [ 0 1 2]\n [ 3 4 5]\n [ 0 0 0]\n [ 0 0 0]]\n [[ 0 0 0]\n [ 0 0 0]\n [ 6 7 8]\n [ 9 10 11]\n [ 0 0 0]\n [ 0 0 0]]], shape=(2, 6, 3), dtype=int64)\n\n Args:\n padding: Int, or tuple of int (length 2), or dictionary.\n - If int:\n How many zeros to add at the beginning and end of\n the padding dimension (axis 1).\n - If tuple of int (length 2):\n How many zeros to add at the beginning and the end of\n the padding dimension (`(left_pad, right_pad)`).\n\n Input shape:\n 3D tensor with shape `(batch_size, axis_to_pad, features)`\n\n Output shape:\n 3D tensor with shape `(batch_size, padded_axis, features)`\n ", "desc": "Zero-padding layer for 1D input (e.g. temporal sequence).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ZeroPadding2D", "docs": "Zero-padding layer for 2D input (e.g. picture).\n\n This layer can add rows and columns of zeros\n at the top, bottom, left and right side of an image tensor.\n\n Examples:\n\n >>> input_shape = (1, 1, 2, 2)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[[0 1]\n [2 3]]]]\n >>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x)\n >>> print(y)\n tf.Tensor(\n [[[[0 0]\n [0 0]\n [0 0]\n [0 0]]\n [[0 0]\n [0 1]\n [2 3]\n [0 0]]\n [[0 0]\n [0 0]\n [0 0]\n [0 0]]]], shape=(1, 3, 4, 2), dtype=int64)\n\n Args:\n padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.\n - If int: the same symmetric padding\n is applied to height and width.\n - If tuple of 2 ints:\n interpreted as two different\n symmetric padding values for height and width:\n `(symmetric_height_pad, symmetric_width_pad)`.\n - If tuple of 2 tuples of 2 ints:\n interpreted as\n `((top_pad, bottom_pad), (left_pad, right_pad))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, padded_rows, padded_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, padded_rows, padded_cols)`\n ", "desc": "Zero-padding layer for 2D input (e.g. picture).", "type": "API"}, {"name": "tf.compat.v1.keras.layers.ZeroPadding3D", "docs": "Zero-padding layer for 3D data (spatial or spatio-temporal).\n\n Examples:\n\n >>> input_shape = (1, 1, 2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.ZeroPadding3D(padding=2)(x)\n >>> print(y.shape)\n (1, 5, 6, 6, 3)\n\n Args:\n padding: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.\n - If int: the same symmetric padding\n is applied to height and width.\n - If tuple of 3 ints:\n interpreted as two different\n symmetric padding values for height and width:\n `(symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad)`.\n - If tuple of 3 tuples of 2 ints:\n interpreted as\n `((left_dim1_pad, right_dim1_pad), (left_dim2_pad,\n right_dim2_pad), (left_dim3_pad, right_dim3_pad))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_axis_to_pad, second_axis_to_pad,\n third_axis_to_pad)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_padded_axis, second_padded_axis, third_axis_to_pad,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_padded_axis, second_padded_axis,\n third_axis_to_pad)`\n ", "desc": "Zero-padding layer for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.compat.v1.keras.losses", "docs": "Built-in loss functions.\n", "desc": "Built-in loss functions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.BinaryCrossentropy", "docs": "Computes the cross-entropy loss between true labels and predicted labels.\n\n Use this cross-entropy loss for binary (0 or 1) classification applications.\n The loss function requires the following inputs:\n\n - `y_true` (true label): This is either 0 or 1.\n - `y_pred` (predicted value): This is the model's prediction, i.e, a single\n floating-point value which either represents a\n [logit](https://en.wikipedia.org/wiki/Logit), (i.e, value in [-inf, inf]\n when `from_logits=True`) or a probability (i.e, value in [0., 1.] when\n `from_logits=False`).\n\n **Recommended Usage:** (set `from_logits=True`)\n\n With `tf.keras` API:\n\n ```python\n model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n ....\n )\n ```\n\n As a standalone function:\n\n >>> # Example 1: (batch_size = 1, number of samples = 4)\n >>> y_true = [0, 1, 0, 0]\n >>> y_pred = [-18.6, 0.51, 2.94, -12.8]\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n\n >>> # Example 2: (batch_size = 2, number of samples = 4)\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[-18.6, 0.51], [2.94, -12.8]]\n >>> # Using default 'auto'/'sum_over_batch_size' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n >>> # Using 'sample_weight' attribute\n >>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.243\n >>> # Using 'sum' reduction` type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> bce(y_true, y_pred).numpy()\n 1.730\n >>> # Using 'none' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> bce(y_true, y_pred).numpy()\n array([0.235, 1.496], dtype=float32)\n\n **Default Usage:** (set `from_logits=False`)\n\n >>> # Make the following updates to the above \"Recommended Usage\" section\n >>> # 1. Set `from_logits=False`\n >>> tf.keras.losses.BinaryCrossentropy() # OR ...('from_logits=False')\n >>> # 2. Update `y_pred` to use probabilities instead of logits\n >>> y_pred = [0.6, 0.3, 0.2, 0.8] # OR [[0.6, 0.3], [0.2, 0.8]]\n ", "desc": "Computes the cross-entropy loss between true labels and predicted labels.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.categorical_hinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 3, size=(2,))\n >>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> pos = np.sum(y_true * y_pred, axis=-1)\n >>> neg = np.amax((1. - y_true) * y_pred, axis=-1)\n >>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be\n either `{-1, +1}` or `{0, 1}` (i.e. a one-hot-encoded tensor).\n y_pred: The predicted values.\n\n Returns:\n Categorical hinge loss values.\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.CategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided in a `one_hot` representation. If you want to\n provide labels as integers, please use `SparseCategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature.\n\n In the snippet below, there is `# classes` floating pointing values per\n example. The shape of both `y_pred` and `y_true` are\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy()\n >>> cce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.CategoricalHinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge()\n >>> h(y_true, y_pred).numpy()\n 1.4\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.6\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.8\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.2, 1.6], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge())\n ```\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.cosine", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.cosine_proximity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.cosine_similarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.CosineSimilarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and targets.\n If either `y_true` or `y_pred` is a zero vector, cosine similarity will be 0\n regardless of the proximity between predictions and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = -((0. + 0.) + (0.5 + 0.5)) / 2\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.5\n\n >>> # Calling with 'sample_weight'.\n >>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n -0.0999\n\n >>> # Using 'sum' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.999\n\n >>> # Using 'none' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cosine_loss(y_true, y_pred).numpy()\n array([-0., -0.999], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))\n ```\n\n Args:\n axis: The axis along which the cosine similarity is computed\n (the features axis). Defaults to -1.\n reduction: Type of `tf.keras.losses.Reduction` to apply to loss.\n Default value is `AUTO`. `AUTO` indicates that the reduction option will\n be determined by the usage context. For almost all cases this defaults to\n `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of\n built-in training loops such as `tf.keras` `compile` and `fit`, using\n `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. Please see this\n custom training [tutorial]\n (https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details.\n name: Optional name for the instance.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.deserialize", "docs": "Deserializes a serialized loss class/function instance.\n\n Args:\n name: Loss configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Loss` instance or a loss function.\n ", "desc": "Deserializes a serialized loss class/function instance.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.get", "docs": "Retrieves a Keras loss as a `function`/`Loss` class instance.\n\n The `identifier` may be the string name of a loss function or `Loss` class.\n\n >>> loss = tf.keras.losses.get(\"categorical_crossentropy\")\n >>> type(loss)\n \n >>> loss = tf.keras.losses.get(\"CategoricalCrossentropy\")\n >>> type(loss)\n \n\n You can also specify `config` of the loss to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Loss` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> loss = tf.keras.losses.get(identifier)\n >>> type(loss)\n \n\n Args:\n identifier: A loss identifier. One of None or string name of a loss\n function/class or loss configuration dictionary or a loss function or a\n loss class instance.\n\n Returns:\n A Keras loss as a `function`/ `Loss` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras loss as a `function`/`Loss` class instance.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.Hinge", "docs": "Computes the hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(1 - y_true * y_pred, 0)`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Hinge()\n >>> h(y_true, y_pred).numpy()\n 1.3\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.55\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.6\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.1, 1.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge())\n ```\n ", "desc": "Computes the hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.Huber", "docs": "Computes the Huber loss between `y_true` and `y_pred`.\n\n For each value x in `error = y_true - y_pred`:\n\n ```\n loss = 0.5 * x^2 if |x| <= d\n loss = 0.5 * d^2 + d * (|x| - d) if |x| > d\n ```\n where d is `delta`. See: https://en.wikipedia.org/wiki/Huber_loss\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Huber()\n >>> h(y_true, y_pred).numpy()\n 0.155\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.09\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 0.31\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([0.18, 0.13], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Huber())\n ```\n ", "desc": "Computes the Huber loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.KLDivergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> kl = tf.keras.losses.KLDivergence()\n >>> kl(y_true, y_pred).numpy()\n 0.458\n\n >>> # Calling with 'sample_weight'.\n >>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.366\n\n >>> # Using 'sum' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> kl(y_true, y_pred).numpy()\n 0.916\n\n >>> # Using 'none' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> kl(y_true, y_pred).numpy()\n array([0.916, -3.08e-06], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())\n ```\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.LogCosh", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`,\n where x is the error `y_pred - y_true`.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> l = tf.keras.losses.LogCosh()\n >>> l(y_true, y_pred).numpy()\n 0.108\n\n >>> # Calling with 'sample_weight'.\n >>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.087\n\n >>> # Using 'sum' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> l(y_true, y_pred).numpy()\n 0.217\n\n >>> # Using 'none' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> l(y_true, y_pred).numpy()\n array([0.217, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.Loss", "docs": "Loss base class.\n\n To be implemented by subclasses:\n * `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.\n\n Example subclass implementation:\n\n ```python\n class MeanSquaredError(Loss):\n\n def call(self, y_true, y_pred):\n return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)\n ```\n\n When used with `tf.distribute.Strategy`, outside of built-in training loops\n such as `tf.keras` `compile` and `fit`, please use 'SUM' or 'NONE' reduction\n types, and reduce losses explicitly in your training loop. Using 'AUTO' or\n 'SUM_OVER_BATCH_SIZE' will raise an error.\n\n Please see this custom training [tutorial](\n https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details on this.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:\n\n ```python\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = (tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size))\n ```\n ", "desc": "Loss base class.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MeanAbsoluteError", "docs": "Computes the mean of absolute difference between labels and predictions.\n\n `loss = abs(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError()\n >>> mae(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mae(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mae(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError())\n ```\n ", "desc": "Computes the mean of absolute difference between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Formula:\n\n `loss = 100 * abs((y_true - y_pred) / y_true)`\n\n Note that to avoid dividing by zero, a small epsilon value\n is added to the denominator.\n\n Standalone usage:\n\n >>> y_true = [[2., 1.], [2., 3.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError()\n >>> mape(y_true, y_pred).numpy()\n 50.\n\n >>> # Calling with 'sample_weight'.\n >>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 20.\n\n >>> # Using 'sum' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mape(y_true, y_pred).numpy()\n 100.\n\n >>> # Using 'none' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mape(y_true, y_pred).numpy()\n array([25., 75.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanAbsolutePercentageError())\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MeanSquaredError", "docs": "Computes the mean of squares of errors between labels and predictions.\n\n `loss = square(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError()\n >>> mse(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mse(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mse(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())\n ```\n ", "desc": "Computes the mean of squares of errors between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = square(log(y_true + 1.) - log(y_pred + 1.))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError()\n >>> msle(y_true, y_pred).numpy()\n 0.240\n\n >>> # Calling with 'sample_weight'.\n >>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.120\n\n >>> # Using 'sum' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> msle(y_true, y_pred).numpy()\n 0.480\n\n >>> # Using 'none' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> msle(y_true, y_pred).numpy()\n array([0.240, 0.240], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanSquaredLogarithmicError())\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.Poisson", "docs": "Computes the Poisson loss between `y_true` and `y_pred`.\n\n `loss = y_pred - y_true * log(y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> p = tf.keras.losses.Poisson()\n >>> p(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.4\n\n >>> # Using 'sum' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> p(y_true, y_pred).numpy()\n 0.999\n\n >>> # Using 'none' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> p(y_true, y_pred).numpy()\n array([0.999, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())\n ```\n ", "desc": "Computes the Poisson loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.serialize", "docs": "Serializes loss function or `Loss` instance.\n\n Args:\n loss: A Keras `Loss` instance or a loss function.\n\n Returns:\n Loss configuration dictionary.\n ", "desc": "Serializes loss function or `Loss` instance.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy()\n >>> scce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> scce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> scce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.SparseCategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.losses.SquaredHinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = square(maximum(1 - y_true * y_pred, 0))`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.SquaredHinge()\n >>> h(y_true, y_pred).numpy()\n 1.86\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.73\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 3.72\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.46, 2.26], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge())\n ```\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics", "docs": "All Keras metrics.\n", "desc": "All Keras metrics.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Accuracy", "docs": "Calculates how often predictions equal labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Accuracy()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],\n ... sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Accuracy()])\n ```\n ", "desc": "Calculates how often predictions equal labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.AUC", "docs": "Approximates the AUC (Area under the curve) of the ROC or PR curves.\n\n The AUC (Area under the curve) of the ROC (Receiver operating\n characteristic; default) or PR (Precision Recall) curves are quality measures\n of binary classifiers. Unlike the accuracy, and like cross-entropy\n losses, ROC-AUC and PR-AUC evaluate all the operational points of a model.\n\n This class approximates AUCs using a Riemann sum. During the metric\n accumulation phrase, predictions are accumulated within predefined buckets\n by value. The AUC is then computed by interpolating per-bucket averages. These\n buckets define the evaluated operational points.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the AUC.\n To discretize the AUC curve, a linearly spaced set of thresholds is used to\n compute pairs of recall and precision values. The area under the ROC-curve is\n therefore computed using the height of the recall values by the false positive\n rate, while the area under the PR-curve is the computed using the height of\n the precision values by the recall.\n\n This value is ultimately returned as `auc`, an idempotent operation that\n computes the area under a discretized curve of precision versus recall values\n (computed using the aforementioned variables). The `num_thresholds` variable\n controls the degree of discretization with larger numbers of thresholds more\n closely approximating the true AUC. The quality of the approximation may vary\n dramatically depending on `num_thresholds`. The `thresholds` parameter can be\n used to manually specify thresholds which split the predictions more evenly.\n\n For a best approximation of the real AUC, `predictions` should be distributed\n approximately uniformly in the range [0, 1] (if `from_logits=False`). The\n quality of the AUC approximation may be poor if this is not the case. Setting\n `summation_method` to 'minoring' or 'majoring' can help quantify the error in\n the approximation by providing lower or upper bound estimate of the AUC.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use when discretizing the roc curve. Values must be > 1.\n curve: (Optional) Specifies the name of the curve to be computed, 'ROC'\n [default] or 'PR' for the Precision-Recall-curve.\n summation_method: (Optional) Specifies the [Riemann summation method](\n https://en.wikipedia.org/wiki/Riemann_sum) used.\n 'interpolation' (default) applies mid-point summation scheme for `ROC`.\n For PR-AUC, interpolates (true/false) positives but not the ratio that\n is precision (see Davis & Goadrich 2006 for details);\n 'minoring' applies left summation\n for increasing intervals and right summation for decreasing intervals;\n 'majoring' does the opposite.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n thresholds: (Optional) A list of floating point values to use as the\n thresholds for discretizing the curve. If set, the `num_thresholds`\n parameter is ignored. Values should be in [0, 1]. Endpoint thresholds\n equal to {-epsilon, 1+epsilon} for a small positive epsilon value will\n be automatically included with these to correctly handle predictions\n equal to exactly 0 or 1.\n multi_label: boolean indicating whether multilabel data should be\n treated as such, wherein AUC is computed separately for each label and\n then averaged across labels, or (when False) if the data should be\n flattened into a single label before AUC computation. In the latter\n case, when multilabel data is passed to AUC, each label-prediction pair\n is treated as an individual data point. Should be set to False for\n multi-class data.\n num_labels: (Optional) The number of labels, used when `multi_label` is\n True. If `num_labels` is not specified, then state variables get created\n on the first call to `update_state`.\n label_weights: (Optional) list, array, or tensor of non-negative weights\n used to compute AUCs for multilabel data. When `multi_label` is True,\n the weights are applied to the individual label AUCs when they are\n averaged to produce the multi-label AUC. When it's False, they are used\n to weight the individual label predictions in computing the confusion\n matrix on the flattened data. Note that this is unlike class_weights in\n that class_weights weights the example depending on the value of its\n label, whereas label_weights depends only on the index of that label\n before flattening; therefore `label_weights` should not be used for\n multi-class data.\n from_logits: boolean indicating whether the predictions (`y_pred` in\n `update_state`) are probabilities or sigmoid logits. As a rule of thumb,\n when using a keras loss, the `from_logits` constructor argument of the\n loss should match the AUC `from_logits` constructor argument.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.AUC(num_thresholds=3)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> # threshold values are [0 - 1e-7, 0.5, 1 + 1e-7]\n >>> # tp = [2, 1, 0], fp = [2, 0, 0], fn = [0, 1, 2], tn = [0, 2, 2]\n >>> # tp_rate = recall = [1, 0.5, 0], fp_rate = [1, 0, 0]\n >>> # auc = ((((1+0.5)/2)*(1-0)) + (((0.5+0)/2)*(0-0))) = 0.75\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n # Reports the AUC of a model outputting a probability.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(),\n metrics=[tf.keras.metrics.AUC()])\n\n # Reports the AUC of a model outputting a logit.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)])\n ```\n ", "desc": "Approximates the AUC (Area under the curve) of the ROC or PR curves.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.binary_accuracy", "docs": "Calculates how often predictions match binary labels.\n\n Standalone usage:\n >>> y_true = [[1], [1], [0], [0]]\n >>> y_pred = [[1], [1], [0], [0]]\n >>> m = tf.keras.metrics.binary_accuracy(y_true, y_pred)\n >>> assert m.shape == (4,)\n >>> m.numpy()\n array([1., 1., 1., 1.], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n threshold: (Optional) Float representing the threshold for deciding whether\n prediction values are 1 or 0.\n\n Returns:\n Binary accuracy values. shape = `[batch_size, d0, .. dN-1]`\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.BinaryAccuracy", "docs": "Calculates how often predictions match binary labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n threshold: (Optional) Float representing the threshold for deciding\n whether prediction values are 1 or 0.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryAccuracy()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.BinaryCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are only two\n label classes (0 and 1).\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional )Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed.\n e.g. `label_smoothing=0.2` means that we will use a value of `0.1` for\n label `0` and `0.9` for label `1`\".\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryCrossentropy()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.81492424\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162905\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.categorical_accuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: One-hot ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Categorical accuracy values.\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.CategoricalAccuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `categorical accuracy`: an idempotent operation that\n simply divides `total` by `count`.\n\n `y_pred` and `y_true` should be passed in as vectors of probabilities, rather\n than as labels. If necessary, use `tf.one_hot` to expand `y_true` as a vector.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalAccuracy()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.CategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are multiple\n label classes (2 or more). Here we assume that labels are given as a `one_hot`\n representation. eg., When labels values are [2, 0, 1],\n `y_true` = [[0, 0, 1], [1, 0, 0], [0, 1, 0]].\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed. e.g.\n `label_smoothing=0.2` means that we will use a value of `0.1` for label\n `0` and `0.9` for label `1`\"\n\n Standalone usage:\n\n >>> # EPSILON = 1e-7, y = y_true, y` = y_pred\n >>> # y` = clip_ops.clip_by_value(output, EPSILON, 1. - EPSILON)\n >>> # y` = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(y'), axis = -1)\n >>> # = -((log 0.95), (log 0.1))\n >>> # = [0.051, 2.302]\n >>> # Reduced xent = (0.051 + 2.302) / 2\n >>> m = tf.keras.metrics.CategoricalCrossentropy()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.CategoricalHinge", "docs": "Computes the categorical hinge metric between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.4000001\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.2\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalHinge()])\n ```\n ", "desc": "Computes the categorical hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.cosine", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.cosine_proximity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.CosineSimilarity", "docs": "Computes the cosine similarity between the labels and predictions.\n\n `cosine similarity = (a . b) / ||a|| ||b||`\n\n See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).\n\n This metric keeps the average cosine similarity between `predictions` and\n `labels` over a stream of data.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n axis: (Optional) Defaults to -1. The dimension along which the cosine\n similarity is computed.\n\n Standalone usage:\n\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = ((0. + 0.) + (0.5 + 0.5)) / 2\n >>> m = tf.keras.metrics.CosineSimilarity(axis=1)\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]],\n ... sample_weight=[0.3, 0.7])\n >>> m.result().numpy()\n 0.6999999\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CosineSimilarity(axis=1)])\n ```\n ", "desc": "Computes the cosine similarity between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.deserialize", "docs": "Deserializes a serialized metric class/function instance.\n\n Args:\n config: Metric configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Metric` instance or a metric function.\n ", "desc": "Deserializes a serialized metric class/function instance.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.FalseNegatives", "docs": "Calculates the number of false negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalseNegatives()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalseNegatives()])\n ```\n ", "desc": "Calculates the number of false negatives.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.FalsePositives", "docs": "Calculates the number of false positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false positives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalsePositives()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalsePositives()])\n ```\n ", "desc": "Calculates the number of false positives.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.get", "docs": "Retrieves a Keras metric as a `function`/`Metric` class instance.\n\n The `identifier` may be the string name of a metric function or class.\n\n >>> metric = tf.keras.metrics.get(\"categorical_crossentropy\")\n >>> type(metric)\n \n >>> metric = tf.keras.metrics.get(\"CategoricalCrossentropy\")\n >>> type(metric)\n \n\n You can also specify `config` of the metric to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Metric` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> metric = tf.keras.metrics.get(identifier)\n >>> type(metric)\n \n\n Args:\n identifier: A metric identifier. One of None or string name of a metric\n function/class or metric configuration dictionary or a metric function or\n a metric class instance\n\n Returns:\n A Keras metric as a `function`/ `Metric` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras metric as a `function`/`Metric` class instance.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Hinge", "docs": "Computes the hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Hinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.3\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.1\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Hinge()])\n ```\n ", "desc": "Computes the hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.KLDivergence", "docs": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.\n\n `metric = y_true * log(y_true / y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.KLDivergence()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.45814306\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162892\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.KLDivergence()])\n ```\n ", "desc": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.logcosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.LogCoshError", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`, where x is the error (y_pred - y_true)\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.LogCoshError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.10844523\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.21689045\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.LogCoshError()])\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Mean", "docs": "Computes the (weighted) mean of the given values.\n\n For example, if values is [1, 3, 5, 7] then the mean is 4.\n If the weights were specified as [1, 1, 0, 0] then the mean would be 2.\n\n This metric creates two variables, `total` and `count` that are used to\n compute the average of `values`. This average is ultimately returned as `mean`\n which is an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Mean()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 4.0\n >>> m.reset_state()\n >>> m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Mean(name='mean_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) mean of the given values.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanAbsoluteError", "docs": "Computes the mean absolute error between the labels and predictions.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsoluteError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsoluteError()])\n ```\n ", "desc": "Computes the mean absolute error between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsolutePercentageError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 250000000.0\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 500000000.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanIoU", "docs": "Computes the mean Intersection-Over-Union metric.\n\n General definition and computation:\n\n Intersection-Over-Union is a common evaluation metric for semantic image\n segmentation.\n\n For an individual class, the IoU metric is defined as follows:\n\n ```\n iou = true_positives / (true_positives + false_positives + false_negatives)\n ```\n\n To compute IoUs, the predictions are accumulated in a confusion matrix,\n weighted by `sample_weight` and the metric is then calculated from it.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Note that this class first computes IoUs for all individual classes, then\n returns the mean of these values.\n\n Args:\n num_classes: The possible number of labels the prediction task can have.\n This value must be provided, since a confusion matrix of dimension =\n [num_classes, num_classes] will be allocated.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> # cm = [[1, 1],\n >>> # [1, 1]]\n >>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]\n >>> # iou = true_positives / (sum_row + sum_col - true_positives))\n >>> # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33\n >>> m = tf.keras.metrics.MeanIoU(num_classes=2)\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1])\n >>> m.result().numpy()\n 0.33333334\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1],\n ... sample_weight=[0.3, 0.3, 0.3, 0.1])\n >>> m.result().numpy()\n 0.23809525\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])\n ```\n ", "desc": "Computes the mean Intersection-Over-Union metric.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanRelativeError", "docs": "Computes the mean relative error by normalizing with the given values.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the mean relative error. This is weighted by `sample_weight`, and\n it is ultimately returned as `mean_relative_error`:\n an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n normalizer: The normalizer values with same shape as predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanRelativeError(normalizer=[1, 3, 2, 3])\n >>> m.update_state([1, 3, 2, 3], [2, 4, 6, 8])\n\n >>> # metric = mean(|y_pred - y_true| / normalizer)\n >>> # = mean([1, 1, 4, 5] / [1, 3, 2, 3]) = mean([1, 1/3, 2, 5/3])\n >>> # = 5/4 = 1.25\n >>> m.result().numpy()\n 1.25\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanRelativeError(normalizer=[1, 3])])\n ```\n ", "desc": "Computes the mean relative error by normalizing with the given values.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanSquaredError", "docs": "Computes the mean squared error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredError()])\n ```\n ", "desc": "Computes the mean squared error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredLogarithmicError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.12011322\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.24022643\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()])\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MeanTensor", "docs": "Computes the element-wise (weighted) mean of the given tensors.\n\n `MeanTensor` returns a tensor with the same shape of the input tensors. The\n mean value is updated by keeping local variables `total` and `count`. The\n `total` tracks the sum of the weighted values, and `count` stores the sum of\n the weighted counts.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n shape: (Optional) A list of integers, a tuple of integers, or a 1-D Tensor\n of type int32. If not specified, the shape is inferred from the values at\n the first call of update_state.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanTensor()\n >>> m.update_state([0, 1, 2, 3])\n >>> m.update_state([4, 5, 6, 7])\n >>> m.result().numpy()\n array([2., 3., 4., 5.], dtype=float32)\n\n >>> m.update_state([12, 10, 8, 6], sample_weight= [0, 0.2, 0.5, 1])\n >>> m.result().numpy()\n array([2. , 3.6363635, 4.8 , 5.3333335], dtype=float32)\n\n >>> m = tf.keras.metrics.MeanTensor(dtype=tf.float64, shape=(1, 4))\n >>> m.result().numpy()\n array([[0., 0., 0., 0.]])\n >>> m.update_state([[0, 1, 2, 3]])\n >>> m.update_state([[4, 5, 6, 7]])\n >>> m.result().numpy()\n array([[2., 3., 4., 5.]])\n ", "desc": "Computes the element-wise (weighted) mean of the given tensors.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Metric", "docs": "Encapsulates metric logic and state.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n **kwargs: Additional layer keywords arguments.\n\n Standalone usage:\n\n ```python\n m = SomeMetric(...)\n for input in ...:\n m.update_state(input)\n print('Final result: ', m.result().numpy())\n ```\n\n Usage with `compile()` API:\n\n ```python\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(10, activation='softmax'))\n\n model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),\n loss=tf.keras.losses.CategoricalCrossentropy(),\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n\n data = np.random.random((1000, 32))\n labels = np.random.random((1000, 10))\n\n dataset = tf.data.Dataset.from_tensor_slices((data, labels))\n dataset = dataset.batch(32)\n\n model.fit(dataset, epochs=10)\n ```\n\n To be implemented by subclasses:\n * `__init__()`: All state variables should be created in this method by\n calling `self.add_weight()` like: `self.var = self.add_weight(...)`\n * `update_state()`: Has all updates to the state variables like:\n self.var.assign_add(...).\n * `result()`: Computes and returns a scalar value or a dict of scalar values\n for the metric from the state variables.\n\n Example subclass implementation:\n\n ```python\n class BinaryTruePositives(tf.keras.metrics.Metric):\n\n def __init__(self, name='binary_true_positives', **kwargs):\n super(BinaryTruePositives, self).__init__(name=name, **kwargs)\n self.true_positives = self.add_weight(name='tp', initializer='zeros')\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, tf.bool)\n y_pred = tf.cast(y_pred, tf.bool)\n\n values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))\n values = tf.cast(values, self.dtype)\n if sample_weight is not None:\n sample_weight = tf.cast(sample_weight, self.dtype)\n sample_weight = tf.broadcast_to(sample_weight, values.shape)\n values = tf.multiply(values, sample_weight)\n self.true_positives.assign_add(tf.reduce_sum(values))\n\n def result(self):\n return self.true_positives\n ```\n ", "desc": "Encapsulates metric logic and state.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Poisson", "docs": "Computes the Poisson metric between `y_true` and `y_pred`.\n\n `metric = y_pred - y_true * log(y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Poisson()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.99999994\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Poisson()])\n ```\n ", "desc": "Computes the Poisson metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Precision", "docs": "Computes the precision of the predictions with respect to the labels.\n\n The metric creates two local variables, `true_positives` and `false_positives`\n that are used to compute the precision. This value is ultimately returned as\n `precision`, an idempotent operation that simply divides `true_positives`\n by the sum of `true_positives` and `false_positives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, we'll calculate precision as how often on average a class\n among the top-k classes with the highest predicted values of a batch entry is\n correct and can be found in the label for that entry.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold and/or in the\n top-k highest predictions, and computing the fraction of them for which\n `class_id` is indeed a correct label.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate precision with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Precision()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n >>> # With top_k=2, it will calculate precision over y_true[:2] and y_pred[:2]\n >>> m = tf.keras.metrics.Precision(top_k=2)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.0\n\n >>> # With top_k=4, it will calculate precision over y_true[:4] and y_pred[:4]\n >>> m = tf.keras.metrics.Precision(top_k=4)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Precision()])\n ```\n ", "desc": "Computes the precision of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.PrecisionAtRecall", "docs": "Computes best precision where recall is >= specified value.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n precision at the given recall. The threshold for the given recall\n value is computed and used to evaluate the corresponding precision.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n recall: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.PrecisionAtRecall(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[2, 2, 2, 1, 1])\n >>> m.result().numpy()\n 0.33333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)])\n ```\n ", "desc": "Computes best precision where recall is >= specified value.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Recall", "docs": "Computes the recall of the predictions with respect to the labels.\n\n This metric creates two local variables, `true_positives` and\n `false_negatives`, that are used to compute the recall. This value is\n ultimately returned as `recall`, an idempotent operation that simply divides\n `true_positives` by the sum of `true_positives` and `false_negatives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, recall will be computed as how often on average a class\n among the labels of a batch entry is in the top-k predictions.\n\n If `class_id` is specified, we calculate recall by considering only the\n entries in the batch for which `class_id` is in the label, and computing the\n fraction of them for which `class_id` is above the threshold and/or in the\n top-k predictions.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate recall with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Recall()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Recall()])\n ```\n ", "desc": "Computes the recall of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.RecallAtPrecision", "docs": "Computes best recall where precision is >= specified value.\n\n For a given score-label-distribution the required precision might not\n be achievable, in this case 0.0 is returned as recall.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n recall at the given precision. The threshold for the given precision\n value is computed and used to evaluate the corresponding recall.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n precision: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RecallAtPrecision(0.8)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RecallAtPrecision(precision=0.8)])\n ```\n ", "desc": "Computes best recall where precision is >= specified value.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.RootMeanSquaredError", "docs": "Computes root mean squared error metric between `y_true` and `y_pred`.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RootMeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.70710677\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RootMeanSquaredError()])\n ```\n ", "desc": "Computes root mean squared error metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SensitivityAtSpecificity", "docs": "Computes best sensitivity where specificity is >= specified value.\n\n the sensitivity at a given specificity.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n sensitivity at the given specificity. The threshold for the given specificity\n value is computed and used to evaluate the corresponding sensitivity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n specificity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given specificity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SensitivityAtSpecificity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 1])\n >>> m.result().numpy()\n 0.333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SensitivityAtSpecificity()])\n ```\n ", "desc": "Computes best sensitivity where specificity is >= specified value.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.serialize", "docs": "Serializes metric function or `Metric` instance.\n\n Args:\n metric: A Keras `Metric` instance or a metric function.\n\n Returns:\n Metric configuration dictionary.\n ", "desc": "Serializes metric function or `Metric` instance.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.sparse_categorical_accuracy", "docs": "Calculates how often predictions match integer labels.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: Integer ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Sparse categorical accuracy values.\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.sparse_top_k_categorical_accuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_top_k_categorical_accuracy(\n ... y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: tensor of true targets.\n y_pred: tensor of predicted targets.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Sparse top K categorical accuracy value.\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SparseCategoricalAccuracy", "docs": "Calculates how often predictions match integer labels.\n\n ```python\n acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1))\n ```\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `sparse categorical accuracy`: an idempotent operation\n that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseCategoricalAccuracy()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n Use this crossentropy metric when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` metric.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n axis: (Optional) Defaults to -1. The dimension along which the metric is\n computed.\n\n Standalone usage:\n\n >>> # y_true = one_hot(y_true) = [[0, 1, 0], [0, 0, 1]]\n >>> # logits = log(y_pred)\n >>> # softmax = exp(logits) / sum(exp(logits), axis=-1)\n >>> # softmax = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(softmax), 1)\n >>> # log(softmax) = [[-2.9957, -0.0513, -16.1181],\n >>> # [-2.3026, -0.2231, -2.3026]]\n >>> # y_true * log(softmax) = [[0, -0.0513, 0], [0, 0, -2.3026]]\n >>> # xent = [0.0513, 2.3026]\n >>> # Reduced xent = (0.0513 + 2.3026) / 2\n >>> m = tf.keras.metrics.SparseCategoricalCrossentropy()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SpecificityAtSensitivity", "docs": "Computes best specificity where sensitivity is >= specified value.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n specificity at the given sensitivity. The threshold for the given sensitivity\n value is computed and used to evaluate the corresponding specificity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n sensitivity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given sensitivity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SpecificityAtSensitivity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.66666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 2])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SpecificityAtSensitivity()])\n ```\n ", "desc": "Computes best specificity where sensitivity is >= specified value.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.SquaredHinge", "docs": "Computes the squared hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SquaredHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.86\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.46\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SquaredHinge()])\n ```\n ", "desc": "Computes the squared hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.Sum", "docs": "Computes the (weighted) sum of the given values.\n\n For example, if values is [1, 3, 5, 7] then the sum is 16.\n If the weights were specified as [1, 1, 0, 0] then the sum would be 4.\n\n This metric creates one variable, `total`, that is used to compute the sum of\n `values`. This is ultimately returned as `sum`.\n\n If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0\n to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Sum()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 16.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Sum(name='sum_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) sum of the given values.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.top_k_categorical_accuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: The ground truth values.\n y_pred: The prediction values.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Top K categorical accuracy value.\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.TopKCategoricalAccuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1)\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.TrueNegatives", "docs": "Calculates the number of true negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of true negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TrueNegatives()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TrueNegatives()])\n ```\n ", "desc": "Calculates the number of true negatives.", "type": "API"}, {"name": "tf.compat.v1.keras.metrics.TruePositives", "docs": "Calculates the number of true positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true positives. This metric creates one local variable, `true_positives`\n that is used to keep track of the number of true positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TruePositives()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TruePositives()])\n ```\n ", "desc": "Calculates the number of true positives.", "type": "API"}, {"name": "tf.compat.v1.keras.mixed_precision", "docs": "Keras mixed precision API.\n\nSee [the mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to\nuse the API.\n\n", "desc": "Keras mixed precision API.", "type": "API"}, {"name": "tf.compat.v1.keras.mixed_precision.LossScaleOptimizer", "docs": "An optimizer that applies loss scaling to prevent numeric underflow.\n\n Loss scaling is a technique to prevent numeric underflow in intermediate\n gradients when float16 is used. To prevent underflow, the loss is multiplied\n (or \"scaled\") by a certain factor called the \"loss scale\", which causes\n intermediate gradients to be scaled by the loss scale as well. The final\n gradients are divided (or \"unscaled\") by the loss scale to bring them back to\n their original value.\n\n `LossScaleOptimizer` wraps another optimizer and applies loss scaling to it.\n By default, the loss scale is dynamically updated over time so you do not have\n to choose the loss scale. The `minimize` method automatically scales the loss,\n unscales the gradients, and updates the loss scale so all you have to do is\n wrap your optimizer with a `LossScaleOptimizer` if you use `minimize`. For\n example:\n\n >>> opt = tf.keras.optimizers.SGD(0.25)\n >>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)\n >>> var = tf.Variable(1.)\n >>> loss_fn = lambda: var ** 2\n >>> # 'minimize' applies loss scaling and updates the loss sale.\n >>> opt.minimize(loss_fn, var_list=var)\n >>> var.numpy()\n 0.5\n\n If a `tf.GradientTape` is used to compute gradients instead of `minimize`, you\n must scale the loss and gradients manually. This can be done with the\n `LossScaleOptimizer.get_scaled_loss` and\n `LossScaleOptimizer.get_unscaled_gradients` methods. For example:\n\n >>> with tf.GradientTape() as tape:\n ... loss = loss_fn()\n ... scaled_loss = opt.get_scaled_loss(loss)\n >>> scaled_grad = tape.gradient(scaled_loss, var)\n >>> (grad,) = opt.get_unscaled_gradients([scaled_grad])\n >>> opt.apply_gradients([(grad, var)]) # Loss scale is updated here\n >>> var.numpy()\n 0.25\n\n Warning: If you forget to call `get_scaled_loss` or `get_unscaled_gradients`\n (or both) when using a `tf.GradientTape`, the model will likely converge to a\n worse quality. Please make sure you call each function exactly once.\n\n When mixed precision with float16 is used, there is typically no risk of\n underflow affecting model quality if loss scaling is properly used. See\n [the mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) for more information\n on how to use mixed precision.\n\n Args:\n inner_optimizer: The `tf.keras.optimizers.Optimizer` or\n `tf.keras.optimizers.experimental.Optimizer` instance to wrap.\n dynamic: Bool indicating whether dynamic loss scaling is used. Defaults to\n True. If True, the loss scale will be dynamically updated over time using\n an algorithm that keeps the loss scale at approximately its optimal value.\n If False, a single fixed loss scale is used and `initial_scale` must be\n specified, which is used as the loss scale. Recommended to keep as True,\n as choosing a fixed loss scale can be tricky. Currently, there is a small\n performance overhead to dynamic loss scaling compared to fixed loss\n scaling.\n initial_scale: The initial loss scale. If `dynamic` is True, this defaults\n to `2 ** 15`. If `dynamic` is False, this must be specified and acts as\n the sole loss scale, as the loss scale does not change over time. When\n dynamic loss scaling is used, is better for this to be a very high number,\n because a loss scale that is too high gets lowered far more quickly than a\n loss scale that is too low gets raised.\n dynamic_growth_steps: With dynamic loss scaling, every\n `dynamic_growth_steps` steps with finite gradients, the loss scale is\n doubled. Defaults to 2000. If a nonfinite gradient is encountered, the\n count is reset back to zero, gradients are skipped that step, and the loss\n scale is halved. The count can be queried with\n `LossScaleOptimizer.dynamic_counter`. This argument can only be specified\n if `dynamic` is True.\n\n `LossScaleOptimizer` will occasionally skip applying gradients to the\n variables, in which case the trainable variables will not change that step.\n This is done because the dynamic loss scale will sometimes be raised too\n high, causing overflow in the gradients. Typically, the first 2 to 15 steps of\n the model are skipped as the initial loss scale is very high, but afterwards\n steps will only be skipped on average 0.05% of the time (the fraction of steps\n skipped is `1 / dynamic_growth_steps`).\n\n `LossScaleOptimizer` delegates all public `Optimizer` methods to the inner\n optimizer. Additionally, in methods `minimize` and `get_gradients`, it scales\n the loss and unscales the gradients. In methods `minimize` and\n `apply_gradients`, it additionally updates the loss scale and skips applying\n gradients if any gradient has a nonfinite value.\n\n ### Hyperparameters\n\n If wrapping a `tf.keras.optimizers.Optimizer`, hyperparameters can be accessed\n and set on the LossScaleOptimizer, which will be delegated to the wrapped\n optimizer.\n\n >>> opt = tf.keras.optimizers.Adam(beta_1=0.8, epsilon=1e-5)\n >>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)\n >>> opt.beta_1 # Equivalent to `opt.inner_optimizer.beta_1`\n 0.8\n >>> opt.beta_1 = 0.7 # Equivalent to `opt.inner_optimizer.beta_1 = 0.7`\n >>> opt.beta_1\n 0.7\n >>> opt.inner_optimizer.beta_1\n 0.7\n\n However, accessing or setting non-hyperparameters is not delegated to the\n LossScaleOptimizer. In an Adam optimizer, `beta_1` is a hyperparameter but\n `epsilon` is not, as the Adam optimizer only calls `Optimizer._set_hyper` on\n `beta_1`.\n\n >>> opt.inner_optimizer.epsilon\n 1e-5\n >>> opt.epsilon\n Traceback (most recent call last):\n ...\n AttributeError: 'LossScaleOptimizer' object has no attribute 'epsilon'\n >>> opt.epsilon = 1e-4 # This does NOT set epsilon on `opt.inner_optimizer`\n >>> opt.inner_optimizer.epsilon\n >>> 1e-5\n\n In the above example, despite epsilon being set on the LossScaleOptimizer, the\n old epsilon value will still be used when training as epsilon was not set on\n the inner optimizer.\n ", "desc": "An optimizer that applies loss scaling to prevent numeric underflow.", "type": "API"}, {"name": "tf.compat.v1.keras.Model", "docs": "`Model` groups layers into an object with training and inference features.\n\n Args:\n inputs: The input(s) of the model: a `keras.Input` object or list of\n `keras.Input` objects.\n outputs: The output(s) of the model. See Functional API example below.\n name: String, the name of the model.\n\n There are two ways to instantiate a `Model`:\n\n 1 - With the \"Functional API\", where you start from `Input`,\n you chain layer calls to specify the model's forward pass,\n and finally you create your model from inputs and outputs:\n\n ```python\n import tensorflow as tf\n\n inputs = tf.keras.Input(shape=(3,))\n x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)\n outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)\n model = tf.keras.Model(inputs=inputs, outputs=outputs)\n ```\n\n Note: Only dicts, lists, and tuples of input tensors are supported. Nested\n inputs are not supported (e.g. lists of list or dicts of dict).\n\n A new Functional API model can also be created by using the\n intermediate tensors. This enables you to quickly extract sub-components\n of the model.\n\n Example:\n\n ```python\n inputs = keras.Input(shape=(None, None, 3))\n processed = keras.layers.RandomCrop(width=32, height=32)(inputs)\n conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)\n pooling = keras.layers.GlobalAveragePooling2D()(conv)\n feature = keras.layers.Dense(10)(pooling)\n\n full_model = keras.Model(inputs, feature)\n backbone = keras.Model(processed, conv)\n activations = keras.Model(conv, feature)\n ```\n\n Note that the `backbone` and `activations` models are not\n created with `keras.Input` objects, but with the tensors that are originated\n from `keras.Inputs` objects. Under the hood, the layers and weights will\n be shared across these models, so that user can train the `full_model`, and\n use `backbone` or `activations` to do feature extraction.\n The inputs and outputs of the model can be nested structures of tensors as\n well, and the created models are standard Functional API models that support\n all the existing APIs.\n\n 2 - By subclassing the `Model` class: in that case, you should define your\n layers in `__init__()` and you should implement the model's forward pass\n in `call()`.\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n\n def call(self, inputs):\n x = self.dense1(inputs)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n If you subclass `Model`, you can optionally have\n a `training` argument (boolean) in `call()`, which you can use to specify\n a different behavior in training and inference:\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n self.dropout = tf.keras.layers.Dropout(0.5)\n\n def call(self, inputs, training=False):\n x = self.dense1(inputs)\n if training:\n x = self.dropout(x, training=training)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n Once the model is created, you can config the model with losses and metrics\n with `model.compile()`, train the model with `model.fit()`, or use the model\n to do prediction with `model.predict()`.\n ", "desc": "`Model` groups layers into an object with training and inference features.", "type": "API"}, {"name": "tf.compat.v1.keras.models", "docs": "Keras models API.\n", "desc": "Keras models API.", "type": "API"}, {"name": "tf.compat.v1.keras.models.clone_model", "docs": "Clone a Functional or Sequential `Model` instance.\n\n Model cloning is similar to calling a model on new inputs,\n except that it creates new layers (and thus new weights) instead\n of sharing the weights of the existing layers.\n\n Note that\n `clone_model` will not preserve the uniqueness of shared objects within the\n model (e.g. a single variable attached to two distinct layers will be\n restored as two separate variables).\n\n Args:\n model: Instance of `Model`\n (could be a Functional model or a Sequential model).\n input_tensors: optional list of input tensors or InputLayer objects\n to build the model upon. If not provided,\n new `Input` objects will be created.\n clone_function: Callable to be used to clone each layer in the target\n model (except `InputLayer` instances). It takes as argument the layer\n instance to be cloned, and returns the corresponding layer instance to\n be used in the model copy. If unspecified, this callable defaults to\n the following serialization/deserialization function:\n `lambda layer: layer.__class__.from_config(layer.get_config())`.\n By passing a custom callable, you can customize your copy of the\n model, e.g. by wrapping certain layers of interest (you might want to\n replace all `LSTM` instances with equivalent\n `Bidirectional(LSTM(...))` instances, for example).\n\n Returns:\n An instance of `Model` reproducing the behavior\n of the original model, on top of new inputs tensors,\n using newly instantiated weights. The cloned model may behave\n differently from the original model if a custom `clone_function`\n modifies the layer.\n\n Example:\n\n ```python\n # Create a test Sequential model.\n model = keras.Sequential([\n keras.Input(shape=(728,)),\n keras.layers.Dense(32, activation='relu'),\n keras.layers.Dense(1, activation='sigmoid'),\n ])\n # Create a copy of the test model (with freshly initialized weights).\n new_model = clone_model(model)\n ```\n\n Note that subclassed models cannot be cloned, since their internal\n layer structure is not known. To achieve equivalent functionality\n as `clone_model` in the case of a subclassed model, simply make sure\n that the model class implements `get_config()`\n (and optionally `from_config()`), and call:\n\n ```python\n new_model = model.__class__.from_config(model.get_config())\n ```\n ", "desc": "Clone a Functional or Sequential `Model` instance.", "type": "API"}, {"name": "tf.compat.v1.keras.models.load_model", "docs": "Loads a model saved via `model.save()`.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> model.save('/tmp/model')\n >>> loaded_model = tf.keras.models.load_model('/tmp/model')\n >>> x = tf.random.uniform((10, 3))\n >>> assert np.allclose(model.predict(x), loaded_model.predict(x))\n\n Note that the model weights may have different scoped names after being\n loaded. Scoped names include the model/layer names, such as\n `\"dense_1/kernel:0\"`. It is recommended that you use the layer properties to\n access specific variables, e.g. `model.get_layer(\"dense_1\").kernel`.\n\n Args:\n filepath: One of the following:\n - String or `pathlib.Path` object, path to the saved model\n - `h5py.File` object from which to load the model\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n compile: Boolean, whether to compile the model\n after loading.\n options: Optional `tf.saved_model.LoadOptions` object that specifies\n options for loading from SavedModel.\n\n Returns:\n A Keras model instance. If the original model was compiled, and saved with\n the optimizer, then the returned model will be compiled. Otherwise, the\n model will be left uncompiled. In the case that an uncompiled model is\n returned, a warning is displayed if the `compile` argument is set to\n `True`.\n\n Raises:\n ImportError: if loading from an hdf5 file and h5py is not available.\n IOError: In case of an invalid savefile.\n ", "desc": "Loads a model saved via `model.save()`.", "type": "API"}, {"name": "tf.compat.v1.keras.models.Model", "docs": "`Model` groups layers into an object with training and inference features.\n\n Args:\n inputs: The input(s) of the model: a `keras.Input` object or list of\n `keras.Input` objects.\n outputs: The output(s) of the model. See Functional API example below.\n name: String, the name of the model.\n\n There are two ways to instantiate a `Model`:\n\n 1 - With the \"Functional API\", where you start from `Input`,\n you chain layer calls to specify the model's forward pass,\n and finally you create your model from inputs and outputs:\n\n ```python\n import tensorflow as tf\n\n inputs = tf.keras.Input(shape=(3,))\n x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)\n outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)\n model = tf.keras.Model(inputs=inputs, outputs=outputs)\n ```\n\n Note: Only dicts, lists, and tuples of input tensors are supported. Nested\n inputs are not supported (e.g. lists of list or dicts of dict).\n\n A new Functional API model can also be created by using the\n intermediate tensors. This enables you to quickly extract sub-components\n of the model.\n\n Example:\n\n ```python\n inputs = keras.Input(shape=(None, None, 3))\n processed = keras.layers.RandomCrop(width=32, height=32)(inputs)\n conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)\n pooling = keras.layers.GlobalAveragePooling2D()(conv)\n feature = keras.layers.Dense(10)(pooling)\n\n full_model = keras.Model(inputs, feature)\n backbone = keras.Model(processed, conv)\n activations = keras.Model(conv, feature)\n ```\n\n Note that the `backbone` and `activations` models are not\n created with `keras.Input` objects, but with the tensors that are originated\n from `keras.Inputs` objects. Under the hood, the layers and weights will\n be shared across these models, so that user can train the `full_model`, and\n use `backbone` or `activations` to do feature extraction.\n The inputs and outputs of the model can be nested structures of tensors as\n well, and the created models are standard Functional API models that support\n all the existing APIs.\n\n 2 - By subclassing the `Model` class: in that case, you should define your\n layers in `__init__()` and you should implement the model's forward pass\n in `call()`.\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n\n def call(self, inputs):\n x = self.dense1(inputs)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n If you subclass `Model`, you can optionally have\n a `training` argument (boolean) in `call()`, which you can use to specify\n a different behavior in training and inference:\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n self.dropout = tf.keras.layers.Dropout(0.5)\n\n def call(self, inputs, training=False):\n x = self.dense1(inputs)\n if training:\n x = self.dropout(x, training=training)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n Once the model is created, you can config the model with losses and metrics\n with `model.compile()`, train the model with `model.fit()`, or use the model\n to do prediction with `model.predict()`.\n ", "desc": "`Model` groups layers into an object with training and inference features.", "type": "API"}, {"name": "tf.compat.v1.keras.models.model_from_config", "docs": "Instantiates a Keras model from its config.\n\n Usage:\n ```\n # for a Functional API model\n tf.keras.Model().from_config(model.get_config())\n\n # for a Sequential model\n tf.keras.Sequential().from_config(model.get_config())\n ```\n\n Args:\n config: Configuration dictionary.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n\n Raises:\n TypeError: if `config` is not a dictionary.\n ", "desc": "Instantiates a Keras model from its config.", "type": "API"}, {"name": "tf.compat.v1.keras.models.model_from_json", "docs": "Parses a JSON model configuration string and returns a model instance.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> config = model.to_json()\n >>> loaded_model = tf.keras.models.model_from_json(config)\n\n Args:\n json_string: JSON string encoding a model configuration.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n ", "desc": "Parses a JSON model configuration string and returns a model instance.", "type": "API"}, {"name": "tf.compat.v1.keras.models.model_from_yaml", "docs": "Parses a yaml model configuration file and returns a model instance.\n\n Note: Since TF 2.6, this method is no longer supported and will raise a\n RuntimeError.\n\n Args:\n yaml_string: YAML string or open file encoding a model configuration.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n\n Raises:\n RuntimeError: announces that the method poses a security risk\n ", "desc": "Parses a yaml model configuration file and returns a model instance.", "type": "API"}, {"name": "tf.compat.v1.keras.models.save_model", "docs": "Saves a model as a TensorFlow SavedModel or HDF5 file.\n\n See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/)\n for details.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> model.save('/tmp/model')\n >>> loaded_model = tf.keras.models.load_model('/tmp/model')\n >>> x = tf.random.uniform((10, 3))\n >>> assert np.allclose(model.predict(x), loaded_model.predict(x))\n\n Note that `model.save()` is an alias for `tf.keras.models.save_model()`.\n\n The SavedModel and HDF5 file contains:\n\n - the model's configuration (topology)\n - the model's weights\n - the model's optimizer's state (if any)\n\n Thus models can be reinstantiated in the exact same state, without any of the\n code used for model definition or training.\n\n Note that the model weights may have different scoped names after being\n loaded. Scoped names include the model/layer names, such as\n `\"dense_1/kernel:0\"`. It is recommended that you use the layer properties to\n access specific variables, e.g. `model.get_layer(\"dense_1\").kernel`.\n\n __SavedModel serialization format__\n\n Keras SavedModel uses `tf.saved_model.save` to save the model and all\n trackable objects attached to the model (e.g. layers and variables). The model\n config, weights, and optimizer are saved in the SavedModel. Additionally, for\n every Keras layer attached to the model, the SavedModel stores:\n\n * the config and metadata -- e.g. name, dtype, trainable status\n * traced call and loss functions, which are stored as TensorFlow subgraphs.\n\n The traced functions allow the SavedModel format to save and load custom\n layers without the original class definition.\n\n You can choose to not save the traced functions by disabling the `save_traces`\n option. This will decrease the time it takes to save the model and the\n amount of disk space occupied by the output SavedModel. If you enable this\n option, then you _must_ provide all custom class definitions when loading\n the model. See the `custom_objects` argument in `tf.keras.models.load_model`.\n\n Args:\n model: Keras model instance to be saved.\n filepath: One of the following:\n - String or `pathlib.Path` object, path where to save the model\n - `h5py.File` object where to save the model\n overwrite: Whether we should overwrite any existing model at the target\n location, or instead ask the user with a manual prompt.\n include_optimizer: If True, save optimizer's state together.\n save_format: Either 'tf' or 'h5', indicating whether to save the model\n to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5'\n in TF 1.X.\n signatures: Signatures to save with the SavedModel. Applicable to the 'tf'\n format only. Please see the `signatures` argument in\n `tf.saved_model.save` for details.\n options: (only applies to SavedModel format) `tf.saved_model.SaveOptions`\n object that specifies options for saving to SavedModel.\n save_traces: (only applies to SavedModel format) When enabled, the\n SavedModel will store the function traces for each layer. This\n can be disabled, so that only the configs of each layer are stored.\n Defaults to `True`. Disabling this will decrease serialization time and\n reduce file size, but it requires that all custom layers/models\n implement a `get_config()` method.\n\n Raises:\n ImportError: If save format is hdf5, and h5py is not available.\n ", "desc": "Saves a model as a TensorFlow SavedModel or HDF5 file.", "type": "API"}, {"name": "tf.compat.v1.keras.models.Sequential", "docs": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.\n\n `Sequential` provides training and inference features on this model.\n\n Examples:\n\n ```python\n # Optionally, the first layer can receive an `input_shape` argument:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n # Afterwards, we do automatic shape inference:\n model.add(tf.keras.layers.Dense(4))\n\n # This is identical to the following:\n model = tf.keras.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(8))\n\n # Note that you can also omit the `input_shape` argument.\n # In that case the model doesn't have any weights until the first call\n # to a training/evaluation method (since it isn't yet built):\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n # model.weights not created yet\n\n # Whereas if you specify the input shape, the model gets built\n # continuously as you are adding layers:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n model.add(tf.keras.layers.Dense(4))\n len(model.weights)\n # Returns \"4\"\n\n # When using the delayed-build pattern (no input shape specified), you can\n # choose to manually build your model by calling\n # `build(batch_input_shape)`:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n model.build((None, 16))\n len(model.weights)\n # Returns \"4\"\n\n # Note that when using the delayed-build pattern (no input shape specified),\n # the model gets built the first time you call `fit`, `eval`, or `predict`,\n # or the first time you call the model on some input data.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(1))\n model.compile(optimizer='sgd', loss='mse')\n # This builds the model for the first time:\n model.fit(x, y, batch_size=32, epochs=10)\n ```\n ", "desc": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers", "docs": "Built-in optimizer classes.\n\nFor more examples see the base class `tf.keras.optimizers.Optimizer`.\n\n", "desc": "Built-in optimizer classes.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Adadelta", "docs": "Optimizer that implements the Adadelta algorithm.\n\n Adadelta optimization is a stochastic gradient descent method that is based on\n adaptive learning rate per dimension to address two drawbacks:\n\n - The continual decay of learning rates throughout training.\n - The need for a manually selected global learning rate.\n\n Adadelta is a more robust extension of Adagrad that adapts learning rates\n based on a moving window of gradient updates, instead of accumulating all\n past gradients. This way, Adadelta continues learning even when many updates\n have been done. Compared to Adagrad, in the original version of Adadelta you\n don't have to set an initial learning rate. In this version, the initial\n learning rate can be set, as in most other Keras optimizers.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adadelta` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n rho: A `Tensor` or a floating point value. The decay rate.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adadelta\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Zeiler, 2012](http://arxiv.org/abs/1212.5701)\n ", "desc": "Optimizer that implements the Adadelta algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Adagrad", "docs": "Optimizer that implements the Adagrad algorithm.\n\n Adagrad is an optimizer with parameter-specific learning rates,\n which are adapted relative to how frequently a parameter gets\n updated during training. The more updates a parameter receives,\n the smaller the updates.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adagrad` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n initial_accumulator_value: Floating point value.\n Starting value for the accumulators (per-parameter momentum values).\n Must be non-negative.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adagrad\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value..\n\n Reference:\n - [Duchi et al., 2011](\n http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).\n ", "desc": "Optimizer that implements the Adagrad algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Adam", "docs": "Optimizer that implements the Adam algorithm.\n\n Adam optimization is a stochastic gradient descent method that is based on\n adaptive estimation of first-order and second-order moments.\n\n According to\n [Kingma et al., 2014](http://arxiv.org/abs/1412.6980),\n the method is \"*computationally\n efficient, has little memory requirement, invariant to diagonal rescaling of\n gradients, and is well suited for problems that are large in terms of\n data/parameters*\".\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use, The\n learning rate. Defaults to 0.001.\n beta_1: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use. The\n exponential decay rate for the 1st moment estimates. Defaults to 0.9.\n beta_2: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use, The\n exponential decay rate for the 2nd moment estimates. Defaults to 0.999.\n epsilon: A small constant for numerical stability. This epsilon is\n \"epsilon hat\" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to\n 1e-7.\n amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper \"On the Convergence of Adam and beyond\". Defaults to `False`.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.Adam(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> # The first step is `-learning_rate*sign(grad)`\n >>> var1.numpy()\n 9.9\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n - [Reddi et al., 2018](\n https://openreview.net/pdf?id=ryQu7f-RZ) for `amsgrad`.\n\n Notes:\n\n The default value of 1e-7 for epsilon might not be a good default in\n general. For example, when training an Inception network on ImageNet a\n current good choice is 1.0 or 0.1. Note that since Adam uses the\n formulation just before Section 2.1 of the Kingma and Ba paper rather than\n the formulation in Algorithm 1, the \"epsilon\" referred to here is \"epsilon\n hat\" in the paper.\n\n The sparse implementation of this algorithm (used when the gradient is an\n IndexedSlices object, typically because of `tf.gather` or an embedding\n lookup in the forward pass) does apply momentum to variable slices even if\n they were not used in the forward pass (meaning they have a gradient equal\n to zero). Momentum decay (beta1) is also applied to the entire momentum\n accumulator. This means that the sparse behavior is equivalent to the dense\n behavior (in contrast to some momentum implementations which ignore momentum\n unless a variable slice was actually used).\n ", "desc": "Optimizer that implements the Adam algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Adamax", "docs": "Optimizer that implements the Adamax algorithm.\n\n It is a variant of Adam based on the infinity norm.\n Default parameters follow those provided in the paper.\n Adamax is sometimes superior to adam, specially in models with embeddings.\n\n Initialization:\n\n ```python\n m = 0 # Initialize initial 1st moment vector\n v = 0 # Initialize the exponentially weighted infinity norm\n t = 0 # Initialize timestep\n ```\n\n The update rule for parameter `w` with gradient `g` is\n described at the end of section 7.1 of the paper:\n\n ```python\n t += 1\n m = beta1 * m + (1 - beta) * g\n v = max(beta2 * v, abs(g))\n current_lr = learning_rate / (1 - beta1 ** t)\n w = w - current_lr * m / (v + epsilon)\n ```\n\n Similarly to `Adam`, the epsilon is added for numerical stability\n (especially to get rid of division by zero when `v_t == 0`).\n\n In contrast to `Adam`, the sparse implementation of this algorithm\n (used when the gradient is an IndexedSlices object, typically because of\n `tf.gather` or an embedding lookup in the forward pass) only updates\n variable slices and corresponding `m_t`, `v_t` terms when that part of\n the variable was used in the forward pass. This means that the sparse\n behavior is contrast to the dense behavior (similar to some momentum\n implementations which ignore momentum unless a variable slice was actually\n used).\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adamax\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n ", "desc": "Optimizer that implements the Adamax algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.deserialize", "docs": "Inverse of the `serialize` function.\n\n Args:\n config: Optimizer configuration dictionary.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras Optimizer instance.\n ", "desc": "Inverse of the `serialize` function.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Ftrl", "docs": "Optimizer that implements the FTRL algorithm.\n\n \"Follow The Regularized Leader\" (FTRL) is an optimization algorithm developed\n at Google for click-through rate prediction in the early 2010s. It is most\n suitable for shallow models with large and sparse feature spaces.\n The algorithm is described by\n [McMahan et al., 2013](https://research.google.com/pubs/archive/41159.pdf).\n The Keras version has support for both online L2 regularization\n (the L2 regularization described in the paper\n above) and shrinkage-type L2 regularization\n (which is the addition of an L2 penalty to the loss function).\n\n Initialization:\n\n ```python\n n = 0\n sigma = 0\n z = 0\n ```\n\n Update rule for one variable `w`:\n\n ```python\n prev_n = n\n n = n + g ** 2\n sigma = (sqrt(n) - sqrt(prev_n)) / lr\n z = z + g - sigma * w\n if abs(z) < lambda_1:\n w = 0\n else:\n w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / alpha + lambda_2)\n ```\n\n Notation:\n\n - `lr` is the learning rate\n - `g` is the gradient for the variable\n - `lambda_1` is the L1 regularization strength\n - `lambda_2` is the L2 regularization strength\n\n Check the documentation for the `l2_shrinkage_regularization_strength`\n parameter for more details when shrinkage is enabled, in which case gradient\n is replaced with a gradient with shrinkage.\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n learning_rate_power: A float value, must be less or equal to zero.\n Controls how the learning rate decreases during training. Use zero for\n a fixed learning rate.\n initial_accumulator_value: The starting value for accumulators.\n Only zero or positive values are allowed.\n l1_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n l2_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Ftrl\"`.\n l2_shrinkage_regularization_strength: A float value, must be greater than\n or equal to zero. This differs from L2 above in that the L2 above is a\n stabilization penalty, whereas this L2 shrinkage is a magnitude penalty.\n When input is sparse shrinkage will only happen on the active weights.\n beta: A float value, representing the beta value from the paper.\n Defaults to 0.0.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [McMahan et al., 2013](\n https://research.google.com/pubs/archive/41159.pdf)\n ", "desc": "Optimizer that implements the FTRL algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.get", "docs": "Retrieves a Keras Optimizer instance.\n\n Args:\n identifier: Optimizer identifier, one of\n - String: name of an optimizer\n - Dictionary: configuration dictionary. - Keras Optimizer instance (it\n will be returned unchanged). - TensorFlow Optimizer instance (it\n will be wrapped as a Keras Optimizer).\n\n Returns:\n A Keras Optimizer instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras Optimizer instance.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Nadam", "docs": "Optimizer that implements the NAdam algorithm.\n Much like Adam is essentially RMSprop with momentum, Nadam is Adam with\n Nesterov momentum.\n\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Nadam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage Example:\n >>> opt = tf.keras.optimizers.Nadam(learning_rate=0.2)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> \"{:.1f}\".format(var1.numpy())\n 9.8\n\n Reference:\n - [Dozat, 2015](http://cs229.stanford.edu/proj2015/054_report.pdf).\n ", "desc": "Optimizer that implements the NAdam algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.Optimizer", "docs": "Base class for Keras optimizers.\n\n You should not use this class directly, but instead instantiate one of its\n subclasses such as `tf.keras.optimizers.SGD`, `tf.keras.optimizers.Adam`, etc.\n\n ### Usage\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 * var1 + 2 * var2 * var2\n # In graph mode, returns op that minimizes the loss by updating the listed\n # variables.\n opt_op = opt.minimize(loss, var_list=[var1, var2])\n opt_op.run()\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Usage in custom training loops\n\n In Keras models, sometimes variables are created when the model is first\n called, instead of construction time. Examples include 1) sequential models\n without input shape pre-defined, or 2) subclassed models. Pass var_list as\n callable in these cases.\n\n Example:\n\n ```python\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))\n model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))\n loss_fn = lambda: tf.keras.losses.mse(model(input), output)\n var_list_fn = lambda: model.trainable_weights\n for input, output in data:\n opt.minimize(loss_fn, var_list_fn)\n ```\n\n ### Processing gradients before applying them\n\n Calling `minimize()` takes care of both computing the gradients and\n applying them to the variables. If you want to process the gradients\n before applying them you can instead use the optimizer in three steps:\n\n 1. Compute the gradients with `tf.GradientTape`.\n 2. Process the gradients as you wish.\n 3. Apply the processed gradients with `apply_gradients()`.\n\n Example:\n\n ```python\n # Create an optimizer.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n # Compute the gradients for a list of variables.\n with tf.GradientTape() as tape:\n loss = \n vars = \n grads = tape.gradient(loss, vars)\n\n # Process the gradients, for example cap them, etc.\n # capped_grads = [MyCapper(g) for g in grads]\n processed_grads = [process_gradient(g) for g in grads]\n\n # Ask the optimizer to apply the processed gradients.\n opt.apply_gradients(zip(processed_grads, var_list))\n ```\n\n ### Use with `tf.distribute.Strategy`\n\n This optimizer class is `tf.distribute.Strategy` aware, which means it\n automatically sums gradients across all replicas. To average gradients,\n you divide your loss by the global batch size, which is done\n automatically if you use `tf.keras` built-in training or evaluation loops.\n See the `reduction` argument of your loss which should be set to\n `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` for averaging or\n `tf.keras.losses.Reduction.SUM` for not.\n\n To aggregate gradients yourself, call `apply_gradients` with\n `experimental_aggregate_gradients` set to False. This is useful if you need to\n process aggregated gradients.\n\n If you are not using these and you want to average gradients, you should use\n `tf.math.reduce_sum` to add up your per-example losses and then divide by the\n global batch size. Note that when using `tf.distribute.Strategy`, the first\n component of a tensor's shape is the *replica-local* batch size, which is off\n by a factor equal to the number of replicas being used to compute a single\n step. As a result, using `tf.math.reduce_mean` will give the wrong answer,\n resulting in gradients that can be many times too big.\n\n ### Variable Constraints\n\n All Keras optimizers respect variable constraints. If constraint function is\n passed to any variable, the constraint will be applied to the variable after\n the gradient has been applied to the variable.\n Important: If gradient is sparse tensor, variable constraint is not supported.\n\n ### Thread Compatibility\n\n The entire optimizer is currently thread compatible, not thread-safe. The user\n needs to perform synchronization if necessary.\n\n ### Slots\n\n Many optimizer subclasses, such as `Adam` and `Adagrad` allocate and manage\n additional variables associated with the variables to train. These are called\n Slots. Slots have names and you can ask the optimizer for the names of\n the slots that it uses. Once you have a slot name you can ask the optimizer\n for the variable it created to hold the slot value.\n\n This can be useful if you want to log debug a training algorithm, report stats\n about the slots, etc.\n\n ### Hyperparameters\n\n These are arguments passed to the optimizer subclass constructor\n (the `__init__` method), and then passed to `self._set_hyper()`.\n They can be either regular Python values (like 1.0), tensors, or\n callables. If they are callable, the callable will be called during\n `apply_gradients()` to get the value for the hyper parameter.\n\n Hyperparameters can be overwritten through user code:\n\n Example:\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 + 2 * var2\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n # update learning rate\n opt.learning_rate = 0.05\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Callable learning rate\n\n Optimizer accepts a callable learning rate in two ways. The first way is\n through built-in or customized\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The schedule will be\n called on each iteration with `schedule(iteration)`, a `tf.Variable`\n owned by the optimizer.\n\n Example:\n\n >>> var = tf.Variable(np.random.random(size=(1,)))\n >>> learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(\n ... initial_learning_rate=.01, decay_steps=20, decay_rate=.1)\n >>> opt = tf.keras.optimizers.SGD(learning_rate=learning_rate)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> var = tf.Variable(np.random.random(size=(1,)))\n >>> def lr_callable():\n ... return .1\n >>> opt = tf.keras.optimizers.SGD(learning_rate=lr_callable)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0 # d(loss) / d(var1) = var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> var1.numpy()\n 9.683772\n\n Reference:\n - [Hinton, 2012](\n http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)\n ", "desc": "Optimizer that implements the RMSprop algorithm.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules", "docs": "Public API for tf.keras.optimizers.schedules namespace.\n", "desc": "Public API for tf.keras.optimizers.schedules namespace.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.CosineDecay", "docs": "A LearningRateSchedule that uses a cosine decay schedule.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n return initial_learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(\n initial_learning_rate, decay_steps)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.CosineDecayRestarts", "docs": "A LearningRateSchedule that uses a cosine decay schedule with restarts.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function with\n restarts to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n\n The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more\n steps and with `m_mul` times initial learning rate as the new learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed_fn = (\n tf.keras.optimizers.schedules.CosineDecayRestarts(\n initial_learning_rate,\n first_decay_steps))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule with restarts.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.deserialize", "docs": "Instantiates a `LearningRateSchedule` object from a serialized form.\n\n Args:\n config: The serialized form of the `LearningRateSchedule`.\n Dictionary of the form {'class_name': str, 'config': dict}.\n custom_objects: A dictionary mapping class names (or function names) of\n custom (non-Keras) objects to class/functions.\n\n Returns:\n A `LearningRateSchedule` object.\n\n Example:\n\n ```python\n # Configuration for PolynomialDecay\n config = {\n 'class_name': 'PolynomialDecay',\n 'config': {'cycle': False,\n 'decay_steps': 10000,\n 'end_learning_rate': 0.01,\n 'initial_learning_rate': 0.1,\n 'name': None,\n 'power': 0.5}}\n lr_schedule = tf.keras.optimizers.schedules.deserialize(config)\n ```\n ", "desc": "Instantiates a `LearningRateSchedule` object from a serialized form.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.ExponentialDecay", "docs": "A LearningRateSchedule that uses an exponential decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies an exponential decay function\n to an optimizer step, given a provided initial learning rate.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate * decay_rate ^ (step / decay_steps)\n ```\n\n If the argument `staircase` is `True`, then `step / decay_steps` is\n an integer division and the decayed learning rate follows a\n staircase function.\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: When fitting a Keras model, decay every 100000 steps with a base\n of 0.96:\n\n ```python\n initial_learning_rate = 0.1\n lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate,\n decay_steps=100000,\n decay_rate=0.96,\n staircase=True)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an exponential decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay", "docs": "A LearningRateSchedule that uses an inverse time decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies the inverse decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * step / decay_step)\n ```\n\n or, if `staircase` is `True`, as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * floor(step / decay_step))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a Keras model when decaying 1/t with a rate of 0.5:\n\n ```python\n ...\n initial_learning_rate = 0.1\n decay_steps = 1.0\n decay_rate = 0.5\n learning_rate_fn = keras.optimizers.schedules.InverseTimeDecay(\n initial_learning_rate, decay_steps, decay_rate)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an inverse time decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule", "docs": "The learning rate schedule base class.\n\n You can use a learning rate schedule to modulate how the learning rate\n of your optimizer changes over time.\n\n Several built-in learning rate schedules are available, such as\n `tf.keras.optimizers.schedules.ExponentialDecay` or\n `tf.keras.optimizers.schedules.PiecewiseConstantDecay`:\n\n ```python\n lr_schedule = keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate=1e-2,\n decay_steps=10000,\n decay_rate=0.9)\n optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)\n ```\n\n A `LearningRateSchedule` instance can be passed in as the `learning_rate`\n argument of any optimizer.\n\n To implement your own schedule object, you should implement the `__call__`\n method, which takes a `step` argument (scalar integer tensor, the\n current training step count).\n Like for any other Keras object, you can also optionally\n make your object serializable by implementing the `get_config`\n and `from_config` methods.\n\n Example:\n\n ```python\n class MyLRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):\n\n def __init__(self, initial_learning_rate):\n self.initial_learning_rate = initial_learning_rate\n\n def __call__(self, step):\n return self.initial_learning_rate / (step + 1)\n\n optimizer = tf.keras.optimizers.SGD(learning_rate=MyLRSchedule(0.1))\n ```\n ", "desc": "The learning rate schedule base class.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay", "docs": "A LearningRateSchedule that uses a piecewise constant decay schedule.\n\n The function returns a 1-arg callable to compute the piecewise constant\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n\n Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5\n for the next 10000 steps, and 0.1 for any additional steps.\n\n ```python\n step = tf.Variable(0, trainable=False)\n boundaries = [100000, 110000]\n values = [1.0, 0.5, 0.1]\n learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(\n boundaries, values)\n\n # Later, whenever we perform an optimization step, we pass in the step.\n learning_rate = learning_rate_fn(step)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as the boundary tensors.\n\n The output of the 1-arg function that takes the `step`\n is `values[0]` when `step <= boundaries[0]`,\n `values[1]` when `step > boundaries[0]` and `step <= boundaries[1]`, ...,\n and values[-1] when `step > boundaries[-1]`.\n ", "desc": "A LearningRateSchedule that uses a piecewise constant decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.PolynomialDecay", "docs": "A LearningRateSchedule that uses a polynomial decay schedule.\n\n It is commonly observed that a monotonically decreasing learning rate, whose\n degree of change is carefully chosen, results in a better performing model.\n This schedule applies a polynomial decay function to an optimizer step,\n given a provided `initial_learning_rate`, to reach an `end_learning_rate`\n in the given `decay_steps`.\n\n It requires a `step` value to compute the decayed learning rate. You\n can just pass a TensorFlow variable that you increment at each training\n step.\n\n The schedule is a 1-arg callable that produces a decayed learning rate\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n If `cycle` is True then a multiple of `decay_steps` is used, the first one\n that is bigger than `step`.\n\n ```python\n def decayed_learning_rate(step):\n decay_steps = decay_steps * ceil(step / decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a model while decaying from 0.1 to 0.01 in 10000 steps using\n sqrt (i.e. power=0.5):\n\n ```python\n ...\n starter_learning_rate = 0.1\n end_learning_rate = 0.01\n decay_steps = 10000\n learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(\n starter_learning_rate,\n decay_steps,\n end_learning_rate,\n power=0.5)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a polynomial decay schedule.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.schedules.serialize", "docs": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.\n\n Args:\n learning_rate_schedule: The `LearningRateSchedule` object to serialize.\n\n Returns:\n A JSON-serializable dict representing the object's config.\n\n Example:\n\n >>> lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n ... 0.1, decay_steps=100000, decay_rate=0.96, staircase=True)\n >>> tf.keras.optimizers.schedules.serialize(lr_schedule)\n {'class_name': 'ExponentialDecay', 'config': {...}}\n ", "desc": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.serialize", "docs": "Serialize the optimizer configuration to JSON compatible python dict.\n\n The configuration can be used for persistence and reconstruct the `Optimizer`\n instance again.\n\n >>> tf.keras.optimizers.serialize(tf.keras.optimizers.SGD())\n {'class_name': 'SGD', 'config': {'name': 'SGD', 'learning_rate': 0.01,\n 'decay': 0.0, 'momentum': 0.0,\n 'nesterov': False}}\n\n Args:\n optimizer: An `Optimizer` instance to serialize.\n\n Returns:\n Python dict which contains the configuration of the input optimizer.\n ", "desc": "Serialize the optimizer configuration to JSON compatible python dict.", "type": "API"}, {"name": "tf.compat.v1.keras.optimizers.SGD", "docs": "Gradient descent (with momentum) optimizer.\n\n Update rule for parameter `w` with gradient `g` when `momentum` is 0:\n\n ```python\n w = w - learning_rate * g\n ```\n\n Update rule when `momentum` is larger than 0:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + velocity\n ```\n\n When `nesterov=True`, this rule becomes:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + momentum * velocity - learning_rate * g\n ```\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use. The\n learning rate. Defaults to 0.01.\n momentum: float hyperparameter >= 0 that accelerates gradient descent\n in the relevant\n direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient\n descent.\n nesterov: boolean. Whether to apply Nesterov momentum.\n Defaults to `False`.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"SGD\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n >>> var = tf.Variable(1.0)\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> # Step is `- learning_rate * grad`\n >>> var.numpy()\n 0.9\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)\n >>> var = tf.Variable(1.0)\n >>> val0 = var.value()\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> # First step is `- learning_rate * grad`\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val1 = var.value()\n >>> (val0 - val1).numpy()\n 0.1\n >>> # On later steps, step-size increases because of momentum\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val2 = var.value()\n >>> (val1 - val2).numpy()\n 0.18\n\n Reference:\n - For `nesterov=True`, See [Sutskever et al., 2013](\n http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).\n ", "desc": "Gradient descent (with momentum) optimizer.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing", "docs": "Utilities to preprocess data before training.\n\nDeprecated: `tf.keras.preprocessing` APIs do not operate on tensors and are\nnot recommended for new code. Prefer loading data with either\n`tf.keras.utils.text_dataset_from_directory` or\n`tf.keras.utils.image_dataset_from_directory`, and then transforming the output\n`tf.data.Dataset` with preprocessing layers. These approaches will offer\nbetter performance and intergration with the broader Tensorflow ecosystem. For\nmore information, see the tutorials for [loading text](\nhttps://www.tensorflow.org/tutorials/load_data/text), [loading images](\nhttps://www.tensorflow.org/tutorials/load_data/images), and [augmenting images](\nhttps://www.tensorflow.org/tutorials/images/data_augmentation), as well as the\n[preprocessing layer guide](\nhttps://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilities to preprocess data before training.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image", "docs": "Utilies for image preprocessing and augmentation.\n\nDeprecated: `tf.keras.preprocessing.image` APIs do not operate on tensors and\nare not recommended for new code. Prefer loading data with\n`tf.keras.utils.image_dataset_from_directory`, and then transforming the output\n`tf.data.Dataset` with preprocessing layers. For more information, see the\ntutorials for [loading images](\nhttps://www.tensorflow.org/tutorials/load_data/images) and [augmenting images](\nhttps://www.tensorflow.org/tutorials/images/data_augmentation), as well as the\n[preprocessing layer guide](\nhttps://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilies for image preprocessing and augmentation.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.apply_affine_transform", "docs": "Applies an affine transformation specified by the parameters given.\n\n Args:\n x: 3D numpy array - a 2D image with one or more channels.\n theta: Rotation angle in degrees.\n tx: Width shift.\n ty: Heigh shift.\n shear: Shear angle in degrees.\n zx: Zoom in x direction.\n zy: Zoom in y direction\n row_axis: Index of axis for rows (aka Y axis) in the input\n image. Direction: left to right.\n col_axis: Index of axis for columns (aka X axis) in the input\n image. Direction: top to bottom.\n channel_axis: Index of axis for channels in the input image.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n order: int, order of interpolation\n\n Returns:\n The transformed version of the input.\n\n Raises:\n ImportError: if SciPy is not available.\n ", "desc": "Applies an affine transformation specified by the parameters given.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.apply_brightness_shift", "docs": "Performs a brightness shift.\n\n Args:\n x: Input tensor. Must be 3D.\n brightness: Float. The new brightness value.\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Default: True.\n\n Returns:\n Numpy image tensor.\n\n Raises:\n ImportError: if PIL is not available.\n ", "desc": "Performs a brightness shift.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.apply_channel_shift", "docs": "Performs a channel shift.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity: Transformation intensity.\n channel_axis: Index of axis for channels in the input tensor.\n\n Returns:\n Numpy image tensor.\n ", "desc": "Performs a channel shift.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.array_to_img", "docs": "Converts a 3D Numpy array to a PIL Image instance.\n\n Usage:\n\n ```python\n from PIL import Image\n img = np.random.random(size=(100, 100, 3))\n pil_img = tf.keras.preprocessing.image.array_to_img(img)\n ```\n\n\n Args:\n x: Input data, in any form that can be converted to a Numpy array.\n data_format: Image data format, can be either `\"channels_first\"` or\n `\"channels_last\"`. Defaults to `None`, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to `\"channels_last\"`).\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Defaults to `True`.\n dtype: Dtype to use. Default to `None`, in which case the global setting\n `tf.keras.backend.floatx()` is used (unless you changed it, it defaults\n to `\"float32\"`)\n\n Returns:\n A PIL Image instance.\n\n Raises:\n ImportError: if PIL is not available.\n ValueError: if invalid `x` or `data_format` is passed.\n ", "desc": "Converts a 3D Numpy array to a PIL Image instance.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.DirectoryIterator", "docs": "Iterator capable of reading images from a directory on disk.\n\n Deprecated: `tf.keras.preprocessing.image.DirectoryIterator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n directory: Path to the directory to read images from. Each subdirectory in\n this directory will be considered to contain images from one class, or\n alternatively you could specify class subdirectories via the `classes`\n argument.\n image_data_generator: Instance of `ImageDataGenerator` to use for random\n transformations and normalization.\n target_size: tuple of integers, dimensions to resize input images to.\n color_mode: One of `\"rgb\"`, `\"rgba\"`, `\"grayscale\"`. Color mode to read\n images.\n classes: Optional list of strings, names of subdirectories containing\n images from each class (e.g. `[\"dogs\", \"cats\"]`). It will be computed\n automatically if not set.\n class_mode: Mode for yielding the targets:\n - `\"binary\"`: binary targets (if there are only two classes),\n - `\"categorical\"`: categorical targets,\n - `\"sparse\"`: integer targets,\n - `\"input\"`: targets are images identical to input images (mainly used\n to work with autoencoders),\n - `None`: no targets get yielded (only input images are yielded).\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n seed: Random seed for data shuffling.\n data_format: String, one of `channels_first`, `channels_last`.\n save_to_dir: Optional directory where to save the pictures being yielded,\n in a viewable format. This is useful for visualizing the random\n transformations being applied, for debugging purposes.\n save_prefix: String prefix to use for saving sample images (if\n `save_to_dir` is set).\n save_format: Format to use for saving sample images (if `save_to_dir` is\n set).\n subset: Subset of data (`\"training\"` or `\"validation\"`) if\n validation_split is set in ImageDataGenerator.\n interpolation: Interpolation method used to resample the image if the\n target size is different from that of the loaded image. Supported\n methods are \"nearest\", \"bilinear\", and \"bicubic\". If PIL version 1.1.3\n or newer is installed, \"lanczos\" is also supported. If PIL version 3.4.0\n or newer is installed, \"box\" and \"hamming\" are also supported. By\n default, \"nearest\" is used.\n keep_aspect_ratio: Boolean, whether to resize images to a target size\n without aspect ratio distortion. The image is cropped in the center\n with target aspect ratio before resizing.\n dtype: Dtype to use for generated arrays.\n ", "desc": "Iterator capable of reading images from a directory on disk.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.ImageDataGenerator", "docs": "Generate batches of tensor image data with real-time data augmentation.\n\n Deprecated: `tf.keras.preprocessing.image.ImageDataGenerator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n The data will be looped over (in batches).\n\n Args:\n featurewise_center: Boolean. Set input mean to 0 over the dataset,\n feature-wise.\n samplewise_center: Boolean. Set each sample mean to 0.\n featurewise_std_normalization: Boolean. Divide inputs by std of the\n dataset, feature-wise.\n samplewise_std_normalization: Boolean. Divide each input by its std.\n zca_epsilon: epsilon for ZCA whitening. Default is 1e-6.\n zca_whitening: Boolean. Apply ZCA whitening.\n rotation_range: Int. Degree range for random rotations.\n width_shift_range: Float, 1-D array-like or int\n - float: fraction of total width, if < 1, or pixels if >= 1.\n - 1-D array-like: random elements from the array.\n - int: integer number of pixels from interval `(-width_shift_range,\n +width_shift_range)` - With `width_shift_range=2` possible values\n are integers `[-1, 0, +1]`, same as with `width_shift_range=[-1, 0,\n +1]`, while with `width_shift_range=1.0` possible values are floats\n in the interval [-1.0, +1.0).\n height_shift_range: Float, 1-D array-like or int\n - float: fraction of total height, if < 1, or pixels if >= 1.\n - 1-D array-like: random elements from the array.\n - int: integer number of pixels from interval `(-height_shift_range,\n +height_shift_range)` - With `height_shift_range=2` possible values\n are integers `[-1, 0, +1]`, same as with `height_shift_range=[-1, 0,\n +1]`, while with `height_shift_range=1.0` possible values are floats\n in the interval [-1.0, +1.0).\n brightness_range: Tuple or list of two floats. Range for picking a\n brightness shift value from.\n shear_range: Float. Shear Intensity (Shear angle in counter-clockwise\n direction in degrees)\n zoom_range: Float or [lower, upper]. Range for random zoom. If a float,\n `[lower, upper] = [1-zoom_range, 1+zoom_range]`.\n channel_shift_range: Float. Range for random channel shifts.\n fill_mode: One of {\"constant\", \"nearest\", \"reflect\" or \"wrap\"}. Default is\n 'nearest'. Points outside the boundaries of the input are filled\n according to the given mode:\n - 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k)\n - 'nearest': aaaaaaaa|abcd|dddddddd\n - 'reflect': abcddcba|abcd|dcbaabcd\n - 'wrap': abcdabcd|abcd|abcdabcd\n cval: Float or Int. Value used for points outside the boundaries when\n `fill_mode = \"constant\"`.\n horizontal_flip: Boolean. Randomly flip inputs horizontally.\n vertical_flip: Boolean. Randomly flip inputs vertically.\n rescale: rescaling factor. Defaults to None. If None or 0, no rescaling is\n applied, otherwise we multiply the data by the value provided (after\n applying all other transformations).\n preprocessing_function: function that will be applied on each input. The\n function will run after the image is resized and augmented.\n The function should take one argument: one image (Numpy tensor with\n rank 3), and should output a Numpy tensor with the same shape.\n data_format: Image data format, either \"channels_first\" or\n \"channels_last\". \"channels_last\" mode means that the images should have\n shape `(samples, height, width, channels)`, \"channels_first\" mode means\n that the images should have shape `(samples, channels, height, width)`.\n It defaults to the `image_data_format` value found in your Keras config\n file at `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n validation_split: Float. Fraction of images reserved for validation\n (strictly between 0 and 1).\n dtype: Dtype to use for the generated arrays.\n\n Raises:\n ValueError: If the value of the argument, `data_format` is other than\n `\"channels_last\"` or `\"channels_first\"`.\n ValueError: If the value of the argument, `validation_split` > 1\n or `validation_split` < 0.\n\n Examples:\n\n Example of using `.flow(x, y)`:\n\n ```python\n (x_train, y_train), (x_test, y_test) = cifar10.load_data()\n y_train = utils.to_categorical(y_train, num_classes)\n y_test = utils.to_categorical(y_test, num_classes)\n datagen = ImageDataGenerator(\n featurewise_center=True,\n featurewise_std_normalization=True,\n rotation_range=20,\n width_shift_range=0.2,\n height_shift_range=0.2,\n horizontal_flip=True,\n validation_split=0.2)\n # compute quantities required for featurewise normalization\n # (std, mean, and principal components if ZCA whitening is applied)\n datagen.fit(x_train)\n # fits the model on batches with real-time data augmentation:\n model.fit(datagen.flow(x_train, y_train, batch_size=32,\n subset='training'),\n validation_data=datagen.flow(x_train, y_train,\n batch_size=8, subset='validation'),\n steps_per_epoch=len(x_train) / 32, epochs=epochs)\n # here's a more \"manual\" example\n for e in range(epochs):\n print('Epoch', e)\n batches = 0\n for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):\n model.fit(x_batch, y_batch)\n batches += 1\n if batches >= len(x_train) / 32:\n # we need to break the loop by hand because\n # the generator loops indefinitely\n break\n ```\n\n Example of using `.flow_from_directory(directory)`:\n\n ```python\n train_datagen = ImageDataGenerator(\n rescale=1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)\n test_datagen = ImageDataGenerator(rescale=1./255)\n train_generator = train_datagen.flow_from_directory(\n 'data/train',\n target_size=(150, 150),\n batch_size=32,\n class_mode='binary')\n validation_generator = test_datagen.flow_from_directory(\n 'data/validation',\n target_size=(150, 150),\n batch_size=32,\n class_mode='binary')\n model.fit(\n train_generator,\n steps_per_epoch=2000,\n epochs=50,\n validation_data=validation_generator,\n validation_steps=800)\n ```\n\n Example of transforming images and masks together.\n\n ```python\n # we create two instances with the same arguments\n data_gen_args = dict(featurewise_center=True,\n featurewise_std_normalization=True,\n rotation_range=90,\n width_shift_range=0.1,\n height_shift_range=0.1,\n zoom_range=0.2)\n image_datagen = ImageDataGenerator(**data_gen_args)\n mask_datagen = ImageDataGenerator(**data_gen_args)\n # Provide the same seed and keyword arguments to the fit and flow methods\n seed = 1\n image_datagen.fit(images, augment=True, seed=seed)\n mask_datagen.fit(masks, augment=True, seed=seed)\n image_generator = image_datagen.flow_from_directory(\n 'data/images',\n class_mode=None,\n seed=seed)\n mask_generator = mask_datagen.flow_from_directory(\n 'data/masks',\n class_mode=None,\n seed=seed)\n # combine generators into one which yields image and masks\n train_generator = zip(image_generator, mask_generator)\n model.fit(\n train_generator,\n steps_per_epoch=2000,\n epochs=50)\n ```\n ", "desc": "Generate batches of tensor image data with real-time data augmentation.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.img_to_array", "docs": "Converts a PIL Image instance to a Numpy array.\n\n Usage:\n\n ```python\n from PIL import Image\n img_data = np.random.random(size=(100, 100, 3))\n img = tf.keras.preprocessing.image.array_to_img(img_data)\n array = tf.keras.preprocessing.image.img_to_array(img)\n ```\n\n\n Args:\n img: Input PIL Image instance.\n data_format: Image data format, can be either `\"channels_first\"` or\n `\"channels_last\"`. Defaults to `None`, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to `\"channels_last\"`).\n dtype: Dtype to use. Default to `None`, in which case the global setting\n `tf.keras.backend.floatx()` is used (unless you changed it, it defaults\n to `\"float32\"`).\n\n Returns:\n A 3D Numpy array.\n\n Raises:\n ValueError: if invalid `img` or `data_format` is passed.\n ", "desc": "Converts a PIL Image instance to a Numpy array.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.Iterator", "docs": "Base class for image data iterators.\n\n Deprecated: `tf.keras.preprocessing.image.Iterator` is not recommended for\n new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Every `Iterator` must implement the `_get_batches_of_transformed_samples`\n method.\n\n Args:\n n: Integer, total number of samples in the dataset to loop over.\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n seed: Random seeding for data shuffling.\n ", "desc": "Base class for image data iterators.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.load_img", "docs": "Loads an image into PIL format.\n\n Usage:\n\n ```\n image = tf.keras.preprocessing.image.load_img(image_path)\n input_arr = tf.keras.preprocessing.image.img_to_array(image)\n input_arr = np.array([input_arr]) # Convert single image to a batch.\n predictions = model.predict(input_arr)\n ```\n\n Args:\n path: Path to image file.\n grayscale: DEPRECATED use `color_mode=\"grayscale\"`.\n color_mode: One of `\"grayscale\"`, `\"rgb\"`, `\"rgba\"`. Default: `\"rgb\"`.\n The desired image format.\n target_size: Either `None` (default to original size) or tuple of ints\n `(img_height, img_width)`.\n interpolation: Interpolation method used to resample the image if the\n target size is different from that of the loaded image. Supported\n methods are `\"nearest\"`, `\"bilinear\"`, and `\"bicubic\"`. If PIL version\n 1.1.3 or newer is installed, `\"lanczos\"` is also supported. If PIL\n version 3.4.0 or newer is installed, `\"box\"` and `\"hamming\"` are also\n supported. By default, `\"nearest\"` is used.\n keep_aspect_ratio: Boolean, whether to resize images to a target\n size without aspect ratio distortion. The image is cropped in\n the center with target aspect ratio before resizing.\n\n Returns:\n A PIL Image instance.\n\n Raises:\n ImportError: if PIL is not available.\n ValueError: if interpolation method is not supported.\n ", "desc": "Loads an image into PIL format.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator", "docs": "Iterator yielding data from a Numpy array.\n\n Deprecated: `tf.keras.preprocessing.image.NumpyArrayIterator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Numpy array of input data or tuple. If tuple, the second elements is\n either another numpy array or a list of numpy arrays, each of which gets\n passed through as an output without any modifications.\n y: Numpy array of targets data.\n image_data_generator: Instance of `ImageDataGenerator` to use for random\n transformations and normalization.\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n sample_weight: Numpy array of sample weights.\n seed: Random seed for data shuffling.\n data_format: String, one of `channels_first`, `channels_last`.\n save_to_dir: Optional directory where to save the pictures being yielded,\n in a viewable format. This is useful for visualizing the random\n transformations being applied, for debugging purposes.\n save_prefix: String prefix to use for saving sample images (if\n `save_to_dir` is set).\n save_format: Format to use for saving sample images (if `save_to_dir` is\n set).\n subset: Subset of data (`\"training\"` or `\"validation\"`) if\n validation_split is set in ImageDataGenerator.\n ignore_class_split: Boolean (default: False), ignore difference\n in number of classes in labels across train and validation\n split (useful for non-classification tasks)\n dtype: Dtype to use for the generated arrays.\n ", "desc": "Iterator yielding data from a Numpy array.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_brightness", "docs": "Performs a random brightness shift.\n\n Deprecated: `tf.keras.preprocessing.image.random_brightness` does not operate\n on tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomBrightness` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n brightness_range: Tuple of floats; brightness range.\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Default: True.\n\n Returns:\n Numpy image tensor.\n\n Raises:\n ValueError if `brightness_range` isn't a tuple.\n ", "desc": "Performs a random brightness shift.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_channel_shift", "docs": "Performs a random channel shift.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity_range: Transformation intensity.\n channel_axis: Index of axis for channels in the input tensor.\n\n Returns:\n Numpy image tensor.\n ", "desc": "Performs a random channel shift.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_rotation", "docs": "Performs a random rotation of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_rotation` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomRotation` which provides equivalent functionality as a\n preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n rg: Rotation range, in degrees.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Rotated Numpy image tensor.\n ", "desc": "Performs a random rotation of a Numpy image tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_shear", "docs": "Performs a random spatial shear of a Numpy image tensor.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity: Transformation intensity in degrees.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Sheared Numpy image tensor.\n ", "desc": "Performs a random spatial shear of a Numpy image tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_shift", "docs": "Performs a random spatial shift of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_shift` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomTranslation` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n wrg: Width shift range, as a float fraction of the width.\n hrg: Height shift range, as a float fraction of the height.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Shifted Numpy image tensor.\n ", "desc": "Performs a random spatial shift of a Numpy image tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.random_zoom", "docs": "Performs a random spatial zoom of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_zoom` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomZoom` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n zoom_range: Tuple of floats; zoom range for width and height.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Zoomed Numpy image tensor.\n\n Raises:\n ValueError: if `zoom_range` isn't a tuple.\n ", "desc": "Performs a random spatial zoom of a Numpy image tensor.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.image.save_img", "docs": "Saves an image stored as a Numpy array to a path or file object.\n\n Args:\n path: Path or file object.\n x: Numpy array.\n data_format: Image data format, either `\"channels_first\"` or\n `\"channels_last\"`.\n file_format: Optional file format override. If omitted, the format to use\n is determined from the filename extension. If a file object was used\n instead of a filename, this parameter should always be used.\n scale: Whether to rescale image values to be within `[0, 255]`.\n **kwargs: Additional keyword arguments passed to `PIL.Image.save()`.\n ", "desc": "Saves an image stored as a Numpy array to a path or file object.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.sequence", "docs": "Utilities for preprocessing sequence data.\n\nDeprecated: `tf.keras.preprocessing.sequence` APIs are not recommended for new\ncode. Prefer `tf.keras.utils.timeseries_dataset_from_array` and\nthe `tf.data` APIs which provide a much more flexible mechanisms for dealing\nwith sequences. See the [tf.data guide](https://www.tensorflow.org/guide/data)\nfor more details.\n\n", "desc": "Utilities for preprocessing sequence data.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.sequence.make_sampling_table", "docs": "Generates a word rank-based probabilistic sampling table.\n\n Used for generating the `sampling_table` argument for `skipgrams`.\n `sampling_table[i]` is the probability of sampling\n the word i-th most common word in a dataset\n (more common words should be sampled less frequently, for balance).\n\n The sampling probabilities are generated according\n to the sampling distribution used in word2vec:\n\n ```\n p(word) = (min(1, sqrt(word_frequency / sampling_factor) /\n (word_frequency / sampling_factor)))\n ```\n\n We assume that the word frequencies follow Zipf's law (s=1) to derive\n a numerical approximation of frequency(rank):\n\n `frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`\n where `gamma` is the Euler-Mascheroni constant.\n\n Args:\n size: Int, number of possible words to sample.\n sampling_factor: The sampling factor in the word2vec formula.\n\n Returns:\n A 1D Numpy array of length `size` where the ith entry\n is the probability that a word of rank i should be sampled.\n ", "desc": "Generates a word rank-based probabilistic sampling table.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.sequence.pad_sequences", "docs": "Pads sequences to the same length.\n\n This function transforms a list (of length `num_samples`)\n of sequences (lists of integers)\n into a 2D Numpy array of shape `(num_samples, num_timesteps)`.\n `num_timesteps` is either the `maxlen` argument if provided,\n or the length of the longest sequence in the list.\n\n Sequences that are shorter than `num_timesteps`\n are padded with `value` until they are `num_timesteps` long.\n\n Sequences longer than `num_timesteps` are truncated\n so that they fit the desired length.\n\n The position where padding or truncation happens is determined by\n the arguments `padding` and `truncating`, respectively.\n Pre-padding or removing values from the beginning of the sequence is the\n default.\n\n >>> sequence = [[1], [2, 3], [4, 5, 6]]\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence)\n array([[0, 0, 1],\n [0, 2, 3],\n [4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1)\n array([[-1, -1, 1],\n [-1, 2, 3],\n [ 4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post')\n array([[1, 0, 0],\n [2, 3, 0],\n [4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2)\n array([[0, 1],\n [2, 3],\n [5, 6]], dtype=int32)\n\n Args:\n sequences: List of sequences (each sequence is a list of integers).\n maxlen: Optional Int, maximum length of all sequences. If not provided,\n sequences will be padded to the length of the longest individual\n sequence.\n dtype: (Optional, defaults to `\"int32\"`). Type of the output sequences.\n To pad sequences with variable length strings, you can use `object`.\n padding: String, \"pre\" or \"post\" (optional, defaults to `\"pre\"`):\n pad either before or after each sequence.\n truncating: String, \"pre\" or \"post\" (optional, defaults to `\"pre\"`):\n remove values from sequences larger than\n `maxlen`, either at the beginning or at the end of the sequences.\n value: Float or String, padding value. (Optional, defaults to 0.)\n\n Returns:\n Numpy array with shape `(len(sequences), maxlen)`\n\n Raises:\n ValueError: In case of invalid values for `truncating` or `padding`,\n or in case of invalid shape for a `sequences` entry.\n ", "desc": "Pads sequences to the same length.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.sequence.skipgrams", "docs": "Generates skipgram word pairs.\n\n This function transforms a sequence of word indexes (list of integers)\n into tuples of words of the form:\n\n - (word, word in the same window), with label 1 (positive samples).\n - (word, random word from the vocabulary), with label 0 (negative samples).\n\n Read more about Skipgram in this gnomic paper by Mikolov et al.:\n [Efficient Estimation of Word Representations in\n Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)\n\n Args:\n sequence: A word sequence (sentence), encoded as a list\n of word indices (integers). If using a `sampling_table`,\n word indices are expected to match the rank\n of the words in a reference dataset (e.g. 10 would encode\n the 10-th most frequently occurring token).\n Note that index 0 is expected to be a non-word and will be skipped.\n vocabulary_size: Int, maximum possible word index + 1\n window_size: Int, size of sampling windows (technically half-window).\n The window of a word `w_i` will be\n `[i - window_size, i + window_size+1]`.\n negative_samples: Float >= 0. 0 for no negative (i.e. random) samples.\n 1 for same number as positive samples.\n shuffle: Whether to shuffle the word couples before returning them.\n categorical: bool. if False, labels will be\n integers (eg. `[0, 1, 1 .. ]`),\n if `True`, labels will be categorical, e.g.\n `[[1,0],[0,1],[0,1] .. ]`.\n sampling_table: 1D array of size `vocabulary_size` where the entry i\n encodes the probability to sample a word of rank i.\n seed: Random seed.\n\n Returns:\n couples, labels: where `couples` are int pairs and\n `labels` are either 0 or 1.\n\n Note:\n By convention, index 0 in the vocabulary is\n a non-word and will be skipped.\n ", "desc": "Generates skipgram word pairs.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator", "docs": "Utility class for generating batches of temporal data.\n\n Deprecated: `tf.keras.preprocessing.sequence.TimeseriesGenerator` does not\n operate on tensors and is not recommended for new code. Prefer using a\n `tf.data.Dataset` which provides a more efficient and flexible mechanism for\n batching, shuffling, and windowing input. See the\n [tf.data guide](https://www.tensorflow.org/guide/data) for more details.\n\n This class takes in a sequence of data-points gathered at\n equal intervals, along with time series parameters such as\n stride, length of history, etc., to produce batches for\n training/validation.\n\n Arguments:\n data: Indexable generator (such as list or Numpy array)\n containing consecutive data points (timesteps).\n The data should be at 2D, and axis 0 is expected\n to be the time dimension.\n targets: Targets corresponding to timesteps in `data`.\n It should have same length as `data`.\n length: Length of the output sequences (in number of timesteps).\n sampling_rate: Period between successive individual timesteps\n within sequences. For rate `r`, timesteps\n `data[i]`, `data[i-r]`, ... `data[i - length]`\n are used for create a sample sequence.\n stride: Period between successive output sequences.\n For stride `s`, consecutive output samples would\n be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc.\n start_index: Data points earlier than `start_index` will not be used\n in the output sequences. This is useful to reserve part of the\n data for test or validation.\n end_index: Data points later than `end_index` will not be used\n in the output sequences. This is useful to reserve part of the\n data for test or validation.\n shuffle: Whether to shuffle output samples,\n or instead draw them in chronological order.\n reverse: Boolean: if `true`, timesteps in each output sample will be\n in reverse chronological order.\n batch_size: Number of timeseries samples in each batch\n (except maybe the last one).\n\n Returns:\n A [Sequence](\n https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence)\n instance.\n\n Examples:\n ```python\n from keras.preprocessing.sequence import TimeseriesGenerator\n import numpy as np\n data = np.array([[i] for i in range(50)])\n targets = np.array([[i] for i in range(50)])\n data_gen = TimeseriesGenerator(data, targets,\n length=10, sampling_rate=2,\n batch_size=2)\n assert len(data_gen) == 20\n batch_0 = data_gen[0]\n x, y = batch_0\n assert np.array_equal(x,\n np.array([[[0], [2], [4], [6], [8]],\n [[1], [3], [5], [7], [9]]]))\n assert np.array_equal(y,\n np.array([[10], [11]]))\n ```\n ", "desc": "Utility class for generating batches of temporal data.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text", "docs": "Utilities for text input preprocessing.\n\nDeprecated: `tf.keras.preprocessing.text` APIs are not recommended for new code.\nPrefer `tf.keras.utils.text_dataset_from_directory` and\n`tf.keras.layers.TextVectorization` which provide a more efficient approach\nfor preprocessing text input. For an introduction to these APIs, see\nthe [text loading tutorial]\n(https://www.tensorflow.org/tutorials/load_data/text)\nand [preprocessing layer guide]\n(https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilities for text input preprocessing.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text.hashing_trick", "docs": "Converts a text to a sequence of indexes in a fixed-size hashing space.\n\n Deprecated: `tf.keras.text.preprocessing.hashing_trick` does not operate on\n tensors and is not recommended for new code. Prefer `tf.keras.layers.Hashing`\n which provides equivalent functionality through a layer which accepts\n `tf.Tensor` input. See the [preprocessing layer guide]\n (https://www.tensorflow.org/guide/keras/preprocessing_layers)\n for an overview of preprocessing layers.\n\n Args:\n text: Input text (string).\n n: Dimension of the hashing space.\n hash_function: defaults to python `hash` function, can be 'md5' or\n any function that takes in input a string and returns a int.\n Note that 'hash' is not a stable hashing function, so\n it is not consistent across different runs, while 'md5'\n is a stable hashing function.\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default: ``!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\\\t\\\\n``,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to set the text to lowercase.\n split: str. Separator for word splitting.\n analyzer: function. Custom analyzer to split the text\n\n Returns:\n A list of integer word indices (unicity non-guaranteed).\n `0` is a reserved index that won't be assigned to any word.\n Two or more words may be assigned to the same index, due to possible\n collisions by the hashing function.\n The [probability](\n https://en.wikipedia.org/wiki/Birthday_problem#Probability_table)\n of a collision is in relation to the dimension of the hashing space and\n the number of distinct objects.\n ", "desc": "Converts a text to a sequence of indexes in a fixed-size hashing space.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text.one_hot", "docs": "One-hot encodes a text into a list of word indexes of size `n`.\n\n Deprecated: `tf.keras.text.preprocessing.one_hot` does not operate on tensors\n and is not recommended for new code. Prefer `tf.keras.layers.Hashing` with\n `output_mode='one_hot'` which provides equivalent functionality through a\n layer which accepts `tf.Tensor` input. See the [preprocessing layer guide]\n (https://www.tensorflow.org/guide/keras/preprocessing_layers)\n for an overview of preprocessing layers.\n\n This function receives as input a string of text and returns a\n list of encoded integers each corresponding to a word (or token)\n in the given input string.\n\n Args:\n input_text: Input text (string).\n n: int. Size of vocabulary.\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default:\n ```\n '!\"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\\t\\n\n ```,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to set the text to lowercase.\n split: str. Separator for word splitting.\n analyzer: function. Custom analyzer to split the text\n\n Returns:\n List of integers in `[1, n]`. Each integer encodes a word\n (unicity non-guaranteed).\n ", "desc": "One-hot encodes a text into a list of word indexes of size `n`.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text.text_to_word_sequence", "docs": "Converts a text to a sequence of words (or tokens).\n\n Deprecated: `tf.keras.preprocessing.text.text_to_word_sequence` does not\n operate on tensors and is not recommended for new code. Prefer\n `tf.strings.regex_replace` and `tf.strings.split` which provide equivalent\n functionality and accept `tf.Tensor` input. For an overview of text handling\n in Tensorflow, see the [text loading tutorial]\n (https://www.tensorflow.org/tutorials/load_data/text).\n\n This function transforms a string of text into a list of words\n while ignoring `filters` which include punctuations by default.\n\n >>> sample_text = 'This is a sample sentence.'\n >>> tf.keras.preprocessing.text.text_to_word_sequence(sample_text)\n ['this', 'is', 'a', 'sample', 'sentence']\n\n Args:\n input_text: Input text (string).\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default: ``'!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\\\t\\\\n'``,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to convert the input to lowercase.\n split: str. Separator for word splitting.\n\n Returns:\n A list of words (or tokens).\n ", "desc": "Converts a text to a sequence of words (or tokens).", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text.Tokenizer", "docs": "Text tokenization utility class.\n\n Deprecated: `tf.keras.preprocessing.text.Tokenizer` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.TextVectorization` which provides equivalent functionality\n through a layer which accepts `tf.Tensor` input. See the\n [text loading tutorial](https://www.tensorflow.org/tutorials/load_data/text)\n for an overview of the layer and text handling in tensorflow.\n\n This class allows to vectorize a text corpus, by turning each\n text into either a sequence of integers (each integer being the index\n of a token in a dictionary) or into a vector where the coefficient\n for each token could be binary, based on word count, based on tf-idf...\n\n By default, all punctuation is removed, turning the texts into\n space-separated sequences of words\n (words maybe include the `'` character). These sequences are then\n split into lists of tokens. They will then be indexed or vectorized.\n\n `0` is a reserved index that won't be assigned to any word.\n\n Args:\n num_words: the maximum number of words to keep, based\n on word frequency. Only the most common `num_words-1` words will\n be kept.\n filters: a string where each element is a character that will be\n filtered from the texts. The default is all punctuation, plus\n tabs and line breaks, minus the `'` character.\n lower: boolean. Whether to convert the texts to lowercase.\n split: str. Separator for word splitting.\n char_level: if True, every character will be treated as a token.\n oov_token: if given, it will be added to word_index and used to\n replace out-of-vocabulary words during text_to_sequence calls\n analyzer: function. Custom analyzer to split the text.\n The default analyzer is text_to_word_sequence\n ", "desc": "Text tokenization utility class.", "type": "API"}, {"name": "tf.compat.v1.keras.preprocessing.text.tokenizer_from_json", "docs": "Parses a JSON tokenizer configuration and returns a tokenizer instance.\n\n Deprecated: `tf.keras.preprocessing.text.Tokenizer` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.TextVectorization` which provides equivalent functionality\n through a layer which accepts `tf.Tensor` input. See the\n [text loading tutorial](https://www.tensorflow.org/tutorials/load_data/text)\n for an overview of the layer and text handling in tensorflow.\n\n Args:\n json_string: JSON string encoding a tokenizer configuration.\n\n Returns:\n A Keras Tokenizer instance\n ", "desc": "Parses a JSON tokenizer configuration and returns a tokenizer instance.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers", "docs": "Built-in regularizers.\n", "desc": "Built-in regularizers.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.deserialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.get", "docs": "Retrieve a regularizer instance from a config or identifier.", "desc": "Retrieve a regularizer instance from a config or identifier.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.L1", "docs": "A regularizer that applies a L1 regularization penalty.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n L1 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l1')\n\n In this case, the default value used is `l1=0.01`.\n\n Arguments:\n l1: Float; L1 regularization factor.\n ", "desc": "A regularizer that applies a L1 regularization penalty.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.l1_l2", "docs": "Create a regularizer that applies both L1 and L2 penalties.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n The L2 regularization penalty is computed as:\n `loss = l2 * reduce_sum(square(x))`\n\n Args:\n l1: Float; L1 regularization factor.\n l2: Float; L2 regularization factor.\n\n Returns:\n An L1L2 Regularizer with the given regularization factors.\n ", "desc": "Create a regularizer that applies both L1 and L2 penalties.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.L1L2", "docs": "A regularizer that applies both L1 and L2 regularization penalties.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n The L2 regularization penalty is computed as\n `loss = l2 * reduce_sum(square(x))`\n\n L1L2 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l1_l2')\n\n In this case, the default values used are `l1=0.01` and `l2=0.01`.\n\n Arguments:\n l1: Float; L1 regularization factor.\n l2: Float; L2 regularization factor.\n ", "desc": "A regularizer that applies both L1 and L2 regularization penalties.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.L2", "docs": "A regularizer that applies a L2 regularization penalty.\n\n The L2 regularization penalty is computed as:\n `loss = l2 * reduce_sum(square(x))`\n\n L2 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l2')\n\n In this case, the default value used is `l2=0.01`.\n\n Arguments:\n l2: Float; L2 regularization factor.\n ", "desc": "A regularizer that applies a L2 regularization penalty.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.Regularizer", "docs": "Regularizer base class.\n\n Regularizers allow you to apply penalties on layer parameters or layer\n activity during optimization. These penalties are summed into the loss\n function that the network optimizes.\n\n Regularization penalties are applied on a per-layer basis. The exact API will\n depend on the layer, but many layers (e.g. `Dense`, `Conv1D`, `Conv2D` and\n `Conv3D`) have a unified API.\n\n These layers expose 3 keyword arguments:\n\n - `kernel_regularizer`: Regularizer to apply a penalty on the layer's kernel\n - `bias_regularizer`: Regularizer to apply a penalty on the layer's bias\n - `activity_regularizer`: Regularizer to apply a penalty on the layer's output\n\n All layers (including custom layers) expose `activity_regularizer` as a\n settable property, whether or not it is in the constructor arguments.\n\n The value returned by the `activity_regularizer` is divided by the input\n batch size so that the relative weighting between the weight regularizers and\n the activity regularizers does not change with the batch size.\n\n You can access a layer's regularization penalties by calling `layer.losses`\n after calling the layer on inputs.\n\n ## Example\n\n >>> layer = tf.keras.layers.Dense(\n ... 5, input_dim=5,\n ... kernel_initializer='ones',\n ... kernel_regularizer=tf.keras.regularizers.L1(0.01),\n ... activity_regularizer=tf.keras.regularizers.L2(0.01))\n >>> tensor = tf.ones(shape=(5, 5)) * 2.0\n >>> out = layer(tensor)\n\n >>> # The kernel regularization term is 0.25\n >>> # The activity regularization term (after dividing by the batch size) is 5\n >>> tf.math.reduce_sum(layer.losses)\n \n\n ## Available penalties\n\n ```python\n tf.keras.regularizers.L1(0.3) # L1 Regularization Penalty\n tf.keras.regularizers.L2(0.1) # L2 Regularization Penalty\n tf.keras.regularizers.L1L2(l1=0.01, l2=0.01) # L1 + L2 penalties\n ```\n\n ## Directly calling a regularizer\n\n Compute a regularization loss on a tensor by directly calling a regularizer\n as if it is a one-argument function.\n\n E.g.\n >>> regularizer = tf.keras.regularizers.L2(2.)\n >>> tensor = tf.ones(shape=(5, 5))\n >>> regularizer(tensor)\n \n\n\n ## Developing new regularizers\n\n Any function that takes in a weight matrix and returns a scalar\n tensor can be used as a regularizer, e.g.:\n\n >>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l1')\n ... def l1_reg(weight_matrix):\n ... return 0.01 * tf.math.reduce_sum(tf.math.abs(weight_matrix))\n ...\n >>> layer = tf.keras.layers.Dense(5, input_dim=5,\n ... kernel_initializer='ones', kernel_regularizer=l1_reg)\n >>> tensor = tf.ones(shape=(5, 5))\n >>> out = layer(tensor)\n >>> layer.losses\n []\n\n Alternatively, you can write your custom regularizers in an\n object-oriented way by extending this regularizer base class, e.g.:\n\n >>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l2')\n ... class L2Regularizer(tf.keras.regularizers.Regularizer):\n ... def __init__(self, l2=0.): # pylint: disable=redefined-outer-name\n ... self.l2 = l2\n ...\n ... def __call__(self, x):\n ... return self.l2 * tf.math.reduce_sum(tf.math.square(x))\n ...\n ... def get_config(self):\n ... return {'l2': float(self.l2)}\n ...\n >>> layer = tf.keras.layers.Dense(\n ... 5, input_dim=5, kernel_initializer='ones',\n ... kernel_regularizer=L2Regularizer(l2=0.5))\n\n >>> tensor = tf.ones(shape=(5, 5))\n >>> out = layer(tensor)\n >>> layer.losses\n []\n\n ### A note on serialization and deserialization:\n\n Registering the regularizers as serializable is optional if you are just\n training and executing models, exporting to and from SavedModels, or saving\n and loading weight checkpoints.\n\n Registration is required for saving and\n loading models to HDF5 format, Keras model cloning, some visualization\n utilities, and exporting models to and from JSON. If using this functionality,\n you must make sure any python process running your model has also defined\n and registered your custom regularizer.\n ", "desc": "Regularizer base class.", "type": "API"}, {"name": "tf.compat.v1.keras.regularizers.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.keras.Sequential", "docs": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.\n\n `Sequential` provides training and inference features on this model.\n\n Examples:\n\n ```python\n # Optionally, the first layer can receive an `input_shape` argument:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n # Afterwards, we do automatic shape inference:\n model.add(tf.keras.layers.Dense(4))\n\n # This is identical to the following:\n model = tf.keras.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(8))\n\n # Note that you can also omit the `input_shape` argument.\n # In that case the model doesn't have any weights until the first call\n # to a training/evaluation method (since it isn't yet built):\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n # model.weights not created yet\n\n # Whereas if you specify the input shape, the model gets built\n # continuously as you are adding layers:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n model.add(tf.keras.layers.Dense(4))\n len(model.weights)\n # Returns \"4\"\n\n # When using the delayed-build pattern (no input shape specified), you can\n # choose to manually build your model by calling\n # `build(batch_input_shape)`:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n model.build((None, 16))\n len(model.weights)\n # Returns \"4\"\n\n # Note that when using the delayed-build pattern (no input shape specified),\n # the model gets built the first time you call `fit`, `eval`, or `predict`,\n # or the first time you call the model on some input data.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(1))\n model.compile(optimizer='sgd', loss='mse')\n # This builds the model for the first time:\n model.fit(x, y, batch_size=32, epochs=10)\n ```\n ", "desc": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.", "type": "API"}, {"name": "tf.compat.v1.keras.utils", "docs": "Public Keras utilities.\n", "desc": "Public Keras utilities.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.custom_object_scope", "docs": "Exposes custom classes/functions to Keras deserialization internals.\n\n Under a scope `with custom_object_scope(objects_dict)`, Keras methods such\n as `tf.keras.models.load_model` or `tf.keras.models.model_from_config`\n will be able to deserialize any custom object referenced by a\n saved config (e.g. a custom layer or metric).\n\n Example:\n\n Consider a custom regularizer `my_regularizer`:\n\n ```python\n layer = Dense(3, kernel_regularizer=my_regularizer)\n config = layer.get_config() # Config contains a reference to `my_regularizer`\n ...\n # Later:\n with custom_object_scope({'my_regularizer': my_regularizer}):\n layer = Dense.from_config(config)\n ```\n\n Args:\n *args: Dictionary or dictionaries of `{name: object}` pairs.\n ", "desc": "Exposes custom classes/functions to Keras deserialization internals.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.CustomObjectScope", "docs": "Exposes custom classes/functions to Keras deserialization internals.\n\n Under a scope `with custom_object_scope(objects_dict)`, Keras methods such\n as `tf.keras.models.load_model` or `tf.keras.models.model_from_config`\n will be able to deserialize any custom object referenced by a\n saved config (e.g. a custom layer or metric).\n\n Example:\n\n Consider a custom regularizer `my_regularizer`:\n\n ```python\n layer = Dense(3, kernel_regularizer=my_regularizer)\n config = layer.get_config() # Config contains a reference to `my_regularizer`\n ...\n # Later:\n with custom_object_scope({'my_regularizer': my_regularizer}):\n layer = Dense.from_config(config)\n ```\n\n Args:\n *args: Dictionary or dictionaries of `{name: object}` pairs.\n ", "desc": "Exposes custom classes/functions to Keras deserialization internals.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.deserialize_keras_object", "docs": "Turns the serialized form of a Keras object back into an actual object.\n\n This function is for mid-level library implementers rather than end users.\n\n Importantly, this utility requires you to provide the dict of `module_objects`\n to use for looking up the object config; this is not populated by default.\n If you need a deserialization utility that has preexisting knowledge of\n built-in Keras objects, use e.g. `keras.layers.deserialize(config)`,\n `keras.metrics.deserialize(config)`, etc.\n\n Calling `deserialize_keras_object` while underneath the\n `SharedObjectLoadingScope` context manager will cause any already-seen shared\n objects to be returned as-is rather than creating a new object.\n\n Args:\n identifier: the serialized form of the object.\n module_objects: A dictionary of built-in objects to look the name up in.\n Generally, `module_objects` is provided by midlevel library implementers.\n custom_objects: A dictionary of custom objects to look the name up in.\n Generally, `custom_objects` is provided by the end user.\n printable_module_name: A human-readable string representing the type of the\n object. Printed in case of exception.\n\n Returns:\n The deserialized object.\n\n Example:\n\n A mid-level library implementer might want to implement a utility for\n retrieving an object from its config, as such:\n\n ```python\n def deserialize(config, custom_objects=None):\n return deserialize_keras_object(\n identifier,\n module_objects=globals(),\n custom_objects=custom_objects,\n name=\"MyObjectType\",\n )\n ```\n\n This is how e.g. `keras.layers.deserialize()` is implemented.\n ", "desc": "Turns the serialized form of a Keras object back into an actual object.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.GeneratorEnqueuer", "docs": "Builds a queue out of a data generator.\n\n The provided generator can be finite in which case the class will throw\n a `StopIteration` exception.\n\n Args:\n generator: a generator function which yields data\n use_multiprocessing: use multiprocessing if True, otherwise threading\n random_seed: Initial seed for workers,\n will be incremented by one for each worker.\n ", "desc": "Builds a queue out of a data generator.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.get_custom_objects", "docs": "Retrieves a live reference to the global dictionary of custom objects.\n\n Updating and clearing custom objects using `custom_object_scope`\n is preferred, but `get_custom_objects` can\n be used to directly access the current collection of custom objects.\n\n Example:\n\n ```python\n get_custom_objects().clear()\n get_custom_objects()['MyObject'] = MyObject\n ```\n\n Returns:\n Global dictionary of names to classes (`_GLOBAL_CUSTOM_OBJECTS`).\n ", "desc": "Retrieves a live reference to the global dictionary of custom objects.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.get_file", "docs": "Downloads a file from a URL if it not already in the cache.\n\n By default the file at the url `origin` is downloaded to the\n cache_dir `~/.keras`, placed in the cache_subdir `datasets`,\n and given the filename `fname`. The final location of a file\n `example.txt` would therefore be `~/.keras/datasets/example.txt`.\n\n Files in tar, tar.gz, tar.bz, and zip formats can also be extracted.\n Passing a hash will verify the file after download. The command line\n programs `shasum` and `sha256sum` can compute the hash.\n\n Example:\n\n ```python\n path_to_downloaded_file = tf.keras.utils.get_file(\n \"flower_photos\",\n \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\",\n untar=True)\n ```\n\n Args:\n fname: Name of the file. If an absolute path `/path/to/file.txt` is\n specified the file will be saved at that location. If `None`, the\n name of the file at `origin` will be used.\n origin: Original URL of the file.\n untar: Deprecated in favor of `extract` argument.\n boolean, whether the file should be decompressed\n md5_hash: Deprecated in favor of `file_hash` argument.\n md5 hash of the file for verification\n file_hash: The expected hash string of the file after download.\n The sha256 and md5 hash algorithms are both supported.\n cache_subdir: Subdirectory under the Keras cache dir where the file is\n saved. If an absolute path `/path/to/folder` is\n specified the file will be saved at that location.\n hash_algorithm: Select the hash algorithm to verify the file.\n options are `'md5'`, `'sha256'`, and `'auto'`.\n The default 'auto' detects the hash algorithm in use.\n extract: True tries extracting the file as an Archive, like tar or zip.\n archive_format: Archive format to try for extracting the file.\n Options are `'auto'`, `'tar'`, `'zip'`, and `None`.\n `'tar'` includes tar, tar.gz, and tar.bz files.\n The default `'auto'` corresponds to `['tar', 'zip']`.\n None or an empty list will return no matches found.\n cache_dir: Location to store cached files, when None it\n defaults to the default directory `~/.keras/`.\n\n Returns:\n Path to the downloaded file\n ", "desc": "Downloads a file from a URL if it not already in the cache.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.get_registered_name", "docs": "Returns the name registered to an object within the Keras framework.\n\n This function is part of the Keras serialization and deserialization\n framework. It maps objects to the string names associated with those objects\n for serialization/deserialization.\n\n Args:\n obj: The object to look up.\n\n Returns:\n The name associated with the object, or the default Python name if the\n object is not registered.\n ", "desc": "Returns the name registered to an object within the Keras framework.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.get_registered_object", "docs": "Returns the class associated with `name` if it is registered with Keras.\n\n This function is part of the Keras serialization and deserialization\n framework. It maps strings to the objects associated with them for\n serialization/deserialization.\n\n Example:\n ```\n def from_config(cls, config, custom_objects=None):\n if 'my_custom_object_name' in config:\n config['hidden_cls'] = tf.keras.utils.get_registered_object(\n config['my_custom_object_name'], custom_objects=custom_objects)\n ```\n\n Args:\n name: The name to look up.\n custom_objects: A dictionary of custom objects to look the name up in.\n Generally, custom_objects is provided by the user.\n module_objects: A dictionary of custom objects to look the name up in.\n Generally, module_objects is provided by midlevel library implementers.\n\n Returns:\n An instantiable class associated with 'name', or None if no such class\n exists.\n ", "desc": "Returns the class associated with `name` if it is registered with Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.get_source_inputs", "docs": "Returns the list of input tensors necessary to compute `tensor`.\n\n Output will always be a list of tensors\n (potentially with 1 element).\n\n Args:\n tensor: The tensor to start from.\n layer: Origin layer of the tensor. Will be\n determined via tensor._keras_history if not provided.\n node_index: Origin node index of the tensor.\n\n Returns:\n List of input tensors.\n ", "desc": "Returns the list of input tensors necessary to compute `tensor`.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.model_to_dot", "docs": "Convert a Keras model to dot format.\n\n Args:\n model: A Keras model instance.\n show_shapes: whether to display shape information.\n show_dtype: whether to display layer dtypes.\n show_layer_names: whether to display layer names.\n rankdir: `rankdir` argument passed to PyDot,\n a string specifying the format of the plot:\n 'TB' creates a vertical plot;\n 'LR' creates a horizontal plot.\n expand_nested: whether to expand nested models into clusters.\n dpi: Dots per inch.\n subgraph: whether to return a `pydot.Cluster` instance.\n layer_range: input of `list` containing two `str` items, which is the\n starting layer name and ending layer name (both inclusive) indicating\n the range of layers for which the `pydot.Dot` will be generated. It\n also accepts regex patterns instead of exact name. In such case, start\n predicate will be the first element it matches to `layer_range[0]`\n and the end predicate will be the last element it matches to\n `layer_range[1]`. By default `None` which considers all layers of\n model. Note that you must pass range such that the resultant subgraph\n must be complete.\n show_layer_activations: Display layer activations (only for layers that\n have an `activation` property).\n\n Returns:\n A `pydot.Dot` instance representing the Keras model or\n a `pydot.Cluster` instance representing nested model if\n `subgraph=True`.\n\n Raises:\n ValueError: if `model_to_dot` is called before the model is built.\n ImportError: if graphviz or pydot are not available.\n ", "desc": "Convert a Keras model to dot format.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.normalize", "docs": "Normalizes a Numpy array.\n\n Args:\n x: Numpy array to normalize.\n axis: axis along which to normalize.\n order: Normalization order (e.g. `order=2` for L2 norm).\n\n Returns:\n A normalized copy of the array.\n ", "desc": "Normalizes a Numpy array.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.OrderedEnqueuer", "docs": "Builds a Enqueuer from a Sequence.\n\n Args:\n sequence: A `tf.keras.utils.data_utils.Sequence` object.\n use_multiprocessing: use multiprocessing if True, otherwise threading\n shuffle: whether to shuffle the data at the beginning of each epoch\n ", "desc": "Builds a Enqueuer from a Sequence.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.plot_model", "docs": "Converts a Keras model to dot format and save to a file.\n\n Example:\n\n ```python\n input = tf.keras.Input(shape=(100,), dtype='int32', name='input')\n x = tf.keras.layers.Embedding(\n output_dim=512, input_dim=10000, input_length=100)(input)\n x = tf.keras.layers.LSTM(32)(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x)\n model = tf.keras.Model(inputs=[input], outputs=[output])\n dot_img_file = '/tmp/model_1.png'\n tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)\n ```\n\n Args:\n model: A Keras model instance\n to_file: File name of the plot image.\n show_shapes: whether to display shape information.\n show_dtype: whether to display layer dtypes.\n show_layer_names: whether to display layer names.\n rankdir: `rankdir` argument passed to PyDot,\n a string specifying the format of the plot: 'TB' creates a vertical\n plot; 'LR' creates a horizontal plot.\n expand_nested: Whether to expand nested models into clusters.\n dpi: Dots per inch.\n layer_range: input of `list` containing two `str` items, which is the\n starting layer name and ending layer name (both inclusive) indicating the\n range of layers for which the plot will be generated. It also accepts\n regex patterns instead of exact name. In such case, start predicate will\n be the first element it matches to `layer_range[0]` and the end predicate\n will be the last element it matches to `layer_range[1]`. By default `None`\n which considers all layers of model. Note that you must pass range such\n that the resultant subgraph must be complete.\n show_layer_activations: Display layer activations (only for layers that\n have an `activation` property).\n\n Raises:\n ValueError: if `plot_model` is called before the model is built.\n\n Returns:\n A Jupyter notebook Image object if Jupyter is installed.\n This enables in-line display of the model plots in notebooks.\n ", "desc": "Converts a Keras model to dot format and save to a file.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.Progbar", "docs": "Displays a progress bar.\n\n Args:\n target: Total number of steps expected, None if unknown.\n width: Progress bar width on screen.\n verbose: Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose)\n stateful_metrics: Iterable of string names of metrics that should *not* be\n averaged over time. Metrics in this list will be displayed as-is. All\n others will be averaged by the progbar before display.\n interval: Minimum visual progress update interval (in seconds).\n unit_name: Display name for step counts (usually \"step\" or \"sample\").\n ", "desc": "Displays a progress bar.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.register_keras_serializable", "docs": "Registers an object with the Keras serialization framework.\n\n This decorator injects the decorated class or function into the Keras custom\n object dictionary, so that it can be serialized and deserialized without\n needing an entry in the user-provided custom object dict. It also injects a\n function that Keras will call to get the object's serializable string key.\n\n Note that to be serialized and deserialized, classes must implement the\n `get_config()` method. Functions do not have this requirement.\n\n The object will be registered under the key 'package>name' where `name`,\n defaults to the object name if not passed.\n\n Args:\n package: The package that this class belongs to.\n name: The name to serialize this class under in this package. If None, the\n class' name will be used.\n\n Returns:\n A decorator that registers the decorated class with the passed names.\n ", "desc": "Registers an object with the Keras serialization framework.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.Sequence", "docs": "Base object for fitting to a sequence of data, such as a dataset.\n\n Every `Sequence` must implement the `__getitem__` and the `__len__` methods.\n If you want to modify your dataset between epochs you may implement\n `on_epoch_end`.\n The method `__getitem__` should return a complete batch.\n\n Notes:\n\n `Sequence` are a safer way to do multiprocessing. This structure guarantees\n that the network will only train once\n on each sample per epoch which is not the case with generators.\n\n Examples:\n\n ```python\n from skimage.io import imread\n from skimage.transform import resize\n import numpy as np\n import math\n\n # Here, `x_set` is list of path to the images\n # and `y_set` are the associated classes.\n\n class CIFAR10Sequence(Sequence):\n\n def __init__(self, x_set, y_set, batch_size):\n self.x, self.y = x_set, y_set\n self.batch_size = batch_size\n\n def __len__(self):\n return math.ceil(len(self.x) / self.batch_size)\n\n def __getitem__(self, idx):\n batch_x = self.x[idx * self.batch_size:(idx + 1) *\n self.batch_size]\n batch_y = self.y[idx * self.batch_size:(idx + 1) *\n self.batch_size]\n\n return np.array([\n resize(imread(file_name), (200, 200))\n for file_name in batch_x]), np.array(batch_y)\n ```\n ", "desc": "Base object for fitting to a sequence of data, such as a dataset.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.SequenceEnqueuer", "docs": "Base class to enqueue inputs.\n\n The task of an Enqueuer is to use parallelism to speed up preprocessing.\n This is done with processes or threads.\n\n Example:\n\n ```python\n enqueuer = SequenceEnqueuer(...)\n enqueuer.start()\n datas = enqueuer.get()\n for data in datas:\n # Use the inputs; training, evaluating, predicting.\n # ... stop sometime.\n enqueuer.stop()\n ```\n\n The `enqueuer.get()` should be an infinite stream of data.\n ", "desc": "Base class to enqueue inputs.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.serialize_keras_object", "docs": "Serialize a Keras object into a JSON-compatible representation.\n\n Calls to `serialize_keras_object` while underneath the\n `SharedObjectSavingScope` context manager will cause any objects re-used\n across multiple layers to be saved with a special shared object ID. This\n allows the network to be re-created properly during deserialization.\n\n Args:\n instance: The object to serialize.\n\n Returns:\n A dict-like, JSON-compatible representation of the object's config.\n ", "desc": "Serialize a Keras object into a JSON-compatible representation.", "type": "API"}, {"name": "tf.compat.v1.keras.utils.to_categorical", "docs": "Converts a class vector (integers) to binary class matrix.\n\n E.g. for use with `categorical_crossentropy`.\n\n Args:\n y: Array-like with class values to be converted into a matrix\n (integers from 0 to `num_classes - 1`).\n num_classes: Total number of classes. If `None`, this would be inferred\n as `max(y) + 1`.\n dtype: The data type expected by the input. Default: `'float32'`.\n\n Returns:\n A binary matrix representation of the input. The class axis is placed\n last.\n\n Example:\n\n >>> a = tf.keras.utils.to_categorical([0, 1, 2, 3], num_classes=4)\n >>> a = tf.constant(a, shape=[4, 4])\n >>> print(a)\n tf.Tensor(\n [[1. 0. 0. 0.]\n [0. 1. 0. 0.]\n [0. 0. 1. 0.]\n [0. 0. 0. 1.]], shape=(4, 4), dtype=float32)\n\n >>> b = tf.constant([.9, .04, .03, .03,\n ... .3, .45, .15, .13,\n ... .04, .01, .94, .05,\n ... .12, .21, .5, .17],\n ... shape=[4, 4])\n >>> loss = tf.keras.backend.categorical_crossentropy(a, b)\n >>> print(np.around(loss, 5))\n [0.10536 0.82807 0.1011 1.77196]\n\n >>> loss = tf.keras.backend.categorical_crossentropy(a, a)\n >>> print(np.around(loss, 5))\n [0. 0. 0. 0.]\n ", "desc": "Converts a class vector (integers) to binary class matrix.", "type": "API"}, {"name": "tf.compat.v1.keras.wrappers", "docs": "Public API for tf.keras.wrappers namespace.\n", "desc": "Public API for tf.keras.wrappers namespace.", "type": "API"}, {"name": "tf.compat.v1.keras.wrappers.scikit_learn", "docs": "Wrapper for using the Scikit-Learn API with Keras models.\n", "desc": "Wrapper for using the Scikit-Learn API with Keras models.", "type": "API"}, {"name": "tf.compat.v1.keras.wrappers.scikit_learn.KerasClassifier", "docs": "Implementation of the scikit-learn classifier API for Keras.\n\n DEPRECATED. Use [Sci-Keras](https://github.com/adriangb/scikeras) instead.\n See https://www.adriangb.com/scikeras/stable/migration.html\n for help migrating.\n ", "desc": "Implementation of the scikit-learn classifier API for Keras.", "type": "API"}, {"name": "tf.compat.v1.keras.wrappers.scikit_learn.KerasRegressor", "docs": "Implementation of the scikit-learn regressor API for Keras.\n\n DEPRECATED. Use [Sci-Keras](https://github.com/adriangb/scikeras) instead.\n See https://www.adriangb.com/scikeras/stable/migration.html\n for help migrating.\n ", "desc": "Implementation of the scikit-learn regressor API for Keras.", "type": "API"}, {"name": "tf.compat.v1.layers", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.layers.average_pooling1d", "docs": "Average Pooling layer for 1D inputs.\n\n Args:\n inputs: The tensor over which to pool. Must have rank 3.\n pool_size: An integer or tuple/list of a single integer,\n representing the size of the pooling window.\n strides: An integer or tuple/list of a single integer, specifying the\n strides of the pooling operation.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n name: A string, the name of the layer.\n\n Returns:\n The output tensor, of rank 3.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.average_pooling1d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.AveragePooling1D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Average Pooling layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.layers.average_pooling2d", "docs": "Average pooling layer for 2D inputs (e.g. images).\n\n Args:\n inputs: The tensor over which to pool. Must have rank 4.\n pool_size: An integer or tuple/list of 2 integers: (pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n name: A string, the name of the layer.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.average_pooling2d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.AveragePooling2D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Average pooling layer for 2D inputs (e.g. images).", "type": "API"}, {"name": "tf.compat.v1.layers.average_pooling3d", "docs": "Average pooling layer for 3D inputs (e.g. volumes).\n\n Args:\n inputs: The tensor over which to pool. Must have rank 5.\n pool_size: An integer or tuple/list of 3 integers:\n (pool_depth, pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n name: A string, the name of the layer.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling3D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.average_pooling3d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.AveragePooling3D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Average pooling layer for 3D inputs (e.g. volumes).", "type": "API"}, {"name": "tf.compat.v1.layers.AveragePooling1D", "docs": "Average Pooling layer for 1D inputs.\n\n Args:\n pool_size: An integer or tuple/list of a single integer,\n representing the size of the pooling window.\n strides: An integer or tuple/list of a single integer, specifying the\n strides of the pooling operation.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.AveragePooling1D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.AveragePooling1D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Average Pooling layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.layers.AveragePooling2D", "docs": "Average pooling layer for 2D inputs (e.g. images).\n\n Args:\n pool_size: An integer or tuple/list of 2 integers: (pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.AveragePooling2D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.AveragePooling2D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Average pooling layer for 2D inputs (e.g. images).", "type": "API"}, {"name": "tf.compat.v1.layers.AveragePooling3D", "docs": "Average pooling layer for 3D inputs (e.g. volumes).\n\n Args:\n pool_size: An integer or tuple/list of 3 integers:\n (pool_depth, pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.AveragePooling3D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.AveragePooling3D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.AveragePooling3D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Average pooling layer for 3D inputs (e.g. volumes).", "type": "API"}, {"name": "tf.compat.v1.layers.batch_normalization", "docs": "Functional interface for the batch normalization layer from_config(Ioffe et al., 2015).\n\n Note: when training, the moving_mean and moving_variance need to be updated.\n By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they\n need to be executed alongside the `train_op`. Also, be sure to add any\n batch_normalization ops before getting the update_ops collection. Otherwise,\n update_ops will be empty, and training/inference will not work properly. For\n example:\n\n ```python\n x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)\n\n # ...\n\n update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)\n train_op = optimizer.minimize(loss)\n train_op = tf.group([train_op, update_ops])\n ```\n\n Args:\n inputs: Tensor input.\n axis: An `int`, the axis that should be normalized (typically the features\n axis). For instance, after a `Convolution2D` layer with\n `data_format=\"channels_first\"`, set `axis=1` in `BatchNormalization`.\n momentum: Momentum for the moving average.\n epsilon: Small float added to variance to avoid dividing by zero.\n center: If True, add offset of `beta` to normalized tensor. If False, `beta`\n is ignored.\n scale: If True, multiply by `gamma`. If False, `gamma` is not used. When the\n next layer is linear (also e.g. `nn.relu`), this can be disabled since the\n scaling can be done by the next layer.\n beta_initializer: Initializer for the beta weight.\n gamma_initializer: Initializer for the gamma weight.\n moving_mean_initializer: Initializer for the moving mean.\n moving_variance_initializer: Initializer for the moving variance.\n beta_regularizer: Optional regularizer for the beta weight.\n gamma_regularizer: Optional regularizer for the gamma weight.\n beta_constraint: An optional projection function to be applied to the `beta`\n weight after being updated by an `Optimizer` (e.g. used to implement norm\n constraints or value constraints for layer weights). The function must\n take as input the unprojected variable and must return the projected\n variable (which must have the same shape). Constraints are not safe to use\n when doing asynchronous distributed training.\n gamma_constraint: An optional projection function to be applied to the\n `gamma` weight after being updated by an `Optimizer`.\n training: Either a Python boolean, or a TensorFlow boolean scalar tensor\n (e.g. a placeholder). Whether to return the output in training mode\n (normalized with statistics of the current batch) or in inference mode\n (normalized with moving statistics). **NOTE**: make sure to set this\n parameter correctly, or else your training/inference will not work\n properly.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).\n name: String, the name of the layer.\n reuse: Boolean, whether to reuse the weights of a previous layer by the same\n name.\n renorm: Whether to use Batch Renormalization (Ioffe, 2017). This adds extra\n variables during training. The inference is the same for either value of\n this parameter.\n renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to\n scalar `Tensors` used to clip the renorm correction. The correction `(r,\n d)` is used as `corrected_value = normalized_value * r + d`, with `r`\n clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin,\n dmax are set to inf, 0, inf, respectively.\n renorm_momentum: Momentum used to update the moving means and standard\n deviations with renorm. Unlike `momentum`, this affects training and\n should be neither too small (which would add noise) nor too large (which\n would give stale estimates). Note that `momentum` is still applied to get\n the means and variances for inference.\n fused: if `None` or `True`, use a faster, fused implementation if possible.\n If `False`, use the system recommended implementation.\n virtual_batch_size: An `int`. By default, `virtual_batch_size` is `None`,\n which means batch normalization is performed across the whole batch. When\n `virtual_batch_size` is not `None`, instead perform \"Ghost Batch\n Normalization\", which creates virtual sub-batches which are each\n normalized separately (with shared gamma, beta, and moving statistics).\n Must divide the actual batch size during execution.\n adjustment: A function taking the `Tensor` containing the (dynamic) shape of\n the input tensor and returning a pair (scale, bias) to apply to the\n normalized values (before gamma and beta), only during training. For\n example, if axis==-1,\n `adjustment = lambda shape: (\n tf.random.uniform(shape[-1:], 0.93, 1.07),\n tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized\n value by up to 7% up or down, then shift the result by up to 0.1\n (with independent scaling and bias for each feature but shared\n across all examples), and finally apply gamma and/or beta. If\n `None`, no adjustment is applied. Cannot be specified if\n virtual_batch_size is specified.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n Batch Renormalization - Towards Reducing Minibatch Dependence in\n Batch-Normalized Models:\n [Ioffe,\n 2017](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models)\n ([pdf](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models.pdf))\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.BatchNormalization`.\n\n The batch updating pattern with\n `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used in\n native TF2. Consult the `tf.keras.layers.BatchNormalization` documentation\n for further information.\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n x_norm = tf.compat.v1.layers.batch_normalization(x)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input(shape=(28, 28, 1),)\n y = tf.keras.layers.BatchNormalization()(x)\n model = tf.keras.Model(x, y)\n ```\n #### How to Map Arguments\n\n TF1 Arg Name | TF2 Arg Name | Note\n :------------------------ | :------------------------ | :---------------\n `name` | `name` | Layer base class\n `trainable` | `trainable` | Layer base class\n `axis` | `axis` | -\n `momentum` | `momentum` | -\n `epsilon` | `epsilon` | -\n `center` | `center` | -\n `scale` | `scale` | -\n `beta_initializer` | `beta_initializer` | -\n `gamma_initializer` | `gamma_initializer` | -\n `moving_mean_initializer` | `moving_mean_initializer` | -\n `beta_regularizer` | `beta_regularizer' | -\n `gamma_regularizer` | `gamma_regularizer' | -\n `beta_constraint` | `beta_constraint' | -\n `gamma_constraint` | `gamma_constraint' | -\n `renorm` | Not supported | -\n `renorm_clipping` | Not supported | -\n `renorm_momentum` | Not supported | -\n `fused` | Not supported | -\n `virtual_batch_size` | Not supported | -\n `adjustment` | Not supported | -\n\n @end_compatibility\n ", "desc": "Functional interface for the batch normalization layer from_config(Ioffe et al., 2015).", "type": "API"}, {"name": "tf.compat.v1.layers.BatchNormalization", "docs": "Batch Normalization layer from (Ioffe et al., 2015).\n\n Keras APIs handle BatchNormalization updates to the moving_mean and\n moving_variance as part of their `fit()` and `evaluate()` loops. However, if a\n custom training loop is used with an instance of `Model`, these updates need\n to be explicitly included. Here's a simple example of how it can be done:\n\n ```python\n # model is an instance of Model that contains BatchNormalization layer.\n update_ops = model.get_updates_for(None) + model.get_updates_for(features)\n train_op = optimizer.minimize(loss)\n train_op = tf.group([train_op, update_ops])\n ```\n\n Args:\n axis: An `int` or list of `int`, the axis or axes that should be normalized,\n typically the features axis/axes. For instance, after a `Conv2D` layer\n with `data_format=\"channels_first\"`, set `axis=1`. If a list of axes is\n provided, each axis in `axis` will be normalized\n simultaneously. Default is `-1` which uses the last axis. Note: when\n using multi-axis batch norm, the `beta`, `gamma`, `moving_mean`, and\n `moving_variance` variables are the same rank as the input Tensor,\n with dimension size 1 in all reduced (non-axis) dimensions).\n momentum: Momentum for the moving average.\n epsilon: Small float added to variance to avoid dividing by zero.\n center: If True, add offset of `beta` to normalized tensor. If False, `beta`\n is ignored.\n scale: If True, multiply by `gamma`. If False, `gamma` is not used. When the\n next layer is linear (also e.g. `nn.relu`), this can be disabled since the\n scaling can be done by the next layer.\n beta_initializer: Initializer for the beta weight.\n gamma_initializer: Initializer for the gamma weight.\n moving_mean_initializer: Initializer for the moving mean.\n moving_variance_initializer: Initializer for the moving variance.\n beta_regularizer: Optional regularizer for the beta weight.\n gamma_regularizer: Optional regularizer for the gamma weight.\n beta_constraint: An optional projection function to be applied to the `beta`\n weight after being updated by an `Optimizer` (e.g. used to implement norm\n constraints or value constraints for layer weights). The function must\n take as input the unprojected variable and must return the projected\n variable (which must have the same shape). Constraints are not safe to use\n when doing asynchronous distributed training.\n gamma_constraint: An optional projection function to be applied to the\n `gamma` weight after being updated by an `Optimizer`.\n renorm: Whether to use Batch Renormalization (Ioffe, 2017). This adds extra\n variables during training. The inference is the same for either value of\n this parameter.\n renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to\n scalar `Tensors` used to clip the renorm correction. The correction `(r,\n d)` is used as `corrected_value = normalized_value * r + d`, with `r`\n clipped to [rmin, rmax], and `d` to [-dmax, dmax]. Missing rmax, rmin,\n dmax are set to inf, 0, inf, respectively.\n renorm_momentum: Momentum used to update the moving means and standard\n deviations with renorm. Unlike `momentum`, this affects training and\n should be neither too small (which would add noise) nor too large (which\n would give stale estimates). Note that `momentum` is still applied to get\n the means and variances for inference.\n fused: if `None` or `True`, use a faster, fused implementation if possible.\n If `False`, use the system recommended implementation.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).\n virtual_batch_size: An `int`. By default, `virtual_batch_size` is `None`,\n which means batch normalization is performed across the whole batch. When\n `virtual_batch_size` is not `None`, instead perform \"Ghost Batch\n Normalization\", which creates virtual sub-batches which are each\n normalized separately (with shared gamma, beta, and moving statistics).\n Must divide the actual batch size during execution.\n adjustment: A function taking the `Tensor` containing the (dynamic) shape of\n the input tensor and returning a pair (scale, bias) to apply to the\n normalized values (before gamma and beta), only during training. For\n example, if axis==-1,\n `adjustment = lambda shape: (\n tf.random.uniform(shape[-1:], 0.93, 1.07),\n tf.random.uniform(shape[-1:], -0.1, 0.1))` will scale the normalized\n value by up to 7% up or down, then shift the result by up to 0.1\n (with independent scaling and bias for each feature but shared\n across all examples), and finally apply gamma and/or beta. If\n `None`, no adjustment is applied. Cannot be specified if\n virtual_batch_size is specified.\n name: A string, the name of the layer.\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n Batch Renormalization - Towards Reducing Minibatch Dependence in\n Batch-Normalized Models:\n [Ioffe,\n 2017](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models)\n ([pdf](http://papers.nips.cc/paper/6790-batch-renormalization-towards-reducing-minibatch-dependence-in-batch-normalized-models.pdf))\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.BatchNormalization`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n bn = tf.compat.v1.layers.BatchNormalization()\n ```\n\n After:\n\n ```python\n bn = tf.keras.layers.BatchNormalization()\n ```\n\n #### How to Map Arguments\n\n TF1 Arg Name | TF2 Arg Name | Note\n :------------------------ | :------------------------ | :---------------\n `name` | `name` | Layer base class\n `trainable` | `trainable` | Layer base class\n `axis` | `axis` | -\n `momentum` | `momentum` | -\n `epsilon` | `epsilon` | -\n `center` | `center` | -\n `scale` | `scale` | -\n `beta_initializer` | `beta_initializer` | -\n `gamma_initializer` | `gamma_initializer` | -\n `moving_mean_initializer` | `moving_mean_initializer` | -\n `beta_regularizer` | `beta_regularizer' | -\n `gamma_regularizer` | `gamma_regularizer' | -\n `beta_constraint` | `beta_constraint' | -\n `gamma_constraint` | `gamma_constraint' | -\n `renorm` | Not supported | -\n `renorm_clipping` | Not supported | -\n `renorm_momentum` | Not supported | -\n `fused` | Not supported | -\n `virtual_batch_size` | Not supported | -\n `adjustment` | Not supported | -\n\n @end_compatibility\n ", "desc": "Batch Normalization layer from (Ioffe et al., 2015).", "type": "API"}, {"name": "tf.compat.v1.layers.Conv1D", "docs": "1D convolution layer (e.g. temporal convolution).\n\n This layer creates a convolution kernel that is convolved\n (actually cross-correlated) with the layer input to produce a tensor of\n outputs. If `use_bias` is True (and a `bias_initializer` is provided),\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer, specifying the\n length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer,\n specifying the stride length of the convolution.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n dilation_rate: An integer or tuple/list of a single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any `strides` value != 1.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Conv1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.Conv1D(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.Conv1D(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "1D convolution layer (e.g. temporal convolution).", "type": "API"}, {"name": "tf.compat.v1.layers.Conv2D", "docs": "2D convolution layer (e.g. spatial convolution over images).\n\n This layer creates a convolution kernel that is convolved\n (actually cross-correlated) with the layer input to produce a tensor of\n outputs. If `use_bias` is True (and a `bias_initializer` is provided),\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Conv2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.Conv2D(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.Conv2D(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "2D convolution layer (e.g. spatial convolution over images).", "type": "API"}, {"name": "tf.compat.v1.layers.conv2d_transpose", "docs": "Functional interface for transposed 2D convolution layer.\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n Args:\n inputs: Input tensor.\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A tuple or list of 2 positive integers specifying the spatial\n dimensions of the filters. Can be a single integer to specify the same\n value for all spatial dimensions.\n strides: A tuple or list of 2 positive integers specifying the strides\n of the convolution. Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n activation: Activation function. Set it to `None` to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If `None`, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n reuse: Boolean, whether to reuse the weights of a previous layer\n by the same name.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.Conv2DTranspose`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.conv2d_transpose(x, filters=3, kernel_size=3)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.Conv2DTranspose(filters=3, kernels_size=3)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Functional interface for transposed 2D convolution layer.", "type": "API"}, {"name": "tf.compat.v1.layers.Conv2DTranspose", "docs": "Transposed 2D convolution layer (sometimes called 2D Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A tuple or list of 2 positive integers specifying the spatial\n dimensions of the filters. Can be a single integer to specify the same\n value for all spatial dimensions.\n strides: A tuple or list of 2 positive integers specifying the strides\n of the convolution. Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.Conv2DTranspose`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.Conv2DTranspose(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.Conv2DTranspose(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "Transposed 2D convolution layer (sometimes called 2D Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.layers.Conv3D", "docs": "3D convolution layer (e.g. spatial convolution over volumes).\n\n This layer creates a convolution kernel that is convolved\n (actually cross-correlated) with the layer input to produce a tensor of\n outputs. If `use_bias` is True (and a `bias_initializer` is provided),\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the convolution along the depth,\n height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n dilation_rate: An integer or tuple/list of 3 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Conv3D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.Conv3D(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.Conv3D(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "3D convolution layer (e.g. spatial convolution over volumes).", "type": "API"}, {"name": "tf.compat.v1.layers.conv3d_transpose", "docs": "Functional interface for transposed 3D convolution layer.\n\n Args:\n inputs: Input tensor.\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A tuple or list of 3 positive integers specifying the spatial\n dimensions of the filters. Can be a single integer to specify the same\n value for all spatial dimensions.\n strides: A tuple or list of 3 positive integers specifying the strides\n of the convolution. Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n reuse: Boolean, whether to reuse the weights of a previous layer\n by the same name.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.Conv3DTranspose`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.conv3d_transpose(x, filters=3, kernel_size=3)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.Conv3DTranspose(filters=3, kernels_size=3)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Functional interface for transposed 3D convolution layer.", "type": "API"}, {"name": "tf.compat.v1.layers.Conv3DTranspose", "docs": "Transposed 3D convolution layer (sometimes called 3D Deconvolution).\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for all spatial\n dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides\n of the convolution along the depth, height and width.\n Can be a single integer to specify the same value for all spatial\n dimensions.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n activation: Activation function. Set it to `None` to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: An initializer for the convolution kernel.\n bias_initializer: An initializer for the bias vector. If `None`, the default\n initializer will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.Conv3DTranspose`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.Conv3DTranspose(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.Conv3DTranspose(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "Transposed 3D convolution layer (sometimes called 3D Deconvolution).", "type": "API"}, {"name": "tf.compat.v1.layers.Dense", "docs": "Densely-connected layer class.\n\n This layer implements the operation:\n `outputs = activation(inputs * kernel + bias)`\n Where `activation` is the activation function passed as the `activation`\n argument (if not `None`), `kernel` is a weights matrix created by the layer,\n and `bias` is a bias vector created by the layer\n (only if `use_bias` is `True`).\n\n Args:\n units: Integer or Long, dimensionality of the output space.\n activation: Activation function (callable). Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: Initializer function for the weight matrix.\n If `None` (default), weights are initialized using the default\n initializer used by `tf.compat.v1.get_variable`.\n bias_initializer: Initializer function for the bias.\n kernel_regularizer: Regularizer function for the weight matrix.\n bias_regularizer: Regularizer function for the bias.\n activity_regularizer: Regularizer function for the output.\n kernel_constraint: An optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n bias_constraint: An optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: String, the name of the layer. Layers with the same name will\n share weights, but to avoid mistakes we require reuse=True in such cases.\n _reuse: Boolean, whether to reuse the weights of a previous layer\n by the same name.\n\n Properties:\n units: Python integer, dimensionality of the output space.\n activation: Activation function (callable).\n use_bias: Boolean, whether the layer uses a bias.\n kernel_initializer: Initializer instance (or name) for the kernel matrix.\n bias_initializer: Initializer instance (or name) for the bias.\n kernel_regularizer: Regularizer instance for the kernel matrix (callable)\n bias_regularizer: Regularizer instance for the bias (callable).\n activity_regularizer: Regularizer instance for the output (callable)\n kernel_constraint: Constraint function for the kernel matrix.\n bias_constraint: Constraint function for the bias.\n kernel: Weight matrix (TensorFlow variable or tensor).\n bias: Bias vector, if applicable (TensorFlow variable or tensor).\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Dense`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n dense = tf.compat.v1.layers.Dense(units=3)\n ```\n\n After:\n\n ```python\n dense = tf.keras.layers.Dense(units=3)\n ```\n\n @end_compatibility\n ", "desc": "Densely-connected layer class.", "type": "API"}, {"name": "tf.compat.v1.layers.Dropout", "docs": "Applies Dropout to the input.\n\n Dropout consists in randomly setting a fraction `rate` of input units to 0\n at each update during training time, which helps prevent overfitting.\n The units that are kept are scaled by `1 / (1 - rate)`, so that their\n sum is unchanged at training time and inference time.\n\n Args:\n rate: The dropout rate, between 0 and 1. E.g. `rate=0.1` would drop out\n 10% of input units.\n noise_shape: 1D tensor of type `int32` representing the shape of the\n binary dropout mask that will be multiplied with the input.\n For instance, if your inputs have shape\n `(batch_size, timesteps, features)`, and you want the dropout mask\n to be the same for all timesteps, you can use\n `noise_shape=[batch_size, 1, features]`.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed`.\n for behavior.\n name: The name of the layer (string).\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Dropout`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n dropout = tf.compat.v1.layers.Dropout()\n ```\n\n After:\n\n ```python\n dropout = tf.keras.layers.Dropout()\n ```\n @end_compatibility\n ", "desc": "Applies Dropout to the input.", "type": "API"}, {"name": "tf.compat.v1.layers.experimental", "docs": "Public API for tf.keras.__internal__.legacy.layers.experimental namespace.\n", "desc": "Public API for tf.keras.__internal__.legacy.layers.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.layers.experimental.keras_style_scope", "docs": "Use Keras-style variable management.\n\n All tf.layers and tf RNN cells created in this scope use Keras-style\n variable management. Creating such layers with a scope= argument is\n disallowed, and reuse=True is disallowed.\n\n The purpose of this scope is to allow users of existing layers to\n slowly transition to a Keras layers API without breaking existing\n functionality.\n\n One example of this is when using TensorFlow's RNN classes with Keras\n Models or Networks. Because Keras models do not properly set variable\n scopes, users of RNNs may either accidentally share scopes between two\n different models, or get errors about variables that already exist.\n\n Example:\n\n ```python\n class RNNModel(tf.keras.Model):\n\n def __init__(self, name):\n super(RNNModel, self).__init__(name=name)\n self.rnn = tf.compat.v1.nn.rnn_cell.MultiRNNCell(\n [tf.compat.v1.nn.rnn_cell.LSTMCell(64) for _ in range(2)])\n\n def call(self, input, state):\n return self.rnn(input, state)\n\n model_1 = RNNModel(\"model_1\")\n model_2 = RNNModel(\"model_2\")\n\n # OK\n output_1, next_state_1 = model_1(input, state)\n # Raises an error about trying to create an already existing variable.\n output_2, next_state_2 = model_2(input, state)\n ```\n\n The solution is to wrap the model construction and execution in a keras-style\n scope:\n\n ```python\n with keras_style_scope():\n model_1 = RNNModel(\"model_1\")\n model_2 = RNNModel(\"model_2\")\n\n # model_1 and model_2 are guaranteed to create their own variables.\n output_1, next_state_1 = model_1(input, state)\n output_2, next_state_2 = model_2(input, state)\n\n assert len(model_1.weights) > 0\n assert len(model_2.weights) > 0\n assert(model_1.weights != model_2.weights)\n ```\n\n Yields:\n A keras layer style scope.\n ", "desc": "Use Keras-style variable management.", "type": "API"}, {"name": "tf.compat.v1.layers.experimental.set_keras_style", "docs": "Use Keras-style variable management.\n\n All tf.layers and tf RNN cells created after keras style ha been enabled\n use Keras-style variable management. Creating such layers with a\n scope= argument is disallowed, and reuse=True is disallowed.\n\n The purpose of this function is to allow users of existing layers to\n slowly transition to Keras layers API without breaking existing\n functionality.\n\n For more details, see the documentation for `keras_style_scope`.\n\n Note, once keras style has been set, it is set globally for the entire\n program and cannot be unset.\n\n Example:\n\n ```python\n set_keras_style()\n\n model_1 = RNNModel(name=\"model_1\")\n model_2 = RNNModel(name=\"model_2\")\n\n # model_1 and model_2 are guaranteed to create their own variables.\n output_1, next_state_1 = model_1(input, state)\n output_2, next_state_2 = model_2(input, state)\n\n assert len(model_1.weights) > 0\n assert len(model_2.weights) > 0\n assert(model_1.weights != model_2.weights)\n ```\n ", "desc": "Use Keras-style variable management.", "type": "API"}, {"name": "tf.compat.v1.layers.Flatten", "docs": "Flattens an input tensor while preserving the batch axis (axis 0).\n\n Args:\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, ..., channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, ...)`.\n\n Examples:\n\n ```\n x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32')\n y = Flatten()(x)\n # now `y` has shape `(None, 16)`\n\n x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32')\n y = Flatten()(x)\n # now `y` has shape `(None, None)`\n ```\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is `tf.keras.layers.Flatten`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n flatten = tf.compat.v1.layers.Flatten()\n ```\n\n After:\n\n ```python\n flatten = tf.keras.layers.Flatten()\n ```\n @end_compatibility\n ", "desc": "Flattens an input tensor while preserving the batch axis (axis 0).", "type": "API"}, {"name": "tf.compat.v1.layers.InputSpec", "docs": "Specifies the rank, dtype and shape of every input to a layer.\n\n Layers can expose (if appropriate) an `input_spec` attribute:\n an instance of `InputSpec`, or a nested structure of `InputSpec` instances\n (one per input tensor). These objects enable the layer to run input\n compatibility checks for input structure, input rank, input shape, and\n input dtype.\n\n A None entry in a shape is compatible with any dimension,\n a None shape is compatible with any shape.\n\n Args:\n dtype: Expected DataType of the input.\n shape: Shape tuple, expected shape of the input\n (may include None for unchecked axes). Includes the batch size.\n ndim: Integer, expected rank of the input.\n max_ndim: Integer, maximum rank of the input.\n min_ndim: Integer, minimum rank of the input.\n axes: Dictionary mapping integer axes to\n a specific dimension value.\n allow_last_axis_squeeze: If True, then allow inputs of rank N+1 as long\n as the last axis of the input is 1, as well as inputs of rank N-1\n as long as the last axis of the spec is 1.\n name: Expected key corresponding to this input when passing data as\n a dictionary.\n\n Example:\n\n ```python\n class MyLayer(Layer):\n def __init__(self):\n super(MyLayer, self).__init__()\n # The layer will accept inputs with shape (?, 28, 28) & (?, 28, 28, 1)\n # and raise an appropriate error message otherwise.\n self.input_spec = InputSpec(\n shape=(None, 28, 28, 1),\n allow_last_axis_squeeze=True)\n ```\n ", "desc": "Specifies the rank, dtype and shape of every input to a layer.", "type": "API"}, {"name": "tf.compat.v1.layers.Layer", "docs": "Base layer class.\n\n It is considered legacy, and we recommend the use of `tf.keras.layers.Layer`\n instead.\n\n Args:\n trainable: Boolean, whether the layer's variables should be trainable.\n name: String name of the layer.\n dtype: Default dtype of the layer's weights (default of `None` means use the\n type of the first input).\n\n Read-only properties:\n name: The name of the layer (string).\n dtype: Default dtype of the layer's weights (default of `None` means use the\n type of the first input).\n trainable_variables: List of trainable variables.\n non_trainable_variables: List of non-trainable variables.\n variables: List of all variables of this layer, trainable and\n non-trainable.\n updates: List of update ops of this layer.\n losses: List of losses added by this layer.\n trainable_weights: List of variables to be included in backprop.\n non_trainable_weights: List of variables that should not be\n included in backprop.\n weights: The concatenation of the lists trainable_weights and\n non_trainable_weights (in this order).\n\n Mutable properties:\n trainable: Whether the layer should be trained (boolean).\n input_spec: Optional (list of) `InputSpec` object(s) specifying the\n constraints on inputs that can be accepted by the layer.\n ", "desc": "Base layer class.", "type": "API"}, {"name": "tf.compat.v1.layers.max_pooling1d", "docs": "Max Pooling layer for 1D inputs.\n\n Args:\n inputs: The tensor over which to pool. Must have rank 3.\n pool_size: An integer or tuple/list of a single integer,\n representing the size of the pooling window.\n strides: An integer or tuple/list of a single integer, specifying the\n strides of the pooling operation.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n name: A string, the name of the layer.\n\n Returns:\n The output tensor, of rank 3.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.max_pooling1d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.MaxPooling1D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Max Pooling layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.layers.max_pooling2d", "docs": "Max pooling layer for 2D inputs (e.g. images).\n\n Args:\n inputs: The tensor over which to pool. Must have rank 4.\n pool_size: An integer or tuple/list of 2 integers: (pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n name: A string, the name of the layer.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.max_pooling2d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Max pooling layer for 2D inputs (e.g. images).", "type": "API"}, {"name": "tf.compat.v1.layers.max_pooling3d", "docs": "Max pooling layer for 3D inputs (e.g.\n\n volumes).\n\n Args:\n inputs: The tensor over which to pool. Must have rank 5.\n pool_size: An integer or tuple/list of 3 integers: (pool_depth, pool_height,\n pool_width) specifying the size of the pooling window. Can be a single\n integer to specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides of\n the pooling operation. Can be a single integer to specify the same value\n for all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape `(batch, depth, height,\n width, channels)` while `channels_first` corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n name: A string, the name of the layer.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling3D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.max_pooling3d(x, pool_size=2, strides=2)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.MaxPooling3D(pool_size=2, strides=2)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Max pooling layer for 3D inputs (e.g.", "type": "API"}, {"name": "tf.compat.v1.layers.MaxPooling1D", "docs": "Max Pooling layer for 1D inputs.\n\n Args:\n pool_size: An integer or tuple/list of a single integer,\n representing the size of the pooling window.\n strides: An integer or tuple/list of a single integer, specifying the\n strides of the pooling operation.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.MaxPooling1D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.MaxPooling1D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Max Pooling layer for 1D inputs.", "type": "API"}, {"name": "tf.compat.v1.layers.MaxPooling2D", "docs": "Max pooling layer for 2D inputs (e.g. images).\n\n Args:\n pool_size: An integer or tuple/list of 2 integers: (pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.MaxPooling2D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Max pooling layer for 2D inputs (e.g. images).", "type": "API"}, {"name": "tf.compat.v1.layers.MaxPooling3D", "docs": "Max pooling layer for 3D inputs (e.g. volumes).\n\n Args:\n pool_size: An integer or tuple/list of 3 integers:\n (pool_depth, pool_height, pool_width)\n specifying the size of the pooling window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the pooling operation.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n padding: A string. The padding method, either 'valid' or 'same'.\n Case-insensitive.\n data_format: A string. The ordering of the dimensions in the inputs.\n `channels_last` (default) and `channels_first` are supported.\n `channels_last` corresponds to inputs with shape\n `(batch, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, depth, height, width)`.\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.MaxPooling3D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n pooling = tf.compat.v1.layers.MaxPooling3D(pool_size=2, strides=2)\n ```\n\n After:\n\n ```python\n pooling = tf.keras.layers.MaxPooling3D(pool_size=2, strides=2)\n ```\n @end_compatibility\n ", "desc": "Max pooling layer for 3D inputs (e.g. volumes).", "type": "API"}, {"name": "tf.compat.v1.layers.separable_conv1d", "docs": "Functional interface for the depthwise separable 1D convolution layer.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n inputs: Input tensor.\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel.\n pointwise_initializer: An initializer for the pointwise convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel.\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n reuse: Boolean, whether to reuse the weights of a previous layer\n by the same name.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.SeparableConv1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.separable_conv1d(x, filters=3, kernel_size=3)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.SeparableConv1D(filters=3, kernels_size=3)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Functional interface for the depthwise separable 1D convolution layer.", "type": "API"}, {"name": "tf.compat.v1.layers.separable_conv2d", "docs": "Functional interface for the depthwise separable 2D convolution layer.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n inputs: Input tensor.\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A tuple or list of 2 integers specifying the spatial\n dimensions of the filters. Can be a single integer to specify the same\n value for all spatial dimensions.\n strides: A tuple or list of 2 positive integers specifying the strides\n of the convolution. Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel.\n pointwise_initializer: An initializer for the pointwise convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel.\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n reuse: Boolean, whether to reuse the weights of a previous layer\n by the same name.\n\n Returns:\n Output tensor.\n\n Raises:\n ValueError: if eager execution is enabled.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.SeparableConv2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n y = tf.compat.v1.layers.separable_conv2d(x, filters=3, kernel_size=3)\n ```\n\n After:\n\n To migrate code using TF1 functional layers use the [Keras Functional API]\n (https://www.tensorflow.org/guide/keras/functional):\n\n ```python\n x = tf.keras.Input((28, 28, 1))\n y = tf.keras.layers.SeparableConv2D(filters=3, kernels_size=3)(x)\n model = tf.keras.Model(x, y)\n ```\n @end_compatibility\n ", "desc": "Functional interface for the depthwise separable 2D convolution layer.", "type": "API"}, {"name": "tf.compat.v1.layers.SeparableConv1D", "docs": "Depthwise separable 1D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel.\n pointwise_initializer: An initializer for the pointwise convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel.\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.SeparableConv1D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.SeparableConv1D(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.SeparableConv1D(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "Depthwise separable 1D convolution.", "type": "API"}, {"name": "tf.compat.v1.layers.SeparableConv2D", "docs": "Depthwise separable 2D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A tuple or list of 2 integers specifying the spatial\n dimensions of the filters. Can be a single integer to specify the same\n value for all spatial dimensions.\n strides: A tuple or list of 2 positive integers specifying the strides\n of the convolution. Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, height, width)`.\n\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function. Set it to None to maintain a\n linear activation.\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel.\n pointwise_initializer: An initializer for the pointwise convolution kernel.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer will be used.\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel.\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training.\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n trainable: Boolean, if `True` also add variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).\n name: A string, the name of the layer.\n\n\n @compatibility(TF2)\n This API is a legacy api that is only compatible with eager execution and\n `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`\n\n Please refer to [tf.layers model mapping section of the migration guide]\n (https://www.tensorflow.org/guide/migrate/model_mapping)\n to learn how to use your TensorFlow v1 model in TF2 with Keras.\n\n The corresponding TensorFlow v2 layer is\n `tf.keras.layers.SeparableConv2D`.\n\n\n #### Structural Mapping to Native TF2\n\n None of the supported arguments have changed name.\n\n Before:\n\n ```python\n conv = tf.compat.v1.layers.SeparableConv2D(filters=3, kernel_size=3)\n ```\n\n After:\n\n ```python\n conv = tf.keras.layers.SeparableConv2D(filters=3, kernels_size=3)\n ```\n @end_compatibility\n ", "desc": "Depthwise separable 2D convolution.", "type": "API"}, {"name": "tf.compat.v1.lbeta", "docs": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.\n\n Given one-dimensional $z = [z_1,...,z_K]$, we define\n\n $$Beta(z) = \\frac{\\prod_j \\Gamma(z_j)}{\\Gamma(\\sum_j z_j)},$$\n\n where $\\Gamma$ is the gamma function.\n\n And for $n + 1$ dimensional $x$ with shape $[N_1, ..., N_n, K]$, we define\n\n $$lbeta(x)[i_1, ..., i_n] = \\log{|Beta(x[i_1, ..., i_n, :])|}.$$\n\n In other words, the last dimension is treated as the $z$ vector.\n\n Note that if $z = [u, v]$, then\n\n $$Beta(z) = \\frac{\\Gamma(u)\\Gamma(v)}{\\Gamma(u + v)}\n = \\int_0^1 t^{u-1} (1 - t)^{v-1} \\mathrm{d}t,$$\n\n which defines the traditional bivariate beta function.\n\n If the last dimension is empty, we follow the convention that the sum over\n the empty set is zero, and the product is one.\n\n Args:\n x: A rank `n + 1` `Tensor`, `n >= 0` with type `float`, or `double`.\n name: A name for the operation (optional).\n\n Returns:\n The logarithm of \\\\(|Beta(x)|\\\\) reducing along the last dimension.\n ", "desc": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.", "type": "API"}, {"name": "tf.compat.v1.less", "docs": "Returns the truth value of (x < y) element-wise.\n\n *NOTE*: `math.less` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less(x, y) ==> [False, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 7])\n tf.math.less(x, y) ==> [False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x < y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.less_equal", "docs": "Returns the truth value of (x <= y) element-wise.\n\n *NOTE*: `math.less_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less_equal(x, y) ==> [True, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 6])\n tf.math.less_equal(x, y) ==> [True, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x <= y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.lgamma", "docs": "Computes the log of the absolute value of `Gamma(x)` element-wise.\n\n For positive numbers, this function computes log((input - 1)!) for every element in the tensor.\n `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539`\n\n Example:\n\n ```python\n x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])\n tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the log of the absolute value of `Gamma(x)` element-wise.", "type": "API"}, {"name": "tf.compat.v1.lin_space", "docs": "Generates evenly-spaced values in an interval along a given axis.\n\n A sequence of `num` evenly-spaced values are generated beginning at `start`\n along a given `axis`.\n If `num > 1`, the values in the sequence increase by\n `(stop - start) / (num - 1)`, so that the last one is exactly `stop`.\n If `num <= 0`, `ValueError` is raised.\n\n Matches\n [np.linspace](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)'s\n behaviour\n except when `num == 0`.\n\n For example:\n\n ```\n tf.linspace(10.0, 12.0, 3, name=\"linspace\") => [ 10.0 11.0 12.0]\n ```\n\n `Start` and `stop` can be tensors of arbitrary size:\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=0)\n \n\n `Axis` is where the values will be generated (the dimension in the\n returned tensor which corresponds to the axis will be equal to `num`)\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=-1)\n \n\n\n\n Args:\n start: A `Tensor`. Must be one of the following types: `bfloat16`,\n `float32`, `float64`. N-D tensor. First entry in the range.\n stop: A `Tensor`. Must have the same type and shape as `start`. N-D tensor.\n Last entry in the range.\n num: A `Tensor`. Must be one of the following types: `int32`, `int64`. 0-D\n tensor. Number of values to generate.\n name: A name for the operation (optional).\n axis: Axis along which the operation is performed (used only when N-D\n tensors are provided).\n\n Returns:\n A `Tensor`. Has the same type as `start`.\n ", "desc": "Generates evenly-spaced values in an interval along a given axis.", "type": "API"}, {"name": "tf.compat.v1.linalg", "docs": "Operations for linear algebra.\n", "desc": "Operations for linear algebra.", "type": "API"}, {"name": "tf.compat.v1.linalg.adjoint", "docs": "Transposes the last two dimensions of and conjugates tensor `matrix`.\n\n For example:\n\n ```python\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.adjoint(x) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n ```\n\n Args:\n matrix: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`,\n or `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op` (optional).\n\n Returns:\n The adjoint (a.k.a. Hermitian transpose a.k.a. conjugate transpose) of\n matrix.\n ", "desc": "Transposes the last two dimensions of and conjugates tensor `matrix`.", "type": "API"}, {"name": "tf.compat.v1.linalg.band_part", "docs": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.\n\n The `band` part is computed as follows:\n Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a\n tensor with the same shape where\n\n `band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.\n\n The indicator function\n\n `in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) &&\n (num_upper < 0 || (n-m) <= num_upper)`.\n\n For example:\n\n ```\n # if 'input' is [[ 0, 1, 2, 3]\n # [-1, 0, 1, 2]\n # [-2, -1, 0, 1]\n # [-3, -2, -1, 0]],\n\n tf.linalg.band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]\n [-1, 0, 1, 2]\n [ 0, -1, 0, 1]\n [ 0, 0, -1, 0]],\n\n tf.linalg.band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]\n [-1, 0, 1, 0]\n [-2, -1, 0, 1]\n [ 0, -2, -1, 0]]\n ```\n\n Useful special cases:\n\n ```\n tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.\n tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.\n tf.linalg.band_part(input, 0, 0) ==> Diagonal.\n ```\n\n Args:\n input: A `Tensor`. Rank `k` tensor.\n num_lower: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D tensor. Number of subdiagonals to keep. If negative, keep entire\n lower triangle.\n num_upper: A `Tensor`. Must have the same type as `num_lower`.\n 0-D tensor. Number of superdiagonals to keep. If negative, keep\n entire upper triangle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.", "type": "API"}, {"name": "tf.compat.v1.linalg.cholesky", "docs": "Computes the Cholesky decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be symmetric and positive definite. Only the lower-triangular\n part of the input will be used for this operation. The upper-triangular part\n will not be read.\n\n The output is a tensor of the same shape as the input\n containing the Cholesky decompositions for all input submatrices `[..., :, :]`.\n\n **Note**: The gradient computation on GPU is faster for large matrices but\n not for large batch dimensions when the submatrices are small. In this\n case it might be faster to use the CPU.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the Cholesky decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.cholesky_solve", "docs": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.\n\n Specifically, returns `X` from `A X = RHS`, where `A = L L^T`, `L` is the\n `chol` arg and `RHS` is the `rhs` arg.\n\n ```python\n # Solve 10 separate 2x2 linear systems:\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 1\n chol = tf.linalg.cholesky(A) # shape 10 x 2 x 2\n X = tf.linalg.cholesky_solve(chol, RHS) # shape 10 x 2 x 1\n # tf.matmul(A, X) ~ RHS\n X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]\n\n # Solve five linear systems (K = 5) for every member of the length 10 batch.\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 5\n ...\n X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]\n ```\n\n Args:\n chol: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.\n Cholesky factorization of `A`, e.g. `chol = tf.linalg.cholesky(A)`.\n For that reason, only the lower triangular parts (including the diagonal)\n of the last two dimensions of `chol` are used. The strictly upper part is\n assumed to be zero and not accessed.\n rhs: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.\n name: A name to give this `Op`. Defaults to `cholesky_solve`.\n\n Returns:\n Solution to `A x = rhs`, shape `[..., M, K]`.\n ", "desc": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.", "type": "API"}, {"name": "tf.compat.v1.linalg.cross", "docs": "Compute the pairwise cross product.\n\n `a` and `b` must be the same shape; they can either be simple 3-element vectors,\n or any shape where the innermost dimension is 3. In the latter case, each pair\n of corresponding 3-element vectors is cross-multiplied independently.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n A tensor containing 3-element vectors.\n b: A `Tensor`. Must have the same type as `a`.\n Another tensor, of same type and shape as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the pairwise cross product.", "type": "API"}, {"name": "tf.compat.v1.linalg.det", "docs": "Computes the determinant of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor containing the determinants\n for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the determinant of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.diag", "docs": "Returns a batched diagonal tensor with given batched diagonal values.\n\n Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th\n diagonals of a matrix, with everything else padded with `padding`. `num_rows`\n and `num_cols` specify the dimension of the innermost matrix of the output. If\n both are not specified, the op assumes the innermost matrix is square and\n infers its size from `k` and the innermost dimension of `diagonal`. If only\n one of them is specified, the op assumes the unspecified value is the smallest\n possible based on other criteria.\n\n Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor\n has rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only\n one diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has\n rank `r` with shape `[I, J, ..., L, num_rows, num_cols]`.\n\n The second innermost dimension of `diagonal` has double meaning. When `k` is\n scalar or `k[0] == k[1]`, `M` is part of the batch size [I, J, ..., M], and\n the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper\n padding_value ; otherwise\n ```\n\n Otherwise, `M` is treated as the number of diagonals for the matrix in the\n same batch (`M = k[1]-k[0]+1`), and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n padding_value ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4)\n [5, 6, 7, 8]])\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4)\n [0, 2, 0, 0],\n [0, 0, 3, 0],\n [0, 0, 0, 4]],\n [[5, 0, 0, 0],\n [0, 6, 0, 0],\n [0, 0, 7, 0],\n [0, 0, 0, 8]]]\n\n # A superdiagonal (per batch).\n diagonal = np.array([[1, 2, 3], # Input shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_diag(diagonal, k = 1)\n ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4)\n [0, 0, 2, 0],\n [0, 0, 0, 3],\n [0, 0, 0, 0]],\n [[0, 4, 0, 0],\n [0, 0, 5, 0],\n [0, 0, 0, 6],\n [0, 0, 0, 0]]]\n\n # A tridiagonal band (per batch).\n diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [0, 4, 5]],\n [[2, 3, 0],\n [6, 7, 9],\n [0, 9, 1]]])\n tf.matrix_diag(diagonals, k = (-1, 1))\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 2, 3],\n [6, 7, 9],\n [9, 1, 0]]])\n tf.matrix_diag(diagonals, k = (-1, 1), align=\"RIGHT_LEFT\")\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # Rectangular matrix.\n diagonal = np.array([1, 2]) # Input shape: (2)\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)\n ==> [[0, 0, 0, 0], # Output shape: (3, 4)\n [1, 0, 0, 0],\n [0, 2, 0, 0]]\n\n # Rectangular matrix with inferred num_cols and padding_value = 9.\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)\n ==> [[9, 9], # Output shape: (3, 2)\n [1, 9],\n [9, 2]]\n ```\n\n Args:\n diagonal: A `Tensor` with `rank k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n num_rows: The number of rows of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n num_cols: The number of columns of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with given batched diagonal values.", "type": "API"}, {"name": "tf.compat.v1.linalg.diag_part", "docs": "Returns the batched diagonal part of a batched tensor.\n\n Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched\n `input`.\n\n Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`.\n Let `max_diag_len` be the maximum length among all diagonals to be extracted,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n Let `num_diags` be the number of diagonals to extract,\n `num_diags = k[1] - k[0] + 1`.\n\n If `num_diags == 1`, the output tensor is of rank `r - 1` with shape\n `[I, J, ..., L, max_diag_len]` and values:\n\n ```\n diagonal[i, j, ..., l, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.\n\n Otherwise, the output tensor has rank `r` with dimensions\n `[I, J, ..., L, num_diags, max_diag_len]` with values:\n\n ```\n diagonal[i, j, ..., l, m, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)\n [5, 6, 7, 8],\n [9, 8, 7, 6]],\n [[5, 4, 3, 2],\n [1, 2, 3, 4],\n [5, 6, 7, 8]]])\n\n # A main diagonal from each batch.\n tf.linalg.diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)\n [5, 2, 7]]\n\n # A superdiagonal from each batch.\n tf.linalg.diag_part(input, k = 1)\n ==> [[2, 7, 6], # Output shape: (2, 3)\n [4, 3, 8]]\n\n # A band from each batch.\n tf.linalg.diag_part(input, k = (-1, 2))\n ==> [[[3, 8, 0], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [0, 5, 8]],\n [[3, 4, 0],\n [4, 3, 8],\n [5, 2, 7],\n [0, 1, 6]]]\n\n # RIGHT_LEFT alignment.\n tf.linalg.diag_part(input, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[0, 3, 8], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [5, 8, 0]],\n [[0, 3, 4],\n [4, 3, 8],\n [5, 2, 7],\n [1, 6, 0]]]\n\n # max_diag_len can be shorter than the main diagonal.\n tf.linalg.diag_part(input, k = (-2, -1))\n ==> [[[5, 8],\n [0, 9]],\n [[1, 6],\n [0, 5]]]\n\n # padding_value = 9\n tf.linalg.diag_part(input, k = (1, 3), padding_value = 9)\n ==> [[[4, 9, 9], # Output shape: (2, 3, 3)\n [3, 8, 9],\n [2, 7, 6]],\n [[2, 9, 9],\n [3, 4, 9],\n [4, 3, 8]]]\n\n ```\n\n Args:\n input: A `Tensor` with `rank k >= 2`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`.\n\n Raises:\n InvalidArgumentError: When `k` is out of bound or when `k[0]>k[1:]`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.compat.v1.linalg.eigh", "docs": "Computes the eigen decomposition of a batch of self-adjoint matrices.\n\n Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices\n in `tensor` such that\n `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of\n each inner inner matrix is referenced.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order.\n v: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most\n matrices contain eigenvectors of the corresponding matrices in `tensor`\n ", "desc": "Computes the eigen decomposition of a batch of self-adjoint matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.eigvalsh", "docs": "Computes the eigenvalues of one or more self-adjoint matrices.\n\n Note: If your program backpropagates through this function, you should replace\n it with a call to tf.linalg.eigh (possibly ignoring the second output) to\n avoid computing the eigen decomposition twice. This is because the\n eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See\n _SelfAdjointEigV2Grad in linalg_grad.py.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`\n eigenvalues of `tensor[..., :, :]`.\n ", "desc": "Computes the eigenvalues of one or more self-adjoint matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.einsum", "docs": "Tensor contraction over specified indices and outer product.\n\n Einsum allows defining Tensors by defining their element-wise computation.\n This computation is defined by `equation`, a shorthand form based on Einstein\n summation. As an example, consider multiplying two matrices A and B to form a\n matrix C. The elements of C are given by:\n\n $$ C_{i,k} = \\sum_j A_{i,j} B_{j,k} $$\n\n or\n\n ```\n C[i,k] = sum_j A[i,j] * B[j,k]\n ```\n\n The corresponding einsum `equation` is:\n\n ```\n ij,jk->ik\n ```\n\n In general, to convert the element-wise equation into the `equation` string,\n use the following procedure (intermediate strings for matrix multiplication\n example provided in parentheses):\n\n 1. remove variable names, brackets, and commas, (`ik = sum_j ij * jk`)\n 2. replace \"*\" with \",\", (`ik = sum_j ij , jk`)\n 3. drop summation signs, and (`ik = ij, jk`)\n 4. move the output to the right, while replacing \"=\" with \"->\". (`ij,jk->ik`)\n\n Note: If the output indices are not specified repeated indices are summed.\n So `ij,jk->ik` can be simplified to `ij,jk`.\n\n Many common operations can be expressed in this way. For example:\n\n **Matrix multiplication**\n\n >>> m0 = tf.random.normal(shape=[2, 3])\n >>> m1 = tf.random.normal(shape=[3, 5])\n >>> e = tf.einsum('ij,jk->ik', m0, m1)\n >>> # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n Repeated indices are summed if the output indices are not specified.\n\n >>> e = tf.einsum('ij,jk', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n\n **Dot product**\n\n >>> u = tf.random.normal(shape=[5])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,i->', u, v) # output = sum_i u[i]*v[i]\n >>> print(e.shape)\n ()\n\n **Outer product**\n\n >>> u = tf.random.normal(shape=[3])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]\n >>> print(e.shape)\n (3, 5)\n\n **Transpose**\n\n >>> m = tf.ones(2,3)\n >>> e = tf.einsum('ij->ji', m0) # output[j,i] = m0[i,j]\n >>> print(e.shape)\n (3, 2)\n\n **Diag**\n\n >>> m = tf.reshape(tf.range(9), [3,3])\n >>> diag = tf.einsum('ii->i', m)\n >>> print(diag.shape)\n (3,)\n\n **Trace**\n\n >>> # Repeated indices are summed.\n >>> trace = tf.einsum('ii', m) # output[j,i] = trace(m) = sum_i m[i, i]\n >>> assert trace == sum(diag)\n >>> print(trace.shape)\n ()\n\n **Batch matrix multiplication**\n\n >>> s = tf.random.normal(shape=[7,5,3])\n >>> t = tf.random.normal(shape=[7,3,2])\n >>> e = tf.einsum('bij,bjk->bik', s, t)\n >>> # output[a,i,k] = sum_j s[a,i,j] * t[a, j, k]\n >>> print(e.shape)\n (7, 5, 2)\n\n This method does not support broadcasting on named-axes. All axes with\n matching labels should have the same length. If you have length-1 axes,\n use `tf.squeeze` or `tf.reshape` to eliminate them.\n\n To write code that is agnostic to the number of indices in the input\n use an ellipsis. The ellipsis is a placeholder for \"whatever other indices\n fit here\".\n\n For example, to perform a NumPy-style broadcasting-batch-matrix multiplication\n where the matrix multiply acts on the last two axes of the input, use:\n\n >>> s = tf.random.normal(shape=[11, 7, 5, 3])\n >>> t = tf.random.normal(shape=[11, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Einsum **will** broadcast over axes covered by the ellipsis.\n\n >>> s = tf.random.normal(shape=[11, 1, 5, 3])\n >>> t = tf.random.normal(shape=[1, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Args:\n equation: a `str` describing the contraction, in the same format as\n `numpy.einsum`.\n *inputs: the inputs to contract (each one a `Tensor`), whose shapes should\n be consistent with `equation`.\n **kwargs:\n - optimize: Optimization strategy to use to find contraction path using\n opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or\n 'auto'. (optional, default: 'greedy').\n - name: A name for the operation (optional).\n\n Returns:\n The contracted `Tensor`, with shape determined by `equation`.\n\n Raises:\n ValueError: If\n - the format of `equation` is incorrect,\n - number of inputs or their shapes are inconsistent with `equation`.\n ", "desc": "Tensor contraction over specified indices and outer product.", "type": "API"}, {"name": "tf.compat.v1.linalg.experimental", "docs": "Public API for tf.linalg.experimental namespace.\n", "desc": "Public API for tf.linalg.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.linalg.experimental.conjugate_gradient", "docs": "Conjugate gradient solver.\n\n Solves a linear system of equations `A*x = rhs` for self-adjoint, positive\n definite matrix `A` and right-hand side vector `rhs`, using an iterative,\n matrix-free algorithm where the action of the matrix A is represented by\n `operator`. The iteration terminates when either the number of iterations\n exceeds `max_iter` or when the residual norm has been reduced to `tol`\n times its initial value, i.e. \\\\(||rhs - A x_k|| <= tol ||rhs||\\\\).\n\n Args:\n operator: A `LinearOperator` that is self-adjoint and positive definite.\n rhs: A possibly batched vector of shape `[..., N]` containing the right-hand\n size vector.\n preconditioner: A `LinearOperator` that approximates the inverse of `A`.\n An efficient preconditioner could dramatically improve the rate of\n convergence. If `preconditioner` represents matrix `M`(`M` approximates\n `A^{-1}`), the algorithm uses `preconditioner.apply(x)` to estimate\n `A^{-1}x`. For this to be useful, the cost of applying `M` should be\n much lower than computing `A^{-1}` directly.\n x: A possibly batched vector of shape `[..., N]` containing the initial\n guess for the solution.\n tol: A float scalar convergence tolerance.\n max_iter: An integer giving the maximum number of iterations.\n name: A name scope for the operation.\n\n Returns:\n output: A namedtuple representing the final state with fields:\n - i: A scalar `int32` `Tensor`. Number of iterations executed.\n - x: A rank-1 `Tensor` of shape `[..., N]` containing the computed\n solution.\n - r: A rank-1 `Tensor` of shape `[.., M]` containing the residual vector.\n - p: A rank-1 `Tensor` of shape `[..., N]`. `A`-conjugate basis vector.\n - gamma: \\\\(r \\dot M \\dot r\\\\), equivalent to \\\\(||r||_2^2\\\\) when\n `preconditioner=None`.\n ", "desc": "Conjugate gradient solver.", "type": "API"}, {"name": "tf.compat.v1.linalg.expm", "docs": "Computes the matrix exponential of one or more square matrices.\n\n $$exp(A) = \\sum_{n=0}^\\infty A^n/n!$$\n\n The exponential is computed using a combination of the scaling and squaring\n method and the Pade approximation. Details can be found in:\n Nicholas J. Higham, \"The scaling and squaring method for the matrix\n exponential revisited,\" SIAM J. Matrix Anal. Applic., 26:1179-1193, 2005.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the exponential for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or\n `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op` (optional).\n\n Returns:\n the matrix exponential of the input.\n\n Raises:\n ValueError: An unsupported type is provided as input.\n\n @compatibility(scipy)\n Equivalent to scipy.linalg.expm\n @end_compatibility\n ", "desc": "Computes the matrix exponential of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.eye", "docs": "Construct an identity matrix, or a batch of matrices.\n\n See also `tf.ones`, `tf.zeros`, `tf.fill`, `tf.one_hot`.\n\n ```python\n # Construct one identity matrix.\n tf.eye(2)\n ==> [[1., 0.],\n [0., 1.]]\n\n # Construct a batch of 3 identity matrices, each 2 x 2.\n # batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.\n batch_identity = tf.eye(2, batch_shape=[3])\n\n # Construct one 2 x 3 \"identity\" matrix\n tf.eye(2, num_columns=3)\n ==> [[ 1., 0., 0.],\n [ 0., 1., 0.]]\n ```\n\n Args:\n num_rows: Non-negative `int32` scalar `Tensor` giving the number of rows\n in each batch matrix.\n num_columns: Optional non-negative `int32` scalar `Tensor` giving the number\n of columns in each batch matrix. Defaults to `num_rows`.\n batch_shape: A list or tuple of Python integers or a 1-D `int32` `Tensor`.\n If provided, the returned `Tensor` will have leading batch dimensions of\n this shape.\n dtype: The type of an element in the resulting `Tensor`\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `Tensor` of shape `batch_shape + [num_rows, num_columns]`\n ", "desc": "Construct an identity matrix, or a batch of matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.global_norm", "docs": "Computes the global norm of multiple tensors.\n\n Given a tuple or list of tensors `t_list`, this operation returns the\n global norm of the elements in all tensors in `t_list`. The global norm is\n computed as:\n\n `global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`\n\n Any entries in `t_list` that are of type None are ignored.\n\n Args:\n t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.\n name: A name for the operation (optional).\n\n Returns:\n A 0-D (scalar) `Tensor` of type `float`.\n\n Raises:\n TypeError: If `t_list` is not a sequence.\n ", "desc": "Computes the global norm of multiple tensors.", "type": "API"}, {"name": "tf.compat.v1.linalg.inv", "docs": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).\n\n \n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the inverse for all input submatrices `[..., :, :]`.\n\n The op uses LU decomposition with partial pivoting to compute the inverses.\n\n If a matrix is not invertible there is no guarantee what the op does. It\n may detect the condition and raise an exception or it may simply return a\n garbage result.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).", "type": "API"}, {"name": "tf.compat.v1.linalg.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperator", "docs": "Base class defining a [batch of] linear operator[s].\n\n Subclasses of `LinearOperator` provide access to common methods on a\n (batch) matrix, without the need to materialize the matrix. This allows:\n\n * Matrix free computations\n * Operators that take advantage of special structure, while providing a\n consistent API to users.\n\n #### Subclassing\n\n To enable a public method, subclasses should implement the leading-underscore\n version of the method. The argument signature should be identical except for\n the omission of `name=\"...\"`. For example, to enable\n `matmul(x, adjoint=False, name=\"matmul\")` a subclass should implement\n `_matmul(x, adjoint=False)`.\n\n #### Performance contract\n\n Subclasses should only implement the assert methods\n (e.g. `assert_non_singular`) if they can be done in less than `O(N^3)`\n time.\n\n Class docstrings should contain an explanation of computational complexity.\n Since this is a high-performance library, attention should be paid to detail,\n and explanations can include constants as well as Big-O notation.\n\n #### Shape compatibility\n\n `LinearOperator` subclasses should operate on a [batch] matrix with\n compatible shape. Class docstrings should define what is meant by compatible\n shape. Some subclasses may not support batching.\n\n Examples:\n\n `x` is a batch matrix with compatible shape for `matmul` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], b >= 0,\n x.shape = [B1,...,Bb] + [N, R]\n ```\n\n `rhs` is a batch matrix with compatible shape for `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], b >= 0,\n rhs.shape = [B1,...,Bb] + [M, R]\n ```\n\n #### Example docstring for subclasses.\n\n This operator acts like a (batch) matrix `A` with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `m x n` matrix. Again, this matrix `A` may not be materialized, but for\n purposes of identifying and working with compatible arguments the shape is\n relevant.\n\n Examples:\n\n ```python\n some_tensor = ... shape = ????\n operator = MyLinOp(some_tensor)\n\n operator.shape()\n ==> [2, 4, 4]\n\n operator.log_abs_determinant()\n ==> Shape [2] Tensor\n\n x = ... Shape [2, 4, 5] Tensor\n\n operator.matmul(x)\n ==> Shape [2, 4, 5] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on batch matrices with compatible shape.\n FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE\n\n #### Performance\n\n FILL THIS IN\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n\n #### Initialization parameters\n\n All subclasses of `LinearOperator` are expected to pass a `parameters`\n argument to `super().__init__()`. This should be a `dict` containing\n the unadulterated arguments passed to the subclass `__init__`. For example,\n `MyLinearOperator` with an initializer should look like:\n\n ```python\n def __init__(self, operator, is_square=False, name=None):\n parameters = dict(\n operator=operator,\n is_square=is_square,\n name=name\n )\n ...\n super().__init__(..., parameters=parameters)\n ```\n\n Users can then access `my_linear_operator.parameters` to see all arguments\n passed to its initializer.\n ", "desc": "Base class defining a [batch of] linear operator[s].", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorAdjoint", "docs": "`LinearOperator` representing the adjoint of another operator.\n\n This operator represents the adjoint of another operator.\n\n ```python\n # Create a 2 x 2 linear operator.\n operator = LinearOperatorFullMatrix([[1 - i., 3.], [0., 1. + i]])\n operator_adjoint = LinearOperatorAdjoint(operator)\n\n operator_adjoint.to_dense()\n ==> [[1. + i, 0.]\n [3., 1 - i]]\n\n operator_adjoint.shape\n ==> [2, 2]\n\n operator_adjoint.log_abs_determinant()\n ==> - log(2)\n\n x = ... Shape [2, 4] Tensor\n operator_adjoint.matmul(x)\n ==> Shape [2, 4] Tensor, equal to operator.matmul(x, adjoint=True)\n ```\n\n #### Performance\n\n The performance of `LinearOperatorAdjoint` depends on the underlying\n operators performance.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` representing the adjoint of another operator.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorBlockDiag", "docs": "Combines one or more `LinearOperators` in to a Block Diagonal matrix.\n\n This operator combines one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator`, whose underlying matrix representation\n has each operator `opi` on the main diagonal, and zero's elsewhere.\n\n #### Shape compatibility\n\n If `opj` acts like a [batch] matrix `Aj`, then `op_combined` acts like\n the [batch] matrix formed by having each matrix `Aj` on the main\n diagonal.\n\n Each `opj` is required to represent a matrix, and hence will have\n shape `batch_shape_j + [M_j, N_j]`.\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the combined operator\n has shape `broadcast_batch_shape + [sum M_j, sum N_j]`, where\n `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`,\n `j = 1,...,J`, assuming the intermediate batch shapes broadcast.\n\n Arguments to `matmul`, `matvec`, `solve`, and `solvevec` may either be single\n `Tensor`s or lists of `Tensor`s that are interpreted as blocks. The `j`th\n element of a blockwise list of `Tensor`s must have dimensions that match\n `opj` for the given method. If a list of blocks is input, then a list of\n blocks is returned as well.\n\n When the `opj` are not guaranteed to be square, this operator's methods might\n fail due to the combined operator not being square and/or lack of efficient\n methods.\n\n ```python\n # Create a 4 x 4 linear operator combined of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n operator = LinearOperatorBlockDiag([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 2., 0., 0.],\n [3., 4., 0., 0.],\n [0., 0., 1., 0.],\n [0., 0., 0., 1.]]\n\n operator.shape\n ==> [4, 4]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x1 = ... # Shape [2, 2] Tensor\n x2 = ... # Shape [2, 2] Tensor\n x = tf.concat([x1, x2], 0) # Shape [2, 4] Tensor\n operator.matmul(x)\n ==> tf.concat([operator_1.matmul(x1), operator_2.matmul(x2)])\n\n # Create a 5 x 4 linear operator combining three blocks.\n operator_1 = LinearOperatorFullMatrix([[1.], [3.]])\n operator_2 = LinearOperatorFullMatrix([[1., 6.]])\n operator_3 = LinearOperatorFullMatrix([[2.], [7.]])\n operator = LinearOperatorBlockDiag([operator_1, operator_2, operator_3])\n\n operator.to_dense()\n ==> [[1., 0., 0., 0.],\n [3., 0., 0., 0.],\n [0., 1., 6., 0.],\n [0., 0., 0., 2.]]\n [0., 0., 0., 7.]]\n\n operator.shape\n ==> [5, 4]\n\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])\n operator_44 = LinearOperatorFullMatrix(matrix)\n\n # Create a [1, 3] batch of 5 x 5 linear operators.\n matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])\n operator_55 = LinearOperatorFullMatrix(matrix_55)\n\n # Combine to create a [2, 3] batch of 9 x 9 operators.\n operator_99 = LinearOperatorBlockDiag([operator_44, operator_55])\n\n # Create a shape [2, 3, 9] vector.\n x = tf.random.normal(shape=[2, 3, 9])\n operator_99.matmul(x)\n ==> Shape [2, 3, 9] Tensor\n\n # Create a blockwise list of vectors.\n x = [tf.random.normal(shape=[2, 3, 4]), tf.random.normal(shape=[2, 3, 5])]\n operator_99.matmul(x)\n ==> [Shape [2, 3, 4] Tensor, Shape [2, 3, 5] Tensor]\n ```\n\n #### Performance\n\n The performance of `LinearOperatorBlockDiag` on any operation is equal to\n the sum of the individual operators' operations.\n\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Combines one or more `LinearOperators` in to a Block Diagonal matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorBlockLowerTriangular", "docs": "Combines `LinearOperators` into a blockwise lower-triangular matrix.\n\n This operator is initialized with a nested list of linear operators, which\n are combined into a new `LinearOperator` whose underlying matrix\n representation is square and has each operator on or below the main diagonal,\n and zero's elsewhere. Each element of the outer list is a list of\n `LinearOperators` corresponding to a row-partition of the blockwise structure.\n The number of `LinearOperator`s in row-partion `i` must be equal to `i`.\n\n For example, a blockwise `3 x 3` `LinearOperatorBlockLowerTriangular` is\n initialized with the list `[[op_00], [op_10, op_11], [op_20, op_21, op_22]]`,\n where the `op_ij`, `i < 3, j <= i`, are `LinearOperator` instances. The\n `LinearOperatorBlockLowerTriangular` behaves as the following blockwise\n matrix, where `0` represents appropriately-sized [batch] matrices of zeros:\n\n ```none\n [[op_00, 0, 0],\n [op_10, op_11, 0],\n [op_20, op_21, op_22]]\n ```\n\n Each `op_jj` on the diagonal is required to represent a square matrix, and\n hence will have shape `batch_shape_j + [M_j, M_j]`. `LinearOperator`s in row\n `j` of the blockwise structure must have `range_dimension` equal to that of\n `op_jj`, and `LinearOperators` in column `j` must have `domain_dimension`\n equal to that of `op_jj`.\n\n If each `op_jj` on the diagonal has shape `batch_shape_j + [M_j, M_j]`, then\n the combined operator has shape `broadcast_batch_shape + [sum M_j, sum M_j]`,\n where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`,\n `j = 0, 1, ..., J`, assuming the intermediate batch shapes broadcast.\n Even if the combined shape is well defined, the combined operator's\n methods may fail due to lack of broadcasting ability in the defining\n operators' methods.\n\n For example, to create a 4 x 4 linear operator combined of three 2 x 2\n operators:\n >>> operator_0 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n >>> operator_1 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n >>> operator_2 = tf.linalg.LinearOperatorLowerTriangular([[5., 6.], [7., 8]])\n >>> operator = LinearOperatorBlockLowerTriangular(\n ... [[operator_0], [operator_1, operator_2]])\n\n >>> operator.to_dense()\n \n\n >>> operator.shape\n TensorShape([4, 4])\n\n >>> operator.log_abs_determinant()\n \n\n >>> x0 = [[1., 6.], [-3., 4.]]\n >>> x1 = [[0., 2.], [4., 0.]]\n >>> x = tf.concat([x0, x1], 0) # Shape [2, 4] Tensor\n >>> operator.matmul(x)\n \n\n The above `matmul` is equivalent to:\n >>> tf.concat([operator_0.matmul(x0),\n ... operator_1.matmul(x0) + operator_2.matmul(x1)], axis=0)\n \n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n For example:\n\n Create a [2, 3] batch of 4 x 4 linear operators:\n >>> matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])\n >>> operator_44 = tf.linalg.LinearOperatorFullMatrix(matrix_44)\n\n Create a [1, 3] batch of 5 x 4 linear operators:\n >>> matrix_54 = tf.random.normal(shape=[1, 3, 5, 4])\n >>> operator_54 = tf.linalg.LinearOperatorFullMatrix(matrix_54)\n\n Create a [1, 3] batch of 5 x 5 linear operators:\n >>> matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])\n >>> operator_55 = tf.linalg.LinearOperatorFullMatrix(matrix_55)\n\n Combine to create a [2, 3] batch of 9 x 9 operators:\n >>> operator_99 = LinearOperatorBlockLowerTriangular(\n ... [[operator_44], [operator_54, operator_55]])\n >>> operator_99.shape\n TensorShape([2, 3, 9, 9])\n\n Create a shape [2, 1, 9] batch of vectors and apply the operator to it.\n >>> x = tf.random.normal(shape=[2, 1, 9])\n >>> y = operator_99.matvec(x)\n >>> y.shape\n TensorShape([2, 3, 9])\n\n Create a blockwise list of vectors and apply the operator to it. A blockwise\n list is returned.\n >>> x4 = tf.random.normal(shape=[2, 1, 4])\n >>> x5 = tf.random.normal(shape=[2, 3, 5])\n >>> y_blockwise = operator_99.matvec([x4, x5])\n >>> y_blockwise[0].shape\n TensorShape([2, 3, 4])\n >>> y_blockwise[1].shape\n TensorShape([2, 3, 5])\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorBlockLowerTriangular` consisting of `D`\n row-partitions and `D` column-partitions, such that the total number of\n operators is `N = D * (D + 1) // 2`.\n\n * `operator.matmul` has complexity equal to the sum of the `matmul`\n complexities of the individual operators.\n * `operator.solve` has complexity equal to the sum of the `solve` complexities\n of the operators on the diagonal and the `matmul` complexities of the\n operators off the diagonal.\n * `operator.determinant` has complexity equal to the sum of the `determinant`\n complexities of the operators on the diagonal.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Combines `LinearOperators` into a blockwise lower-triangular matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorCirculant", "docs": "`LinearOperator` acting like a circulant matrix.\n\n This operator acts like a circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of circulant matrices\n\n Circulant means the entries of `A` are generated by a single vector, the\n convolution kernel `h`: `A_{mn} := h_{m-n mod N}`. With `h = [w, x, y, z]`,\n\n ```\n A = |w z y x|\n |x w z y|\n |y x w z|\n |z y x w|\n ```\n\n This means that the result of matrix multiplication `v = Au` has `Lth` column\n given circular convolution between `h` with the `Lth` column of `u`.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions. Define the discrete Fourier transform (DFT) and its inverse by\n\n ```\n DFT[ h[n] ] = H[k] := sum_{n = 0}^{N - 1} h_n e^{-i 2pi k n / N}\n IDFT[ H[k] ] = h[n] = N^{-1} sum_{k = 0}^{N - 1} H_k e^{i 2pi k n / N}\n ```\n\n From these definitions, we see that\n\n ```\n H[0] = sum_{n = 0}^{N - 1} h_n\n H[1] = \"the first positive frequency\"\n H[N - 1] = \"the first negative frequency\"\n ```\n\n Loosely speaking, with `*` element-wise multiplication, matrix multiplication\n is equal to the action of a Fourier multiplier: `A u = IDFT[ H * DFT[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT[u]` be the `[N, R]`\n matrix with `rth` column equal to the DFT of the `rth` column of `u`.\n Define the `IDFT` similarly.\n Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT[ H * (DFT[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n Letting `U` be the `kth` Euclidean basis vector, and `U = IDFT[u]`.\n The above formulas show that`A U = H_k * U`. We conclude that the elements\n of `H` are the eigenvalues of this operator. Therefore\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N]`. We say that `H` is a Hermitian spectrum\n if, with `%` meaning modulus division,\n\n ```H[..., n % N] = ComplexConjugate[ H[..., (-n) % N] ]```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n #### Example of a self-adjoint positive definite operator\n\n ```python\n # spectrum is real ==> operator is self-adjoint\n # spectrum is positive ==> operator is positive definite\n spectrum = [6., 4, 2]\n\n operator = LinearOperatorCirculant(spectrum)\n\n # IFFT[spectrum]\n operator.convolution_kernel()\n ==> [4 + 0j, 1 + 0.58j, 1 - 0.58j]\n\n operator.to_dense()\n ==> [[4 + 0.0j, 1 - 0.6j, 1 + 0.6j],\n [1 + 0.6j, 4 + 0.0j, 1 - 0.6j],\n [1 - 0.6j, 1 + 0.6j, 4 + 0.0j]]\n ```\n\n #### Example of defining in terms of a real convolution kernel\n\n ```python\n # convolution_kernel is real ==> spectrum is Hermitian.\n convolution_kernel = [1., 2., 1.]]\n spectrum = tf.signal.fft(tf.cast(convolution_kernel, tf.complex64))\n\n # spectrum is Hermitian ==> operator is real.\n # spectrum is shape [3] ==> operator is shape [3, 3]\n # We force the input/output type to be real, which allows this to operate\n # like a real matrix.\n operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)\n\n operator.to_dense()\n ==> [[ 1, 1, 2],\n [ 2, 1, 1],\n [ 1, 2, 1]]\n ```\n\n #### Example of Hermitian spectrum\n\n ```python\n # spectrum is shape [3] ==> operator is shape [3, 3]\n # spectrum is Hermitian ==> operator is real.\n spectrum = [1, 1j, -1j]\n\n operator = LinearOperatorCirculant(spectrum)\n\n operator.to_dense()\n ==> [[ 0.33 + 0j, 0.91 + 0j, -0.24 + 0j],\n [-0.24 + 0j, 0.33 + 0j, 0.91 + 0j],\n [ 0.91 + 0j, -0.24 + 0j, 0.33 + 0j]\n ```\n\n #### Example of forcing real `dtype` when spectrum is Hermitian\n\n ```python\n # spectrum is shape [4] ==> operator is shape [4, 4]\n # spectrum is real ==> operator is self-adjoint\n # spectrum is Hermitian ==> operator is real\n # spectrum has positive real part ==> operator is positive-definite.\n spectrum = [6., 4, 2, 4]\n\n # Force the input dtype to be float32.\n # Cast the output to float32. This is fine because the operator will be\n # real due to Hermitian spectrum.\n operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)\n\n operator.shape\n ==> [4, 4]\n\n operator.to_dense()\n ==> [[4, 1, 0, 1],\n [1, 4, 1, 0],\n [0, 1, 4, 1],\n [1, 0, 1, 4]]\n\n # convolution_kernel = tf.signal.ifft(spectrum)\n operator.convolution_kernel()\n ==> [4, 1, 0, 1]\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n\n References:\n Toeplitz and Circulant Matrices - A Review:\n [Gray, 2006](https://www.nowpublishers.com/article/Details/CIT-006)\n ([pdf](https://ee.stanford.edu/~gray/toeplitz.pdf))\n ", "desc": "`LinearOperator` acting like a circulant matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorCirculant2D", "docs": "`LinearOperator` acting like a block circulant matrix.\n\n This operator acts like a block circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of block circulant matrices\n\n If `A` is block circulant, with block sizes `N0, N1` (`N0 * N1 = N`):\n `A` has a block circulant structure, composed of `N0 x N0` blocks, with each\n block an `N1 x N1` circulant matrix.\n\n For example, with `W`, `X`, `Y`, `Z` each circulant,\n\n ```\n A = |W Z Y X|\n |X W Z Y|\n |Y X W Z|\n |Z Y X W|\n ```\n\n Note that `A` itself will not in general be circulant.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions.\n\n If `H.shape = [N0, N1]`, (`N0 * N1 = N`):\n Loosely speaking, matrix multiplication is equal to the action of a\n Fourier multiplier: `A u = IDFT2[ H DFT2[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT2[u]` be the\n `[N0, N1, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, R]` and taking\n a two dimensional DFT across the first two dimensions. Let `IDFT2` be the\n inverse of `DFT2`. Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT2[ H * (DFT2[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N0, N1]`, we say that `H` is a Hermitian\n spectrum if, with `%` indicating modulus division,\n\n ```\n H[..., n0 % N0, n1 % N1] = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1 ].\n ```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n ### Example of a self-adjoint positive definite operator\n\n ```python\n # spectrum is real ==> operator is self-adjoint\n # spectrum is positive ==> operator is positive definite\n spectrum = [[1., 2., 3.],\n [4., 5., 6.],\n [7., 8., 9.]]\n\n operator = LinearOperatorCirculant2D(spectrum)\n\n # IFFT[spectrum]\n operator.convolution_kernel()\n ==> [[5.0+0.0j, -0.5-.3j, -0.5+.3j],\n [-1.5-.9j, 0, 0],\n [-1.5+.9j, 0, 0]]\n\n operator.to_dense()\n ==> Complex self adjoint 9 x 9 matrix.\n ```\n\n #### Example of defining in terms of a real convolution kernel,\n\n ```python\n # convolution_kernel is real ==> spectrum is Hermitian.\n convolution_kernel = [[1., 2., 1.], [5., -1., 1.]]\n spectrum = tf.signal.fft2d(tf.cast(convolution_kernel, tf.complex64))\n\n # spectrum is shape [2, 3] ==> operator is shape [6, 6]\n # spectrum is Hermitian ==> operator is real.\n operator = LinearOperatorCirculant2D(spectrum, input_output_dtype=tf.float32)\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a block circulant matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorCirculant3D", "docs": "`LinearOperator` acting like a nested block circulant matrix.\n\n This operator acts like a block circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of block circulant matrices\n\n If `A` is nested block circulant, with block sizes `N0, N1, N2`\n (`N0 * N1 * N2 = N`):\n `A` has a block structure, composed of `N0 x N0` blocks, with each\n block an `N1 x N1` block circulant matrix.\n\n For example, with `W`, `X`, `Y`, `Z` each block circulant,\n\n ```\n A = |W Z Y X|\n |X W Z Y|\n |Y X W Z|\n |Z Y X W|\n ```\n\n Note that `A` itself will not in general be circulant.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions.\n\n If `H.shape = [N0, N1, N2]`, (`N0 * N1 * N2 = N`):\n Loosely speaking, matrix multiplication is equal to the action of a\n Fourier multiplier: `A u = IDFT3[ H DFT3[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT3[u]` be the\n `[N0, N1, N2, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, N2, R]` and\n taking a three dimensional DFT across the first three dimensions. Let `IDFT3`\n be the inverse of `DFT3`. Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT3[ H * (DFT3[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N0, N1, N2]`, we say that `H` is a Hermitian\n spectrum if, with `%` meaning modulus division,\n\n ```\n H[..., n0 % N0, n1 % N1, n2 % N2]\n = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1, (-n2) % N2] ].\n ```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n ### Examples\n\n See `LinearOperatorCirculant` and `LinearOperatorCirculant2D` for examples.\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a nested block circulant matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorComposition", "docs": "Composes one or more `LinearOperators`.\n\n This operator composes one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator` with action defined by:\n\n ```\n op_composed(x) := op1(op2(...(opJ(x)...))\n ```\n\n If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the\n [batch] matrix formed with the multiplication `A1 A2...AJ`.\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have\n `N_j = M_{j+1}`, in which case the composed operator has shape equal to\n `broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the\n mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate\n batch shapes broadcast. Even if the composed shape is well defined, the\n composed operator's methods may fail due to lack of broadcasting ability in\n the defining operators' methods.\n\n ```python\n # Create a 2 x 2 linear operator composed of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n operator = LinearOperatorComposition([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 2.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 5 linear operators.\n matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])\n operator_45 = LinearOperatorFullMatrix(matrix)\n\n # Create a [2, 3] batch of 5 x 6 linear operators.\n matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])\n operator_56 = LinearOperatorFullMatrix(matrix_56)\n\n # Compose to create a [2, 3] batch of 4 x 6 operators.\n operator_46 = LinearOperatorComposition([operator_45, operator_56])\n\n # Create a shape [2, 3, 6, 2] vector.\n x = tf.random.normal(shape=[2, 3, 6, 2])\n operator.matmul(x)\n ==> Shape [2, 3, 4, 2] Tensor\n ```\n\n #### Performance\n\n The performance of `LinearOperatorComposition` on any operation is equal to\n the sum of the individual operators' operations.\n\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Composes one or more `LinearOperators`.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorDiag", "docs": "`LinearOperator` acting like a [batch] square diagonal matrix.\n\n This operator acts like a [batch] diagonal matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorDiag` is initialized with a (batch) vector.\n\n ```python\n # Create a 2 x 2 diagonal linear operator.\n diag = [1., -1.]\n operator = LinearOperatorDiag(diag)\n\n operator.to_dense()\n ==> [[1., 0.]\n [0., -1.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n diag = tf.random.normal(shape=[2, 3, 4])\n operator = LinearOperatorDiag(diag)\n\n # Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible\n # since the batch dimensions, [2, 1], are broadcast to\n # operator.batch_shape = [2, 3].\n y = tf.random.normal(shape=[2, 1, 4, 2])\n x = operator.solve(y)\n ==> operator.matmul(x) = y\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` involves `N * R` multiplications.\n * `operator.solve(x)` involves `N` divisions and `N * R` multiplications.\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square diagonal matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorFullMatrix", "docs": "`LinearOperator` that wraps a [batch] matrix.\n\n This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `M x N` matrix.\n\n ```python\n # Create a 2 x 2 linear operator.\n matrix = [[1., 2.], [3., 4.]]\n operator = LinearOperatorFullMatrix(matrix)\n\n operator.to_dense()\n ==> [[1., 2.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n matrix = tf.random.normal(shape=[2, 3, 4, 4])\n operator = LinearOperatorFullMatrix(matrix)\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n #### Performance\n\n `LinearOperatorFullMatrix` has exactly the same performance as would be\n achieved by using standard `TensorFlow` matrix ops. Intelligent choices are\n made based on the following initialization hints.\n\n * If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a\n Cholesky factorization is used for the determinant and solve.\n\n In all cases, suppose `operator` is a `LinearOperatorFullMatrix` of shape\n `[M, N]`, and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(M * N * R)`.\n * If `M=N`, `operator.solve(x)` is `O(N^3 * R)`.\n * If `M=N`, `operator.determinant()` is `O(N^3)`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` that wraps a [batch] matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorHouseholder", "docs": "`LinearOperator` acting like a [batch] of Householder transformations.\n\n This operator acts like a [batch] of householder reflections with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorHouseholder` is initialized with a (batch) vector.\n\n A Householder reflection, defined via a vector `v`, which reflects points\n in `R^n` about the hyperplane orthogonal to `v` and through the origin.\n\n ```python\n # Create a 2 x 2 householder transform.\n vec = [1 / np.sqrt(2), 1. / np.sqrt(2)]\n operator = LinearOperatorHouseholder(vec)\n\n operator.to_dense()\n ==> [[0., -1.]\n [-1., -0.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of Householder transformations.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorIdentity", "docs": "`LinearOperator` acting like a [batch] square identity matrix.\n\n This operator acts like a [batch] identity matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorIdentity` is initialized with `num_rows`, and optionally\n `batch_shape`, and `dtype` arguments. If `batch_shape` is `None`, this\n operator efficiently passes through all arguments. If `batch_shape` is\n provided, broadcasting may occur, which will require making copies.\n\n ```python\n # Create a 2 x 2 identity matrix.\n operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32)\n\n operator.to_dense()\n ==> [[1., 0.]\n [0., 1.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> 0.\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor, same as x.\n\n y = tf.random.normal(shape=[3, 2, 4])\n # Note that y.shape is compatible with operator.shape because operator.shape\n # is broadcast to [3, 2, 2].\n # This broadcast does NOT require copying data, since we can infer that y\n # will be passed through without changing shape. We are always able to infer\n # this if the operator has no batch_shape.\n x = operator.solve(y)\n ==> Shape [3, 2, 4] Tensor, same as y.\n\n # Create a 2-batch of 2x2 identity matrices\n operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2])\n operator.to_dense()\n ==> [[[1., 0.]\n [0., 1.]],\n [[1., 0.]\n [0., 1.]]]\n\n # Here, even though the operator has a batch shape, the input is the same as\n # the output, so x can be passed through without a copy. The operator is able\n # to detect that no broadcast is necessary because both x and the operator\n # have statically defined shape.\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, same as x\n\n # Here the operator and x have different batch_shape, and are broadcast.\n # This requires a copy, since the output is different size than the input.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, equal to [x, x]\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n ### Performance\n\n If `batch_shape` initialization arg is `None`:\n\n * `operator.matmul(x)` is `O(1)`\n * `operator.solve(x)` is `O(1)`\n * `operator.determinant()` is `O(1)`\n\n If `batch_shape` initialization arg is provided, and static checks cannot\n rule out the need to broadcast:\n\n * `operator.matmul(x)` is `O(D1*...*Dd*N*R)`\n * `operator.solve(x)` is `O(D1*...*Dd*N*R)`\n * `operator.determinant()` is `O(B1*...*Bb)`\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square identity matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorInversion", "docs": "`LinearOperator` representing the inverse of another operator.\n\n This operator represents the inverse of another operator.\n\n ```python\n # Create a 2 x 2 linear operator.\n operator = LinearOperatorFullMatrix([[1., 0.], [0., 2.]])\n operator_inv = LinearOperatorInversion(operator)\n\n operator_inv.to_dense()\n ==> [[1., 0.]\n [0., 0.5]]\n\n operator_inv.shape\n ==> [2, 2]\n\n operator_inv.log_abs_determinant()\n ==> - log(2)\n\n x = ... Shape [2, 4] Tensor\n operator_inv.matmul(x)\n ==> Shape [2, 4] Tensor, equal to operator.solve(x)\n ```\n\n #### Performance\n\n The performance of `LinearOperatorInversion` depends on the underlying\n operators performance: `solve` and `matmul` are swapped, and determinant is\n inverted.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` representing the inverse of another operator.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorKronecker", "docs": "Kronecker product between two `LinearOperators`.\n\n This operator composes one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator` representing the Kronecker product:\n `op1 x op2 x .. opJ` (we omit parentheses as the Kronecker product is\n associative).\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the composed operator\n will have shape equal to `broadcast_batch_shape + [prod M_j, prod N_j]`,\n where the product is over all operators.\n\n ```python\n # Create a 4 x 4 linear operator composed of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [2., 1.]])\n operator = LinearOperatorKronecker([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 0., 2., 0.],\n [2., 1., 4., 2.],\n [3., 0., 4., 0.],\n [6., 3., 8., 4.]]\n\n operator.shape\n ==> [4, 4]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [4, 2] Tensor\n operator.matmul(x)\n ==> Shape [4, 2] Tensor\n\n # Create a [2, 3] batch of 4 x 5 linear operators.\n matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])\n operator_45 = LinearOperatorFullMatrix(matrix)\n\n # Create a [2, 3] batch of 5 x 6 linear operators.\n matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])\n operator_56 = LinearOperatorFullMatrix(matrix_56)\n\n # Compose to create a [2, 3] batch of 20 x 30 operators.\n operator_large = LinearOperatorKronecker([operator_45, operator_56])\n\n # Create a shape [2, 3, 20, 2] vector.\n x = tf.random.normal(shape=[2, 3, 6, 2])\n operator_large.matmul(x)\n ==> Shape [2, 3, 30, 2] Tensor\n ```\n\n #### Performance\n\n The performance of `LinearOperatorKronecker` on any operation is equal to\n the sum of the individual operators' operations.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Kronecker product between two `LinearOperators`.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorLowerTriangular", "docs": "`LinearOperator` acting like a [batch] square lower triangular matrix.\n\n This operator acts like a [batch] lower triangular matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix.\n\n `LinearOperatorLowerTriangular` is initialized with a `Tensor` having\n dimensions `[B1,...,Bb, N, N]`. The upper triangle of the last two\n dimensions is ignored.\n\n ```python\n # Create a 2 x 2 lower-triangular linear operator.\n tril = [[1., 2.], [3., 4.]]\n operator = LinearOperatorLowerTriangular(tril)\n\n # The upper triangle is ignored.\n operator.to_dense()\n ==> [[1., 0.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n tril = tf.random.normal(shape=[2, 3, 4, 4])\n operator = LinearOperatorLowerTriangular(tril)\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorLowerTriangular` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` involves `N^2 * R` multiplications.\n * `operator.solve(x)` involves `N * R` size `N` back-substitutions.\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square lower triangular matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorLowRankUpdate", "docs": "Perturb a `LinearOperator` with a rank `K` update.\n\n This operator acts like a [batch] matrix `A` with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `M x N` matrix.\n\n `LinearOperatorLowRankUpdate` represents `A = L + U D V^H`, where\n\n ```\n L, is a LinearOperator representing [batch] M x N matrices\n U, is a [batch] M x K matrix. Typically K << M.\n D, is a [batch] K x K matrix.\n V, is a [batch] N x K matrix. Typically K << N.\n V^H is the Hermitian transpose (adjoint) of V.\n ```\n\n If `M = N`, determinants and solves are done using the matrix determinant\n lemma and Woodbury identities, and thus require L and D to be non-singular.\n\n Solves and determinants will be attempted unless the \"is_non_singular\"\n property of L and D is False.\n\n In the event that L and D are positive-definite, and U = V, solves and\n determinants can be done using a Cholesky factorization.\n\n ```python\n # Create a 3 x 3 diagonal linear operator.\n diag_operator = LinearOperatorDiag(\n diag_update=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True,\n is_positive_definite=True)\n\n # Perturb with a rank 2 perturbation\n operator = LinearOperatorLowRankUpdate(\n operator=diag_operator,\n u=[[1., 2.], [-1., 3.], [0., 0.]],\n diag_update=[11., 12.],\n v=[[1., 2.], [-1., 3.], [10., 10.]])\n\n operator.shape\n ==> [3, 3]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n ### Performance\n\n Suppose `operator` is a `LinearOperatorLowRankUpdate` of shape `[M, N]`,\n made from a rank `K` update of `base_operator` which performs `.matmul(x)` on\n `x` having `x.shape = [N, R]` with `O(L_matmul*N*R)` complexity (and similarly\n for `solve`, `determinant`. Then, if `x.shape = [N, R]`,\n\n * `operator.matmul(x)` is `O(L_matmul*N*R + K*N*R)`\n\n and if `M = N`,\n\n * `operator.solve(x)` is `O(L_matmul*N*R + N*K*R + K^2*R + K^3)`\n * `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)`\n\n If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular`, `self_adjoint`, `positive_definite`,\n `diag_update_positive` and `square`. These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Perturb a `LinearOperator` with a rank `K` update.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorPermutation", "docs": "`LinearOperator` acting like a [batch] of permutation matrices.\n\n This operator acts like a [batch] of permutations with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorPermutation` is initialized with a (batch) vector.\n\n A permutation, is defined by an integer vector `v` whose values are unique\n and are in the range `[0, ... n]`. Applying the permutation on an input\n matrix has the folllowing meaning: the value of `v` at index `i`\n says to move the `v[i]`-th row of the input matrix to the `i`-th row.\n Because all values are unique, this will result in a permutation of the\n rows the input matrix. Note, that the permutation vector `v` has the same\n semantics as `tf.transpose`.\n\n ```python\n # Create a 3 x 3 permutation matrix that swaps the last two columns.\n vec = [0, 2, 1]\n operator = LinearOperatorPermutation(vec)\n\n operator.to_dense()\n ==> [[1., 0., 0.]\n [0., 0., 1.]\n [0., 1., 0.]]\n\n operator.shape\n ==> [3, 3]\n\n # This will be zero.\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of permutation matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorScaledIdentity", "docs": "`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.\n\n This operator acts like a scaled [batch] identity matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n a scaled version of the `N x N` identity matrix.\n\n `LinearOperatorIdentity` is initialized with `num_rows`, and a `multiplier`\n (a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the\n `multiplier` determines the scale for each batch member.\n\n ```python\n # Create a 2 x 2 scaled identity matrix.\n operator = LinearOperatorIdentity(num_rows=2, multiplier=3.)\n\n operator.to_dense()\n ==> [[3., 0.]\n [0., 3.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> 2 * Log[3]\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> 3 * x\n\n y = tf.random.normal(shape=[3, 2, 4])\n # Note that y.shape is compatible with operator.shape because operator.shape\n # is broadcast to [3, 2, 2].\n x = operator.solve(y)\n ==> 3 * x\n\n # Create a 2-batch of 2x2 identity matrices\n operator = LinearOperatorIdentity(num_rows=2, multiplier=5.)\n operator.to_dense()\n ==> [[[5., 0.]\n [0., 5.]],\n [[5., 0.]\n [0., 5.]]]\n\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> 5 * x\n\n # Here the operator and x have different batch_shape, and are broadcast.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> 5 * x\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n ### Performance\n\n * `operator.matmul(x)` is `O(D1*...*Dd*N*R)`\n * `operator.solve(x)` is `O(D1*...*Dd*N*R)`\n * `operator.determinant()` is `O(D1*...*Dd)`\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorToeplitz", "docs": "`LinearOperator` acting like a [batch] of toeplitz matrices.\n\n This operator acts like a [batch] Toeplitz matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of toeplitz matrices\n\n Toeplitz means that `A` has constant diagonals. Hence, `A` can be generated\n with two vectors. One represents the first column of the matrix, and the\n other represents the first row.\n\n Below is a 4 x 4 example:\n\n ```\n A = |a b c d|\n |e a b c|\n |f e a b|\n |g f e a|\n ```\n\n #### Example of a Toeplitz operator.\n\n ```python\n # Create a 3 x 3 Toeplitz operator.\n col = [1., 2., 3.]\n row = [1., 4., -9.]\n operator = LinearOperatorToeplitz(col, row)\n\n operator.to_dense()\n ==> [[1., 4., -9.],\n [2., 1., 4.],\n [3., 2., 1.]]\n\n operator.shape\n ==> [3, 3]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of toeplitz matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorTridiag", "docs": "`LinearOperator` acting like a [batch] square tridiagonal matrix.\n\n This operator acts like a [batch] square tridiagonal matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x M` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n Example usage:\n\n Create a 3 x 3 tridiagonal linear operator.\n\n >>> superdiag = [3., 4., 5.]\n >>> diag = [1., -1., 2.]\n >>> subdiag = [6., 7., 8]\n >>> operator = tf.linalg.LinearOperatorTridiag(\n ... [superdiag, diag, subdiag],\n ... diagonals_format='sequence')\n >>> operator.to_dense()\n \n >>> operator.shape\n TensorShape([3, 3])\n\n Scalar Tensor output.\n\n >>> operator.log_abs_determinant()\n \n\n Create a [2, 3] batch of 4 x 4 linear operators.\n\n >>> diagonals = tf.random.normal(shape=[2, 3, 3, 4])\n >>> operator = tf.linalg.LinearOperatorTridiag(\n ... diagonals,\n ... diagonals_format='compact')\n\n Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible\n since the batch dimensions, [2, 1], are broadcast to\n operator.batch_shape = [2, 3].\n\n >>> y = tf.random.normal(shape=[2, 1, 4, 2])\n >>> x = operator.solve(y)\n >>> x\n \n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb].\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorTridiag` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` will take O(N * R) time.\n * `operator.solve(x)` will take O(N * R) time.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square tridiagonal matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.LinearOperatorZeros", "docs": "`LinearOperator` acting like a [batch] zero matrix.\n\n This operator acts like a [batch] zero matrix `A` with shape\n `[B1,...,Bb, N, M]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x M` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorZeros` is initialized with `num_rows`, and optionally\n `num_columns, `batch_shape`, and `dtype` arguments. If `num_columns` is\n `None`, then this operator will be initialized as a square matrix. If\n `batch_shape` is `None`, this operator efficiently passes through all\n arguments. If `batch_shape` is provided, broadcasting may occur, which will\n require making copies.\n\n ```python\n # Create a 2 x 2 zero matrix.\n operator = LinearOperatorZero(num_rows=2, dtype=tf.float32)\n\n operator.to_dense()\n ==> [[0., 0.]\n [0., 0.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.determinant()\n ==> 0.\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor, same as x.\n\n # Create a 2-batch of 2x2 zero matrices\n operator = LinearOperatorZeros(num_rows=2, batch_shape=[2])\n operator.to_dense()\n ==> [[[0., 0.]\n [0., 0.]],\n [[0., 0.]\n [0., 0.]]]\n\n # Here, even though the operator has a batch shape, the input is the same as\n # the output, so x can be passed through without a copy. The operator is able\n # to detect that no broadcast is necessary because both x and the operator\n # have statically defined shape.\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, same as tf.zeros_like(x)\n\n # Here the operator and x have different batch_shape, and are broadcast.\n # This requires a copy, since the output is different size than the input.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, equal to tf.zeros_like([x, x])\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, M], with b >= 0\n x.shape = [C1,...,Cc] + [M, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] zero matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.logdet", "docs": "Computes log of the determinant of a hermitian positive definite matrix.\n\n ```python\n # Compute the determinant of a matrix while reducing the chance of over- or\n underflow:\n A = ... # shape 10 x 10\n det = tf.exp(tf.linalg.logdet(A)) # scalar\n ```\n\n Args:\n matrix: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`,\n or `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op`. Defaults to `logdet`.\n\n Returns:\n The natural log of the determinant of `matrix`.\n\n @compatibility(numpy)\n Equivalent to numpy.linalg.slogdet, although no sign is returned since only\n hermitian positive definite matrices are supported.\n @end_compatibility\n ", "desc": "Computes log of the determinant of a hermitian positive definite matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.logm", "docs": "Computes the matrix logarithm of one or more square matrices:\n\n \n \\\\(log(exp(A)) = A\\\\)\n\n This op is only defined for complex matrices. If A is positive-definite and\n real, then casting to a complex matrix, taking the logarithm and casting back\n to a real matrix will give the correct result.\n\n This function computes the matrix logarithm using the Schur-Parlett algorithm.\n Details of the algorithm can be found in Section 11.6.2 of:\n Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008.\n ISBN 978-0-898716-46-7.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the exponential for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix logarithm of one or more square matrices:", "type": "API"}, {"name": "tf.compat.v1.linalg.lstsq", "docs": "Solves one or more linear least-squares problems.\n\n `matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose\n inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a\n `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K`\n matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares\n sense.\n\n Below we will use the following notation for each pair of matrix and\n right-hand sides in the batch:\n\n `matrix`=\\\\(A \\in \\Re^{m \\times n}\\\\),\n `rhs`=\\\\(B \\in \\Re^{m \\times k}\\\\),\n `output`=\\\\(X \\in \\Re^{n \\times k}\\\\),\n `l2_regularizer`=\\\\(\\lambda\\\\).\n\n If `fast` is `True`, then the solution is computed by solving the normal\n equations using Cholesky decomposition. Specifically, if \\\\(m \\ge n\\\\) then\n \\\\(X = (A^T A + \\lambda I)^{-1} A^T B\\\\), which solves the least-squares\n problem \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||A Z - B||_F^2 +\n \\lambda ||Z||_F^2\\\\). If \\\\(m \\lt n\\\\) then `output` is computed as\n \\\\(X = A^T (A A^T + \\lambda I)^{-1} B\\\\), which (for \\\\(\\lambda = 0\\\\)) is\n the minimum-norm solution to the under-determined linear system, i.e.\n \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||Z||_F^2 \\\\), subject to\n \\\\(A Z = B\\\\). Notice that the fast path is only numerically stable when\n \\\\(A\\\\) is numerically full rank and has a condition number\n \\\\(\\mathrm{cond}(A) \\lt \\frac{1}{\\sqrt{\\epsilon_{mach}}}\\\\) or\\\\(\\lambda\\\\)\n is sufficiently large.\n\n If `fast` is `False` an algorithm based on the numerically robust complete\n orthogonal decomposition is used. This computes the minimum-norm\n least-squares solution, even when \\\\(A\\\\) is rank deficient. This path is\n typically 6-7 times slower than the fast path. If `fast` is `False` then\n `l2_regularizer` is ignored.\n\n Args:\n matrix: `Tensor` of shape `[..., M, N]`.\n rhs: `Tensor` of shape `[..., M, K]`.\n l2_regularizer: 0-D `double` `Tensor`. Ignored if `fast=False`.\n fast: bool. Defaults to `True`.\n name: string, optional name of the operation.\n\n Returns:\n output: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form\n `M`-by-`K` matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least\n squares sense.\n\n Raises:\n NotImplementedError: linalg.lstsq is currently disabled for complex128\n and l2_regularizer != 0 due to poor accuracy.\n ", "desc": "Solves one or more linear least-squares problems.", "type": "API"}, {"name": "tf.compat.v1.linalg.lu", "docs": "Computes the LU decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be invertible.\n\n The output consists of two tensors LU and P containing the LU decomposition\n of all input submatrices `[..., :, :]`. LU encodes the lower triangular and\n upper triangular factors.\n\n For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of\n shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower\n triangular part of LU. U is a upper triangular matrix of shape `[M, M]` whose\n entries correspond to the upper triangular part, including the diagonal, of LU.\n\n P represents a permutation matrix encoded as a list of indices each between `0`\n and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to\n P, then the L, U and P satisfies P_mat * input = L * U.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of\n size `[M, M]`.\n output_idx_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (lu, p).\n\n lu: A `Tensor`. Has the same type as `input`.\n p: A `Tensor` of type `output_idx_type`.\n ", "desc": "Computes the LU decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.lu_matrix_inverse", "docs": "Computes the inverse given the LU decomposition(s) of one or more matrices.\n\n This op is conceptually identical to,\n\n ```python\n inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))\n tf.assert_near(tf.matrix_inverse(X), inv_X)\n # ==> True\n ```\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_matrix_inverse').\n\n Returns:\n inv_x: The matrix_inv, i.e.,\n `tf.matrix_inverse(tf.linalg.lu_reconstruct(lu, perm))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n inv_x = tf.linalg.lu_matrix_inverse(*tf.linalg.lu(x))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```\n\n ", "desc": "Computes the inverse given the LU decomposition(s) of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.lu_reconstruct", "docs": "The reconstruct one or more matrices from their LU decomposition(s).\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_reconstruct').\n\n Returns:\n x: The original input to `tf.linalg.lu`, i.e., `x` as in,\n `lu_reconstruct(*tf.linalg.lu(x))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n x_reconstructed = tf.linalg.lu_reconstruct(*tf.linalg.lu(x))\n tf.assert_near(x, x_reconstructed)\n # ==> True\n ```\n\n ", "desc": "The reconstruct one or more matrices from their LU decomposition(s).", "type": "API"}, {"name": "tf.compat.v1.linalg.lu_solve", "docs": "Solves systems of linear eqns `A X = RHS`, given LU factorizations.\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n rhs: Matrix-shaped float `Tensor` representing targets for which to solve;\n `A X = RHS`. To handle vector cases, use: `lu_solve(..., rhs[...,\n tf.newaxis])[..., 0]`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_solve').\n\n Returns:\n x: The `X` in `A @ X = RHS`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[1., 2],\n [3, 4]],\n [[7, 8],\n [3, 4]]]\n inv_x = tf.linalg.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```\n\n ", "desc": "Solves systems of linear eqns `A X = RHS`, given LU factorizations.", "type": "API"}, {"name": "tf.compat.v1.linalg.matmul", "docs": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n where the inner 2 dimensions specify valid matrix multiplication dimensions,\n and any further outer dimensions specify matching batch size.\n\n Both matrices must be of the same type. The supported types are:\n `bfloat16`, `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, `complex128`.\n\n Either matrix can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the matrices contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices (rank-2 tensors) with\n datatypes `bfloat16` or `float32`.\n\n A simple 2-D tensor matrix multiplication:\n\n >>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n >>> a # 2-D tensor\n \n >>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])\n >>> b # 2-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n A batch matrix multiplication with batch shape [2]:\n\n >>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> a # 3-D tensor\n \n >>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])\n >>> b # 3-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n Since python >= 3.5 the @ operator is supported\n (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow,\n it simply calls the `tf.matmul()` function, so the following lines are\n equivalent:\n\n >>> d = a @ b @ [[10], [11]]\n >>> d = tf.matmul(tf.matmul(a, b), [[10], [11]])\n\n Args:\n a: `tf.Tensor` of type `float16`, `float32`, `float64`, `int32`,\n `complex64`, `complex128` and rank > 1.\n b: `tf.Tensor` with same type and rank as `a`.\n transpose_a: If `True`, `a` is transposed before multiplication.\n transpose_b: If `True`, `b` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n adjoint_b: If `True`, `b` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n output_type: The output datatype if needed. Defaults to None in which case\n the output_type is the same as input type. Currently only works when input\n tensors are type (u)int8 and output_type can be int32.\n name: Name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same type as `a` and `b` where each inner-most matrix\n is the product of the corresponding matrices in `a` and `b`, e.g. if all\n transpose or adjoint attributes are `False`:\n\n `output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`,\n for all indices `i`, `j`.\n\n Note: This is matrix product, not element-wise product.\n\n\n Raises:\n ValueError: If `transpose_a` and `adjoint_a`, or `transpose_b` and\n `adjoint_b` are both set to `True`.\n TypeError: If output_type is specified but the types of `a`, `b` and\n `output_type` is not (u)int8, (u)int8 and int32.\n ", "desc": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.compat.v1.linalg.matrix_rank", "docs": "Compute the matrix rank of one or more matrices.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n tol: Threshold below which the singular value is counted as 'zero'.\n Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`).\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: 'matrix_rank'.\n\n Returns:\n matrix_rank: (Batch of) `int32` scalars representing the number of non-zero\n singular values.\n ", "desc": "Compute the matrix rank of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.matrix_transpose", "docs": "Transposes last two dimensions of tensor `a`.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.linalg.matrix_transpose(x) # [[1, 4],\n # [2, 5],\n # [3, 6]]\n\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n\n # Matrix with two batch dimensions.\n # x.shape is [1, 2, 3, 4]\n # tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3]\n ```\n\n Note that `tf.matmul` provides kwargs allowing for transpose of arguments.\n This is done with minimal cost, and is preferable to using this function. E.g.\n\n ```python\n # Good! Transpose is taken at minimal additional cost.\n tf.matmul(matrix, b, transpose_b=True)\n\n # Inefficient!\n tf.matmul(matrix, tf.linalg.matrix_transpose(b))\n ```\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, `linalg.matrix_transpose` returns a new\n tensor with the items permuted.\n @end_compatibility\n\n Args:\n a: A `Tensor` with `rank >= 2`.\n name: A name for the operation (optional).\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.linalg.matrix_transpose(input)).\n\n Returns:\n A transposed batch matrix `Tensor`.\n\n Raises:\n ValueError: If `a` is determined statically to have `rank < 2`.\n ", "desc": "Transposes last two dimensions of tensor `a`.", "type": "API"}, {"name": "tf.compat.v1.linalg.matvec", "docs": "Multiplies matrix `a` by vector `b`, producing `a` * `b`.\n\n The matrix `a` must, following any transpositions, be a tensor of rank >= 2,\n with `shape(a)[-1] == shape(b)[-1]`, and `shape(a)[:-2]` able to broadcast\n with `shape(b)[:-1]`.\n\n Both `a` and `b` must be of the same type. The supported types are:\n `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.\n\n Matrix `a` can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the inputs contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices/vectors (rank-2/1\n tensors) with datatypes `bfloat16` or `float32`.\n\n For example:\n\n ```python\n # 2-D tensor `a`\n # [[1, 2, 3],\n # [4, 5, 6]]\n a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n\n # 1-D tensor `b`\n # [7, 9, 11]\n b = tf.constant([7, 9, 11], shape=[3])\n\n # `a` * `b`\n # [ 58, 64]\n c = tf.linalg.matvec(a, b)\n\n\n # 3-D tensor `a`\n # [[[ 1, 2, 3],\n # [ 4, 5, 6]],\n # [[ 7, 8, 9],\n # [10, 11, 12]]]\n a = tf.constant(np.arange(1, 13, dtype=np.int32),\n shape=[2, 2, 3])\n\n # 2-D tensor `b`\n # [[13, 14, 15],\n # [16, 17, 18]]\n b = tf.constant(np.arange(13, 19, dtype=np.int32),\n shape=[2, 3])\n\n # `a` * `b`\n # [[ 86, 212],\n # [410, 563]]\n c = tf.linalg.matvec(a, b)\n ```\n\n Args:\n a: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`,\n `complex128` and rank > 1.\n b: `Tensor` with same type as `a` and compatible dimensions.\n transpose_a: If `True`, `a` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix.\n name: Name for the operation (optional).\n\n Returns:\n A `Tensor` of the same type as `a` and `b` where each inner-most vector is\n the product of the corresponding matrices in `a` and vectors in `b`, e.g. if\n all transpose or adjoint attributes are `False`:\n\n `output`[..., i] = sum_k (`a`[..., i, k] * `b`[..., k]), for all indices i.\n\n Note: This is matrix-vector product, not element-wise product.\n\n\n Raises:\n ValueError: If transpose_a and adjoint_a are both set to True.\n ", "desc": "Multiplies matrix `a` by vector `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.compat.v1.linalg.norm", "docs": "Computes the norm of vectors, matrices, and tensors. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis function can compute several different vector norms (the 1-norm, the\nEuclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\nmatrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\nArgs:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are 'fro', 'euclidean',\n `1`, `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply:\n a) The Frobenius norm `fro` is not defined for vectors,\n b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', `1`,\n `2`, `np.inf` are supported.\n See the description of `axis` on how to compute norms for a batch of\n vectors or matrices stored in a tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`.\n If `axis` is a Python integer, the input is considered a batch of vectors,\n and `axis` determines the axis in `tensor` over which to compute vector\n norms.\n If `axis` is a 2-tuple of Python integers it is considered a batch of\n matrices and `axis` determines the axes in `tensor` over which to compute\n a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n keepdims: If True, the axis indicated in `axis` are kept with size 1.\n Otherwise, the dimensions in `axis` are removed from the output shape.\n name: The name of the op.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n output: A `Tensor` of the same type as tensor, containing the vector or\n matrix norms. If `keepdims` is True then the rank of output is equal to\n the rank of `tensor`. Otherwise, if `axis` is none the output is a scalar,\n if `axis` is an integer, the rank of `output` is one less than the rank\n of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less\n than the rank of `tensor`.\n\nRaises:\n ValueError: If `ord` or `axis` is invalid.\n\n@compatibility(numpy)\nMostly equivalent to numpy.linalg.norm.\nNot supported: ord <= 0, 2-norm for matrices, nuclear norm.\nOther differences:\n a) If axis is `None`, treats the flattened `tensor` as a vector\n regardless of rank.\n b) Explicitly supports 'euclidean' norm as the default, including for\n higher order tensors.\n@end_compatibility", "desc": "Computes the norm of vectors, matrices, and tensors. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.linalg.normalize", "docs": "Normalizes `tensor` along dimension `axis` using specified norm.\n\n This uses `tf.linalg.norm` to compute the norm along `axis`.\n\n This function can compute several different vector norms (the 1-norm, the\n Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\n matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\n Args:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are `'fro'`, `'euclidean'`, `1`,\n `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply: a) The Frobenius norm `'fro'` is not defined for\n vectors, b) If axis is a 2-tuple (matrix norm), only `'euclidean'`,\n '`fro'`, `1`, `2`, `np.inf` are supported. See the description of `axis`\n on how to compute norms for a batch of vectors or matrices stored in a\n tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`. If `axis` is a Python integer, the\n input is considered a batch of vectors, and `axis` determines the axis in\n `tensor` over which to compute vector norms. If `axis` is a 2-tuple of\n Python integers it is considered a batch of matrices and `axis` determines\n the axes in `tensor` over which to compute a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n name: The name of the op.\n\n Returns:\n normalized: A normalized `Tensor` with the same shape as `tensor`.\n norm: The computed norms with the same shape and dtype `tensor` but the\n final axis is 1 instead. Same as running\n `tf.cast(tf.linalg.norm(tensor, ord, axis keepdims=True), tensor.dtype)`.\n\n Raises:\n ValueError: If `ord` or `axis` is invalid.\n ", "desc": "Normalizes `tensor` along dimension `axis` using specified norm.", "type": "API"}, {"name": "tf.compat.v1.linalg.pinv", "docs": "Compute the Moore-Penrose pseudo-inverse of one or more matrices.\n\n Calculate the [generalized inverse of a matrix](\n https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its\n singular-value decomposition (SVD) and including all large singular values.\n\n The pseudo-inverse of a matrix `A`, is defined as: 'the matrix that 'solves'\n [the least-squares problem] `A @ x = b`,' i.e., if `x_hat` is a solution, then\n `A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if\n `U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then\n `A_pinv = V @ inv(Sigma) U^T`. [(Strang, 1980)][1]\n\n This function is analogous to [`numpy.linalg.pinv`](\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html).\n It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the\n default `rcond` is `1e-15`. Here the default is\n `10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n rcond: `Tensor` of small singular value cutoffs. Singular values smaller\n (in modulus) than `rcond` * largest_singular_value (again, in modulus) are\n set to zero. Must broadcast against `tf.shape(a)[:-2]`.\n Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`.\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: 'pinv'.\n\n Returns:\n a_pinv: (Batch of) pseudo-inverse of input `a`. Has same shape as `a` except\n rightmost two dimensions are transposed.\n\n Raises:\n TypeError: if input `a` does not have `float`-like `dtype`.\n ValueError: if input `a` has fewer than 2 dimensions.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n a = tf.constant([[1., 0.4, 0.5],\n [0.4, 0.2, 0.25],\n [0.5, 0.25, 0.35]])\n tf.matmul(tf.linalg.pinv(a), a)\n # ==> array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]], dtype=float32)\n\n a = tf.constant([[1., 0.4, 0.5, 1.],\n [0.4, 0.2, 0.25, 2.],\n [0.5, 0.25, 0.35, 3.]])\n tf.matmul(tf.linalg.pinv(a), a)\n # ==> array([[ 0.76, 0.37, 0.21, -0.02],\n [ 0.37, 0.43, -0.33, 0.02],\n [ 0.21, -0.33, 0.81, 0.01],\n [-0.02, 0.02, 0.01, 1. ]], dtype=float32)\n ```\n\n #### References\n\n [1]: G. Strang. 'Linear Algebra and Its Applications, 2nd Ed.' Academic Press,\n Inc., 1980, pp. 139-142.\n ", "desc": "Compute the Moore-Penrose pseudo-inverse of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.qr", "docs": "Computes the QR decompositions of one or more matrices.\n\n Computes the QR decomposition of each inner matrix in `tensor` such that\n `tensor[..., :, :] = q[..., :, :] * r[..., :,:])`\n\n Currently, the gradient for the QR decomposition is well-defined only when\n the first `P` columns of the inner matrix are linearly independent, where\n `P` is the minimum of `M` and `N`, the 2 inner-most dimmensions of `tensor`.\n\n ```python\n # a is a tensor.\n # q is a tensor of orthonormal matrices.\n # r is a tensor of upper triangular matrices.\n q, r = qr(a)\n q_full, r_full = qr(a, full_matrices=True)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.\n full_matrices: An optional `bool`. Defaults to `False`.\n If true, compute full-sized `q` and `r`. If false\n (the default), compute only the leading `P` columns of `q`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (q, r).\n\n q: A `Tensor`. Has the same type as `input`.\n r: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the QR decompositions of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.set_diag", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the specified diagonals of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n `input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or\n `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`.\n Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`.\n `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`.\n `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n\n The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`.\n If `k` is scalar or `k[0] == k[1]`:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n\n Otherwise,\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)\n [7, 7, 7, 7],\n [7, 7, 7, 7]],\n [[7, 7, 7, 7],\n [7, 7, 7, 7],\n [7, 7, 7, 7]]])\n diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_set_diag(input, diagonal)\n ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [7, 2, 7, 7],\n [7, 7, 3, 7]],\n [[4, 7, 7, 7],\n [7, 5, 7, 7],\n [7, 7, 6, 7]]]\n\n # A superdiagonal (per batch).\n tf.matrix_set_diag(input, diagonal, k = 1)\n ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)\n [7, 7, 2, 7],\n [7, 7, 7, 3]],\n [[7, 4, 7, 7],\n [7, 7, 5, 7],\n [7, 7, 7, 6]]]\n\n # A band of diagonals.\n diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [0, 4, 5]],\n [[1, 2, 0],\n [5, 6, 4],\n [6, 1, 2],\n [0, 3, 4]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2))\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 1, 2],\n [5, 6, 4],\n [6, 1, 2],\n [3, 4, 0]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n ```\n\n Args:\n input: A `Tensor` with rank `k + 1`, where `k >= 1`.\n diagonal: A `Tensor` with rank `k`, when `d_lower == d_upper`, or `k + 1`,\n otherwise. `k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.compat.v1.linalg.slogdet", "docs": "Computes the sign and the log of the absolute value of the determinant of\n\n one or more square matrices.\n\n The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions\n form square matrices. The outputs are two tensors containing the signs and\n absolute values of the log determinants for all N input submatrices\n `[..., :, :]` such that `determinant = sign*exp(log_abs_determinant)`.\n The `log_abs_determinant` is computed as `det(P)*sum(log(diag(LU)))` where `LU`\n is the `LU` decomposition of the input and `P` is the corresponding\n permutation matrix.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[N, M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sign, log_abs_determinant).\n\n sign: A `Tensor`. Has the same type as `input`.\n log_abs_determinant: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the sign and the log of the absolute value of the determinant of", "type": "API"}, {"name": "tf.compat.v1.linalg.solve", "docs": "Solves systems of linear equations.\n\n `Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is\n a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix\n satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.\n If `adjoint` is `True` then each output matrix satisfies\n `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n adjoint: An optional `bool`. Defaults to `False`.\n Boolean indicating whether to solve with `matrix` or its (block-wise)\n adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves systems of linear equations.", "type": "API"}, {"name": "tf.compat.v1.linalg.sqrtm", "docs": "Computes the matrix square root of one or more square matrices:\n\n matmul(sqrtm(A), sqrtm(A)) = A\n\n The input matrix should be invertible. If the input matrix is real, it should\n have no eigenvalues which are real and negative (pairs of complex conjugate\n eigenvalues are allowed).\n\n The matrix square root is computed by first reducing the matrix to\n quasi-triangular form with the real Schur decomposition. The square root\n of the quasi-triangular matrix is then computed directly. Details of\n the algorithm can be found in: Nicholas J. Higham, \"Computing real\n square roots of a real matrix\", Linear Algebra Appl., 1987.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the matrix square root for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix square root of one or more square matrices:", "type": "API"}, {"name": "tf.compat.v1.linalg.svd", "docs": "Computes the singular value decompositions of one or more matrices.\n\n Computes the SVD of each inner matrix in `tensor` such that\n `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) *\n transpose(conj(v[..., :, :]))`\n\n ```python\n # a is a tensor.\n # s is a tensor of singular values.\n # u is a tensor of left singular vectors.\n # v is a tensor of right singular vectors.\n s, u, v = svd(a)\n s = svd(a, compute_uv=False)\n ```\n\n Args:\n tensor: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and\n `N`.\n full_matrices: If true, compute full-sized `u` and `v`. If false\n (the default), compute only the leading `P` singular vectors.\n Ignored if `compute_uv` is `False`.\n compute_uv: If `True` then left and right singular vectors will be\n computed and returned in `u` and `v`, respectively. Otherwise, only the\n singular values will be computed, which can be significantly faster.\n name: string, optional name of the operation.\n\n Returns:\n s: Singular values. Shape is `[..., P]`. The values are sorted in reverse\n order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the\n second largest, etc.\n u: Left singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., M, P]`; if `full_matrices` is `True` then shape is\n `[..., M, M]`. Not returned if `compute_uv` is `False`.\n v: Right singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., N, P]`. If `full_matrices` is `True` then shape is\n `[..., N, N]`. Not returned if `compute_uv` is `False`.\n\n @compatibility(numpy)\n Mostly equivalent to numpy.linalg.svd, except that\n * The order of output arguments here is `s`, `u`, `v` when `compute_uv` is\n `True`, as opposed to `u`, `s`, `v` for numpy.linalg.svd.\n * full_matrices is `False` by default as opposed to `True` for\n numpy.linalg.svd.\n * tf.linalg.svd uses the standard definition of the SVD\n \\\\(A = U \\Sigma V^H\\\\), such that the left singular vectors of `a` are\n the columns of `u`, while the right singular vectors of `a` are the\n columns of `v`. On the other hand, numpy.linalg.svd returns the adjoint\n \\\\(V^H\\\\) as the third output argument.\n ```python\n import tensorflow as tf\n import numpy as np\n s, u, v = tf.linalg.svd(a)\n tf_a_approx = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True))\n u, s, v_adj = np.linalg.svd(a, full_matrices=False)\n np_a_approx = np.dot(u, np.dot(np.diag(s), v_adj))\n # tf_a_approx and np_a_approx should be numerically close.\n ```\n @end_compatibility\n ", "desc": "Computes the singular value decompositions of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.tensor_diag", "docs": "Returns a diagonal tensor with a given diagonal values.\n\n Given a `diagonal`, this operation returns a tensor with the `diagonal` and\n everything else padded with zeros. The diagonal is computed as follows:\n\n Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of\n rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:\n\n `output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.\n\n For example:\n\n ```\n # 'diagonal' is [1, 2, 3, 4]\n tf.diag(diagonal) ==> [[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]]\n ```\n\n Args:\n diagonal: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n Rank k tensor where k is at most 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a diagonal tensor with a given diagonal values.", "type": "API"}, {"name": "tf.compat.v1.linalg.tensor_diag_part", "docs": "Returns the diagonal part of the tensor.\n\n This operation returns a tensor with the `diagonal` part\n of the `input`. The `diagonal` part is computed as follows:\n\n Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a\n tensor of rank `k` with dimensions `[D1,..., Dk]` where:\n\n `diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.\n\n For a rank 2 tensor, `linalg.diag_part` and `linalg.tensor_diag_part`\n produce the same result. For rank 3 and higher, linalg.diag_part extracts\n the diagonal of each inner-most matrix in the tensor. An example where\n they differ is given below.\n\n >>> x = [[[[1111,1112],[1121,1122]],\n ... [[1211,1212],[1221,1222]]],\n ... [[[2111, 2112], [2121, 2122]],\n ... [[2211, 2212], [2221, 2222]]]\n ... ]\n >>> tf.linalg.tensor_diag_part(x)\n \n >>> tf.linalg.diag_part(x).shape\n TensorShape([2, 2, 2])\n\n Args:\n input: A `Tensor` with rank `2k`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`, and\n rank `k`.\n ", "desc": "Returns the diagonal part of the tensor.", "type": "API"}, {"name": "tf.compat.v1.linalg.tensordot", "docs": "Tensor contraction of a and b along specified axes and outer product.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `axes`.\n\n This operation corresponds to `numpy.tensordot(a, b, axes)`.\n\n Example 1: When `a` and `b` are matrices (order 2), the case `axes=1`\n is equivalent to matrix multiplication.\n\n Example 2: When `a` and `b` are matrices (order 2), the case\n `axes = [[1], [0]]` is equivalent to matrix multiplication.\n\n Example 3: When `a` and `b` are matrices (order 2), the case `axes=0` gives\n the outer product, a tensor of order 4.\n\n Example 4: Suppose that \\\\(a_{ijk}\\\\) and \\\\(b_{lmn}\\\\) represent two\n tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor\n \\\\(c_{jklm}\\\\) whose entry\n corresponding to the indices \\\\((j,k,l,m)\\\\) is given by:\n\n \\\\( c_{jklm} = \\sum_i a_{ijk} b_{lmi} \\\\).\n\n In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.\n\n Args:\n a: `Tensor` of type `float32` or `float64`.\n b: `Tensor` with the same type as `a`.\n axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].\n If axes is a scalar, sum over the last N axes of a and the first N axes of\n b in order. If axes is a list or `Tensor` the first and second row contain\n the set of unique integers specifying axes along which the contraction is\n computed, for `a` and `b`, respectively. The number of axes for `a` and\n `b` must be equal. If `axes=0`, computes the outer product between `a` and\n `b`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `a`.\n\n Raises:\n ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.\n IndexError: If the values in axes exceed the rank of the corresponding\n tensor.\n ", "desc": "Tensor contraction of a and b along specified axes and outer product.", "type": "API"}, {"name": "tf.compat.v1.linalg.trace", "docs": "Compute the trace of a tensor `x`.\n\n `trace(x)` returns the sum along the main diagonal of each inner-most matrix\n in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output\n is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where\n\n `output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`\n\n For example:\n\n ```python\n x = tf.constant([[1, 2], [3, 4]])\n tf.linalg.trace(x) # 5\n\n x = tf.constant([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n tf.linalg.trace(x) # 15\n\n x = tf.constant([[[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]],\n [[-1, -2, -3],\n [-4, -5, -6],\n [-7, -8, -9]]])\n tf.linalg.trace(x) # [15, -15]\n ```\n\n Args:\n x: tensor.\n name: A name for the operation (optional).\n\n Returns:\n The trace of input tensor.\n ", "desc": "Compute the trace of a tensor `x`.", "type": "API"}, {"name": "tf.compat.v1.linalg.transpose", "docs": "Transposes last two dimensions of tensor `a`.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.linalg.matrix_transpose(x) # [[1, 4],\n # [2, 5],\n # [3, 6]]\n\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n\n # Matrix with two batch dimensions.\n # x.shape is [1, 2, 3, 4]\n # tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3]\n ```\n\n Note that `tf.matmul` provides kwargs allowing for transpose of arguments.\n This is done with minimal cost, and is preferable to using this function. E.g.\n\n ```python\n # Good! Transpose is taken at minimal additional cost.\n tf.matmul(matrix, b, transpose_b=True)\n\n # Inefficient!\n tf.matmul(matrix, tf.linalg.matrix_transpose(b))\n ```\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, `linalg.matrix_transpose` returns a new\n tensor with the items permuted.\n @end_compatibility\n\n Args:\n a: A `Tensor` with `rank >= 2`.\n name: A name for the operation (optional).\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.linalg.matrix_transpose(input)).\n\n Returns:\n A transposed batch matrix `Tensor`.\n\n Raises:\n ValueError: If `a` is determined statically to have `rank < 2`.\n ", "desc": "Transposes last two dimensions of tensor `a`.", "type": "API"}, {"name": "tf.compat.v1.linalg.triangular_solve", "docs": "Solve systems of linear equations with upper or lower triangular matrices.\n\n `matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form\n square matrices. If `lower` is `True` then the strictly upper triangular part\n of each inner-most matrix is assumed to be zero and not accessed. If `lower`\n is `False` then the strictly lower triangular part of each inner-most matrix\n is assumed to be zero and not accessed. `rhs` is a tensor of shape\n `[..., M, N]`.\n\n The output is a tensor of shape `[..., M, N]`. If `adjoint` is `True` then the\n innermost matrices in output satisfy matrix equations `\n sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`.\n If `adjoint` is `False` then the\n innermost matrices in output satisfy matrix equations\n `sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.\n\n Example:\n\n >>> a = tf.constant([[3, 0, 0, 0],\n ... [2, 1, 0, 0],\n ... [1, 0, 1, 0],\n ... [1, 1, 1, 1]], dtype=tf.float32)\n\n >>> b = tf.constant([[4], [2], [4], [2]], dtype=tf.float32)\n >>> x = tf.linalg.triangular_solve(a, b, lower=True)\n >>> x\n \n >>> tf.matmul(a, x)\n \n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`,\n `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M,\n N]`.\n lower: An optional `bool`. Defaults to `True`. Boolean indicating whether\n the innermost matrices in matrix are lower or upper triangular.\n adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether\n to solve with matrix or its (block-wise) adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as matrix, and shape is `[..., M, N]`.\n\n ", "desc": "Solve systems of linear equations with upper or lower triangular matrices.", "type": "API"}, {"name": "tf.compat.v1.linalg.tridiagonal_matmul", "docs": "Multiplies tridiagonal matrix by matrix.\n\n `diagonals` is representation of 3-diagonal NxN matrix, which depends on\n `diagonals_format`.\n\n In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with\n two inner-most dimensions representing the square tridiagonal matrices.\n Elements outside of the three diagonals will be ignored.\n\n If `sequence` format, `diagonals` is list or tuple of three tensors:\n `[superdiag, maindiag, subdiag]`, each having shape [..., M]. Last element\n of `superdiag` first element of `subdiag` are ignored.\n\n In `compact` format the three diagonals are brought together into one tensor\n of shape `[..., 3, M]`, with last two dimensions containing superdiagonals,\n diagonals, and subdiagonals, in order. Similarly to `sequence` format,\n elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.\n\n The `sequence` format is recommended as the one with the best performance.\n\n `rhs` is matrix to the right of multiplication. It has shape `[..., M, N]`.\n\n Example:\n\n ```python\n superdiag = tf.constant([-1, -1, 0], dtype=tf.float64)\n maindiag = tf.constant([2, 2, 2], dtype=tf.float64)\n subdiag = tf.constant([0, -1, -1], dtype=tf.float64)\n diagonals = [superdiag, maindiag, subdiag]\n rhs = tf.constant([[1, 1], [1, 1], [1, 1]], dtype=tf.float64)\n x = tf.linalg.tridiagonal_matmul(diagonals, rhs, diagonals_format='sequence')\n ```\n\n Args:\n diagonals: A `Tensor` or tuple of `Tensor`s describing left-hand sides. The\n shape depends of `diagonals_format`, see description above. Must be\n `float32`, `float64`, `complex64`, or `complex128`.\n rhs: A `Tensor` of shape [..., M, N] and with the same dtype as `diagonals`.\n diagonals_format: one of `sequence`, or `compact`. Default is `compact`.\n name: A name to give this `Op` (optional).\n\n Returns:\n A `Tensor` of shape [..., M, N] containing the result of multiplication.\n\n Raises:\n ValueError: An unsupported type is provided as input, or when the input\n tensors have incorrect shapes.\n ", "desc": "Multiplies tridiagonal matrix by matrix.", "type": "API"}, {"name": "tf.compat.v1.linalg.tridiagonal_solve", "docs": "Solves tridiagonal systems of equations.\n\n The input can be supplied in various formats: `matrix`, `sequence` and\n `compact`, specified by the `diagonals_format` arg.\n\n In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with\n two inner-most dimensions representing the square tridiagonal matrices.\n Elements outside of the three diagonals will be ignored.\n\n In `sequence` format, `diagonals` are supplied as a tuple or list of three\n tensors of shapes `[..., N]`, `[..., M]`, `[..., N]` representing\n superdiagonals, diagonals, and subdiagonals, respectively. `N` can be either\n `M-1` or `M`; in the latter case, the last element of superdiagonal and the\n first element of subdiagonal will be ignored.\n\n In `compact` format the three diagonals are brought together into one tensor\n of shape `[..., 3, M]`, with last two dimensions containing superdiagonals,\n diagonals, and subdiagonals, in order. Similarly to `sequence` format,\n elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.\n\n The `compact` format is recommended as the one with best performance. In case\n you need to cast a tensor into a compact format manually, use `tf.gather_nd`.\n An example for a tensor of shape [m, m]:\n\n ```python\n rhs = tf.constant([...])\n matrix = tf.constant([[...]])\n m = matrix.shape[0]\n dummy_idx = [0, 0] # An arbitrary element to use as a dummy\n indices = [[[i, i + 1] for i in range(m - 1)] + [dummy_idx], # Superdiagonal\n [[i, i] for i in range(m)], # Diagonal\n [dummy_idx] + [[i + 1, i] for i in range(m - 1)]] # Subdiagonal\n diagonals=tf.gather_nd(matrix, indices)\n x = tf.linalg.tridiagonal_solve(diagonals, rhs)\n ```\n\n Regardless of the `diagonals_format`, `rhs` is a tensor of shape `[..., M]` or\n `[..., M, K]`. The latter allows to simultaneously solve K systems with the\n same left-hand sides and K different right-hand sides. If `transpose_rhs`\n is set to `True` the expected shape is `[..., M]` or `[..., K, M]`.\n\n The batch dimensions, denoted as `...`, must be the same in `diagonals` and\n `rhs`.\n\n The output is a tensor of the same shape as `rhs`: either `[..., M]` or\n `[..., M, K]`.\n\n The op isn't guaranteed to raise an error if the input matrix is not\n invertible. `tf.debugging.check_numerics` can be applied to the output to\n detect invertibility problems.\n\n **Note**: with large batch sizes, the computation on the GPU may be slow, if\n either `partial_pivoting=True` or there are multiple right-hand sides\n (`K > 1`). If this issue arises, consider if it's possible to disable pivoting\n and have `K = 1`, or, alternatively, consider using CPU.\n\n On CPU, solution is computed via Gaussian elimination with or without partial\n pivoting, depending on `partial_pivoting` parameter. On GPU, Nvidia's cuSPARSE\n library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv\n\n Args:\n diagonals: A `Tensor` or tuple of `Tensor`s describing left-hand sides. The\n shape depends of `diagonals_format`, see description above. Must be\n `float32`, `float64`, `complex64`, or `complex128`.\n rhs: A `Tensor` of shape [..., M] or [..., M, K] and with the same dtype as\n `diagonals`. Note that if the shape of `rhs` and/or `diags` isn't known\n statically, `rhs` will be treated as a matrix rather than a vector.\n diagonals_format: one of `matrix`, `sequence`, or `compact`. Default is\n `compact`.\n transpose_rhs: If `True`, `rhs` is transposed before solving (has no effect\n if the shape of rhs is [..., M]).\n conjugate_rhs: If `True`, `rhs` is conjugated before solving.\n name: A name to give this `Op` (optional).\n partial_pivoting: whether to perform partial pivoting. `True` by default.\n Partial pivoting makes the procedure more stable, but slower. Partial\n pivoting is unnecessary in some cases, including diagonally dominant and\n symmetric positive definite matrices (see e.g. theorem 9.12 in [1]).\n perturb_singular: whether to perturb singular matrices to return a finite\n result. `False` by default. If true, solutions to systems involving\n a singular matrix will be computed by perturbing near-zero pivots in\n the partially pivoted LU decomposition. Specifically, tiny pivots are\n perturbed by an amount of order `eps * max_{ij} |U(i,j)|` to avoid\n overflow. Here `U` is the upper triangular part of the LU decomposition,\n and `eps` is the machine precision. This is useful for solving\n numerically singular systems when computing eigenvectors by inverse\n iteration.\n If `partial_pivoting` is `False`, `perturb_singular` must be `False` as\n well.\n\n Returns:\n A `Tensor` of shape [..., M] or [..., M, K] containing the solutions.\n If the input matrix is singular, the result is undefined.\n\n Raises:\n ValueError: Is raised if any of the following conditions hold:\n 1. An unsupported type is provided as input,\n 2. the input tensors have incorrect shapes,\n 3. `perturb_singular` is `True` but `partial_pivoting` is not.\n UnimplementedError: Whenever `partial_pivoting` is true and the backend is\n XLA, or whenever `perturb_singular` is true and the backend is\n XLA or GPU.\n\n [1] Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms:\n Second Edition. SIAM. p. 175. ISBN 978-0-89871-802-7.\n\n ", "desc": "Solves tridiagonal systems of equations.", "type": "API"}, {"name": "tf.compat.v1.linspace", "docs": "Generates evenly-spaced values in an interval along a given axis.\n\n A sequence of `num` evenly-spaced values are generated beginning at `start`\n along a given `axis`.\n If `num > 1`, the values in the sequence increase by\n `(stop - start) / (num - 1)`, so that the last one is exactly `stop`.\n If `num <= 0`, `ValueError` is raised.\n\n Matches\n [np.linspace](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)'s\n behaviour\n except when `num == 0`.\n\n For example:\n\n ```\n tf.linspace(10.0, 12.0, 3, name=\"linspace\") => [ 10.0 11.0 12.0]\n ```\n\n `Start` and `stop` can be tensors of arbitrary size:\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=0)\n \n\n `Axis` is where the values will be generated (the dimension in the\n returned tensor which corresponds to the axis will be equal to `num`)\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=-1)\n \n\n\n\n Args:\n start: A `Tensor`. Must be one of the following types: `bfloat16`,\n `float32`, `float64`. N-D tensor. First entry in the range.\n stop: A `Tensor`. Must have the same type and shape as `start`. N-D tensor.\n Last entry in the range.\n num: A `Tensor`. Must be one of the following types: `int32`, `int64`. 0-D\n tensor. Number of values to generate.\n name: A name for the operation (optional).\n axis: Axis along which the operation is performed (used only when N-D\n tensors are provided).\n\n Returns:\n A `Tensor`. Has the same type as `start`.\n ", "desc": "Generates evenly-spaced values in an interval along a given axis.", "type": "API"}, {"name": "tf.compat.v1.lite", "docs": "Public API for tf.lite namespace.\n", "desc": "Public API for tf.lite namespace.", "type": "API"}, {"name": "tf.compat.v1.lite.constants", "docs": "Public API for tf.lite.constants namespace.\n", "desc": "Public API for tf.lite.constants namespace.", "type": "API"}, {"name": "tf.compat.v1.lite.experimental", "docs": "Public API for tf.lite.experimental namespace.\n", "desc": "Public API for tf.lite.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.lite.experimental.convert_op_hints_to_stubs", "docs": "Converts a graphdef with LiteOp hints into stub operations. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease follow instructions under https://www.tensorflow.org/lite/convert/operation_fusion for operationfusion in tflite.\n\nThis is used to prepare for toco conversion of complex intrinsic usages.\nNote: only one of session or graph_def should be used, not both.\n\nArgs:\n session: A TensorFlow session that contains the graph to convert.\n graph_def: A graph def that we should convert.\n write_callback: A function pointer that can be used to write intermediate\n steps of graph transformation (optional).\n\nReturns:\n A new graphdef with all ops contained in OpHints being replaced by\n a single op call with the right parameters.\nRaises:\n ValueError: If both session and graph_def are provided.", "desc": "Converts a graphdef with LiteOp hints into stub operations. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.lite.experimental.load_delegate", "docs": "Returns loaded Delegate object.\n\n Example usage:\n\n ```\n import tensorflow as tf\n\n try:\n delegate = tf.lite.experimental.load_delegate('delegate.so')\n except ValueError:\n // Fallback to CPU\n\n if delegate:\n interpreter = tf.lite.Interpreter(\n model_path='model.tflite',\n experimental_delegates=[delegate])\n else:\n interpreter = tf.lite.Interpreter(model_path='model.tflite')\n ```\n\n This is typically used to leverage EdgeTPU for running TensorFlow Lite models.\n For more information see: https://coral.ai/docs/edgetpu/tflite-python/\n\n Args:\n library: Name of shared library containing the\n [TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates).\n options: Dictionary of options that are required to load the delegate. All\n keys and values in the dictionary should be convertible to str. Consult\n the documentation of the specific delegate for required and legal options.\n (default None)\n\n Returns:\n Delegate object.\n\n Raises:\n ValueError: Delegate failed to load.\n RuntimeError: If delegate loading is used on unsupported platform.\n ", "desc": "Returns loaded Delegate object.", "type": "API"}, {"name": "tf.compat.v1.lite.experimental.OpResolverType", "docs": "Different types of op resolvers for Tensorflow Lite.\n\n * `AUTO`: Indicates the op resolver that is chosen by default in TfLite\n Python, which is the \"BUILTIN\" as described below.\n * `BUILTIN`: Indicates the op resolver for built-in ops with optimized kernel\n implementation.\n * `BUILTIN_REF`: Indicates the op resolver for built-in ops with reference\n kernel implementation. It's generally used for testing and debugging.\n * `BUILTIN_WITHOUT_DEFAULT_DELEGATES`: Indicates the op resolver for\n built-in ops with optimized kernel implementation, but it will disable\n the application of default TfLite delegates (like the XNNPACK delegate) to\n the model graph. Generally this should not be used unless there are issues\n with the default configuration.\n ", "desc": "Different types of op resolvers for Tensorflow Lite.", "type": "API"}, {"name": "tf.compat.v1.lite.Interpreter", "docs": "Interpreter interface for running TensorFlow Lite models.\n\n Models obtained from `TfLiteConverter` can be run in Python with\n `Interpreter`.\n\n As an example, lets generate a simple Keras model and convert it to TFLite\n (`TfLiteConverter` also supports other input formats with `from_saved_model`\n and `from_concrete_function`)\n\n >>> x = np.array([[1.], [2.]])\n >>> y = np.array([[2.], [4.]])\n >>> model = tf.keras.models.Sequential([\n ... tf.keras.layers.Dropout(0.2),\n ... tf.keras.layers.Dense(units=1, input_shape=[1])\n ... ])\n >>> model.compile(optimizer='sgd', loss='mean_squared_error')\n >>> model.fit(x, y, epochs=1)\n >>> converter = tf.lite.TFLiteConverter.from_keras_model(model)\n >>> tflite_model = converter.convert()\n\n `tflite_model` can be saved to a file and loaded later, or directly into the\n `Interpreter`. Since TensorFlow Lite pre-plans tensor allocations to optimize\n inference, the user needs to call `allocate_tensors()` before any inference.\n\n >>> interpreter = tf.lite.Interpreter(model_content=tflite_model)\n >>> interpreter.allocate_tensors() # Needed before execution!\n\n Sample execution:\n\n >>> output = interpreter.get_output_details()[0] # Model has single output.\n >>> input = interpreter.get_input_details()[0] # Model has single input.\n >>> input_data = tf.constant(1., shape=[1, 1])\n >>> interpreter.set_tensor(input['index'], input_data)\n >>> interpreter.invoke()\n >>> interpreter.get_tensor(output['index']).shape\n (1, 1)\n\n Use `get_signature_runner()` for a more user-friendly inference API.\n ", "desc": "Interpreter interface for running TensorFlow Lite models.", "type": "API"}, {"name": "tf.compat.v1.lite.OpHint", "docs": "A class that helps build tflite function invocations. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease follow instructions under https://www.tensorflow.org/lite/convert/operation_fusion for operationfusion in tflite.\n\nIt allows you to take a bunch of TensorFlow ops and annotate the construction\nsuch that toco knows how to convert it to tflite. This embeds a pseudo\nfunction in a TensorFlow graph. This allows embedding high-level API usage\ninformation in a lower level TensorFlow implementation so that an alternative\nimplementation can be substituted later.\n\nEssentially, any \"input\" into this pseudo op is fed into an identity, and\nattributes are added to that input before being used by the constituent ops\nthat make up the pseudo op. A similar process is done to any output that\nis to be exported from the current op.", "desc": "A class that helps build tflite function invocations. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.lite.OpHint.OpHintArgumentTracker", "docs": "Conceptually tracks indices of arguments of \"OpHint functions\".\n\n The inputs and arguments of these functions both use an instance\n of the class so they can have independent numbering.\n ", "desc": "Conceptually tracks indices of arguments of \"OpHint functions\".", "type": "API"}, {"name": "tf.compat.v1.lite.OpsSet", "docs": "Enum class defining the sets of ops available to generate TFLite models.\n\n WARNING: Experimental interface, subject to change.\n ", "desc": "Enum class defining the sets of ops available to generate TFLite models.", "type": "API"}, {"name": "tf.compat.v1.lite.Optimize", "docs": "Enum defining the optimizations to apply when generating a tflite model.\n\n DEFAULT\n Default optimization strategy that quantizes model weights. Enhanced\n optimizations are gained by providing a representative dataset that\n quantizes biases and activations as well.\n Converter will do its best to reduce size and latency, while minimizing\n the loss in accuracy.\n\n OPTIMIZE_FOR_SIZE\n Deprecated. Does the same as DEFAULT.\n\n OPTIMIZE_FOR_LATENCY\n Deprecated. Does the same as DEFAULT.\n\n EXPERIMENTAL_SPARSITY\n Experimental flag, subject to change.\n\n Enable optimization by taking advantage of the sparse model weights\n trained with pruning.\n\n The converter will inspect the sparsity pattern of the model weights and\n do its best to improve size and latency.\n The flag can be used alone to optimize float32 models with sparse weights.\n It can also be used together with the DEFAULT optimization mode to\n optimize quantized models with sparse weights.\n ", "desc": "Enum defining the optimizations to apply when generating a tflite model.", "type": "API"}, {"name": "tf.compat.v1.lite.RepresentativeDataset", "docs": "Representative dataset used to optimize the model.\n\n This is a generator function that provides a small dataset to calibrate or\n estimate the range, i.e, (min, max) of all floating-point arrays in the model\n (such as model input, activation outputs of intermediate layers, and model\n output) for quantization. Usually, this is a small subset of a few hundred\n samples randomly chosen, in no particular order, from the training or\n evaluation dataset.\n ", "desc": "Representative dataset used to optimize the model.", "type": "API"}, {"name": "tf.compat.v1.lite.TargetSpec", "docs": "Specification of target device used to optimize the model.\n\n Attributes:\n supported_ops: Experimental flag, subject to change. Set of `tf.lite.OpsSet`\n options, where each option represents a set of operators supported by the\n target device. (default {tf.lite.OpsSet.TFLITE_BUILTINS}))\n supported_types: Set of `tf.dtypes.DType` data types supported on the target\n device. If initialized, optimization might be driven by the smallest type\n in this set. (default set())\n experimental_select_user_tf_ops: Experimental flag, subject to change. Set\n of user's TensorFlow operators' names that are required in the TensorFlow\n Lite runtime. These ops will be exported as select TensorFlow ops in the\n model (in conjunction with the tf.lite.OpsSet.SELECT_TF_OPS flag). This is\n an advanced feature that should only be used if the client is using TF ops\n that may not be linked in by default with the TF ops that are provided\n when using the SELECT_TF_OPS path. The client is responsible for linking\n these ops into the target runtime.\n experimental_supported_backends: Experimental flag, subject to change.\n Set containing names of supported backends. Currently only \"GPU\" is\n supported, more options will be available later.\n ", "desc": "Specification of target device used to optimize the model.", "type": "API"}, {"name": "tf.compat.v1.lite.TFLiteConverter", "docs": "Convert a TensorFlow model into `output_format`.\n\n This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras\n model into either a TFLite FlatBuffer or graph visualization.\n\n Attributes:\n optimizations: Experimental flag, subject to change. Set of optimizations to\n apply. e.g {tf.lite.Optimize.DEFAULT}. (default None, must be None or a\n set of values of type `tf.lite.Optimize`)\n representative_dataset: A generator function used for integer quantization\n where each generated sample has the same order, type and shape as the\n inputs to the model. Usually, this is a small subset of a few hundred\n samples randomly chosen, in no particular order, from the training or\n evaluation dataset. This is an optional attribute, but required for full\n integer quantization, i.e, if `tf.int8` is the only supported type in\n `target_spec.supported_types`. Refer to `tf.lite.RepresentativeDataset`.\n (default None)\n target_spec: Experimental flag, subject to change. Specifications of target\n device, including supported ops set, supported types and a set of user's\n defined TensorFlow operators required in the TensorFlow Lite runtime.\n Refer to `tf.lite.TargetSpec`.\n inference_type: Data type of numeric arrays, excluding the input layer.\n (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8})\n inference_input_type: Data type of the numeric arrays in the input layer. If\n `inference_input_type` is in {tf.int8, tf.uint8}, then\n `quantized_input_stats` must be provided. (default is the value assigned\n to `inference_type`, must be in {tf.float32, tf.int8, tf.uint8})\n inference_output_type: Data type of the numeric arrays in the output layer.\n (default is the value assigned to `inference_type`, must be in\n {tf.float32, tf.int8, tf.uint8})\n quantized_input_stats: Map of input tensor names to a tuple of floats\n representing the mean and standard deviation of the training data.\n (e.g., {\"foo\" : (0., 1.)}). Required if `inference_input_type` is tf.int8\n or tf.uint8. (default None)\n default_ranges_stats: Tuple of integers (min, max) representing range values\n for all numeric arrays without a specified range. Intended for\n experimenting with quantization via \"dummy quantization\". (default None)\n allow_custom_ops: Boolean indicating whether to allow custom operations.\n When False any unknown operation is an error. When True, custom ops are\n created for any op that is unknown. The developer will need to provide\n these to the TensorFlow Lite runtime with a custom resolver. (default\n False)\n drop_control_dependency: Boolean indicating whether to drop control\n dependencies silently. This is due to TFLite not supporting control\n dependencies. (default True)\n reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant\n nodes in unexpected locations. Used when the location of the FakeQuant\n nodes is preventing graph transformations necessary to convert the graph.\n Results in a graph that differs from the quantized training graph,\n potentially causing differing arithmetic behavior. (default False)\n change_concat_input_ranges: Boolean to change behavior of min/max ranges for\n inputs and outputs of the concat operator for quantized models. Changes\n the ranges of concat operator overlap when true. (default False)\n output_format: Output file format. (default\n tf.compat.v1.lite.constants.TFLITE, must be in\n {tf.compat.v1.lite.constants.TFLITE,\n tf.compat.v1.lite.constants.GRAPHVIZ_DOT})\n dump_graphviz_dir: Full filepath of folder to dump the graphs at various\n stages of processing GraphViz .dot files. Preferred over\n `output_format=tf.compat.v1.lite.constants.GRAPHVIZ_DOT` in order to keep\n the requirements of the output file. (default None)\n dump_graphviz_video: Boolean indicating whether to dump the GraphViz .dot\n files after every graph transformation. Requires the `dump_graphviz_dir`\n flag to be specified. (default False)\n conversion_summary_dir: Full path of the directory to store conversion logs.\n (default None)\n exclude_conversion_metadata: Whether not to embed the conversion metadata\n into the converted model. (default False)\n target_ops: Deprecated. Please use `target_spec.supported_ops` instead.\n post_training_quantize: Deprecated. Please use `optimizations` instead and\n set it to `{tf.lite.Optimize.DEFAULT}`. (default False)\n experimental_new_converter: Experimental flag, subject to change. Enables\n MLIR-based conversion. (default True)\n experimental_new_quantizer: Experimental flag, subject to change. Enables\n MLIR-based quantization conversion instead of Flatbuffer-based conversion.\n (default True)\n\n Example usage:\n\n ```python\n # Converting a GraphDef from session.\n converter = tf.compat.v1.lite.TFLiteConverter.from_session(\n sess, in_tensors, out_tensors)\n tflite_model = converter.convert()\n open(\"converted_model.tflite\", \"wb\").write(tflite_model)\n\n # Converting a GraphDef from file.\n converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(\n graph_def_file, input_arrays, output_arrays)\n tflite_model = converter.convert()\n open(\"converted_model.tflite\", \"wb\").write(tflite_model)\n\n # Converting a SavedModel.\n converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(\n saved_model_dir)\n tflite_model = converter.convert()\n open(\"converted_model.tflite\", \"wb\").write(tflite_model)\n\n # Converting a tf.keras model.\n converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(\n keras_model)\n tflite_model = converter.convert()\n open(\"converted_model.tflite\", \"wb\").write(tflite_model)\n ```\n ", "desc": "Convert a TensorFlow model into `output_format`.", "type": "API"}, {"name": "tf.compat.v1.lite.toco_convert", "docs": "Convert a TensorFlow GraphDef to TFLite. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `lite.TFLiteConverter` instead.\n\nThis function is deprecated. Please use `tf.lite.TFLiteConverter` API instead.\nConversion can be customized by providing arguments that are forwarded to\n`build_model_flags` and `build_conversion_flags` (see documentation for\ndetails).\nArgs:\n input_data: Input data (i.e. often `sess.graph_def`).\n input_tensors: List of input tensors. Type and shape are computed using\n `foo.shape` and `foo.dtype`.\n output_tensors: List of output tensors (only .name is used from this).\n *args: See `build_model_flags` and `build_conversion_flags`.\n **kwargs: See `build_model_flags` and `build_conversion_flags`.\n\nReturns:\n The converted TensorFlow Lite model in a bytes array.\n\nRaises:\n Defined in `convert`.", "desc": "Convert a TensorFlow GraphDef to TFLite. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.lite.TocoConverter", "docs": "Convert a TensorFlow model into `output_format`.\n\n This class has been deprecated. Please use `lite.TFLiteConverter` instead.\n ", "desc": "Convert a TensorFlow model into `output_format`.", "type": "API"}, {"name": "tf.compat.v1.LMDBReader", "docs": "A Reader that outputs the records from a LMDB file.\n\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs the records from a LMDB file.", "type": "API"}, {"name": "tf.compat.v1.load_file_system_library", "docs": "Loads a TensorFlow plugin, containing file system implementation. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.load_library` instead.\n\nPass `library_filename` to a platform-specific mechanism for dynamically\nloading a library. The rules for determining the exact location of the\nlibrary are platform-specific and are not documented here.\n\nArgs:\n library_filename: Path to the plugin.\n Relative or absolute filesystem path to a dynamic library file.\n\nReturns:\n None.\n\nRaises:\n RuntimeError: when unable to load the library.", "desc": "Loads a TensorFlow plugin, containing file system implementation. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.load_library", "docs": "Loads a TensorFlow plugin.\n\n \"library_location\" can be a path to a specific shared object, or a folder.\n If it is a folder, all shared objects that are named \"libtfkernel*\" will be\n loaded. When the library is loaded, kernels registered in the library via the\n `REGISTER_*` macros are made available in the TensorFlow process.\n\n Args:\n library_location: Path to the plugin or the folder of plugins.\n Relative or absolute filesystem path to a dynamic library file or folder.\n\n Returns:\n None\n\n Raises:\n OSError: When the file to be loaded is not found.\n RuntimeError: when unable to load the library.\n ", "desc": "Loads a TensorFlow plugin.", "type": "API"}, {"name": "tf.compat.v1.load_op_library", "docs": "Loads a TensorFlow plugin, containing custom ops and kernels.\n\n Pass \"library_filename\" to a platform-specific mechanism for dynamically\n loading a library. The rules for determining the exact location of the\n library are platform-specific and are not documented here. When the\n library is loaded, ops and kernels registered in the library via the\n `REGISTER_*` macros are made available in the TensorFlow process. Note\n that ops with the same name as an existing op are rejected and not\n registered with the process.\n\n Args:\n library_filename: Path to the plugin.\n Relative or absolute filesystem path to a dynamic library file.\n\n Returns:\n A python module containing the Python wrappers for Ops defined in\n the plugin.\n\n Raises:\n RuntimeError: when unable to load the library or get the python wrappers.\n ", "desc": "Loads a TensorFlow plugin, containing custom ops and kernels.", "type": "API"}, {"name": "tf.compat.v1.local_variables", "docs": "Returns local variables.\n\n Local variables - per process variables, usually not saved/restored to\n checkpoint and used for temporary or intermediate values.\n For example, they can be used as counters for metrics computation or\n number of epochs this machine has read data.\n The `tf.contrib.framework.local_variable()` function automatically adds the\n new variable to `GraphKeys.LOCAL_VARIABLES`.\n This convenience function returns the contents of that collection.\n\n An alternative to local variables are global variables. See\n `tf.compat.v1.global_variables`\n\n Args:\n scope: (Optional.) A string. If supplied, the resulting list is filtered to\n include only items whose `name` attribute matches `scope` using\n `re.match`. Items without a `name` attribute are never returned if a scope\n is supplied. The choice of `re.match` means that a `scope` without special\n tokens filters by prefix.\n\n Returns:\n A list of local `Variable` objects.\n ", "desc": "Returns local variables.", "type": "API"}, {"name": "tf.compat.v1.local_variables_initializer", "docs": "Returns an Op that initializes all local variables.\n\n This is just a shortcut for `variables_initializer(local_variables())`\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Returns:\n An Op that initializes all local variables in the graph.\n ", "desc": "Returns an Op that initializes all local variables.", "type": "API"}, {"name": "tf.compat.v1.log", "docs": "Computes natural logarithm of x element-wise.\n\n I.e., \\\\(y = \\log_e x\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log(x)\n \n\n See: https://en.wikipedia.org/wiki/Logarithm\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.log_sigmoid", "docs": "Computes log sigmoid of `x` element-wise.\n\n Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability,\n we use `y = -tf.nn.softplus(-x)`.\n\n Args:\n x: A Tensor with type `float32` or `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n If a positive number is large, then its log_sigmoid will approach to 0 since\n the formula will be `y = log( / (1 + ) )` which\n approximates to `log (1)` which is 0.\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.log_sigmoid(x)\n \n\n If a negative number is large, its log_sigmoid will approach to the number\n itself since the formula will be `y = log( 1 / (1 + ) )` which is\n `log (1) - log ( (1 + ) )` which approximates to `- `\n that is the number itself.\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.log_sigmoid(x)\n \n ", "desc": "Computes log sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.log1p", "docs": "Computes natural logarithm of (1 + x) element-wise.\n\n I.e., \\\\(y = \\log_e (1 + x)\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log1p(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of (1 + x) element-wise.", "type": "API"}, {"name": "tf.compat.v1.logging", "docs": "Logging and Summary Operations.\n", "desc": "Logging and Summary Operations.", "type": "API"}, {"name": "tf.compat.v1.logging.debug", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.error", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.fatal", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.flush", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.get_verbosity", "docs": "Return how much logging output will be produced.", "desc": "Return how much logging output will be produced.", "type": "API"}, {"name": "tf.compat.v1.logging.info", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.log", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.log_every_n", "docs": "Log 'msg % args' at level 'level' once per 'n' times.\n\n Logs the 1st call, (N+1)st call, (2N+1)st call, etc.\n Not threadsafe.\n\n Args:\n level: The level at which to log.\n msg: The message to be logged.\n n: The number of times this should be called before it is logged.\n *args: The args to be substituted into the msg.\n ", "desc": "Log 'msg % args' at level 'level' once per 'n' times.", "type": "API"}, {"name": "tf.compat.v1.logging.log_first_n", "docs": "Log 'msg % args' at level 'level' only first 'n' times.\n\n Not threadsafe.\n\n Args:\n level: The level at which to log.\n msg: The message to be logged.\n n: The number of times this should be called before it is logged.\n *args: The args to be substituted into the msg.\n ", "desc": "Log 'msg % args' at level 'level' only first 'n' times.", "type": "API"}, {"name": "tf.compat.v1.logging.log_if", "docs": "Log 'msg % args' at level 'level' only if condition is fulfilled.", "desc": "Log 'msg % args' at level 'level' only if condition is fulfilled.", "type": "API"}, {"name": "tf.compat.v1.logging.set_verbosity", "docs": "Sets the threshold for what messages will be logged.", "desc": "Sets the threshold for what messages will be logged.", "type": "API"}, {"name": "tf.compat.v1.logging.TaskLevelStatusMessage", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.vlog", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.warn", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logging.warning", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.logical_and", "docs": "Returns the truth value of x AND y element-wise.\n\n Logical AND function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical AND with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical AND of the two input tensors.\n\n You can also use the `&` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_and(a, b)\n \n >>> a & b\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_and(c, x)\n \n >>> c & x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_and(y, z)\n \n >>> y & z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_and([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_all`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x AND y element-wise.", "type": "API"}, {"name": "tf.compat.v1.logical_not", "docs": "Returns the truth value of `NOT x` element-wise.\n\n Example:\n\n >>> tf.math.logical_not(tf.constant([True, False]))\n \n\n Args:\n x: A `Tensor` of type `bool`. A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of `NOT x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.logical_or", "docs": "Returns the truth value of x OR y element-wise.\n\n Logical OR function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical OR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical OR of the two input tensors.\n\n You can also use the `|` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_or(a, b)\n \n >>> a | b\n \n\n >>> c = tf.constant([False])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_or(c, x)\n \n >>> c | x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_or(y, z)\n \n >>> y | z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_or([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_any`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x OR y element-wise.", "type": "API"}, {"name": "tf.compat.v1.logical_xor", "docs": "Logical XOR function.\n\n x ^ y = (x | y) & ~(x & y)\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical XOR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical XOR of the two input tensors.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_xor(a, b)\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_xor(c, x)\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_xor(y, z)\n \n\n Args:\n x: A `tf.Tensor` type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n ", "desc": "Logical XOR function.", "type": "API"}, {"name": "tf.compat.v1.LogMessage", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.lookup", "docs": "Public API for tf.lookup namespace.\n", "desc": "Public API for tf.lookup namespace.", "type": "API"}, {"name": "tf.compat.v1.lookup.experimental", "docs": "Public API for tf.lookup.experimental namespace.\n", "desc": "Public API for tf.lookup.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.lookup.experimental.DenseHashTable", "docs": "A mutable hash table with faster lookups and higher memory usage.\n\n Data can be inserted by calling the `insert` method and removed by calling the\n `remove` method. It does not support initialization via the init method.\n\n Compared to `MutableHashTable`, `DenseHashTable` offers generally faster\n `insert`, `remove` and `lookup` operations, in exchange for a higher overall\n memory footprint.\n\n It uses \"open addressing\" with quadratic reprobing to resolve collisions. This\n requires specifying two keys in the key space, `empty_key` and `deleted_key`,\n that can never inserted into the table.\n\n Unlike `MutableHashTable`, `DenseHashTable` does not require additional memory\n for temporary tensors created during checkpointing and restore operations.\n\n Example usage:\n\n >>> table = tf.lookup.experimental.DenseHashTable(\n ... key_dtype=tf.string,\n ... value_dtype=tf.int64,\n ... default_value=-1,\n ... empty_key='',\n ... deleted_key='$')\n >>> keys = tf.constant(['a', 'b', 'c'])\n >>> values = tf.constant([0, 1, 2], dtype=tf.int64)\n >>> table.insert(keys, values)\n >>> table.remove(tf.constant(['c']))\n >>> table.lookup(tf.constant(['a', 'b', 'c','d'])).numpy()\n array([ 0, 1, -1, -1])\n ", "desc": "A mutable hash table with faster lookups and higher memory usage.", "type": "API"}, {"name": "tf.compat.v1.lookup.KeyValueTensorInitializer", "docs": "Table initializers given `keys` and `values` tensors.\n\n >>> keys_tensor = tf.constant(['a', 'b', 'c'])\n >>> vals_tensor = tf.constant([7, 8, 9])\n >>> input_tensor = tf.constant(['a', 'f'])\n >>> init = tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor)\n >>> table = tf.lookup.StaticHashTable(\n ... init,\n ... default_value=-1)\n >>> table.lookup(input_tensor).numpy()\n array([ 7, -1], dtype=int32)\n\n ", "desc": "Table initializers given `keys` and `values` tensors.", "type": "API"}, {"name": "tf.compat.v1.lookup.StaticHashTable", "docs": "A generic hash table that is immutable once initialized.\n\n When running in graph mode, you must evaluate the tensor returned by\n `tf.tables_initializer()` before evaluating the tensor returned by\n this class's `lookup()` method. Example usage in graph mode:\n\n ```python\n keys_tensor = tf.constant([1, 2])\n vals_tensor = tf.constant([3, 4])\n input_tensor = tf.constant([1, 5])\n table = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1)\n out = table.lookup(input_tensor)\n with tf.Session() as sess:\n sess.run(tf.tables_initializer())\n print(sess.run(out))\n ```\n\n Note that in graph mode if you set `experimental_is_anonymous` to\n `True`, you should only call `Session.run` once, otherwise each\n `Session.run` will create (and destroy) a new table unrelated to\n each other, leading to errors such as \"Table not initialized\".\n You can do so like this:\n\n ```python\n keys_tensor = tf.constant([1, 2])\n vals_tensor = tf.constant([3, 4])\n input_tensor = tf.constant([1, 5])\n table = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1,\n experimental_is_anonymous=True)\n with tf.control_dependencies([tf.tables_initializer()]):\n out = table.lookup(input_tensor)\n with tf.Session() as sess:\n print(sess.run(out))\n ```\n\n In eager mode, no special code is needed to initialize the table.\n Example usage in eager mode:\n\n ```python\n tf.enable_eager_execution()\n keys_tensor = tf.constant([1, 2])\n vals_tensor = tf.constant([3, 4])\n input_tensor = tf.constant([1, 5])\n table = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1)\n print(table.lookup(input_tensor))\n ```\n ", "desc": "A generic hash table that is immutable once initialized.", "type": "API"}, {"name": "tf.compat.v1.lookup.StaticVocabularyTable", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.lookup.TextFileIndex", "docs": "The key and value content to get from each line.\n\n This class defines the key and value used for `tf.lookup.TextFileInitializer`.\n\n The key and value content to get from each line is specified either\n by the following, or a value `>=0`.\n * `TextFileIndex.LINE_NUMBER` means use the line number starting from zero,\n expects data type int64.\n * `TextFileIndex.WHOLE_LINE` means use the whole line content, expects data\n type string.\n\n A value `>=0` means use the index (starting at zero) of the split line based\n on `delimiter`.\n ", "desc": "The key and value content to get from each line.", "type": "API"}, {"name": "tf.compat.v1.lookup.TextFileInitializer", "docs": "Table initializers from a text file.\n\n This initializer assigns one entry in the table for each line in the file.\n\n The key and value type of the table to initialize is given by `key_dtype` and\n `value_dtype`.\n\n The key and value content to get from each line is specified by\n the `key_index` and `value_index`.\n\n * `TextFileIndex.LINE_NUMBER` means use the line number starting from zero,\n expects data type int64.\n * `TextFileIndex.WHOLE_LINE` means use the whole line content, expects data\n type string.\n * A value `>=0` means use the index (starting at zero) of the split line based\n on `delimiter`.\n\n For example if we have a file with the following content:\n\n >>> import tempfile\n >>> f = tempfile.NamedTemporaryFile(delete=False)\n >>> content='\\n'.join([\"emerson 10\", \"lake 20\", \"palmer 30\",])\n >>> f.file.write(content.encode('utf-8'))\n >>> f.file.close()\n\n The following snippet initializes a table with the first column as keys and\n second column as values:\n\n * `emerson -> 10`\n * `lake -> 20`\n * `palmer -> 30`\n\n >>> init= tf.lookup.TextFileInitializer(\n ... filename=f.name,\n ... key_dtype=tf.string, key_index=0,\n ... value_dtype=tf.int64, value_index=1,\n ... delimiter=\" \")\n >>> table = tf.lookup.StaticHashTable(init, default_value=-1)\n >>> table.lookup(tf.constant(['palmer','lake','tarkus'])).numpy()\n\n Similarly to initialize the whole line as keys and the line number as values.\n\n * `emerson 10 -> 0`\n * `lake 20 -> 1`\n * `palmer 30 -> 2`\n\n >>> init = tf.lookup.TextFileInitializer(\n ... filename=f.name,\n ... key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,\n ... value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)\n >>> table = tf.lookup.StaticHashTable(init, -1)\n >>> table.lookup(tf.constant('palmer 30')).numpy()\n 2\n ", "desc": "Table initializers from a text file.", "type": "API"}, {"name": "tf.compat.v1.losses", "docs": "Loss operations for use in neural networks.\n\nNote: All the losses are added to the `GraphKeys.LOSSES` collection by default.\n\n", "desc": "Loss operations for use in neural networks.", "type": "API"}, {"name": "tf.compat.v1.losses.absolute_difference", "docs": "Adds an Absolute Difference loss to the training procedure.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided, then\n the loss is simply scaled by the given value. If `weights` is a `Tensor` of\n shape `[batch_size]`, then the total loss for each sample of the batch is\n rescaled by the corresponding element in the `weights` vector. If the shape of\n `weights` matches the shape of `predictions`, then the loss of each\n measurable element of `predictions` is scaled by the corresponding value of\n `weights`.\n\n Args:\n labels: The ground truth output tensor, same dimensions as 'predictions'.\n predictions: The predicted outputs.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which this loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `predictions` doesn't match that of\n `labels` or if the shape of `weights` is invalid or if `labels`\n or `predictions` is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Adds an Absolute Difference loss to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.add_loss", "docs": "Adds a externally defined loss to the collection of losses.\n\n Args:\n loss: A loss `Tensor`.\n loss_collection: Optional collection to add the loss to.\n ", "desc": "Adds a externally defined loss to the collection of losses.", "type": "API"}, {"name": "tf.compat.v1.losses.compute_weighted_loss", "docs": "Computes the weighted loss.\n\n Args:\n losses: `Tensor` of shape `[batch_size, d1, ... dN]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `losses`, and must be broadcastable to `losses` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n scope: the scope for the operations performed in computing the loss.\n loss_collection: the loss will be added to these collections.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss `Tensor` of the same type as `losses`. If `reduction` is\n `NONE`, this has the same shape as `losses`; otherwise, it is scalar.\n\n Raises:\n ValueError: If `weights` is `None` or the shape is not compatible with\n `losses`, or if the number of dimensions (rank) of either `losses` or\n `weights` is missing.\n\n Note:\n When calculating the gradient of a weighted loss contributions from\n both `losses` and `weights` are considered. If your `weights` depend\n on some model parameters but you do not want this to affect the loss\n gradient, you need to apply `tf.stop_gradient` to `weights` before\n passing them to `compute_weighted_loss`.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Computes the weighted loss.", "type": "API"}, {"name": "tf.compat.v1.losses.cosine_distance", "docs": "Adds a cosine-distance loss to the training procedure. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nNote that the function assumes that `predictions` and `labels` are already\nunit-normalized.\n\nArgs:\n labels: `Tensor` whose shape matches 'predictions'\n predictions: An arbitrary matrix.\n axis: The dimension along which the cosine distance is computed.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which this loss will be added.\n reduction: Type of reduction to apply to loss.\n dim: The old (deprecated) name for `axis`.\n\nReturns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\nRaises:\n ValueError: If `predictions` shape doesn't match `labels` shape, or\n `axis`, `labels`, `predictions` or `weights` is `None`.\n\n@compatibility(eager)\nThe `loss_collection` argument is ignored when executing eagerly. Consider\nholding on to the return value or collecting losses via a `tf.keras.Model`.\n@end_compatibility", "desc": "Adds a cosine-distance loss to the training procedure. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.losses.get_losses", "docs": "Gets the list of losses from the loss_collection.\n\n Args:\n scope: An optional scope name for filtering the losses to return.\n loss_collection: Optional losses collection.\n\n Returns:\n a list of loss tensors.\n ", "desc": "Gets the list of losses from the loss_collection.", "type": "API"}, {"name": "tf.compat.v1.losses.get_regularization_loss", "docs": "Gets the total regularization loss.\n\n Args:\n scope: An optional scope name for filtering the losses to return.\n name: The name of the returned tensor.\n\n Returns:\n A scalar regularization loss.\n ", "desc": "Gets the total regularization loss.", "type": "API"}, {"name": "tf.compat.v1.losses.get_regularization_losses", "docs": "Gets the list of regularization losses.\n\n Args:\n scope: An optional scope name for filtering the losses to return.\n\n Returns:\n A list of regularization losses as Tensors.\n ", "desc": "Gets the list of regularization losses.", "type": "API"}, {"name": "tf.compat.v1.losses.get_total_loss", "docs": "Returns a tensor whose value represents the total loss.\n\n In particular, this adds any losses you have added with `tf.add_loss()` to\n any regularization losses that have been added by regularization parameters\n on layers constructors e.g. `tf.layers`. Be very sure to use this if you\n are constructing a loss_op manually. Otherwise regularization arguments\n on `tf.layers` methods will not function.\n\n Args:\n add_regularization_losses: A boolean indicating whether or not to use the\n regularization losses in the sum.\n name: The name of the returned tensor.\n scope: An optional scope name for filtering the losses to return. Note that\n this filters the losses added with `tf.add_loss()` as well as the\n regularization losses to that scope.\n\n Returns:\n A `Tensor` whose value represents the total loss.\n\n Raises:\n ValueError: if `losses` is not iterable.\n ", "desc": "Returns a tensor whose value represents the total loss.", "type": "API"}, {"name": "tf.compat.v1.losses.hinge_loss", "docs": "Adds a hinge loss to the training procedure.\n\n Args:\n labels: The ground truth output tensor. Its shape should match the shape of\n logits. The values of the tensor are expected to be 0.0 or 1.0. Internally\n the {0,1} labels are converted to {-1,1} when calculating the hinge loss.\n logits: The logits, a float tensor. Note that logits are assumed to be\n unbounded and 0-centered. A value > 0 (resp. < 0) is considered a positive\n (resp. negative) binary prediction.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shapes of `logits` and `labels` don't match or\n if `labels` or `logits` is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Adds a hinge loss to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.huber_loss", "docs": "Adds a [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) term to the training procedure.\n\n For each value x in `error=labels-predictions`, the following is calculated:\n\n ```\n 0.5 * x^2 if |x| <= d\n 0.5 * d^2 + d * (|x| - d) if |x| > d\n ```\n\n where d is `delta`.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided, then\n the loss is simply scaled by the given value. If `weights` is a tensor of size\n `[batch_size]`, then the total loss for each sample of the batch is rescaled\n by the corresponding element in the `weights` vector. If the shape of\n `weights` matches the shape of `predictions`, then the loss of each\n measurable element of `predictions` is scaled by the corresponding value of\n `weights`.\n\n Args:\n labels: The ground truth output tensor, same dimensions as 'predictions'.\n predictions: The predicted outputs.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n delta: `float`, the point where the huber loss function changes from a\n quadratic to linear.\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `predictions` doesn't match that of `labels` or\n if the shape of `weights` is invalid. Also if `labels` or\n `predictions` is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Adds a [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) term to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.log_loss", "docs": "Adds a Log Loss term to the training procedure.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided, then\n the loss is simply scaled by the given value. If `weights` is a tensor of size\n `[batch_size]`, then the total loss for each sample of the batch is rescaled\n by the corresponding element in the `weights` vector. If the shape of\n `weights` matches the shape of `predictions`, then the loss of each\n measurable element of `predictions` is scaled by the corresponding value of\n `weights`.\n\n Args:\n labels: The ground truth output tensor, same dimensions as 'predictions'.\n predictions: The predicted outputs.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n epsilon: A small increment to add to avoid taking a log of zero.\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `predictions` doesn't match that of `labels` or\n if the shape of `weights` is invalid. Also if `labels` or `predictions`\n is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Adds a Log Loss term to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.mean_pairwise_squared_error", "docs": "Adds a pairwise-errors-squared loss to the training procedure.\n\n Unlike `mean_squared_error`, which is a measure of the differences between\n corresponding elements of `predictions` and `labels`,\n `mean_pairwise_squared_error` is a measure of the differences between pairs of\n corresponding elements of `predictions` and `labels`.\n\n For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are\n three pairs of differences are summed to compute the loss:\n loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3\n\n Note that since the inputs are of shape `[batch_size, d0, ... dN]`, the\n corresponding pairs are computed within each batch sample but not across\n samples within a batch. For example, if `predictions` represents a batch of\n 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs\n is drawn from each image, but not across images.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided, then\n the loss is simply scaled by the given value. If `weights` is a tensor of size\n `[batch_size]`, then the total loss for each sample of the batch is rescaled\n by the corresponding element in the `weights` vector.\n\n Args:\n labels: The ground truth output tensor, whose shape must match the shape of\n `predictions`.\n predictions: The predicted outputs, a tensor of size\n `[batch_size, d0, .. dN]` where N+1 is the total number of dimensions in\n `predictions`.\n weights: Coefficients for the loss a scalar, a tensor of shape\n `[batch_size]` or a tensor whose shape matches `predictions`.\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n\n Returns:\n A scalar `Tensor` that returns the weighted loss.\n\n Raises:\n ValueError: If the shape of `predictions` doesn't match that of `labels` or\n if the shape of `weights` is invalid. Also if `labels` or `predictions`\n is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Adds a pairwise-errors-squared loss to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.mean_squared_error", "docs": "Adds a Sum-of-Squares loss to the training procedure.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided, then\n the loss is simply scaled by the given value. If `weights` is a tensor of size\n `[batch_size]`, then the total loss for each sample of the batch is rescaled\n by the corresponding element in the `weights` vector. If the shape of\n `weights` matches the shape of `predictions`, then the loss of each\n measurable element of `predictions` is scaled by the corresponding value of\n `weights`.\n\n Args:\n labels: The ground truth output tensor, same dimensions as 'predictions'.\n predictions: The predicted outputs.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `losses` dimension).\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `predictions` doesn't match that of `labels` or\n if the shape of `weights` is invalid. Also if `labels` or `predictions`\n is None.\n\n @compatibility(TF2)\n\n `tf.compat.v1.losses.mean_squared_error` is mostly compatible with eager\n execution and `tf.function`. But, the `loss_collection` argument is\n ignored when executing eagerly and no loss will be written to the loss\n collections. You will need to either hold on to the return value manually\n or rely on `tf.keras.Model` loss tracking.\n\n\n To switch to native TF2 style, instantiate the\n `tf.keras.losses.MeanSquaredError` class and call the object instead.\n\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n loss = tf.compat.v1.losses.mean_squared_error(\n labels=labels,\n predictions=predictions,\n weights=weights,\n reduction=reduction)\n ```\n\n After:\n\n ```python\n loss_fn = tf.keras.losses.MeanSquaredError(\n reduction=reduction)\n loss = loss_fn(\n y_true=labels,\n y_pred=predictions,\n sample_weight=weights)\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :--------------- | :------------------------- |\n | `labels` | `y_true` | In `__call__()` method |\n | `predictions` | `y_pred` | In `__call__()` method |\n | `weights` | `sample_weight` | In `__call__()` method. |\n : : : The shape requirements for `sample_weight` is different from :\n : : : `weights`. Please check the [argument definition][api_docs] for :\n : : : details. :\n | `scope` | Not supported | - |\n | `loss_collection` | Not supported | Losses should be tracked |\n : : : explicitly or with Keras APIs, for example, [add_loss][add_loss], :\n : : : instead of via collections :\n | `reduction` | `reduction` | In constructor. Value of |\n : : : `tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE`, :\n : : : `tf.compat.v1.losses.Reduction.SUM`, :\n : : : `tf.compat.v1.losses.Reduction.NONE` in :\n : : : `tf.compat.v1.losses.softmax_cross_entropy` correspond to :\n : : : `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE`, :\n : : : `tf.keras.losses.Reduction.SUM`, :\n : : : `tf.keras.losses.Reduction.NONE`, respectively. If you :\n : : : used other value for `reduction`, including the default value :\n : : : `tf.compat.v1.losses.Reduction.SUM_BY_NONZERO_WEIGHTS`, there is :\n : : : no directly corresponding value. Please modify the loss :\n : : : implementation manually. :\n\n [add_loss]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_loss\n [api_docs]:https://www.tensorflow.org/api_docs/python/tf/keras/losses/MeanSquaredError#__call__\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> y_true = [1, 2, 3]\n >>> y_pred = [1, 3, 5]\n >>> weights = [0, 1, 0.25]\n >>> # samples with zero-weight are excluded from calculation when `reduction`\n >>> # argument is set to default value `Reduction.SUM_BY_NONZERO_WEIGHTS`\n >>> tf.compat.v1.losses.mean_squared_error(\n ... labels=y_true,\n ... predictions=y_pred,\n ... weights=weights).numpy()\n 1.0\n\n >>> tf.compat.v1.losses.mean_squared_error(\n ... labels=y_true,\n ... predictions=y_pred,\n ... weights=weights,\n ... reduction=tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE).numpy()\n 0.66667\n\n After:\n\n >>> y_true = [[1.0], [2.0], [3.0]]\n >>> y_pred = [[1.0], [3.0], [5.0]]\n >>> weights = [1, 1, 0.25]\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)\n >>> mse(y_true=y_true, y_pred=y_pred, sample_weight=weights).numpy()\n 0.66667\n\n @end_compatibility\n ", "desc": "Adds a Sum-of-Squares loss to the training procedure.", "type": "API"}, {"name": "tf.compat.v1.losses.Reduction", "docs": "Types of loss reduction.\n\n Contains the following values:\n\n * `NONE`: Un-reduced weighted losses with the same shape as input.\n * `SUM`: Scalar sum of weighted losses.\n * `MEAN`: Scalar `SUM` divided by sum of weights. DEPRECATED.\n * `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses.\n * `SUM_OVER_NONZERO_WEIGHTS`: Scalar `SUM` divided by number of non-zero\n weights. DEPRECATED.\n * `SUM_BY_NONZERO_WEIGHTS`: Same as `SUM_OVER_NONZERO_WEIGHTS`. DEPRECATED.\n ", "desc": "Types of loss reduction.", "type": "API"}, {"name": "tf.compat.v1.losses.sigmoid_cross_entropy", "docs": "Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided,\n then the loss is simply scaled by the given value. If `weights` is a\n tensor of shape `[batch_size]`, then the loss weights apply to each\n corresponding sample.\n\n If `label_smoothing` is nonzero, smooth the labels towards 1/2:\n\n new_multiclass_labels = multiclass_labels * (1 - label_smoothing)\n + 0.5 * label_smoothing\n\n Args:\n multi_class_labels: `[batch_size, num_classes]` target integer labels in\n `{0, 1}`.\n logits: Float `[batch_size, num_classes]` logits outputs of the network.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `multi_class_labels`, and must be broadcastable to `multi_class_labels`\n (i.e., all dimensions must be either `1`, or the same as the\n corresponding `losses` dimension).\n label_smoothing: If greater than `0` then smooth the labels.\n scope: The scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss `Tensor` of the same type as `logits`. If `reduction` is\n `NONE`, this has the same shape as `logits`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `logits` doesn't match that of\n `multi_class_labels` or if the shape of `weights` is invalid, or if\n `weights` is None. Also if `multi_class_labels` or `logits` is None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.", "type": "API"}, {"name": "tf.compat.v1.losses.softmax_cross_entropy", "docs": "Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided,\n then the loss is simply scaled by the given value. If `weights` is a\n tensor of shape `[batch_size]`, then the loss weights apply to each\n corresponding sample.\n\n If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:\n new_onehot_labels = onehot_labels * (1 - label_smoothing)\n + label_smoothing / num_classes\n\n Note that `onehot_labels` and `logits` must have the same shape,\n e.g. `[batch_size, num_classes]`. The shape of `weights` must be\n broadcastable to loss, whose shape is decided by the shape of `logits`.\n In case the shape of `logits` is `[batch_size, num_classes]`, loss is\n a `Tensor` of shape `[batch_size]`.\n\n Args:\n onehot_labels: One-hot-encoded labels.\n logits: Logits outputs of the network.\n weights: Optional `Tensor` that is broadcastable to loss.\n label_smoothing: If greater than 0 then smooth the labels.\n scope: the scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss `Tensor` of the same type as `logits`. If `reduction` is\n `NONE`, this has shape `[batch_size]`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `logits` doesn't match that of `onehot_labels`\n or if the shape of `weights` is invalid or if `weights` is None. Also if\n `onehot_labels` or `logits` is None.\n\n @compatibility(TF2)\n\n `tf.compat.v1.losses.softmax_cross_entropy` is mostly compatible with eager\n execution and `tf.function`. But, the `loss_collection` argument is\n ignored when executing eagerly and no loss will be written to the loss\n collections. You will need to either hold on to the return value manually\n or rely on `tf.keras.Model` loss tracking.\n\n\n To switch to native TF2 style, instantiate the\n `tf.keras.losses.CategoricalCrossentropy` class with `from_logits` set\n as `True` and call the object instead.\n\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n loss = tf.compat.v1.losses.softmax_cross_entropy(\n onehot_labels=onehot_labels,\n logits=logits,\n weights=weights,\n label_smoothing=smoothing)\n ```\n\n After:\n\n ```python\n loss_fn = tf.keras.losses.CategoricalCrossentropy(\n from_logits=True,\n label_smoothing=smoothing)\n loss = loss_fn(\n y_true=onehot_labels,\n y_pred=logits,\n sample_weight=weights)\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :--------------- | :------------------------- |\n | - | `from_logits` | Set `from_logits` as True |\n : : : to have identical behavior :\n | `onehot_labels` | `y_true` | In `__call__()` method |\n | `logits` | `y_pred` | In `__call__()` method |\n | `weights` | `sample_weight` | In `__call__()` method |\n | `label_smoothing` | `label_smoothing`| In constructor |\n | `scope` | Not supported | - |\n | `loss_collection` | Not supported | Losses should be tracked |\n : : : explicitly or with Keras :\n : : : APIs, for example, :\n : : : [add_loss][add_loss], :\n : : : instead of via collections :\n | `reduction` | `reduction` | In constructor. Value of |\n : : : `tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE`, :\n : : : `tf.compat.v1.losses.Reduction.SUM`, :\n : : : `tf.compat.v1.losses.Reduction.NONE` in :\n : : : `tf.compat.v1.losses.softmax_cross_entropy` correspond to :\n : : : `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE`, :\n : : : `tf.keras.losses.Reduction.SUM`, :\n : : : `tf.keras.losses.Reduction.NONE`, respectively. If you :\n : : : used other value for `reduction`, including the default value :\n : : : `tf.compat.v1.losses.Reduction.SUM_BY_NONZERO_WEIGHTS`, there is :\n : : : no directly corresponding value. Please modify the loss :\n : : : implementation manually. :\n\n [add_loss]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_loss\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> weights = [0.3, 0.7]\n >>> smoothing = 0.2\n >>> tf.compat.v1.losses.softmax_cross_entropy(y_true, y_pred, weights=weights,\n ... label_smoothing=smoothing).numpy()\n 0.57618\n\n After:\n\n >>> cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True,\n ... label_smoothing=smoothing)\n >>> cce(y_true, y_pred, sample_weight=weights).numpy()\n 0.57618\n\n @end_compatibility\n ", "desc": "Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.", "type": "API"}, {"name": "tf.compat.v1.losses.sparse_softmax_cross_entropy", "docs": "Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.\n\n `weights` acts as a coefficient for the loss. If a scalar is provided,\n then the loss is simply scaled by the given value. If `weights` is a\n tensor of shape `[batch_size]`, then the loss weights apply to each\n corresponding sample.\n\n Args:\n labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of\n `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`\n must be an index in `[0, num_classes)`. Other values will raise an\n exception when this op is run on CPU, and return `NaN` for corresponding\n loss and gradient rows on GPU.\n logits: Unscaled log probabilities of shape\n `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float16`, `float32` or\n `float64`.\n weights: Coefficients for the loss. This must be scalar or broadcastable to\n `labels` (i.e. same rank and each dimension is either 1 or the same).\n scope: the scope for the operations performed in computing the loss.\n loss_collection: collection to which the loss will be added.\n reduction: Type of reduction to apply to loss.\n\n Returns:\n Weighted loss `Tensor` of the same type as `logits`. If `reduction` is\n `NONE`, this has the same shape as `labels`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shapes of `logits`, `labels`, and `weights` are\n incompatible, or if any of them are None.\n\n @compatibility(eager)\n The `loss_collection` argument is ignored when executing eagerly. Consider\n holding on to the return value or collecting losses via a `tf.keras.Model`.\n @end_compatibility\n ", "desc": "Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`.", "type": "API"}, {"name": "tf.compat.v1.make_ndarray", "docs": "Create a numpy ndarray from a tensor.\n\n Create a numpy ndarray with the same shape and data as the tensor.\n\n For example:\n\n ```python\n # Tensor a has shape (2,3)\n a = tf.constant([[1,2,3],[4,5,6]])\n proto_tensor = tf.make_tensor_proto(a) # convert `tensor a` to a proto tensor\n tf.make_ndarray(proto_tensor) # output: array([[1, 2, 3],\n # [4, 5, 6]], dtype=int32)\n # output has shape (2,3)\n ```\n\n Args:\n tensor: A TensorProto.\n\n Returns:\n A numpy array with the tensor contents.\n\n Raises:\n TypeError: if tensor has unsupported type.\n\n ", "desc": "Create a numpy ndarray from a tensor.", "type": "API"}, {"name": "tf.compat.v1.make_template", "docs": "Given an arbitrary function, wrap it so that it does variable sharing.\n\n @compatibility(TF2)\n `tf.compat.v1.make_template` is a legacy API that is only compatible\n with eager execution enabled and `tf.function` if you combine it with\n `tf.compat.v1.keras.utils.track_tf1_style_variables`. See the model mapping\n migration guide section on `make_template` for more info:\n\n https://www.tensorflow.org/guide/migrate/model_mapping#using_tfcompatv1make_template_in_the_decorated_method\n\n Even if you use legacy apis for `variable_scope`-based variable reuse,\n we recommend using\n `tf.compat.v1.keras.utils.track_tf1_style_variables` directly and not using\n `tf.compat.v1.make_template`, as it interoperates with eager execution in a\n simpler and more predictable fashion than `make_template`.\n\n The TF2 API approach would be tracking your variables using\n `tf.Module`s or Keras layers and models rather than relying on\n `make_template`.\n @end_compatibility\n\n This wraps `func_` in a Template and partially evaluates it. Templates are\n functions that create variables the first time they are called and reuse them\n thereafter. In order for `func_` to be compatible with a `Template` it must\n have the following properties:\n\n * The function should create all trainable variables and any variables that\n should be reused by calling `tf.compat.v1.get_variable`. If a trainable\n variable is\n created using `tf.Variable`, then a ValueError will be thrown. Variables\n that are intended to be locals can be created by specifying\n `tf.Variable(..., trainable=false)`.\n * The function may use variable scopes and other templates internally to\n create and reuse variables, but it shouldn't use\n `tf.compat.v1.global_variables` to\n capture variables that are defined outside of the scope of the function.\n * Internal scopes and variable names should not depend on any arguments that\n are not supplied to `make_template`. In general you will get a ValueError\n telling you that you are trying to reuse a variable that doesn't exist\n if you make a mistake.\n\n In the following example, both `z` and `w` will be scaled by the same `y`. It\n is important to note that if we didn't assign `scalar_name` and used a\n different name for z and w that a `ValueError` would be thrown because it\n couldn't reuse the variable.\n\n ```python\n def my_op(x, scalar_name):\n var1 = tf.compat.v1.get_variable(scalar_name,\n shape=[],\n initializer=tf.compat.v1.constant_initializer(1))\n return x * var1\n\n scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y')\n\n z = scale_by_y(input1)\n w = scale_by_y(input2)\n ```\n\n As a safe-guard, the returned function will raise a `ValueError` after the\n first call if trainable variables are created by calling `tf.Variable`.\n\n If all of these are true, then 2 properties are enforced by the template:\n\n 1. Calling the same template multiple times will share all non-local\n variables.\n 2. Two different templates are guaranteed to be unique, unless you reenter the\n same variable scope as the initial definition of a template and redefine\n it. An examples of this exception:\n\n ```python\n def my_op(x, scalar_name):\n var1 = tf.compat.v1.get_variable(scalar_name,\n shape=[],\n initializer=tf.compat.v1.constant_initializer(1))\n return x * var1\n\n with tf.compat.v1.variable_scope('scope') as vs:\n scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op,\n scalar_name='y')\n z = scale_by_y(input1)\n w = scale_by_y(input2)\n\n # Creates a template that reuses the variables above.\n with tf.compat.v1.variable_scope(vs, reuse=True):\n scale_by_y2 = tf.compat.v1.make_template('scale_by_y', my_op,\n scalar_name='y')\n z2 = scale_by_y2(input1)\n w2 = scale_by_y2(input2)\n ```\n\n Depending on the value of `create_scope_now_`, the full variable scope may be\n captured either at the time of first call or at the time of construction. If\n this option is set to True, then all Tensors created by repeated calls to the\n template will have an extra trailing _N+1 to their name, as the first time the\n scope is entered in the Template constructor no Tensors are created.\n\n Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to\n reduce the likelihood of collisions with kwargs.\n\n Args:\n name_: A name for the scope created by this template. If necessary, the name\n will be made unique by appending `_N` to the name.\n func_: The function to wrap.\n create_scope_now_: Boolean controlling whether the scope should be created\n when the template is constructed or when the template is called. Default\n is False, meaning the scope is created when the template is called.\n unique_name_: When used, it overrides name_ and is not made unique. If a\n template of the same scope/unique_name already exists and reuse is false,\n an error is raised. Defaults to None.\n custom_getter_: Optional custom getter for variables used in `func_`. See\n the `tf.compat.v1.get_variable` `custom_getter` documentation for more\n information.\n **kwargs: Keyword arguments to apply to `func_`.\n\n Returns:\n A function to encapsulate a set of variables which should be created once\n and reused. An enclosing scope will be created either when `make_template`\n is called or when the result is called, depending on the value of\n `create_scope_now_`. Regardless of the value, the first time the template\n is called it will enter the scope with no reuse, and call `func_` to create\n variables, which are guaranteed to be unique. All subsequent calls will\n re-enter the scope and reuse those variables.\n\n Raises:\n ValueError: if `name_` is None.\n ", "desc": "Given an arbitrary function, wrap it so that it does variable sharing.", "type": "API"}, {"name": "tf.compat.v1.make_tensor_proto", "docs": "Create a TensorProto.\n\n In TensorFlow 2.0, representing tensors as protos should no longer be a\n common workflow. That said, this utility function is still useful for\n generating TF Serving request protos:\n\n ```python\n request = tensorflow_serving.apis.predict_pb2.PredictRequest()\n request.model_spec.name = \"my_model\"\n request.model_spec.signature_name = \"serving_default\"\n request.inputs[\"images\"].CopyFrom(tf.make_tensor_proto(X_new))\n ```\n\n `make_tensor_proto` accepts \"values\" of a python scalar, a python list, a\n numpy ndarray, or a numpy scalar.\n\n If \"values\" is a python scalar or a python list, make_tensor_proto\n first convert it to numpy ndarray. If dtype is None, the\n conversion tries its best to infer the right numpy data\n type. Otherwise, the resulting numpy array has a compatible data\n type with the given dtype.\n\n In either case above, the numpy ndarray (either the caller provided\n or the auto-converted) must have the compatible type with dtype.\n\n `make_tensor_proto` then converts the numpy array to a tensor proto.\n\n If \"shape\" is None, the resulting tensor proto represents the numpy\n array precisely.\n\n Otherwise, \"shape\" specifies the tensor's shape and the numpy array\n can not have more elements than what \"shape\" specifies.\n\n Args:\n values: Values to put in the TensorProto.\n dtype: Optional tensor_pb2 DataType value.\n shape: List of integers representing the dimensions of tensor.\n verify_shape: Boolean that enables verification of a shape of values.\n allow_broadcast: Boolean that enables allowing scalars and 1 length vector\n broadcasting. Cannot be true when verify_shape is true.\n\n Returns:\n A `TensorProto`. Depending on the type, it may contain data in the\n \"tensor_content\" attribute, which is not directly useful to Python programs.\n To access the values you should convert the proto back to a numpy ndarray\n with `tf.make_ndarray(proto)`.\n\n If `values` is a `TensorProto`, it is immediately returned; `dtype` and\n `shape` are ignored.\n\n Raises:\n TypeError: if unsupported types are provided.\n ValueError: if arguments have inappropriate values or if verify_shape is\n True and shape of values is not equals to a shape from the argument.\n\n ", "desc": "Create a TensorProto.", "type": "API"}, {"name": "tf.compat.v1.manip", "docs": "Operators for manipulating tensors.\n", "desc": "Operators for manipulating tensors.", "type": "API"}, {"name": "tf.compat.v1.manip.batch_to_space_nd", "docs": "BatchToSpace for N-D tensors of type T.\n\n This operation reshapes the \"batch\" dimension 0 into `M + 1` dimensions of shape\n `block_shape + [batch]`, interleaves these blocks back into the grid defined by\n the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as\n the input. The spatial dimensions of this intermediate result are then\n optionally cropped according to `crops` to produce the output. This is the\n reverse of SpaceToBatch. See below for a precise description.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has M dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n crops: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input\n dimension `i + 1`, which corresponds to spatial dimension `i`. It is\n required that\n `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.\n\n This operation is equivalent to the following steps:\n\n 1. Reshape `input` to `reshaped` of shape:\n [block_shape[0], ..., block_shape[M-1],\n batch / prod(block_shape),\n input_shape[1], ..., input_shape[N-1]]\n\n 2. Permute dimensions of `reshaped` to produce `permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1], block_shape[0],\n ...,\n input_shape[M], block_shape[M-1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n 3. Reshape `permuted` to produce `reshaped_permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0],\n ...,\n input_shape[M] * block_shape[M-1],\n\n input_shape[M+1],\n ...,\n input_shape[N-1]]\n\n 4. Crop the start and end of dimensions `[1, ..., M]` of\n `reshaped_permuted` according to `crops` to produce the output of shape:\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],\n ...,\n input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n Some examples:\n\n (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 3]` and value:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n The output tensor has shape `[1, 4, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [2, 0]]`:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n The output tensor has shape `[2, 2, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for N-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.manip.gather_nd", "docs": "Gather slices from `params` into a Tensor with shape specified by `indices`.\n\n `indices` is a `Tensor` of indices into `params`. The index vectors are\n arranged along the last axis of `indices`.\n\n This is similar to `tf.gather`, in which `indices` defines slices into the\n first dimension of `params`. In `tf.gather_nd`, `indices` defines slices into the\n first `N` dimensions of `params`, where `N = indices.shape[-1]`.\n\n Caution: On CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n\n ## Gathering scalars\n\n In the simplest case the vectors in `indices` index the full rank of `params`:\n\n >>> tf.gather_nd(\n ... indices=[[0, 0],\n ... [1, 1]],\n ... params = [['a', 'b'],\n ... ['c', 'd']]).numpy()\n array([b'a', b'd'], dtype=object)\n\n In this case the result has 1-axis fewer than `indices`, and each index vector\n is replaced by the scalar indexed from `params`.\n\n In this case the shape relationship is:\n\n ```\n index_depth = indices.shape[-1]\n assert index_depth == params.shape.rank\n result_shape = indices.shape[:-1]\n ```\n\n If `indices` has a rank of `K`, it is helpful to think `indices` as a\n (K-1)-dimensional tensor of indices into `params`.\n\n ## Gathering slices\n\n If the index vectors do not index the full rank of `params` then each location\n in the result contains a slice of params. This example collects rows from a\n matrix:\n\n >>> tf.gather_nd(\n ... indices = [[1],\n ... [0]],\n ... params = [['a', 'b', 'c'],\n ... ['d', 'e', 'f']]).numpy()\n array([[b'd', b'e', b'f'],\n [b'a', b'b', b'c']], dtype=object)\n\n Here `indices` contains `[2]` index vectors, each with a length of `1`.\n The index vectors each refer to rows of the `params` matrix. Each\n row has a shape of `[3]` so the output shape is `[2, 3]`.\n\n In this case, the relationship between the shapes is:\n\n ```\n index_depth = indices.shape[-1]\n outer_shape = indices.shape[:-1]\n assert index_depth <= params.shape.rank\n inner_shape = params.shape[index_depth:]\n output_shape = outer_shape + inner_shape\n ```\n\n It is helpful to think of the results in this case as tensors-of-tensors.\n The shape of the outer tensor is set by the leading dimensions of `indices`.\n While the shape of the inner tensors is the shape of a single slice.\n\n ## Batches\n\n Additionally both `params` and `indices` can have `M` leading batch\n dimensions that exactly match. In this case `batch_dims` must be set to `M`.\n\n For example, to collect one row from each of a batch of matrices you could\n set the leading elements of the index vectors to be their location in the\n batch:\n\n >>> tf.gather_nd(\n ... indices = [[0, 1],\n ... [1, 0],\n ... [2, 4],\n ... [3, 2],\n ... [4, 1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n The `batch_dims` argument lets you omit those leading location dimensions\n from the index:\n\n >>> tf.gather_nd(\n ... batch_dims=1,\n ... indices = [[1],\n ... [0],\n ... [4],\n ... [2],\n ... [1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n This is equivalent to caling a separate `gather_nd` for each location in the\n batch dimensions.\n\n\n >>> params=tf.zeros([5, 7, 3])\n >>> indices=tf.zeros([5, 1])\n >>> batch_dims = 1\n >>>\n >>> index_depth = indices.shape[-1]\n >>> batch_shape = indices.shape[:batch_dims]\n >>> assert params.shape[:batch_dims] == batch_shape\n >>> outer_shape = indices.shape[batch_dims:-1]\n >>> assert index_depth <= params.shape.rank\n >>> inner_shape = params.shape[batch_dims + index_depth:]\n >>> output_shape = batch_shape + outer_shape + inner_shape\n >>> output_shape.as_list()\n [5, 3]\n\n ### More examples\n\n Indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'a1', b'b1'],\n [b'c1', b'd1']]], dtype=object)\n\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 1], [1, 0]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 0, 1], [1, 0, 1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([b'b0', b'b1'], dtype=object)\n\n The examples below are for the case when only indices have leading extra\n dimensions. If both 'params' and 'indices' have leading batch dimensions, use\n the 'batch_dims' parameter to run gather_nd in batch mode.\n\n Batched indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0]], [[0, 1]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[b'a'],\n [b'b']], dtype=object)\n\n\n\n Batched slice indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[[b'c', b'd']],\n [[b'a', b'b']]], dtype=object)\n\n\n Batched indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[[b'a1', b'b1'],\n [b'c1', b'd1']]],\n [[[b'a0', b'b0'],\n [b'c0', b'd0']]]], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0'],\n [b'a1', b'b1']],\n [[b'a0', b'b0'],\n [b'c1', b'd1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'b0', b'b1'],\n [b'd0', b'c1']], dtype=object)\n\n\n Examples with batched 'params' and 'indices':\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[1],\n ... [0]],\n ... params = [[['a0', 'b0'],\n ... ['c0', 'd0']],\n ... [['a1', 'b1'],\n ... ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0']],\n [[b'a1', b'b1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1, 0]], [[0, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0'],\n [b'b1']], dtype=object)\n\n\n See also `tf.gather`.\n\n Args:\n params: A `Tensor`. The tensor from which to gather values.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n name: A name for the operation (optional).\n batch_dims: An integer or a scalar 'Tensor'. The number of batch dimensions.\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` into a Tensor with shape specified by `indices`.", "type": "API"}, {"name": "tf.compat.v1.manip.reshape", "docs": "Reshapes a tensor.\n\n Given `tensor`, this operation returns a new `tf.Tensor` that has the same\n values as `tensor` in the same order, except with a new shape given by\n `shape`.\n\n >>> t1 = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> print(tf.shape(t1).numpy())\n [2 3]\n >>> t2 = tf.reshape(t1, [6])\n >>> t2\n \n >>> tf.reshape(t2, [3, 2])\n \n\n The `tf.reshape` does not change the order of or the total number of elements\n in the tensor, and so it can reuse the underlying data buffer. This makes it\n a fast operation independent of how big of a tensor it is operating on.\n\n >>> tf.reshape([1, 2, 3], [2, 2])\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Input to reshape is a tensor with 3 values, but the\n requested shape has 4\n\n To instead reorder the data to rearrange the dimensions of a tensor, see\n `tf.transpose`.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [3, 2]).numpy()\n array([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n >>> tf.transpose(t, perm=[1, 0]).numpy()\n array([[1, 4],\n [2, 5],\n [3, 6]], dtype=int32)\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total size remains constant. In particular,\n a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can\n be -1.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [-1])\n \n >>> tf.reshape(t, [3, -1])\n \n >>> tf.reshape(t, [-1, 2])\n \n\n `tf.reshape(t, [])` reshapes a tensor `t` with one element to a scalar.\n\n >>> tf.reshape([7], []).numpy()\n 7\n\n More examples:\n\n >>> t = [1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> print(tf.shape(t).numpy())\n [9]\n >>> tf.reshape(t, [3, 3])\n \n\n >>> t = [[[1, 1], [2, 2]],\n ... [[3, 3], [4, 4]]]\n >>> print(tf.shape(t).numpy())\n [2 2 2]\n >>> tf.reshape(t, [2, 4])\n \n\n >>> t = [[[1, 1, 1],\n ... [2, 2, 2]],\n ... [[3, 3, 3],\n ... [4, 4, 4]],\n ... [[5, 5, 5],\n ... [6, 6, 6]]]\n >>> print(tf.shape(t).numpy())\n [3 2 3]\n >>> # Pass '[-1]' to flatten 't'.\n >>> tf.reshape(t, [-1])\n \n >>> # -- Using -1 to infer the shape --\n >>> # Here -1 is inferred to be 9:\n >>> tf.reshape(t, [2, -1])\n \n >>> # -1 is inferred to be 2:\n >>> tf.reshape(t, [-1, 9])\n \n >>> # -1 is inferred to be 3:\n >>> tf.reshape(t, [ 2, -1, 3])\n \n\n Args:\n tensor: A `Tensor`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Defines the shape of the output tensor.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reshapes a tensor.", "type": "API"}, {"name": "tf.compat.v1.manip.reverse", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `int32` tensor `axis` representing the set of\n dimensions of `tensor` to reverse. This operation reverses each dimension\n `i` for which there exists `j` s.t. `axis[j] == i`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions specified\n in `axis` may be 0 or more entries. If an index is specified more than\n once, a InvalidArgument error is raised.\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [3] or 'dims' is [-1]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is '[1]' (or 'dims' is '[-3]')\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is '[2]' (or 'dims' is '[-2]')\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `int64`, `uint64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. The indices of the dimensions to reverse. Must be in the range\n `[-rank(tensor), rank(tensor))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.manip.roll", "docs": "Rolls the elements of a tensor along an axis.\n\n The elements are shifted positively (towards larger indices) by the offset of\n `shift` along the dimension of `axis`. Negative `shift` values will shift\n elements in the opposite direction. Elements that roll passed the last position\n will wrap around to the first and vice versa. Multiple shifts along multiple\n axes may be specified.\n\n For example:\n\n ```\n # 't' is [0, 1, 2, 3, 4]\n roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]\n\n # shifting along multiple dimensions\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]\n\n # shifting along the same axis multiple times\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]\n ```\n\n Args:\n input: A `Tensor`.\n shift: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which\n elements are shifted positively (towards larger indices) along the dimension\n specified by `axis[i]`. Negative shifts will roll the elements in the opposite\n direction.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension that the shift\n `shift[i]` should occur. If the same axis is referenced more than once, the\n total shift for that axis will be the sum of all the shifts that belong to that\n axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Rolls the elements of a tensor along an axis.", "type": "API"}, {"name": "tf.compat.v1.manip.scatter_nd", "docs": "Scatters `updates` into a tensor of shape `shape` according to `indices`.\n\n Update the input tensor by scattering sparse `updates` according to individual values at the specified `indices`.\n This op returns an `output` tensor with the `shape` you specify. This op is the\n inverse of the `tf.gather_nd` operator which extracts values or slices from a\n given tensor.\n\n This operation is similar to `tf.tensor_scatter_nd_add`, except that the tensor\n is zero-initialized. Calling `tf.scatter_nd(indices, values, shape)`\n is identical to calling\n `tf.tensor_scatter_nd_add(tf.zeros(shape, values.dtype), indices, values)`\n\n If `indices` contains duplicates, the duplicate `values` are accumulated\n (summed).\n\n **WARNING**: The order in which updates are applied is nondeterministic, so the\n output will be nondeterministic if `indices` contains duplicates;\n numbers summed in different order may yield different results because of some\n numerical approximation issues.\n\n `indices` is an integer tensor of shape `shape`. The last dimension\n of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices of elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`.\n\n `updates` is a tensor with shape:\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of the scatter op is to insert individual elements in\n a tensor by index. Consider an example where you want to insert 4 scattered\n elements in a rank-1 tensor with 8 elements.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n shape = tf.constant([8])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [0, 11, 0, 10, 9, 0, 0, 12]\n\n You can also insert entire slices of a higher rank tensor all at once. For\n example, you can insert two slices in the first dimension of a rank-3 tensor\n with two matrices of new values.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n shape = tf.constant([4, 4, 4])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],\n [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Tensor of indices.\n updates: A `Tensor`. Values to scatter into the output tensor.\n shape: A `Tensor`. Must have the same type as `indices`.\n 1-D. The shape of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `updates`.\n ", "desc": "Scatters `updates` into a tensor of shape `shape` according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.manip.space_to_batch_nd", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.manip.tile", "docs": "Constructs a tensor by tiling a given tensor.\n\n This operation creates a new tensor by replicating `input` `multiples` times.\n The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,\n and the values of `input` are replicated `multiples[i]` times along the 'i'th\n dimension. For example, tiling `[a b c d]` by `[2]` produces\n `[a b c d a b c d]`.\n\n >>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32)\n >>> b = tf.constant([1,2], tf.int32)\n >>> tf.tile(a, b)\n \n >>> c = tf.constant([2,1], tf.int32)\n >>> tf.tile(a, c)\n \n >>> d = tf.constant([2,2], tf.int32)\n >>> tf.tile(a, d)\n \n\n Args:\n input: A `Tensor`. 1-D or higher.\n multiples: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. Length must be the same as the number of dimensions in `input`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Constructs a tensor by tiling a given tensor.", "type": "API"}, {"name": "tf.compat.v1.map_fn", "docs": "Transforms `elems` by applying `fn` to each element unstacked on axis 0. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dtype)`. They will be removed in a future version.\nInstructions for updating:\nUse fn_output_signature instead\n\nSee also `tf.scan`.\n\n`map_fn` unstacks `elems` on axis 0 to obtain a sequence of elements;\ncalls `fn` to transform each element; and then stacks the transformed\nvalues back together.\n\n#### Mapping functions with single-Tensor inputs and outputs\n\nIf `elems` is a single tensor and `fn`'s signature is `tf.Tensor->tf.Tensor`,\nthen `map_fn(fn, elems)` is equivalent to\n`tf.stack([fn(elem) for elem in tf.unstack(elems)])`. E.g.:\n\n>>> tf.map_fn(fn=lambda t: tf.range(t, t + 3), elems=tf.constant([3, 5, 2]))\n\n\n`map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape`.\n\n#### Mapping functions with multi-arity inputs and outputs\n\n`map_fn` also supports functions with multi-arity inputs and outputs:\n\n* If `elems` is a tuple (or nested structure) of tensors, then those tensors\n must all have the same outer-dimension size (`num_elems`); and `fn` is\n used to transform each tuple (or structure) of corresponding slices from\n `elems`. E.g., if `elems` is a tuple `(t1, t2, t3)`, then `fn` is used to\n transform each tuple of slices `(t1[i], t2[i], t3[i])`\n (where `0 <= i < num_elems`).\n\n* If `fn` returns a tuple (or nested structure) of tensors, then the\n result is formed by stacking corresponding elements from those structures.\n\n#### Specifying `fn`'s output signature\n\nIf `fn`'s input and output signatures are different, then the output\nsignature must be specified using `fn_output_signature`. (The input and\noutput signatures are differ if their structures, dtypes, or tensor types do\nnot match). E.g.:\n\n>>> tf.map_fn(fn=tf.strings.length, # input & output have different dtypes\n... elems=tf.constant([\"hello\", \"moon\"]),\n... fn_output_signature=tf.int32)\n\n>>> tf.map_fn(fn=tf.strings.join, # input & output have different structures\n... elems=[tf.constant(['The', 'A']), tf.constant(['Dog', 'Cat'])],\n... fn_output_signature=tf.string)\n\n\n`fn_output_signature` can be specified using any of the following:\n\n* A `tf.DType` or `tf.TensorSpec` (to describe a `tf.Tensor`)\n* A `tf.RaggedTensorSpec` (to describe a `tf.RaggedTensor`)\n* A `tf.SparseTensorSpec` (to describe a `tf.sparse.SparseTensor`)\n* A (possibly nested) tuple, list, or dict containing the above types.\n\n#### RaggedTensors\n\n`map_fn` supports `tf.RaggedTensor` inputs and outputs. In particular:\n\n* If `elems` is a `RaggedTensor`, then `fn` will be called with each\n row of that ragged tensor.\n * If `elems` has only one ragged dimension, then the values passed to\n `fn` will be `tf.Tensor`s.\n * If `elems` has multiple ragged dimensions, then the values passed to\n `fn` will be `tf.RaggedTensor`s with one fewer ragged dimension.\n\n* If the result of `map_fn` should be a `RaggedTensor`, then use a\n `tf.RaggedTensorSpec` to specify `fn_output_signature`.\n * If `fn` returns `tf.Tensor`s with varying sizes, then use a\n `tf.RaggedTensorSpec` with `ragged_rank=0` to combine them into a\n single ragged tensor (which will have ragged_rank=1).\n * If `fn` returns `tf.RaggedTensor`s, then use a `tf.RaggedTensorSpec`\n with the same `ragged_rank`.\n\n>>> # Example: RaggedTensor input\n>>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n>>> tf.map_fn(tf.reduce_sum, rt, fn_output_signature=tf.int32)\n\n\n>>> # Example: RaggedTensor output\n>>> elems = tf.constant([3, 5, 0, 2])\n>>> tf.map_fn(tf.range, elems,\n... fn_output_signature=tf.RaggedTensorSpec(shape=[None],\n... dtype=tf.int32))\n\n\nNote: `map_fn` should only be used if you need to map a function over the\n*rows* of a `RaggedTensor`. If you wish to map a function over the\nindividual values, then you should use:\n\n* `tf.ragged.map_flat_values(fn, rt)`\n (if fn is expressible as TensorFlow ops)\n* `rt.with_flat_values(map_fn(fn, rt.flat_values))`\n (otherwise)\n\nE.g.:\n\n>>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n>>> tf.ragged.map_flat_values(lambda x: x + 2, rt)\n\n\n#### SparseTensors\n\n`map_fn` supports `tf.sparse.SparseTensor` inputs and outputs. In particular:\n\n* If `elems` is a `SparseTensor`, then `fn` will be called with each row\n of that sparse tensor. In particular, the value passed to `fn` will be a\n `tf.sparse.SparseTensor` with one fewer dimension than `elems`.\n\n* If the result of `map_fn` should be a `SparseTensor`, then use a\n `tf.SparseTensorSpec` to specify `fn_output_signature`. The individual\n `SparseTensor`s returned by `fn` will be stacked into a single\n `SparseTensor` with one more dimension.\n\n>>> # Example: SparseTensor input\n>>> st = tf.sparse.SparseTensor([[0, 0], [2, 0], [2, 1]], [2, 3, 4], [4, 4])\n>>> tf.map_fn(tf.sparse.reduce_sum, st, fn_output_signature=tf.int32)\n\n\n>>> # Example: SparseTensor output\n>>> tf.sparse.to_dense(\n... tf.map_fn(tf.sparse.eye, tf.constant([2, 3]),\n... fn_output_signature=tf.SparseTensorSpec(None, tf.float32)))\n\n\nNote: `map_fn` should only be used if you need to map a function over the\n*rows* of a `SparseTensor`. If you wish to map a function over the nonzero\nvalues, then you should use:\n\n* If the function is expressible as TensorFlow ops, use:\n ```python\n tf.sparse.SparseTensor(st.indices, fn(st.values), st.dense_shape)\n ```\n* Otherwise, use:\n ```python\n tf.sparse.SparseTensor(st.indices, tf.map_fn(fn, st.values),\n st.dense_shape)\n ```\n\n#### `map_fn` vs. vectorized operations\n\n`map_fn` will apply the operations used by `fn` to each element of `elems`,\nresulting in `O(elems.shape[0])` total operations. This is somewhat\nmitigated by the fact that `map_fn` can process elements in parallel.\nHowever, a transform expressed using `map_fn` is still typically less\nefficient than an equivalent transform expressed using vectorized operations.\n\n`map_fn` should typically only be used if one of the following is true:\n\n* It is difficult or expensive to express the desired transform with\n vectorized operations.\n* `fn` creates large intermediate values, so an equivalent vectorized\n transform would take too much memory.\n* Processing elements in parallel is more efficient than an equivalent\n vectorized transform.\n* Efficiency of the transform is not critical, and using `map_fn` is\n more readable.\n\nE.g., the example given above that maps `fn=lambda t: tf.range(t, t + 3)`\nacross `elems` could be rewritten more efficiently using vectorized ops:\n\n>>> elems = tf.constant([3, 5, 2])\n>>> tf.range(3) + tf.expand_dims(elems, 1)\n\n\nIn some cases, `tf.vectorized_map` can be used to automatically convert a\nfunction to a vectorized equivalent.\n\n#### Eager execution\n\nWhen executing eagerly, `map_fn` does not execute in parallel even if\n`parallel_iterations` is set to a value > 1. You can still get the\nperformance benefits of running a function in parallel by using the\n`tf.function` decorator:\n\n>>> fn=lambda t: tf.range(t, t + 3)\n>>> @tf.function\n... def func(elems):\n... return tf.map_fn(fn, elems, parallel_iterations=3)\n>>> func(tf.constant([3, 5, 2]))\n\n\n\nNote: if you use the `tf.function` decorator, any non-TensorFlow Python\ncode that you may have written in your function won't get executed. See\n`tf.function` for more details. The recommendation would be to debug without\n`tf.function` but switch to it to get performance benefits of running `map_fn`\nin parallel.\n\nArgs:\n fn: The callable to be performed. It accepts one argument, which will have\n the same (possibly nested) structure as `elems`. Its output must have the\n same structure as `fn_output_signature` if one is provided; otherwise it\n must have the same structure as `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unstacked along their first dimension. `fn` will be applied to the\n nested sequence of the resulting slices. `elems` may include ragged and\n sparse tensors. `elems` must consist of at least one tensor.\n dtype: Deprecated: Equivalent to `fn_output_signature`.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel. When graph building, the default value is 10. While executing\n eagerly, the default value is set to 1.\n back_prop: (optional) False disables support for back propagation.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n infer_shape: (optional) False disables tests for consistent output shapes.\n name: (optional) Name prefix for the returned tensors.\n fn_output_signature: The output signature of `fn`. Must be specified if\n `fn`'s input and output signatures are different (i.e., if their\n structures, dtypes, or tensor types do not match).\n `fn_output_signature` can be specified using any of the following:\n\n * A `tf.DType` or `tf.TensorSpec` (to describe a `tf.Tensor`)\n * A `tf.RaggedTensorSpec` (to describe a `tf.RaggedTensor`)\n * A `tf.SparseTensorSpec` (to describe a `tf.sparse.SparseTensor`)\n * A (possibly nested) tuple, list, or dict containing the above types.\n\nReturns:\n A tensor or (possibly nested) sequence of tensors. Each tensor stacks the\n results of applying `fn` to tensors unstacked from `elems` along the first\n dimension, from first to last. The result may include ragged and sparse\n tensors.\n\nRaises:\n TypeError: if `fn` is not callable or the structure of the output of\n `fn` and `fn_output_signature` do not match.\n ValueError: if the lengths of the output of `fn` and `fn_output_signature`\n do not match, or if the `elems` does not contain any tensor.\n\nExamples:\n\n >>> elems = np.array([1, 2, 3, 4, 5, 6])\n >>> tf.map_fn(lambda x: x * x, elems)\n \n\n >>> elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))\n >>> tf.map_fn(lambda x: x[0] * x[1], elems, fn_output_signature=tf.int64)\n \n\n >>> elems = np.array([1, 2, 3])\n >>> tf.map_fn(lambda x: (x, -x), elems,\n ... fn_output_signature=(tf.int64, tf.int64))\n (,\n )", "desc": "Transforms `elems` by applying `fn` to each element unstacked on axis 0. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.matching_files", "docs": "Returns the set of files matching one or more glob patterns.\n\n Note that this routine only supports wildcard characters in the\n basename portion of the pattern, not in the directory portion.\n Note also that the order of filenames returned is deterministic.\n\n Args:\n pattern: A `Tensor` of type `string`.\n Shell wildcard pattern(s). Scalar or vector of type string.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the set of files matching one or more glob patterns.", "type": "API"}, {"name": "tf.compat.v1.math", "docs": "Math Operations.\n\nNote: Functions taking `Tensor` arguments can also take anything accepted by\n`tf.convert_to_tensor`.\n\nNote: Elementwise binary operations in TensorFlow follow [numpy-style\nbroadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).\n\nTensorFlow provides a variety of math functions including:\n\n* Basic arithmetic operators and trigonometric functions.\n* Special math functions (like: `tf.math.igamma` and `tf.math.zeta`)\n* Complex number functions (like: `tf.math.imag` and `tf.math.angle`)\n* Reductions and scans (like: `tf.math.reduce_mean` and `tf.math.cumsum`)\n* Segment functions (like: `tf.math.segment_sum`)\n\nSee: `tf.linalg` for matrix and tensor functions.\n\n\n\n## About Segmentation\n\nTensorFlow provides several operations that you can use to perform common\nmath computations on tensor segments.\nHere a segmentation is a partitioning of a tensor along\nthe first dimension, i.e. it defines a mapping from the first dimension onto\n`segment_ids`. The `segment_ids` tensor should be the size of\nthe first dimension, `d0`, with consecutive IDs in the range `0` to `k`,\nwhere `k [[0 0 0 0]\n# [5 6 7 8]]\n```\n\nThe standard `segment_*` functions assert that the segment indices are sorted.\nIf you have unsorted indices use the equivalent `unsorted_segment_` function.\nThese functions take an additional argument `num_segments` so that the output\ntensor can be efficiently allocated.\n\n``` python\nc = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\ntf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)\n# ==> [[ 6, 8, 10, 12],\n# [-1, -2, -3, -4]]\n```\n\n\n", "desc": "Math Operations.", "type": "API"}, {"name": "tf.compat.v1.math.abs", "docs": "Computes the absolute value of a tensor.\n\n Given a tensor of integer or floating-point values, this operation returns a\n tensor of the same type, where each element contains the absolute value of the\n corresponding element in the input.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of type\n `float32` or `float64` that is the absolute value of each element in `x`. For\n a complex number \\\\(a + bj\\\\), its absolute value is computed as\n \\\\(\\sqrt{a^2 + b^2}\\\\).\n\n For example:\n\n >>> # real number\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.abs(x)\n \n\n >>> # complex number\n >>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])\n >>> tf.abs(x)\n \n\n Args:\n x: A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`,\n `int32`, `int64`, `complex64` or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`,\n with absolute values. Note, for `complex64` or `complex128` input, the\n returned `Tensor` will be of type `float32` or `float64`, respectively.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)`", "desc": "Computes the absolute value of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.accumulate_n", "docs": "Returns the element-wise sum of a list of tensors.\n\n Optionally, pass `shape` and `tensor_dtype` for shape and type checking,\n otherwise, these are inferred.\n\n `accumulate_n` performs the same operation as `tf.math.add_n`.\n\n For example:\n\n ```python\n a = tf.constant([[1, 2], [3, 4]])\n b = tf.constant([[5, 0], [0, 6]])\n tf.math.accumulate_n([a, b, a]) # [[7, 4], [6, 14]]\n\n # Explicitly pass shape and type\n tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)\n # [[7, 4],\n # [6, 14]]\n ```\n\n Args:\n inputs: A list of `Tensor` objects, each with same shape and type.\n shape: Expected shape of elements of `inputs` (optional). Also controls the\n output shape of this op, which may affect type inference in other ops. A\n value of `None` means \"infer the input shape from the shapes in `inputs`\".\n tensor_dtype: Expected data type of `inputs` (optional). A value of `None`\n means \"infer the input dtype from `inputs[0]`\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Returns the element-wise sum of a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.math.acos", "docs": "Computes acos of x element-wise.\n\n Provided an input tensor, the `tf.math.acos` operation\n returns the inverse cosine of each element of the tensor.\n If `y = tf.math.cos(x)` then, `x = tf.math.acos(y)`.\n\n Input range is `[-1, 1]` and the output has a range of `[0, pi]`.\n\n For example:\n\n >>> x = tf.constant([1.0, -0.5, 3.4, 0.2, 0.0, -2], dtype = tf.float32)\n >>> tf.math.acos(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Computes acos of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.acosh", "docs": "Computes inverse hyperbolic cosine of x element-wise.\n\n Given an input tensor, the function computes inverse hyperbolic cosine of every element.\n Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.\n\n ```python\n x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.add", "docs": "Returns x + y element-wise.\n\n Example usages below.\n\n Add a scalar and a list:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.add(x, y)\n \n\n Note that binary `+` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x + y\n \n\n Add a tensor and a list of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([1, 2, 3, 4, 5])\n >>> tf.add(x, y)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**7 + 1, 2**7 + 2]\n >>> tf.add(x, y)\n \n\n When adding two input values of different shapes, `Add` follows NumPy\n broadcasting rules. The two input array shapes are compared element-wise.\n Starting with the trailing dimensions, the two dimensions either have to be\n equal or one of them needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(1, 2, 1, 3)\n >>> y = np.ones(6).reshape(2, 1, 3, 1)\n >>> tf.add(x, y).shape.as_list()\n [2, 2, 3, 3]\n\n Another example with two arrays of different dimension.\n\n >>> x = np.ones([1, 2, 1, 4])\n >>> y = np.ones([3, 4])\n >>> tf.add(x, y).shape.as_list()\n [1, 2, 3, 4]\n\n The reduction version of this elementwise operation is `tf.math.reduce_sum`\n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: bfloat16, half,\n float32, float64, uint8, int8, int16, int32, int64, complex64, complex128,\n string.\n y: A `tf.Tensor`. Must have the same type as x.\n name: A name for the operation (optional)\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.add_n", "docs": "Adds all input tensors element-wise.\n\n `tf.math.add_n` performs the same operation as `tf.math.accumulate_n`.\n\n This op does not [broadcast](\n https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)\n its inputs. If you need broadcasting, use `tf.math.add` (or the `+` operator)\n instead.\n\n For example:\n\n >>> a = tf.constant([[3, 5], [4, 8]])\n >>> b = tf.constant([[1, 6], [2, 9]])\n >>> tf.math.add_n([a, b, a])\n \n\n Args:\n inputs: A list of `tf.Tensor` or `tf.IndexedSlices` objects, each with the\n same shape and type. `tf.IndexedSlices` objects will be converted into\n dense tensors prior to adding.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Adds all input tensors element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.angle", "docs": "Returns the element-wise argument of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the argument of each element in `input` considered as a complex number.\n\n The elements in `input` are considered to be complex numbers of the form\n \\\\(a + bj\\\\), where *a* is the real part and *b* is the imaginary part.\n If `input` is real then *b* is zero by definition.\n\n The argument returned by this function is of the form \\\\(atan2(b, a)\\\\).\n If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```\n input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)\n tf.math.angle(input).numpy()\n # ==> array([2.0131705, 1.056345 ], dtype=float32)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the element-wise argument of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.math.argmax", "docs": "Returns the index with the largest value across axes of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nNote that in case of ties the identity of the return value is not guaranteed.\n\nUsage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmax(input = a)\n c = tf.keras.backend.eval(b)\n # c = 4\n # here a[4] = 166.32 which is the largest element of a across axis 0\n ```\n\nArgs:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n axis: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n int16, int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int16, tf.uint16, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` of type `output_type`.", "desc": "Returns the index with the largest value across axes of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.argmin", "docs": "Returns the index with the smallest value across axes of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dimension)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nNote that in case of ties the identity of the return value is not guaranteed.\n\nUsage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n\nArgs:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` of type `output_type`.", "desc": "Returns the index with the smallest value across axes of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.asin", "docs": "Computes the trignometric inverse sine of x element-wise.\n\n The `tf.math.asin` operation returns the inverse of `tf.math.sin`, such that\n if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`.\n\n **Note**: The output of `tf.math.asin` will lie within the invertible range\n of sine, i.e [-pi/2, pi/2].\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.sin(x) # [0.8659266, 0.7068252]\n\n tf.math.asin(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.asinh", "docs": "Computes inverse hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic sine\n for every element in the tensor. Both input and output has a range of\n `[-inf, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.atan", "docs": "Computes the trignometric inverse tangent of x element-wise.\n\n The `tf.math.atan` operation returns the inverse of `tf.math.tan`, such that\n if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`.\n\n **Note**: The output of `tf.math.atan` will lie within the invertible range\n of tan, i.e (-pi/2, pi/2).\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.tan(x) # [1.731261, 0.99920404]\n\n tf.math.atan(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse tangent of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.atan2", "docs": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.\n\n This is the angle \\\\( \\theta \\in [-\\pi, \\pi] \\\\) such that\n \\\\[ x = r \\cos(\\theta) \\\\]\n and\n \\\\[ y = r \\sin(\\theta) \\\\]\n where \\\\(r = \\sqrt{x^2 + y^2} \\\\).\n\n For example:\n\n >>> x = [1., 1.]\n >>> y = [1., -1.]\n >>> print((tf.math.atan2(y,x) * (180 / np.pi)).numpy())\n [ 45. -45.]\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.", "type": "API"}, {"name": "tf.compat.v1.math.atanh", "docs": "Computes inverse hyperbolic tangent of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic tangent\n for every element in the tensor. Input range is `[-1,1]` and output range is\n `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the\n input is `1`, output will be `inf`. Values outside the range will have\n `nan` as output.\n\n ```python\n x = tf.constant([-float(\"inf\"), -1, -0.5, 1, 0, 0.5, 10, float(\"inf\")])\n tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic tangent of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.bessel_i0", "docs": "Computes the Bessel i0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `i0e(x)` instead.\n\n >>> tf.math.special.bessel_i0([-1., -0.5, 0.5, 1.]).numpy()\n array([1.26606588, 1.06348337, 1.06348337, 1.26606588], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0\n @end_compatibility\n ", "desc": "Computes the Bessel i0 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.bessel_i0e", "docs": "Computes the Bessel i0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_i0e([-1., -0.5, 0.5, 1.]).numpy()\n array([0.46575961, 0.64503527, 0.64503527, 0.46575961], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i0e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i0e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.bessel_i1", "docs": "Computes the Bessel i1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `i1e(x)` instead.\n\n >>> tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1\n @end_compatibility\n ", "desc": "Computes the Bessel i1 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.bessel_i1e", "docs": "Computes the Bessel i1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_i1e([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.20791042, -0.15642083, 0.15642083, 0.20791042], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i1e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i1e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.betainc", "docs": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).\n\n The regularized incomplete beta integral is defined as:\n\n\n \\\\(I_x(a, b) = \\frac{B(x; a, b)}{B(a, b)}\\\\)\n\n where\n\n\n \\\\(B(x; a, b) = \\int_0^x t^{a-1} (1 - t)^{b-1} dt\\\\)\n\n\n is the incomplete beta function and \\\\(B(a, b)\\\\) is the *complete*\n beta function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n b: A `Tensor`. Must have the same type as `a`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).", "type": "API"}, {"name": "tf.compat.v1.math.bincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector with length\n `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise.\n If `weights` are non-None, then index `i` of the output stores the sum of the\n value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Args:\n arr: An int32 tensor of non-negative values.\n weights: If non-None, must be the same shape as arr. For each value in\n `arr`, the bin will be incremented by the corresponding weight instead of\n 1.\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `arr` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n dtype: If `weights` is None, determines the type of the output bins.\n\n Returns:\n A vector with the same dtype as `weights` or the given `dtype`. The bin\n values.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.compat.v1.math.ceil", "docs": "Return the ceiling of the input, element-wise.\n\n For example:\n\n >>> tf.math.ceil([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`. `int32`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.ceil\n @end_compatibility\n ", "desc": "Return the ceiling of the input, element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.confusion_matrix", "docs": "Computes the confusion matrix from predictions and labels.\n\n The matrix columns represent the prediction labels and the rows represent the\n real labels. The confusion matrix is always a 2-D array of shape `[n, n]`,\n where `n` is the number of valid labels for a given classification task. Both\n prediction and labels must be 1-D arrays of the same shape in order for this\n function to work.\n\n If `num_classes` is `None`, then `num_classes` will be set to one plus the\n maximum value in either predictions or labels. Class labels are expected to\n start at 0. For example, if `num_classes` is 3, then the possible labels\n would be `[0, 1, 2]`.\n\n If `weights` is not `None`, then each prediction contributes its\n corresponding weight to the total value of the confusion matrix cell.\n\n For example:\n\n ```python\n tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>\n [[0 0 0 0 0]\n [0 0 1 0 0]\n [0 0 1 0 0]\n [0 0 0 0 0]\n [0 0 0 0 1]]\n ```\n\n Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`,\n resulting in a 5x5 confusion matrix.\n\n Args:\n labels: 1-D `Tensor` of real labels for the classification task.\n predictions: 1-D `Tensor` of predictions for a given classification.\n num_classes: The possible number of labels the classification task can have.\n If this value is not provided, it will be calculated using both\n predictions and labels array.\n dtype: Data type of the confusion matrix.\n name: Scope name.\n weights: An optional `Tensor` whose shape matches `predictions`.\n\n Returns:\n A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion\n matrix, where `n` is the number of possible labels in the classification\n task.\n\n Raises:\n ValueError: If both predictions and labels are not 1-D vectors and have\n mismatched shapes, or if `weights` is not `None` and its shape doesn't\n match `predictions`.\n ", "desc": "Computes the confusion matrix from predictions and labels.", "type": "API"}, {"name": "tf.compat.v1.math.conj", "docs": "Returns the complex conjugate of a complex number.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of\n complex numbers that are the complex conjugate of each element in `x`. The\n complex numbers in `x` must be of the form \\\\(a + bj\\\\), where `a` is the\n real part and `b` is the imaginary part.\n\n The complex conjugate returned by this operation is of the form \\\\(a - bj\\\\).\n\n For example:\n\n >>> x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n >>> tf.math.conj(x)\n \n\n If `x` is real, it is returned unchanged.\n\n For example:\n\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.math.conj(x)\n \n\n Args:\n x: `Tensor` to conjugate. Must have numeric or variant type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that is the conjugate of `x` (with the same type).\n\n Raises:\n TypeError: If `x` is not a numeric tensor.\n\n @compatibility(numpy)\n Equivalent to numpy.conj.\n @end_compatibility\n ", "desc": "Returns the complex conjugate of a complex number.", "type": "API"}, {"name": "tf.compat.v1.math.cos", "docs": "Computes cos of x element-wise.\n\n Given an input tensor, this function computes cosine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes cos of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.cosh", "docs": "Computes hyperbolic cosine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic cosine of every\n element in the tensor. Input range is `[-inf, inf]` and output range\n is `[1, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.count_nonzero", "docs": "Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_indices)`. They will be removed in a future version.\nInstructions for updating:\nreduction_indices is deprecated, use axis instead\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nentry in `axis`. If `keepdims` is true, the reduced dimensions\nare retained with length 1.\n\nIf `axis` has no entries, all dimensions are reduced, and a\ntensor with a single element is returned.\n\n**NOTE** Floating point comparison to zero is done by exact floating point\nequality check. Small values are **not** rounded to zero for purposes of\nthe nonzero check.\n\nFor example:\n\n```python\nx = tf.constant([[0, 1, 0], [1, 1, 0]])\ntf.math.count_nonzero(x) # 3\ntf.math.count_nonzero(x, 0) # [1, 2, 0]\ntf.math.count_nonzero(x, 1) # [1, 2]\ntf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]]\ntf.math.count_nonzero(x, [0, 1]) # 3\n```\n\n**NOTE** Strings are compared against zero-length empty string `\"\"`. Any\nstring with a size greater than zero is already considered as nonzero.\n\nFor example:\n```python\nx = tf.constant([\"\", \"a\", \" \", \"b\", \"\"])\ntf.math.count_nonzero(x) # 3, with \"a\", \" \", and \"b\" as nonzero strings.\n```\n\nArgs:\n input_tensor: The tensor to reduce. Should be of numeric type, `bool`, or\n `string`.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n dtype: The output dtype; defaults to `tf.int64`.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n input: Overrides input_tensor. For compatibility.\n\nReturns:\n The reduced tensor (number of nonzero values).", "desc": "Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.cumprod", "docs": "Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the\n first element of the input is identical to the first element of the output:\n\n ```python\n tf.math.cumprod([a, b, c]) # [a, a * b, a * b * c]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumprod is\n performed\n instead:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True) # [1, a, a * b]\n ```\n\n By setting the `reverse` kwarg to `True`, the cumprod is performed in the\n opposite direction:\n\n ```python\n tf.math.cumprod([a, b, c], reverse=True) # [a * b * c, b * c, c]\n ```\n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True, reverse=True) # [b * c, c, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumprod.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative product of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.compat.v1.math.cumsum", "docs": "Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:\n For example:\n\n >>> # tf.cumsum([a, b, c]) # [a, a + b, a + b + c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x)\n \n\n >>> # using varying `axis` values\n >>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]])\n >>> tf.cumsum(y, axis=0)\n \n >>> tf.cumsum(y, axis=1)\n \n\n By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed\n instead:\n\n >>> # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True)\n \n\n By setting the `reverse` kwarg to `True`, the cumsum is performed in the\n opposite direction:\n\n >>> # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, reverse=True)\n \n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n >>> # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True, reverse=True)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumsum.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative sum of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.compat.v1.math.cumulative_logsumexp", "docs": "Compute the cumulative log-sum-exp of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumulative log-sum-exp, which means\n that the first element of the input is identical to the first element of\n the output.\n\n This operation is significantly more numerically stable than the equivalent\n tensorflow operation `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although\n computes the same result given infinite numerical precision. However, note\n that in some cases, it may be less stable than `tf.math.reduce_logsumexp`\n for a given element, as it applies the \"log-sum-exp trick\" in a different\n way.\n\n More precisely, where `tf.math.reduce_logsumexp` uses the following trick:\n\n ```\n log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x)\n ```\n\n it cannot be directly used here as there is no fast way of applying it\n to each prefix `x[:i]`. Instead, this function implements a prefix\n scan using pairwise log-add-exp, which is a commutative and associative\n (up to floating point precision) operator:\n\n ```\n log_add_exp(x, y) = log(exp(x) + exp(y))\n = log(1 + exp(min(x, y) - max(x, y))) + max(x, y)\n ```\n\n However, reducing using the above operator leads to a different computation\n tree (logs are taken repeatedly instead of only at the end), and the maximum\n is only computed pairwise instead of over the entire prefix. In general, this\n leads to a different and slightly less precise computation.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float16`, `float32`,\n `float64`.\n axis: A `Tensor` of type `int32` or `int64` (default: 0). Must be in the\n range `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumulative log-sum-exp.\n reverse: If `True`, performs the cumulative log-sum-exp in the reverse\n direction.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same shape and type as `x`.\n ", "desc": "Compute the cumulative log-sum-exp of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.compat.v1.math.digamma", "docs": "Computes Psi, the derivative of Lgamma (the log of the absolute value of\n\n `Gamma(x)`), element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes Psi, the derivative of Lgamma (the log of the absolute value of", "type": "API"}, {"name": "tf.compat.v1.math.divide", "docs": "Computes Python style division of `x` by `y`.\n\n For example:\n\n >>> x = tf.constant([16, 12, 11])\n >>> y = tf.constant([4, 6, 2])\n >>> tf.divide(x,y)\n \n\n Args:\n x: A `Tensor`\n y: A `Tensor`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with same shape as input\n ", "desc": "Computes Python style division of `x` by `y`.", "type": "API"}, {"name": "tf.compat.v1.math.divide_no_nan", "docs": "Computes a safe divide which returns 0 if `y` (denominator) is zero.\n\n For example:\n\n >>> tf.constant(3.0) / 0.0\n \n >>> tf.math.divide_no_nan(3.0, 0.0)\n \n\n Note that 0 is returned if `y` is 0 even if `x` is nonfinite:\n\n >>> tf.math.divide_no_nan(np.nan, 0.0)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for the operation (optional).\n\n Returns:\n The element-wise value of the x divided by y.\n ", "desc": "Computes a safe divide which returns 0 if `y` (denominator) is zero.", "type": "API"}, {"name": "tf.compat.v1.math.equal", "docs": "Returns the truth value of (x == y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise equality comparison, returning a Tensor of\n boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x == y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.erf", "docs": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.\n\n For example:\n\n >>> tf.math.erf([[1.0, 2.0, 3.0], [0.0, -1.0, -2.0]])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.erf(x.values, ...), x.dense_shape)`", "desc": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.", "type": "API"}, {"name": "tf.compat.v1.math.erfc", "docs": "Computes the complementary error function of `x` element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the complementary error function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.erfcinv", "docs": "Computes the inverse of complementary error function.\n\n Given `x`, compute the inverse complementary error function of `x`.\n This function is the inverse of `tf.math.erfc`, and is defined on\n `[0, 2]`.\n\n >>> tf.math.erfcinv([0., 0.2, 1., 1.5, 2.])\n \n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse complementary error function of `x`.\n\n @compatibility(numpy)\n Equivalent to scipy.special.erfcinv\n @end_compatibility\n ", "desc": "Computes the inverse of complementary error function.", "type": "API"}, {"name": "tf.compat.v1.math.erfinv", "docs": "Compute inverse error function.\n\n Given `x`, compute the inverse error function of `x`. This function\n is the inverse of `tf.math.erf`.\n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse error function of `x`.\n ", "desc": "Compute inverse error function.", "type": "API"}, {"name": "tf.compat.v1.math.exp", "docs": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).\n\n This function computes the exponential of the input tensor element-wise.\n i.e. `math.exp(x)` or \\\\(e^x\\\\), where `x` is the input tensor.\n \\\\(e\\\\) denotes Euler's number and is approximately equal to 2.718281.\n Output is positive for any real input.\n\n >>> x = tf.constant(2.0)\n >>> tf.math.exp(x)\n \n\n >>> x = tf.constant([2.0, 8.0])\n >>> tf.math.exp(x)\n \n\n For complex numbers, the exponential value is calculated as\n $$\n e^{x+iy} = {e^x} {e^{iy}} = {e^x} ({\\cos (y) + i \\sin (y)})\n $$\n\n For `1+1j` the value would be computed as:\n $$\n e^1 (\\cos (1) + i \\sin (1)) = 2.7182817 \\times (0.5403023+0.84147096j)\n $$\n\n >>> x = tf.constant(1 + 1j)\n >>> tf.math.exp(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.exp\n @end_compatibility\n ", "desc": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).", "type": "API"}, {"name": "tf.compat.v1.math.expm1", "docs": "Computes `exp(x) - 1` element-wise.\n\n i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor.\n `e` denotes Euler's number and is approximately equal to 2.718281.\n\n ```python\n x = tf.constant(2.0)\n tf.math.expm1(x) ==> 6.389056\n\n x = tf.constant([2.0, 8.0])\n tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)\n\n x = tf.constant(1 + 1j)\n tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes `exp(x) - 1` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.floor", "docs": "Returns element-wise largest integer not greater than x.\n\n Both input range is `(-inf, inf)` and the\n output range consists of all integer values.\n\n For example:\n\n >>> x = tf.constant([1.3324, -1.5, 5.555, -2.532, 0.99, float(\"inf\")])\n >>> tf.floor(x).numpy()\n array([ 1., -2., 5., -3., 0., inf], dtype=float32)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Returns element-wise largest integer not greater than x.", "type": "API"}, {"name": "tf.compat.v1.math.floordiv", "docs": "Divides `x / y` elementwise, rounding toward the most negative integer.\n\n Mathematically, this is equivalent to floor(x / y). For example:\n floor(8.4 / 4.0) = floor(2.1) = 2.0\n floor(-8.4 / 4.0) = floor(-2.1) = -3.0\n This is equivalent to the '//' operator in Python 3.0 and above.\n\n Note: `x` and `y` must have the same type, and the result will have the same\n type as well.\n\n Args:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` rounded toward -infinity.\n\n Raises:\n TypeError: If the inputs are complex.\n ", "desc": "Divides `x / y` elementwise, rounding toward the most negative integer.", "type": "API"}, {"name": "tf.compat.v1.math.floormod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.compat.v1.math.greater", "docs": "Returns the truth value of (x > y) element-wise.\n\n *NOTE*: `math.greater` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 2, 5])\n tf.math.greater(x, y) ==> [False, True, True]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.greater(x, y) ==> [False, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x > y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.greater_equal", "docs": "Returns the truth value of (x >= y) element-wise.\n\n *NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5, 2, 5, 10])\n tf.math.greater_equal(x, y) ==> [True, True, True, False]\n\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5])\n tf.math.greater_equal(x, y) ==> [True, False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x >= y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.igamma", "docs": "Compute the lower regularized incomplete Gamma function `P(a, x)`.\n\n The lower regularized incomplete Gamma function is defined as:\n\n\n \\\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\\\)\n\n where\n\n \\\\(gamma(a, x) = \\\\int_{0}^{x} t^{a-1} exp(-t) dt\\\\)\n\n is the lower incomplete Gamma function.\n\n Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the lower regularized incomplete Gamma function `P(a, x)`.", "type": "API"}, {"name": "tf.compat.v1.math.igammac", "docs": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.\n\n The upper regularized incomplete Gamma function is defined as:\n\n \\\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\\\)\n\n where\n\n \\\\(Gamma(a, x) = \\int_{x}^{\\infty} t^{a-1} exp(-t) dt\\\\)\n\n is the upper incomplete Gamma function.\n\n Note, above `P(a, x)` (`Igamma`) is the lower regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.", "type": "API"}, {"name": "tf.compat.v1.math.imag", "docs": "Returns the imaginary part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the imaginary part of each element in `input` considered as a complex\n number. If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.imag(x) # [4.75, 5.75]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the imaginary part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.math.in_top_k", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is finite (not inf, -inf, or nan) and among\n the top `k` predictions among all predictions for example `i`. Note that the\n behavior of `InTopK` differs from the `TopK` op in its handling of ties; if\n multiple classes have the same prediction value and straddle the top-`k`\n boundary, all of those classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: An `int`. Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.math.invert_permutation", "docs": "Computes the inverse permutation of a tensor.\n\n This operation computes the inverse of an index permutation. It takes a 1-D\n integer tensor `x`, which represents the indices of a zero-based array, and\n swaps each value with its index position. In other words, for an output tensor\n `y` and an input tensor `x`, this operation computes the following:\n\n `y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`\n\n The values must include 0. There can be no duplicate values or negative values.\n\n For example:\n\n ```\n # tensor `x` is [3, 4, 0, 2, 1]\n invert_permutation(x) ==> [2, 4, 3, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the inverse permutation of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.is_finite", "docs": "Returns which elements of x are finite.\n\n @compatibility(numpy)\n Equivalent to np.isfinite\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])\n tf.math.is_finite(x) ==> [True, True, True, False, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are finite.", "type": "API"}, {"name": "tf.compat.v1.math.is_inf", "docs": "Returns which elements of x are Inf.\n\n @compatibility(numpy)\n Equivalent to np.isinf\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.inf, 6.8, np.inf])\n tf.math.is_inf(x) ==> [False, True, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are Inf.", "type": "API"}, {"name": "tf.compat.v1.math.is_nan", "docs": "Returns which elements of x are NaN.\n\n @compatibility(numpy)\n Equivalent to np.isnan\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])\n tf.math.is_nan(x) ==> [False, True, False, True, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are NaN.", "type": "API"}, {"name": "tf.compat.v1.math.is_non_decreasing", "docs": "Returns `True` if `x` is non-decreasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.\n If `x` has less than two elements, it is trivially non-decreasing.\n\n See also: `is_strictly_increasing`\n\n >>> x1 = tf.constant([1.0, 1.0, 3.0])\n >>> tf.math.is_non_decreasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_non_decreasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional). Defaults to \"is_non_decreasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is non-decreasing.", "type": "API"}, {"name": "tf.compat.v1.math.is_strictly_increasing", "docs": "Returns `True` if `x` is strictly increasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.\n If `x` has less than two elements, it is trivially strictly increasing.\n\n See also: `is_non_decreasing`\n\n >>> x1 = tf.constant([1.0, 2.0, 3.0])\n >>> tf.math.is_strictly_increasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_strictly_increasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional).\n Defaults to \"is_strictly_increasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is strictly increasing.", "type": "API"}, {"name": "tf.compat.v1.math.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.lbeta", "docs": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.\n\n Given one-dimensional $z = [z_1,...,z_K]$, we define\n\n $$Beta(z) = \\frac{\\prod_j \\Gamma(z_j)}{\\Gamma(\\sum_j z_j)},$$\n\n where $\\Gamma$ is the gamma function.\n\n And for $n + 1$ dimensional $x$ with shape $[N_1, ..., N_n, K]$, we define\n\n $$lbeta(x)[i_1, ..., i_n] = \\log{|Beta(x[i_1, ..., i_n, :])|}.$$\n\n In other words, the last dimension is treated as the $z$ vector.\n\n Note that if $z = [u, v]$, then\n\n $$Beta(z) = \\frac{\\Gamma(u)\\Gamma(v)}{\\Gamma(u + v)}\n = \\int_0^1 t^{u-1} (1 - t)^{v-1} \\mathrm{d}t,$$\n\n which defines the traditional bivariate beta function.\n\n If the last dimension is empty, we follow the convention that the sum over\n the empty set is zero, and the product is one.\n\n Args:\n x: A rank `n + 1` `Tensor`, `n >= 0` with type `float`, or `double`.\n name: A name for the operation (optional).\n\n Returns:\n The logarithm of \\\\(|Beta(x)|\\\\) reducing along the last dimension.\n ", "desc": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.", "type": "API"}, {"name": "tf.compat.v1.math.less", "docs": "Returns the truth value of (x < y) element-wise.\n\n *NOTE*: `math.less` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less(x, y) ==> [False, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 7])\n tf.math.less(x, y) ==> [False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x < y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.less_equal", "docs": "Returns the truth value of (x <= y) element-wise.\n\n *NOTE*: `math.less_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less_equal(x, y) ==> [True, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 6])\n tf.math.less_equal(x, y) ==> [True, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x <= y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.lgamma", "docs": "Computes the log of the absolute value of `Gamma(x)` element-wise.\n\n For positive numbers, this function computes log((input - 1)!) for every element in the tensor.\n `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539`\n\n Example:\n\n ```python\n x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])\n tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the log of the absolute value of `Gamma(x)` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.log", "docs": "Computes natural logarithm of x element-wise.\n\n I.e., \\\\(y = \\log_e x\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log(x)\n \n\n See: https://en.wikipedia.org/wiki/Logarithm\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.log_sigmoid", "docs": "Computes log sigmoid of `x` element-wise.\n\n Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability,\n we use `y = -tf.nn.softplus(-x)`.\n\n Args:\n x: A Tensor with type `float32` or `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n If a positive number is large, then its log_sigmoid will approach to 0 since\n the formula will be `y = log( / (1 + ) )` which\n approximates to `log (1)` which is 0.\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.log_sigmoid(x)\n \n\n If a negative number is large, its log_sigmoid will approach to the number\n itself since the formula will be `y = log( 1 / (1 + ) )` which is\n `log (1) - log ( (1 + ) )` which approximates to `- `\n that is the number itself.\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.log_sigmoid(x)\n \n ", "desc": "Computes log sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.log_softmax", "docs": "Computes log softmax activations. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor each batch `i` and class `j` we have\n\n logsoftmax = logits - log(reduce_sum(exp(logits), axis))\n\nArgs:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n dim: Deprecated alias for `axis`.\n\nReturns:\n A `Tensor`. Has the same type as `logits`. Same shape as `logits`.\n\nRaises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.", "desc": "Computes log softmax activations. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.log1p", "docs": "Computes natural logarithm of (1 + x) element-wise.\n\n I.e., \\\\(y = \\log_e (1 + x)\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log1p(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of (1 + x) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.logical_and", "docs": "Returns the truth value of x AND y element-wise.\n\n Logical AND function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical AND with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical AND of the two input tensors.\n\n You can also use the `&` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_and(a, b)\n \n >>> a & b\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_and(c, x)\n \n >>> c & x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_and(y, z)\n \n >>> y & z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_and([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_all`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x AND y element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.logical_not", "docs": "Returns the truth value of `NOT x` element-wise.\n\n Example:\n\n >>> tf.math.logical_not(tf.constant([True, False]))\n \n\n Args:\n x: A `Tensor` of type `bool`. A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of `NOT x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.logical_or", "docs": "Returns the truth value of x OR y element-wise.\n\n Logical OR function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical OR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical OR of the two input tensors.\n\n You can also use the `|` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_or(a, b)\n \n >>> a | b\n \n\n >>> c = tf.constant([False])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_or(c, x)\n \n >>> c | x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_or(y, z)\n \n >>> y | z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_or([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_any`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x OR y element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.logical_xor", "docs": "Logical XOR function.\n\n x ^ y = (x | y) & ~(x & y)\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical XOR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical XOR of the two input tensors.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_xor(a, b)\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_xor(c, x)\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_xor(y, z)\n \n\n Args:\n x: A `tf.Tensor` type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n ", "desc": "Logical XOR function.", "type": "API"}, {"name": "tf.compat.v1.math.maximum", "docs": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.\n\n Example:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-2., 0., 2., 5.])\n >>> tf.math.maximum(x, y)\n \n\n Note that `maximum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.maximum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_max`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.minimum", "docs": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.\n\n Both inputs are number-type tensors (except complex). `minimum` expects that\n both tensors have the same `dtype`.\n\n Examples:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-5., -2., 0., 3.])\n >>> tf.math.minimum(x, y)\n \n\n Note that `minimum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.minimum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_min`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.mod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.compat.v1.math.multiply", "docs": "Returns an element-wise x * y.\n\n For example:\n\n >>> x = tf.constant(([1, 2, 3, 4]))\n >>> tf.math.multiply(x, x)\n \n\n Since `tf.math.multiply` will convert its arguments to `Tensor`s, you can also\n pass in non-`Tensor` arguments:\n\n >>> tf.math.multiply(7,6)\n \n\n If `x.shape` is not the same as `y.shape`, they will be broadcast to a\n compatible shape. (More about broadcasting\n [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)\n\n For example:\n\n >>> x = tf.ones([1, 2]);\n >>> y = tf.ones([2, 1]);\n >>> x * y # Taking advantage of operator overriding\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_prod`\n\n Args:\n x: A Tensor. Must be one of the following types: `bfloat16`,\n `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`,\n `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n\n A `Tensor`. Has the same type as `x`.\n\n Raises:\n\n * InvalidArgumentError: When `x` and `y` have incompatible shapes or types.\n ", "desc": "Returns an element-wise x * y.", "type": "API"}, {"name": "tf.compat.v1.math.multiply_no_nan", "docs": "Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite.\n\n Note this is noncommutative: if y is NaN or infinite and x is 0, the result\n will be NaN.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for the operation (optional).\n\n Returns:\n The element-wise value of the x times y.\n ", "desc": "Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite.", "type": "API"}, {"name": "tf.compat.v1.math.ndtri", "docs": "Compute quantile of Standard Normal.\n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse error function of `x`.\n ", "desc": "Compute quantile of Standard Normal.", "type": "API"}, {"name": "tf.compat.v1.math.negative", "docs": "Computes numerical negative value element-wise.\n\n I.e., \\\\(y = -x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)`", "desc": "Computes numerical negative value element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.nextafter", "docs": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.\n\n This operation returns the same result as the C++ std::nextafter function.\n\n It can also return a subnormal number.\n\n @compatibility(cpp)\n Equivalent to C++ std::nextafter function.\n @end_compatibility\n\n Args:\n x1: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n x2: A `Tensor`. Must have the same type as `x1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x1`.\n ", "desc": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.not_equal", "docs": "Returns the truth value of (x != y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise inequality comparison, returning a Tensor\n of boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.not_equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.not_equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x != y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.polygamma", "docs": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).\n\n The polygamma function is defined as:\n\n\n \\\\(\\psi^{(a)}(x) = \\frac{d^a}{dx^a} \\psi(x)\\\\)\n\n where \\\\(\\psi(x)\\\\) is the digamma function.\n The polygamma function is defined only for non-negative integer orders \\\\a\\\\.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).", "type": "API"}, {"name": "tf.compat.v1.math.polyval", "docs": "Computes the elementwise value of a polynomial.\n\n If `x` is a tensor and `coeffs` is a list n + 1 tensors,\n this function returns the value of the n-th order polynomial\n\n `p(x) = coeffs[n-1] + coeffs[n-2] * x + ... + coeffs[0] * x**(n-1)`\n\n evaluated using Horner's method, i.e.\n\n ```python\n p(x) = coeffs[n-1] + x * (coeffs[n-2] + ... + x * (coeffs[1] + x * coeffs[0]))\n ```\n\n Usage Example:\n\n >>> coefficients = [1.0, 2.5, -4.2]\n >>> x = 5.0\n >>> y = tf.math.polyval(coefficients, x)\n >>> y\n \n\n Usage Example:\n\n >>> tf.math.polyval([2, 1, 0], 3) # evaluates 2 * (3**2) + 1 * (3**1) + 0 * (3**0)\n \n\n `tf.math.polyval` can also be used in polynomial regression. Taking\n advantage of this function can facilitate writing a polynomial equation\n as compared to explicitly writing it out, especially for higher degree\n polynomials.\n\n >>> x = tf.constant(3)\n >>> theta1 = tf.Variable(2)\n >>> theta2 = tf.Variable(1)\n >>> theta3 = tf.Variable(0)\n >>> tf.math.polyval([theta1, theta2, theta3], x)\n \n\n Args:\n coeffs: A list of `Tensor` representing the coefficients of the polynomial.\n x: A `Tensor` representing the variable of the polynomial.\n name: A name for the operation (optional).\n\n Returns:\n A `tensor` of the shape as the expression p(x) with usual broadcasting\n rules for element-wise addition and multiplication applied.\n\n @compatibility(numpy)\n Equivalent to numpy.polyval.\n @end_compatibility\n ", "desc": "Computes the elementwise value of a polynomial.", "type": "API"}, {"name": "tf.compat.v1.math.pow", "docs": "Computes the power of one value to another.\n\n Given a tensor `x` and a tensor `y`, this operation computes \\\\(x^y\\\\) for\n corresponding elements in `x` and `y`. For example:\n\n ```python\n x = tf.constant([[2, 2], [3, 3]])\n y = tf.constant([[8, 16], [2, 3]])\n tf.pow(x, y) # [[256, 65536], [9, 27]]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n y: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`.\n ", "desc": "Computes the power of one value to another.", "type": "API"}, {"name": "tf.compat.v1.math.real", "docs": "Returns the real part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the real part of each element in `input` considered as a complex number.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.real(x) # [-2.25, 3.25]\n ```\n\n If `input` is already real, it is returned unchanged.\n\n Args:\n input: A `Tensor`. Must have numeric type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the real part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.math.reciprocal", "docs": "Computes the reciprocal of x element-wise.\n\n I.e., \\\\(y = 1 / x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the reciprocal of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.reciprocal_no_nan", "docs": "Performs a safe reciprocal operation, element wise.\n\n If a particular element is zero, the reciprocal for that element is\n also set to zero.\n\n For example:\n ```python\n x = tf.constant([2.0, 0.5, 0, 1], dtype=tf.float32)\n tf.math.reciprocal_no_nan(x) # [ 0.5, 2, 0.0, 1.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64` `complex64` or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n\n Raises:\n TypeError: x must be of a valid dtype.\n\n ", "desc": "Performs a safe reciprocal operation, element wise.", "type": "API"}, {"name": "tf.compat.v1.math.reduce_all", "docs": "Computes `tf.math.logical_and` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.logical_and` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.math.reduce_all(x)\n \n >>> tf.math.reduce_all(x, 0)\n \n >>> tf.math.reduce_all(x, 1)\n \n\nArgs:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.all\n@end_compatibility", "desc": "Computes `tf.math.logical_and` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_any", "docs": "Computes `tf.math.logical_or` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.logical_or` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.reduce_any(x)\n \n >>> tf.reduce_any(x, 0)\n \n >>> tf.reduce_any(x, 1)\n \n\nArgs:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.any\n@end_compatibility", "desc": "Computes `tf.math.logical_or` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_euclidean_norm", "docs": "Computes the Euclidean norm of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [1, 1, 1]]) # x.dtype is tf.int32\n tf.math.reduce_euclidean_norm(x) # returns 4 as dtype is tf.int32\n y = tf.constant([[1, 2, 3], [1, 1, 1]], dtype = tf.float32)\n tf.math.reduce_euclidean_norm(y) # returns 4.1231055 which is sqrt(17)\n tf.math.reduce_euclidean_norm(y, 0) # [sqrt(2), sqrt(5), sqrt(10)]\n tf.math.reduce_euclidean_norm(y, 1) # [sqrt(14), sqrt(3)]\n tf.math.reduce_euclidean_norm(y, 1, keepdims=True) # [[sqrt(14)], [sqrt(3)]]\n tf.math.reduce_euclidean_norm(y, [0, 1]) # sqrt(17)\n ```\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor.\n ", "desc": "Computes the Euclidean norm of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.reduce_logsumexp", "docs": "Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` has no entries, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nThis function is more numerically stable than log(sum(exp(input))). It avoids\noverflows caused by taking the exp of large inputs and underflows caused by\ntaking the log of small inputs.\n\nFor example:\n\n```python\nx = tf.constant([[0., 0., 0.], [0., 0., 0.]])\ntf.reduce_logsumexp(x) # log(6)\ntf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)]\ntf.reduce_logsumexp(x, 1) # [log(3), log(3)]\ntf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]]\ntf.reduce_logsumexp(x, [0, 1]) # log(6)\n```\n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_max", "docs": "Computes `tf.math.maximum` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.maximum` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nUsage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_max(x)\n \n\nSee the numpy docs for `np.amax` and `np.nanmax` behavior.\n\nArgs:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes `tf.math.maximum` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_mean", "docs": "Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis` by computing the\n mean of elements across the dimensions in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a tensor with a single\n element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 1.], [2., 2.]])\n >>> tf.reduce_mean(x)\n \n >>> tf.reduce_mean(x, 0)\n \n >>> tf.reduce_mean(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.mean\n\n Please note that `np.mean` has a `dtype` parameter that could be used to\n specify the output type. By default this is `dtype=float64`. On the other\n hand, `tf.reduce_mean` has an aggressive type inference from `input_tensor`,\n for example:\n\n >>> x = tf.constant([1, 0, 1, 0])\n >>> tf.reduce_mean(x)\n \n >>> y = tf.constant([1., 0., 1., 0.])\n >>> tf.reduce_mean(y)\n \n\n @end_compatibility\n ", "desc": "Computes the mean of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.reduce_min", "docs": "Computes the `tf.math.minimum` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.minimum` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nUsage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_min(x)\n \n\nSee the numpy docs for `np.amin` and `np.nanmin` behavior.\n\nArgs:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes the `tf.math.minimum` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_prod", "docs": "Computes `tf.math.multiply` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.multiply` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_prod(x)\n \n >>> tf.math.reduce_prod(x, 0)\n \n >>> tf.math.reduce_prod(x, 1)\n \n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.prod\n@end_compatibility", "desc": "Computes `tf.math.multiply` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_std", "docs": "Computes the standard deviation of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_std(x)\n \n >>> tf.math.reduce_std(x, 0)\n \n >>> tf.math.reduce_std(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real or complex type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name scope for the associated operations (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor. Note, for\n `complex64` or `complex128` input, the returned `Tensor` will be of type\n `float32` or `float64`, respectively.\n\n @compatibility(numpy)\n Equivalent to np.std\n\n Please note `np.std` has a `dtype` parameter that could be used to specify the\n output type. By default this is `dtype=float64`. On the other hand,\n `tf.math.reduce_std` has aggressive type inference from `input_tensor`.\n @end_compatibility\n ", "desc": "Computes the standard deviation of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.reduce_sum", "docs": "Computes the sum of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.add` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> # x has a shape of (2, 3) (two rows and three columns):\n >>> x = tf.constant([[1, 1, 1], [1, 1, 1]])\n >>> x.numpy()\n array([[1, 1, 1],\n [1, 1, 1]], dtype=int32)\n >>> # sum all the elements\n >>> # 1 + 1 + 1 + 1 + 1+ 1 = 6\n >>> tf.reduce_sum(x).numpy()\n 6\n >>> # reduce along the first dimension\n >>> # the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> tf.reduce_sum(x, 0).numpy()\n array([2, 2, 2], dtype=int32)\n >>> # reduce along the second dimension\n >>> # the result is [1, 1] + [1, 1] + [1, 1] = [3, 3]\n >>> tf.reduce_sum(x, 1).numpy()\n array([3, 3], dtype=int32)\n >>> # keep the original dimensions\n >>> tf.reduce_sum(x, 1, keepdims=True).numpy()\n array([[3],\n [3]], dtype=int32)\n >>> # reduce along both dimensions\n >>> # the result is 1 + 1 + 1 + 1 + 1 + 1 = 6\n >>> # or, equivalently, reduce along rows, then reduce the resultant array\n >>> # [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> # 2 + 2 + 2 = 6\n >>> tf.reduce_sum(x, [0, 1]).numpy()\n 6\n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor, of the same dtype as the input_tensor.\n\n@compatibility(numpy)\nEquivalent to np.sum apart the fact that numpy upcast uint8 and int32 to\nint64 while tensorflow returns the same dtype as the input.\n@end_compatibility", "desc": "Computes the sum of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.math.reduce_variance", "docs": "Computes the variance of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_variance(x)\n \n >>> tf.math.reduce_variance(x, 0)\n \n >>> tf.math.reduce_variance(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real or complex type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name scope for the associated operations (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor. Note, for\n `complex64` or `complex128` input, the returned `Tensor` will be of type\n `float32` or `float64`, respectively.\n\n @compatibility(numpy)\n Equivalent to np.var\n\n Please note `np.var` has a `dtype` parameter that could be used to specify the\n output type. By default this is `dtype=float64`. On the other hand,\n `tf.math.reduce_variance` has aggressive type inference from `input_tensor`.\n @end_compatibility\n ", "desc": "Computes the variance of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.rint", "docs": "Returns element-wise integer closest to x.\n\n If the result is midway between two representable values,\n the even representable is chosen.\n For example:\n\n ```\n rint(-1.5) ==> -2.0\n rint(0.5000001) ==> 1.0\n rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise integer closest to x.", "type": "API"}, {"name": "tf.compat.v1.math.round", "docs": "Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example:\n\n ```python\n x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5])\n tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, or `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n ", "desc": "Rounds the values of a tensor to the nearest integer, element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.rsqrt", "docs": "Computes reciprocal of square root of x element-wise.\n\n For example:\n\n >>> x = tf.constant([2., 0., -2.])\n >>> tf.math.rsqrt(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n ", "desc": "Computes reciprocal of square root of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.scalar_mul", "docs": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.\n\n This is a special case of `tf.math.multiply`, where the first value must be a\n `scalar`. Unlike the general form of `tf.math.multiply`, this is operation is\n guaranteed to be efficient for `tf.IndexedSlices`.\n\n >>> x = tf.reshape(tf.range(30, dtype=tf.float32), [10, 3])\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = tf.gather(x, [1, 2]) # IndexedSlices\n ... z = tf.math.scalar_mul(10.0, y)\n\n Args:\n scalar: A 0-D scalar `Tensor`. Must have known shape.\n x: A `Tensor` or `IndexedSlices` to be scaled.\n name: A name for the operation (optional).\n\n Returns:\n `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.\n\n Raises:\n ValueError: if scalar is not a 0-D `scalar`.\n ", "desc": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.", "type": "API"}, {"name": "tf.compat.v1.math.segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\max_j(data_j)\\\\) where `max` is over `j` such\n that `segment_ids[j] == i`.\n\n If the max is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\frac{\\sum_j data_j}{N}\\\\) where `mean` is\n over `j` such that `segment_ids[j] == i` and `N` is the total number of\n values summed.\n\n If the mean is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as a smaller following index when computing the numerator\n of the mean.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy()\n array([[2.5, 2.5, 2.5, 2.5],\n [5., 6., 7., 8.]], dtype=float32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\min_j(data_j)\\\\) where `min` is over `j` such\n that `segment_ids[j] == i`.\n\n If the min is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\prod_j data_j\\\\) where the product is over `j` such\n that `segment_ids[j] == i`.\n\n If the product is empty for a given segment ID `i`, `output[i] = 1`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\sum_j data_j\\\\) where sum is over `j` such\n that `segment_ids[j] == i`.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_sum(c, tf.constant([0, 0, 1])).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.sign", "docs": "Returns an element-wise indication of the sign of a number.\n\n `y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0`.\n\n For complex numbers, `y = sign(x) = x / |x| if x != 0, otherwise y = 0`.\n\n Example usage:\n\n >>> # real number\n >>> tf.math.sign([0., 2., -3.])\n \n\n >>> # complex number\n >>> tf.math.sign([1 + 1j, 0 + 0j])\n \n\n Args:\n x: A Tensor. Must be one of the following types: bfloat16, half, float32,\n float64, int32, int64, complex64, complex128.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor. Has the same type as x.\n\n If x is a SparseTensor, returns SparseTensor(x.indices,\n tf.math.sign(x.values, ...), x.dense_shape).\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sign(x.values, ...), x.dense_shape)`", "desc": "Returns an element-wise indication of the sign of a number.", "type": "API"}, {"name": "tf.compat.v1.math.sin", "docs": "Computes sine of x element-wise.\n\n Given an input tensor, this function computes sine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10, float(\"inf\")])\n tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.sinh", "docs": "Computes hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic sine of every\n element in the tensor. Input range is `[-inf,inf]` and output range\n is `[-inf,inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.sobol_sample", "docs": "Generates points from the Sobol sequence.\n\n Creates a Sobol sequence with `num_results` samples. Each sample has dimension\n `dim`. Skips the first `skip` samples.\n\n Args:\n dim: Positive scalar `Tensor` representing each sample's dimension.\n num_results: Positive scalar `Tensor` of dtype int32. The number of Sobol\n points to return in the output.\n skip: (Optional) Positive scalar `Tensor` of dtype int32. The number of\n initial points of the Sobol sequence to skip. Default value is 0.\n dtype: (Optional) The `tf.Dtype` of the sample. One of: `tf.float32` or\n `tf.float64`. Defaults to `tf.float32`.\n name: (Optional) Python `str` name prefixed to ops created by this function.\n\n Returns:\n `Tensor` of samples from Sobol sequence with `shape` [num_results, dim].\n ", "desc": "Generates points from the Sobol sequence.", "type": "API"}, {"name": "tf.compat.v1.math.softmax", "docs": "Computes softmax activations.\n\n Used for multi-class predictions. The sum of all outputs generated by softmax\n is 1.\n\n This function performs the equivalent of\n\n ```python\n softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)\n ```\n Example usage:\n\n >>> softmax = tf.nn.softmax([-1, 0., 1.])\n >>> softmax\n \n >>> sum(softmax)\n \n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type and shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes softmax activations.", "type": "API"}, {"name": "tf.compat.v1.math.softplus", "docs": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.\n\n `softplus` is a smooth approximation of `relu`. Like `relu`, `softplus` always\n takes on positive values.\n\n \n\n Example:\n\n >>> import tensorflow as tf\n >>> tf.math.softplus(tf.range(0, 2, dtype=tf.float32)).numpy()\n array([0.6931472, 1.3132616], dtype=float32)\n\n Args:\n features: `Tensor`\n name: Optional: name to associate with this operation.\n Returns:\n `Tensor`\n ", "desc": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.math.softsign", "docs": "Computes softsign: `features / (abs(features) + 1)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softsign: `features / (abs(features) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.math.special", "docs": "Public API for tf.math.special namespace.\n", "desc": "Public API for tf.math.special namespace.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_i0", "docs": "Computes the Bessel i0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `i0e(x)` instead.\n\n >>> tf.math.special.bessel_i0([-1., -0.5, 0.5, 1.]).numpy()\n array([1.26606588, 1.06348337, 1.06348337, 1.26606588], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0\n @end_compatibility\n ", "desc": "Computes the Bessel i0 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_i0e", "docs": "Computes the Bessel i0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_i0e([-1., -0.5, 0.5, 1.]).numpy()\n array([0.46575961, 0.64503527, 0.64503527, 0.46575961], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i0e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i0e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_i1", "docs": "Computes the Bessel i1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `i1e(x)` instead.\n\n >>> tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1\n @end_compatibility\n ", "desc": "Computes the Bessel i1 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_i1e", "docs": "Computes the Bessel i1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_i1e([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.20791042, -0.15642083, 0.15642083, 0.20791042], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i1e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i1e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_j0", "docs": "Computes the Bessel j0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_j0([0.5, 1., 2., 4.]).numpy()\n array([ 0.93846981, 0.76519769, 0.22389078, -0.39714981], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.j0\n @end_compatibility\n ", "desc": "Computes the Bessel j0 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_j1", "docs": "Computes the Bessel j1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_j1([0.5, 1., 2., 4.]).numpy()\n array([ 0.24226846, 0.44005059, 0.57672481, -0.06604333], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.j1\n @end_compatibility\n ", "desc": "Computes the Bessel j1 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_k0", "docs": "Computes the Bessel k0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `k0e(x)` instead.\n\n >>> tf.math.special.bessel_k0([0.5, 1., 2., 4.]).numpy()\n array([0.92441907, 0.42102444, 0.11389387, 0.01115968], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k0\n @end_compatibility\n ", "desc": "Computes the Bessel k0 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_k0e", "docs": "Computes the Bessel k0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_k0e([0.5, 1., 2., 4.]).numpy()\n array([1.52410939, 1.14446308, 0.84156822, 0.60929767], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k0e\n @end_compatibility\n ", "desc": "Computes the Bessel k0e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_k1", "docs": "Computes the Bessel k1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `k1e(x)` instead.\n\n >>> tf.math.special.bessel_k1([0.5, 1., 2., 4.]).numpy()\n array([1.65644112, 0.60190723, 0.13986588, 0.0124835 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k1\n @end_compatibility\n ", "desc": "Computes the Bessel k1 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_k1e", "docs": "Computes the Bessel k1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_k1e([0.5, 1., 2., 4.]).numpy()\n array([2.73100971, 1.63615349, 1.03347685, 0.68157595], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k1e\n @end_compatibility\n ", "desc": "Computes the Bessel k1e function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_y0", "docs": "Computes the Bessel y0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_y0([0.5, 1., 2., 4.]).numpy()\n array([-0.44451873, 0.08825696, 0.51037567, -0.01694074], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.y0\n @end_compatibility\n ", "desc": "Computes the Bessel y0 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.bessel_y1", "docs": "Computes the Bessel y1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_y1([0.5, 1., 2., 4.]).numpy()\n array([-1.47147239, -0.78121282, -0.10703243, 0.39792571], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.y1\n @end_compatibility\n ", "desc": "Computes the Bessel y1 function of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.dawsn", "docs": "Computes Dawson's integral of `x` element-wise.\n\n Dawson's integral is defined as `exp(-x**2)` times the integral of\n `exp(t**2)` from `0` to `x`, with the domain of definition all real numbers.\n\n Dawson's function is odd.\n >>> tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5380795, -0.4244364, 0.4244364, 0.5380795], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.dawsn\n @end_compatibility\n ", "desc": "Computes Dawson's integral of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.expint", "docs": "Computes the Exponential integral of `x` element-wise.\n\n The Exponential integral is defined as the integral of `exp(t) / t` from\n `-inf` to `x`, with the domain of definition all positive real numbers.\n\n >>> tf.math.special.expint([1., 1.1, 2.1, 4.1]).numpy()\n array([ 1.8951179, 2.1673784, 5.3332353, 21.048464], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.expi\n @end_compatibility\n ", "desc": "Computes the Exponential integral of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.fresnel_cos", "docs": "Computes Fresnel's cosine integral of `x` element-wise.\n\n The Fresnel cosine integral is defined as the integral of `cos(t^2)` from\n `0` to `x`, with the domain of definition all real numbers.\n\n The Fresnel cosine integral is odd.\n >>> tf.math.special.fresnel_cos([-1., -0.1, 0.1, 1.]).numpy()\n array([-0.7798934 , -0.09999753, 0.09999753, 0.7798934 ], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.fresnel second output.\n @end_compatibility\n ", "desc": "Computes Fresnel's cosine integral of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.fresnel_sin", "docs": "Computes Fresnel's sine integral of `x` element-wise.\n\n The Fresnel sine integral is defined as the integral of `sin(t^2)` from\n `0` to `x`, with the domain of definition all real numbers.\n\n >>> tf.math.special.fresnel_sin([-1., -0.1, 0.1, 1.]).numpy()\n array([-0.43825912, -0.00052359, 0.00052359, 0.43825912], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.fresnel first output.\n @end_compatibility\n ", "desc": "Computes Fresnel's sine integral of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.special.spence", "docs": "Computes Spence's integral of `x` element-wise.\n\n Spence's integral is defined as the integral of `log(t) / (1 - t)` from\n `1` to `x`, with the domain of definition all non-negative real numbers.\n\n >>> tf.math.special.spence([0.5, 1., 2., 3.]).numpy()\n array([ 0.58224034, 0. , -0.82246685, -1.4367464], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.spence\n @end_compatibility\n ", "desc": "Computes Spence's integral of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.sqrt", "docs": "Computes element-wise square root of the input tensor.\n\n Note: This operation does not support integer types.\n\n >>> x = tf.constant([[4.0], [16.0]])\n >>> tf.sqrt(x)\n \n >>> y = tf.constant([[-4.0], [16.0]])\n >>> tf.sqrt(y)\n \n >>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)\n >>> tf.sqrt(z)\n \n\n Note: In order to support complex type, please provide an input tensor\n of `complex64` or `complex128`.\n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of same size, type and sparsity as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape)`", "desc": "Computes element-wise square root of the input tensor.", "type": "API"}, {"name": "tf.compat.v1.math.square", "docs": "Computes square of x element-wise.\n\n I.e., \\\\(y = x * x = x^2\\\\).\n\n >>> tf.math.square([-2., 0., 3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)`", "desc": "Computes square of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.squared_difference", "docs": "Returns conj(x - y)(x - y) element-wise.\n\n *NOTE*: `math.squared_difference` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns conj(x - y)(x - y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.subtract", "docs": "Returns x - y element-wise.\n\n *NOTE*: `tf.subtract` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Both input and output have a range `(-inf, inf)`.\n\n Example usages below.\n\n Subtract operation between an array and a scalar:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.subtract(x, y)\n \n >>> tf.subtract(y, x)\n \n\n Note that binary `-` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x - y\n \n\n Subtract operation between an array and a tensor of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([5, 4, 3, 2, 1])\n >>> tf.subtract(y, x)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**8 + 1, 2**8 + 2]\n >>> tf.subtract(x, y)\n \n\n When subtracting two input values of different shapes, `tf.subtract` follows the\n [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules)\n . The two input array shapes are compared element-wise. Starting with the\n trailing dimensions, the two dimensions either have to be equal or one of them\n needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(2, 1, 3)\n >>> tf.subtract(x, y)\n \n\n Example with inputs of different dimensions:\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(1, 6)\n >>> tf.subtract(x, y)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x - y element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.tan", "docs": "Computes tan of x element-wise.\n\n Given an input tensor, this function computes tangent of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `(-inf, inf)`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes tan of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.math.top_k", "docs": "Finds values and indices of the `k` largest entries for the last dimension.\n\n If the input is a vector (rank=1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n >>> result = tf.math.top_k([1, 2, 98, 1, 1, 99, 3, 1, 3, 96, 4, 1],\n ... k=3)\n >>> result.values.numpy()\n array([99, 98, 96], dtype=int32)\n >>> result.indices.numpy()\n array([5, 2, 9], dtype=int32)\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n >>> input = tf.random.normal(shape=(3,4,5,6))\n >>> k = 2\n >>> values, indices = tf.math.top_k(input, k=k)\n >>> values.shape.as_list()\n [3, 4, 5, 2]\n >>>\n >>> values.shape == indices.shape == input.shape[:-1] + [k]\n True\n\n The indices can be used to `gather` from a tensor who's shape matches `input`.\n\n >>> gathered_values = tf.gather(input, indices, batch_dims=-1)\n >>> assert tf.reduce_all(gathered_values == values)\n\n If two elements are equal, the lower-index element appears first.\n\n >>> result = tf.math.top_k([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],\n ... k=3)\n >>> result.indices.numpy()\n array([0, 1, 3], dtype=int32)\n\n Args:\n input: 1-D or higher `Tensor` with last dimension at least `k`.\n k: 0-D `int32` `Tensor`. Number of top elements to look for along the last\n dimension (along each row for matrices).\n sorted: If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: Optional name for the operation.\n\n Returns:\n A tuple with two named fields:\n values: The `k` largest elements along each last dimensional slice.\n indices: The indices of `values` within the last dimension of `input`.\n ", "desc": "Finds values and indices of the `k` largest entries for the last dimension.", "type": "API"}, {"name": "tf.compat.v1.math.truediv", "docs": "Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 division operator semantics where all integer\n arguments are cast to floating types first. This op is generated by normal\n `x / y` division in Python 3 and in Python 2.7 with\n `from __future__ import division`. If you want integer division that rounds\n down, use `x // y` or `tf.math.floordiv`.\n\n `x` and `y` must have the same numeric type. If the inputs are floating\n point, the output will have the same type. If the inputs are integral, the\n inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`\n and `int64` (matching the behavior of Numpy).\n\n Args:\n x: `Tensor` numerator of numeric type.\n y: `Tensor` denominator of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` evaluated in floating point.\n\n Raises:\n TypeError: If `x` and `y` have different dtypes.\n ", "desc": "Divides x / y elementwise (using Python 3 division operator semantics).", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the maximum such that:\n\n \\\\(output_i = \\max_{j...} data[j...]\\\\) where max is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the maximum is empty for a given segment ID `i`, it outputs the smallest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::lowest()`.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Instead of computing the sum over segments, it computes the mean of all\n entries belonging to a segment such that:\n\n \\\\(output_i = 1/N_i \\sum_{j...} data[j...]\\\\) where the sum is over tuples\n `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the number of\n occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the minimum such that:\n\n \\\\(output_i = \\min_{j...} data_[j...]\\\\) where min is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the minimum is empty for a given segment ID `i`, it outputs the largest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::max()`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the product of all\n entries belonging to a segment such that:\n\n \\\\(output_i = \\prod_{j...} data[j...]\\\\) where the product is over tuples\n `j...` such that `segment_ids[j...] == i`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n If there is no entry for a given segment ID `i`, it outputs 1.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_sqrt_n", "docs": "Computes the sum along segments of a tensor divided by the sqrt(N).\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Additionally to computing the sum over segments, it divides the results by\n sqrt(N).\n\n \\\\(output_i = 1/sqrt(N_i) \\sum_{j...} data[j...]\\\\) where the sum is over\n tuples `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the\n number of occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n Note that this op only supports floating point and complex dtypes,\n due to tf.sqrt only supporting these types.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be in the range `[0, num_segments)`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the sum along segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.compat.v1.math.unsorted_segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output[i] = \\sum_{j...} data[j...]\\\\) where the sum is over tuples `j...` such\n that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`\n need not be sorted and need not cover all values in the full\n range of valid values.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n If the given segment ID `i` is negative, the value is dropped and will not be\n added to the sum of the segment.\n\n `num_segments` should equal the number of distinct segment IDs.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n >>> c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]]\n >>> tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.math.xdivy", "docs": "Returns 0 if x == 0, and x / y otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x / y otherwise, elementwise.", "type": "API"}, {"name": "tf.compat.v1.math.xlog1py", "docs": "Compute x * log1p(y).\n\n Given `x` and `y`, compute `x * log1p(y)`. This function safely returns\n zero when `x = 0`, no matter what the value of `y` is.\n\n Example:\n\n >>> tf.math.xlog1py(0., 1.)\n \n >>> tf.math.xlog1py(1., 1.)\n \n >>> tf.math.xlog1py(2., 2.)\n \n >>> tf.math.xlog1py(0., -1.)\n \n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n y: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n `x * log1p(y)`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.xlog1py\n @end_compatibility\n ", "desc": "Compute x * log1p(y).", "type": "API"}, {"name": "tf.compat.v1.math.xlogy", "docs": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.", "type": "API"}, {"name": "tf.compat.v1.math.zero_fraction", "docs": "Returns the fraction of zeros in `value`.\n\n If `value` is empty, the result is `nan`.\n\n This is useful in summaries to measure and report sparsity. For example,\n\n ```python\n z = tf.nn.relu(...)\n summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))\n ```\n\n Args:\n value: A tensor of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n The fraction of zeros in `value`, with type `float32`.\n ", "desc": "Returns the fraction of zeros in `value`.", "type": "API"}, {"name": "tf.compat.v1.math.zeta", "docs": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).\n\n The Hurwitz zeta function is defined as:\n\n\n \\\\(\\zeta(x, q) = \\sum_{n=0}^{\\infty} (q + n)^{-x}\\\\)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n q: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).", "type": "API"}, {"name": "tf.compat.v1.matmul", "docs": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n where the inner 2 dimensions specify valid matrix multiplication dimensions,\n and any further outer dimensions specify matching batch size.\n\n Both matrices must be of the same type. The supported types are:\n `bfloat16`, `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, `complex128`.\n\n Either matrix can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the matrices contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices (rank-2 tensors) with\n datatypes `bfloat16` or `float32`.\n\n A simple 2-D tensor matrix multiplication:\n\n >>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n >>> a # 2-D tensor\n \n >>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])\n >>> b # 2-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n A batch matrix multiplication with batch shape [2]:\n\n >>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> a # 3-D tensor\n \n >>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])\n >>> b # 3-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n Since python >= 3.5 the @ operator is supported\n (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow,\n it simply calls the `tf.matmul()` function, so the following lines are\n equivalent:\n\n >>> d = a @ b @ [[10], [11]]\n >>> d = tf.matmul(tf.matmul(a, b), [[10], [11]])\n\n Args:\n a: `tf.Tensor` of type `float16`, `float32`, `float64`, `int32`,\n `complex64`, `complex128` and rank > 1.\n b: `tf.Tensor` with same type and rank as `a`.\n transpose_a: If `True`, `a` is transposed before multiplication.\n transpose_b: If `True`, `b` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n adjoint_b: If `True`, `b` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n output_type: The output datatype if needed. Defaults to None in which case\n the output_type is the same as input type. Currently only works when input\n tensors are type (u)int8 and output_type can be int32.\n name: Name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same type as `a` and `b` where each inner-most matrix\n is the product of the corresponding matrices in `a` and `b`, e.g. if all\n transpose or adjoint attributes are `False`:\n\n `output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`,\n for all indices `i`, `j`.\n\n Note: This is matrix product, not element-wise product.\n\n\n Raises:\n ValueError: If `transpose_a` and `adjoint_a`, or `transpose_b` and\n `adjoint_b` are both set to `True`.\n TypeError: If output_type is specified but the types of `a`, `b` and\n `output_type` is not (u)int8, (u)int8 and int32.\n ", "desc": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.compat.v1.matrix_band_part", "docs": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.\n\n The `band` part is computed as follows:\n Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a\n tensor with the same shape where\n\n `band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.\n\n The indicator function\n\n `in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) &&\n (num_upper < 0 || (n-m) <= num_upper)`.\n\n For example:\n\n ```\n # if 'input' is [[ 0, 1, 2, 3]\n # [-1, 0, 1, 2]\n # [-2, -1, 0, 1]\n # [-3, -2, -1, 0]],\n\n tf.linalg.band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]\n [-1, 0, 1, 2]\n [ 0, -1, 0, 1]\n [ 0, 0, -1, 0]],\n\n tf.linalg.band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]\n [-1, 0, 1, 0]\n [-2, -1, 0, 1]\n [ 0, -2, -1, 0]]\n ```\n\n Useful special cases:\n\n ```\n tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.\n tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.\n tf.linalg.band_part(input, 0, 0) ==> Diagonal.\n ```\n\n Args:\n input: A `Tensor`. Rank `k` tensor.\n num_lower: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D tensor. Number of subdiagonals to keep. If negative, keep entire\n lower triangle.\n num_upper: A `Tensor`. Must have the same type as `num_lower`.\n 0-D tensor. Number of superdiagonals to keep. If negative, keep\n entire upper triangle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.", "type": "API"}, {"name": "tf.compat.v1.matrix_determinant", "docs": "Computes the determinant of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor containing the determinants\n for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the determinant of one or more square matrices.", "type": "API"}, {"name": "tf.compat.v1.matrix_diag", "docs": "Returns a batched diagonal tensor with given batched diagonal values.\n\n Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th\n diagonals of a matrix, with everything else padded with `padding`. `num_rows`\n and `num_cols` specify the dimension of the innermost matrix of the output. If\n both are not specified, the op assumes the innermost matrix is square and\n infers its size from `k` and the innermost dimension of `diagonal`. If only\n one of them is specified, the op assumes the unspecified value is the smallest\n possible based on other criteria.\n\n Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor\n has rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only\n one diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has\n rank `r` with shape `[I, J, ..., L, num_rows, num_cols]`.\n\n The second innermost dimension of `diagonal` has double meaning. When `k` is\n scalar or `k[0] == k[1]`, `M` is part of the batch size [I, J, ..., M], and\n the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper\n padding_value ; otherwise\n ```\n\n Otherwise, `M` is treated as the number of diagonals for the matrix in the\n same batch (`M = k[1]-k[0]+1`), and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n padding_value ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4)\n [5, 6, 7, 8]])\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4)\n [0, 2, 0, 0],\n [0, 0, 3, 0],\n [0, 0, 0, 4]],\n [[5, 0, 0, 0],\n [0, 6, 0, 0],\n [0, 0, 7, 0],\n [0, 0, 0, 8]]]\n\n # A superdiagonal (per batch).\n diagonal = np.array([[1, 2, 3], # Input shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_diag(diagonal, k = 1)\n ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4)\n [0, 0, 2, 0],\n [0, 0, 0, 3],\n [0, 0, 0, 0]],\n [[0, 4, 0, 0],\n [0, 0, 5, 0],\n [0, 0, 0, 6],\n [0, 0, 0, 0]]]\n\n # A tridiagonal band (per batch).\n diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [0, 4, 5]],\n [[2, 3, 0],\n [6, 7, 9],\n [0, 9, 1]]])\n tf.matrix_diag(diagonals, k = (-1, 1))\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 2, 3],\n [6, 7, 9],\n [9, 1, 0]]])\n tf.matrix_diag(diagonals, k = (-1, 1), align=\"RIGHT_LEFT\")\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # Rectangular matrix.\n diagonal = np.array([1, 2]) # Input shape: (2)\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)\n ==> [[0, 0, 0, 0], # Output shape: (3, 4)\n [1, 0, 0, 0],\n [0, 2, 0, 0]]\n\n # Rectangular matrix with inferred num_cols and padding_value = 9.\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)\n ==> [[9, 9], # Output shape: (3, 2)\n [1, 9],\n [9, 2]]\n ```\n\n Args:\n diagonal: A `Tensor` with `rank k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n num_rows: The number of rows of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n num_cols: The number of columns of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with given batched diagonal values.", "type": "API"}, {"name": "tf.compat.v1.matrix_diag_part", "docs": "Returns the batched diagonal part of a batched tensor.\n\n Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched\n `input`.\n\n Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`.\n Let `max_diag_len` be the maximum length among all diagonals to be extracted,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n Let `num_diags` be the number of diagonals to extract,\n `num_diags = k[1] - k[0] + 1`.\n\n If `num_diags == 1`, the output tensor is of rank `r - 1` with shape\n `[I, J, ..., L, max_diag_len]` and values:\n\n ```\n diagonal[i, j, ..., l, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.\n\n Otherwise, the output tensor has rank `r` with dimensions\n `[I, J, ..., L, num_diags, max_diag_len]` with values:\n\n ```\n diagonal[i, j, ..., l, m, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)\n [5, 6, 7, 8],\n [9, 8, 7, 6]],\n [[5, 4, 3, 2],\n [1, 2, 3, 4],\n [5, 6, 7, 8]]])\n\n # A main diagonal from each batch.\n tf.linalg.diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)\n [5, 2, 7]]\n\n # A superdiagonal from each batch.\n tf.linalg.diag_part(input, k = 1)\n ==> [[2, 7, 6], # Output shape: (2, 3)\n [4, 3, 8]]\n\n # A band from each batch.\n tf.linalg.diag_part(input, k = (-1, 2))\n ==> [[[3, 8, 0], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [0, 5, 8]],\n [[3, 4, 0],\n [4, 3, 8],\n [5, 2, 7],\n [0, 1, 6]]]\n\n # RIGHT_LEFT alignment.\n tf.linalg.diag_part(input, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[0, 3, 8], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [5, 8, 0]],\n [[0, 3, 4],\n [4, 3, 8],\n [5, 2, 7],\n [1, 6, 0]]]\n\n # max_diag_len can be shorter than the main diagonal.\n tf.linalg.diag_part(input, k = (-2, -1))\n ==> [[[5, 8],\n [0, 9]],\n [[1, 6],\n [0, 5]]]\n\n # padding_value = 9\n tf.linalg.diag_part(input, k = (1, 3), padding_value = 9)\n ==> [[[4, 9, 9], # Output shape: (2, 3, 3)\n [3, 8, 9],\n [2, 7, 6]],\n [[2, 9, 9],\n [3, 4, 9],\n [4, 3, 8]]]\n\n ```\n\n Args:\n input: A `Tensor` with `rank k >= 2`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`.\n\n Raises:\n InvalidArgumentError: When `k` is out of bound or when `k[0]>k[1:]`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.compat.v1.matrix_inverse", "docs": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).\n\n \n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the inverse for all input submatrices `[..., :, :]`.\n\n The op uses LU decomposition with partial pivoting to compute the inverses.\n\n If a matrix is not invertible there is no guarantee what the op does. It\n may detect the condition and raise an exception or it may simply return a\n garbage result.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).", "type": "API"}, {"name": "tf.compat.v1.matrix_set_diag", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the specified diagonals of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n `input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or\n `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`.\n Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`.\n `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`.\n `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n\n The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`.\n If `k` is scalar or `k[0] == k[1]`:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n\n Otherwise,\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)\n [7, 7, 7, 7],\n [7, 7, 7, 7]],\n [[7, 7, 7, 7],\n [7, 7, 7, 7],\n [7, 7, 7, 7]]])\n diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_set_diag(input, diagonal)\n ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [7, 2, 7, 7],\n [7, 7, 3, 7]],\n [[4, 7, 7, 7],\n [7, 5, 7, 7],\n [7, 7, 6, 7]]]\n\n # A superdiagonal (per batch).\n tf.matrix_set_diag(input, diagonal, k = 1)\n ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)\n [7, 7, 2, 7],\n [7, 7, 7, 3]],\n [[7, 4, 7, 7],\n [7, 7, 5, 7],\n [7, 7, 7, 6]]]\n\n # A band of diagonals.\n diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [0, 4, 5]],\n [[1, 2, 0],\n [5, 6, 4],\n [6, 1, 2],\n [0, 3, 4]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2))\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 1, 2],\n [5, 6, 4],\n [6, 1, 2],\n [3, 4, 0]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n ```\n\n Args:\n input: A `Tensor` with rank `k + 1`, where `k >= 1`.\n diagonal: A `Tensor` with rank `k`, when `d_lower == d_upper`, or `k + 1`,\n otherwise. `k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.compat.v1.matrix_solve", "docs": "Solves systems of linear equations.\n\n `Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is\n a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix\n satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.\n If `adjoint` is `True` then each output matrix satisfies\n `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n adjoint: An optional `bool`. Defaults to `False`.\n Boolean indicating whether to solve with `matrix` or its (block-wise)\n adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves systems of linear equations.", "type": "API"}, {"name": "tf.compat.v1.matrix_solve_ls", "docs": "Solves one or more linear least-squares problems.\n\n `matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose\n inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a\n `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K`\n matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares\n sense.\n\n Below we will use the following notation for each pair of matrix and\n right-hand sides in the batch:\n\n `matrix`=\\\\(A \\in \\Re^{m \\times n}\\\\),\n `rhs`=\\\\(B \\in \\Re^{m \\times k}\\\\),\n `output`=\\\\(X \\in \\Re^{n \\times k}\\\\),\n `l2_regularizer`=\\\\(\\lambda\\\\).\n\n If `fast` is `True`, then the solution is computed by solving the normal\n equations using Cholesky decomposition. Specifically, if \\\\(m \\ge n\\\\) then\n \\\\(X = (A^T A + \\lambda I)^{-1} A^T B\\\\), which solves the least-squares\n problem \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||A Z - B||_F^2 +\n \\lambda ||Z||_F^2\\\\). If \\\\(m \\lt n\\\\) then `output` is computed as\n \\\\(X = A^T (A A^T + \\lambda I)^{-1} B\\\\), which (for \\\\(\\lambda = 0\\\\)) is\n the minimum-norm solution to the under-determined linear system, i.e.\n \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||Z||_F^2 \\\\), subject to\n \\\\(A Z = B\\\\). Notice that the fast path is only numerically stable when\n \\\\(A\\\\) is numerically full rank and has a condition number\n \\\\(\\mathrm{cond}(A) \\lt \\frac{1}{\\sqrt{\\epsilon_{mach}}}\\\\) or\\\\(\\lambda\\\\)\n is sufficiently large.\n\n If `fast` is `False` an algorithm based on the numerically robust complete\n orthogonal decomposition is used. This computes the minimum-norm\n least-squares solution, even when \\\\(A\\\\) is rank deficient. This path is\n typically 6-7 times slower than the fast path. If `fast` is `False` then\n `l2_regularizer` is ignored.\n\n Args:\n matrix: `Tensor` of shape `[..., M, N]`.\n rhs: `Tensor` of shape `[..., M, K]`.\n l2_regularizer: 0-D `double` `Tensor`. Ignored if `fast=False`.\n fast: bool. Defaults to `True`.\n name: string, optional name of the operation.\n\n Returns:\n output: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form\n `M`-by-`K` matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least\n squares sense.\n\n Raises:\n NotImplementedError: linalg.lstsq is currently disabled for complex128\n and l2_regularizer != 0 due to poor accuracy.\n ", "desc": "Solves one or more linear least-squares problems.", "type": "API"}, {"name": "tf.compat.v1.matrix_square_root", "docs": "Computes the matrix square root of one or more square matrices:\n\n matmul(sqrtm(A), sqrtm(A)) = A\n\n The input matrix should be invertible. If the input matrix is real, it should\n have no eigenvalues which are real and negative (pairs of complex conjugate\n eigenvalues are allowed).\n\n The matrix square root is computed by first reducing the matrix to\n quasi-triangular form with the real Schur decomposition. The square root\n of the quasi-triangular matrix is then computed directly. Details of\n the algorithm can be found in: Nicholas J. Higham, \"Computing real\n square roots of a real matrix\", Linear Algebra Appl., 1987.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the matrix square root for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix square root of one or more square matrices:", "type": "API"}, {"name": "tf.compat.v1.matrix_transpose", "docs": "Transposes last two dimensions of tensor `a`.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.linalg.matrix_transpose(x) # [[1, 4],\n # [2, 5],\n # [3, 6]]\n\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n\n # Matrix with two batch dimensions.\n # x.shape is [1, 2, 3, 4]\n # tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3]\n ```\n\n Note that `tf.matmul` provides kwargs allowing for transpose of arguments.\n This is done with minimal cost, and is preferable to using this function. E.g.\n\n ```python\n # Good! Transpose is taken at minimal additional cost.\n tf.matmul(matrix, b, transpose_b=True)\n\n # Inefficient!\n tf.matmul(matrix, tf.linalg.matrix_transpose(b))\n ```\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, `linalg.matrix_transpose` returns a new\n tensor with the items permuted.\n @end_compatibility\n\n Args:\n a: A `Tensor` with `rank >= 2`.\n name: A name for the operation (optional).\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.linalg.matrix_transpose(input)).\n\n Returns:\n A transposed batch matrix `Tensor`.\n\n Raises:\n ValueError: If `a` is determined statically to have `rank < 2`.\n ", "desc": "Transposes last two dimensions of tensor `a`.", "type": "API"}, {"name": "tf.compat.v1.matrix_triangular_solve", "docs": "Solve systems of linear equations with upper or lower triangular matrices.\n\n `matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form\n square matrices. If `lower` is `True` then the strictly upper triangular part\n of each inner-most matrix is assumed to be zero and not accessed. If `lower`\n is `False` then the strictly lower triangular part of each inner-most matrix\n is assumed to be zero and not accessed. `rhs` is a tensor of shape\n `[..., M, N]`.\n\n The output is a tensor of shape `[..., M, N]`. If `adjoint` is `True` then the\n innermost matrices in output satisfy matrix equations `\n sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`.\n If `adjoint` is `False` then the\n innermost matrices in output satisfy matrix equations\n `sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.\n\n Example:\n\n >>> a = tf.constant([[3, 0, 0, 0],\n ... [2, 1, 0, 0],\n ... [1, 0, 1, 0],\n ... [1, 1, 1, 1]], dtype=tf.float32)\n\n >>> b = tf.constant([[4], [2], [4], [2]], dtype=tf.float32)\n >>> x = tf.linalg.triangular_solve(a, b, lower=True)\n >>> x\n \n >>> tf.matmul(a, x)\n \n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`,\n `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M,\n N]`.\n lower: An optional `bool`. Defaults to `True`. Boolean indicating whether\n the innermost matrices in matrix are lower or upper triangular.\n adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether\n to solve with matrix or its (block-wise) adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as matrix, and shape is `[..., M, N]`.\n\n ", "desc": "Solve systems of linear equations with upper or lower triangular matrices.", "type": "API"}, {"name": "tf.compat.v1.maximum", "docs": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.\n\n Example:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-2., 0., 2., 5.])\n >>> tf.math.maximum(x, y)\n \n\n Note that `maximum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.maximum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_max`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.meshgrid", "docs": "Broadcasts parameters for evaluation on an N-D grid.\n\n Given N one-dimensional coordinate arrays `*args`, returns a list `outputs`\n of N-D coordinate arrays for evaluating expressions on an N-D grid.\n\n Notes:\n\n `meshgrid` supports cartesian ('xy') and matrix ('ij') indexing conventions.\n When the `indexing` argument is set to 'xy' (the default), the broadcasting\n instructions for the first two dimensions are swapped.\n\n Examples:\n\n Calling `X, Y = meshgrid(x, y)` with the tensors\n\n ```python\n x = [1, 2, 3]\n y = [4, 5, 6]\n X, Y = tf.meshgrid(x, y)\n # X = [[1, 2, 3],\n # [1, 2, 3],\n # [1, 2, 3]]\n # Y = [[4, 4, 4],\n # [5, 5, 5],\n # [6, 6, 6]]\n ```\n\n Args:\n *args: `Tensor`s with rank 1.\n **kwargs:\n - indexing: Either 'xy' or 'ij' (optional, default: 'xy').\n - name: A name for the operation (optional).\n\n Returns:\n outputs: A list of N `Tensor`s with rank N.\n\n Raises:\n TypeError: When no keyword arguments (kwargs) are passed.\n ValueError: When indexing keyword argument is not one of `xy` or `ij`.\n ", "desc": "Broadcasts parameters for evaluation on an N-D grid.", "type": "API"}, {"name": "tf.compat.v1.MetaGraphDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.MetaGraphDef.CollectionDefEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.MetaGraphDef.MetaInfoDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.MetaGraphDef.MetaInfoDef.FunctionAliasesEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.MetaGraphDef.SignatureDefEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.metrics", "docs": "Evaluation-related metrics.\n", "desc": "Evaluation-related metrics.", "type": "API"}, {"name": "tf.compat.v1.metrics.accuracy", "docs": "Calculates how often `predictions` matches `labels`.\n\n The `accuracy` function creates two local variables, `total` and\n `count` that are used to compute the frequency with which `predictions`\n matches `labels`. This frequency is ultimately returned as `accuracy`: an\n idempotent operation that simply divides `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the `accuracy`.\n Internally, an `is_correct` operation computes a `Tensor` with elements 1.0\n where the corresponding elements of `predictions` and `labels` match and 0.0\n otherwise. Then `update_op` increments `total` with the reduced sum of the\n product of `weights` and `is_correct`, and it increments `count` with the\n reduced sum of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose shape matches\n `predictions`.\n predictions: The predicted values, a `Tensor` of any shape.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `accuracy` should\n be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n accuracy: A `Tensor` representing the accuracy, the value of `total` divided\n by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `accuracy`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n\n @compatibility(TF2)\n `tf.compat.v1.metrics.accuracy` is not compatible with eager\n execution or `tf.function`.\n Please use `tf.keras.metrics.Accuracy` instead for TF2 migration. After\n instantiating a `tf.keras.metrics.Accuracy` object, you can first call the\n `update_state()` method to record the prediction/labels, and then call the\n `result()` method to get the accuracy eagerly. You can also attach it to a\n Keras model when calling the `compile` method. Please refer to [this\n guide](https://www.tensorflow.org/guide/migrate#new-style_metrics_and_losses)\n for more details.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n accuracy, update_op = tf.compat.v1.metrics.accuracy(\n labels=labels,\n predictions=predictions,\n weights=weights,\n metrics_collections=metrics_collections,\n update_collections=update_collections,\n name=name)\n ```\n\n After:\n\n ```python\n m = tf.keras.metrics.Accuracy(\n name=name,\n dtype=None)\n\n m.update_state(\n y_true=labels,\n y_pred=predictions,\n sample_weight=weights)\n\n accuracy = m.result()\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `label` | `y_true` | In `update_state()` method |\n | `predictions` | `y_true` | In `update_state()` method |\n | `weights` | `sample_weight` | In `update_state()` method |\n | `metrics_collections` | Not supported | Metrics should be tracked |\n : : : explicitly or with Keras :\n : : : APIs, for example, :\n : : : [add_metric][add_metric], :\n : : : instead of via collections :\n | `updates_collections` | Not supported | - |\n | `name` | `name` | In constructor |\n\n [add_metric]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_metric\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... logits = [1, 2, 3]\n ... labels = [0, 2, 3]\n ... acc, acc_op = tf.compat.v1.metrics.accuracy(logits, labels)\n ... global_init = tf.compat.v1.global_variables_initializer()\n ... local_init = tf.compat.v1.local_variables_initializer()\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> sess.run([global_init, local_init])\n >>> print(sess.run([acc, acc_op]))\n [0.0, 0.66667]\n\n\n After:\n\n >>> m = tf.keras.metrics.Accuracy()\n >>> m.update_state([1, 2, 3], [0, 2, 3])\n >>> m.result().numpy()\n 0.66667\n\n ```python\n # Used within Keras model\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Accuracy()])\n ```\n\n @end_compatibility\n ", "desc": "Calculates how often `predictions` matches `labels`.", "type": "API"}, {"name": "tf.compat.v1.metrics.auc", "docs": "Computes the approximate AUC via a Riemann sum. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThe value of AUC returned by this may race with the update so this is deprecated. Please use tf.keras.metrics.AUC instead.\n\nThe `auc` function creates four local variables, `true_positives`,\n`true_negatives`, `false_positives` and `false_negatives` that are used to\ncompute the AUC. To discretize the AUC curve, a linearly spaced set of\nthresholds is used to compute pairs of recall and precision values. The area\nunder the ROC-curve is therefore computed using the height of the recall\nvalues by the false positive rate, while the area under the PR-curve is the\ncomputed using the height of the precision values by the recall.\n\nThis value is ultimately returned as `auc`, an idempotent operation that\ncomputes the area under a discretized curve of precision versus recall values\n(computed using the aforementioned variables). The `num_thresholds` variable\ncontrols the degree of discretization with larger numbers of thresholds more\nclosely approximating the true AUC. The quality of the approximation may vary\ndramatically depending on `num_thresholds`.\n\nFor best results, `predictions` should be distributed approximately uniformly\nin the range [0, 1] and not peaked around 0 or 1. The quality of the AUC\napproximation may be poor if this is not the case. Setting `summation_method`\nto 'minoring' or 'majoring' can help quantify the error in the approximation\nby providing lower or upper bound estimate of the AUC. The `thresholds`\nparameter can be used to manually specify thresholds which split the\npredictions more evenly.\n\nFor estimation of the metric over a stream of data, the function creates an\n`update_op` operation that updates these variables and returns the `auc`.\n\nIf `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\nArgs:\n labels: A `Tensor` whose shape matches `predictions`. Will be cast to\n `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n num_thresholds: The number of thresholds to use when discretizing the roc\n curve.\n metrics_collections: An optional list of collections that `auc` should be\n added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n curve: Specifies the name of the curve to be computed, 'ROC' [default] or\n 'PR' for the Precision-Recall-curve.\n name: An optional variable_scope name.\n summation_method: Specifies the Riemann summation method used\n (https://en.wikipedia.org/wiki/Riemann_sum): 'trapezoidal' [default] that\n applies the trapezoidal rule; 'careful_interpolation', a variant of it\n differing only by a more correct interpolation scheme for PR-AUC -\n interpolating (true/false) positives but not the ratio that is precision;\n 'minoring' that applies left summation for increasing intervals and right\n summation for decreasing intervals; 'majoring' that does the opposite.\n Note that 'careful_interpolation' is strictly preferred to 'trapezoidal'\n (to be deprecated soon) as it applies the same method for ROC, and a\n better one (see Davis & Goadrich 2006 for details) for the PR curve.\n thresholds: An optional list of floating point values to use as the\n thresholds for discretizing the curve. If set, the `num_thresholds`\n parameter is ignored. Values should be in [0, 1]. Endpoint thresholds\n equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be\n automatically included with these to correctly handle predictions equal to\n exactly 0 or 1.\n\nReturns:\n auc: A scalar `Tensor` representing the current area-under-curve.\n update_op: An operation that increments the `true_positives`,\n `true_negatives`, `false_positives` and `false_negatives` variables\n appropriately and whose value matches `auc`.\n\nRaises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.", "desc": "Computes the approximate AUC via a Riemann sum. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.metrics.average_precision_at_k", "docs": "Computes average precision@k of predictions with respect to sparse labels.\n\n `average_precision_at_k` creates two local variables,\n `average_precision_at_/total` and `average_precision_at_/max`, that\n are used to compute the frequency. This frequency is ultimately returned as\n `average_precision_at_`: an idempotent operation that simply divides\n `average_precision_at_/total` by `average_precision_at_/max`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `precision_at_`. Internally, a `top_k` operation computes a `Tensor`\n indicating the top `k` `predictions`. Set operations applied to `top_k` and\n `labels` calculate the true positives and false positives weighted by\n `weights`. Then `update_op` increments `true_positive_at_` and\n `false_positive_at_` using these values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: `int64` `Tensor` or `SparseTensor` with shape\n [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies\n num_labels=1. N >= 1 and num_labels is the number of target classes for\n the associated prediction. Commonly, N=1 and `labels` has shape\n [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values\n should be in range [0, num_classes), where num_classes is the last\n dimension of `predictions`. Values outside this range are ignored.\n predictions: Float `Tensor` with shape [D1, ... DN, num_classes] where\n N >= 1. Commonly, N=1 and `predictions` has shape\n [batch size, num_classes]. The final dimension contains the logit values\n for each class. [D1, ... DN] must match `labels`.\n k: Integer, k for @k metric. This will calculate an average precision for\n range `[1,k]`, as documented above.\n weights: `Tensor` whose rank is either 0, or n-1, where n is the rank of\n `labels`. If the latter, it must be broadcastable to `labels` (i.e., all\n dimensions must be either `1`, or the same as the corresponding `labels`\n dimension).\n metrics_collections: An optional list of collections that values should\n be added to.\n updates_collections: An optional list of collections that updates should\n be added to.\n name: Name of new update operation, and namespace for other dependent ops.\n\n Returns:\n mean_average_precision: Scalar `float64` `Tensor` with the mean average\n precision values.\n update: `Operation` that increments variables appropriately, and whose\n value matches `metric`.\n\n Raises:\n ValueError: if k is invalid.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes average precision@k of predictions with respect to sparse labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.false_negatives", "docs": "Computes the total number of false negatives.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n value_tensor: A `Tensor` representing the current value of the metric.\n update_op: An operation that accumulates the error from a batch of data.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match `values`,\n or if either `metrics_collections` or `updates_collections` are not a list\n or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the total number of false negatives.", "type": "API"}, {"name": "tf.compat.v1.metrics.false_negatives_at_thresholds", "docs": "Computes false negatives at provided threshold values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` whose shape matches `predictions`. Will be cast to\n `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `false_negatives`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n false_negatives: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that updates the `false_negatives` variable and\n returns its current value.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes false negatives at provided threshold values.", "type": "API"}, {"name": "tf.compat.v1.metrics.false_positives", "docs": "Sum the weights of false positives.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n value_tensor: A `Tensor` representing the current value of the metric.\n update_op: An operation that accumulates the error from a batch of data.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Sum the weights of false positives.", "type": "API"}, {"name": "tf.compat.v1.metrics.false_positives_at_thresholds", "docs": "Computes false positives at provided threshold values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` whose shape matches `predictions`. Will be cast to\n `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `false_positives`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n false_positives: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that updates the `false_positives` variable and\n returns its current value.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes false positives at provided threshold values.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean", "docs": "Computes the (weighted) mean of the given values.\n\n The `mean` function creates two local variables, `total` and `count`\n that are used to compute the average of `values`. This average is ultimately\n returned as `mean` which is an idempotent operation that simply divides\n `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the `mean`.\n `update_op` increments `total` with the reduced sum of the product of `values`\n and `weights`, and it increments `count` with the reduced sum of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n values: A `Tensor` of arbitrary dimensions.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `values`, and must be broadcastable to `values` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `values` dimension).\n metrics_collections: An optional list of collections that `mean`\n should be added to.\n updates_collections: An optional list of collections that `update_op`\n should be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean: A `Tensor` representing the current mean, the value of `total` divided\n by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `mean_value`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match `values`,\n or if either `metrics_collections` or `updates_collections` are not a list\n or tuple.\n RuntimeError: If eager execution is enabled.\n\n @compatibility(TF2)\n `tf.compat.v1.metrics.mean` is not compatible with eager\n execution or `tf.function`.\n Please use `tf.keras.metrics.Mean` instead for TF2 migration. After\n instantiating a `tf.keras.metrics.Mean` object, you can first call the\n `update_state()` method to record the new values, and then call the\n `result()` method to get the mean eagerly. You can also attach it to a\n Keras model with the `add_metric` method. Please refer to the [migration\n guide](https://www.tensorflow.org/guide/migrate#new-style_metrics_and_losses)\n for more details.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n mean, update_op = tf.compat.v1.metrics.mean(\n values=values,\n weights=weights,\n metrics_collections=metrics_collections,\n update_collections=update_collections,\n name=name)\n ```\n\n After:\n\n ```python\n m = tf.keras.metrics.Mean(\n name=name)\n\n m.update_state(\n values=values,\n sample_weight=weights)\n\n mean = m.result()\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `values` | `values` | In `update_state()` method |\n | `weights` | `sample_weight` | In `update_state()` method |\n | `metrics_collections` | Not supported | Metrics should be tracked |\n : : : explicitly or with Keras :\n : : : APIs, for example, :\n : : : [add_metric][add_metric], :\n : : : instead of via collections :\n | `updates_collections` | Not supported | - |\n | `name` | `name` | In constructor |\n\n [add_metric]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_metric\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... values = [1, 2, 3]\n ... mean, update_op = tf.compat.v1.metrics.mean(values)\n ... global_init = tf.compat.v1.global_variables_initializer()\n ... local_init = tf.compat.v1.local_variables_initializer()\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> sess.run([global_init, local_init])\n >>> sess.run(update_op)\n >>> sess.run(mean)\n 2.0\n\n\n After:\n\n >>> m = tf.keras.metrics.Mean()\n >>> m.update_state([1, 2, 3])\n >>> m.result().numpy()\n 2.0\n\n ```python\n # Used within Keras model\n model.add_metric(tf.keras.metrics.Mean()(values))\n ```\n\n @end_compatibility\n ", "desc": "Computes the (weighted) mean of the given values.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_absolute_error", "docs": "Computes the mean absolute error between the labels and predictions.\n\n The `mean_absolute_error` function creates two local variables,\n `total` and `count` that are used to compute the mean absolute error. This\n average is weighted by `weights`, and it is ultimately returned as\n `mean_absolute_error`: an idempotent operation that simply divides `total` by\n `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `mean_absolute_error`. Internally, an `absolute_errors` operation computes the\n absolute value of the differences between `predictions` and `labels`. Then\n `update_op` increments `total` with the reduced sum of the product of\n `weights` and `absolute_errors`, and it increments `count` with the reduced\n sum of `weights`\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of the same shape as `predictions`.\n predictions: A `Tensor` of arbitrary shape.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that\n `mean_absolute_error` should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_absolute_error: A `Tensor` representing the current mean, the value of\n `total` divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `mean_absolute_error`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the mean absolute error between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_cosine_distance", "docs": "Computes the cosine distance between the labels and predictions.\n\n The `mean_cosine_distance` function creates two local variables,\n `total` and `count` that are used to compute the average cosine distance\n between `predictions` and `labels`. This average is weighted by `weights`,\n and it is ultimately returned as `mean_distance`, which is an idempotent\n operation that simply divides `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `mean_distance`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of arbitrary shape.\n predictions: A `Tensor` of the same shape as `labels`.\n dim: The dimension along which the cosine distance is computed.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension). Also,\n dimension `dim` must be `1`.\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_distance: A `Tensor` representing the current mean, the value of\n `total` divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the cosine distance between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_iou", "docs": "Calculate per-step mean Intersection-Over-Union (mIOU).\n\n Mean Intersection-Over-Union is a common evaluation metric for\n semantic image segmentation, which first computes the IOU for each\n semantic class and then computes the average over classes.\n IOU is defined as follows:\n IOU = true_positive / (true_positive + false_positive + false_negative).\n The predictions are accumulated in a confusion matrix, weighted by `weights`,\n and mIOU is then calculated from it.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the `mean_iou`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of ground truth labels with shape [batch size] and of\n type `int32` or `int64`. The tensor will be flattened if its rank > 1.\n predictions: A `Tensor` of prediction results for semantic labels, whose\n shape is [batch size] and type `int32` or `int64`. The tensor will be\n flattened if its rank > 1.\n num_classes: The possible number of labels the prediction task can\n have. This value must be provided, since a confusion matrix of\n dimension = [num_classes, num_classes] will be allocated.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `mean_iou`\n should be added to.\n updates_collections: An optional list of collections `update_op` should be\n added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_iou: A `Tensor` representing the mean intersection-over-union.\n update_op: An operation that increments the confusion matrix.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Calculate per-step mean Intersection-Over-Union (mIOU).", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_per_class_accuracy", "docs": "Calculates the mean of the per-class accuracies.\n\n Calculates the accuracy for each class, then takes the mean of that.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates the accuracy of each class and returns\n them.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of ground truth labels with shape [batch size] and of\n type `int32` or `int64`. The tensor will be flattened if its rank > 1.\n predictions: A `Tensor` of prediction results for semantic labels, whose\n shape is [batch size] and type `int32` or `int64`. The tensor will be\n flattened if its rank > 1.\n num_classes: The possible number of labels the prediction task can\n have. This value must be provided, since two variables with shape =\n [num_classes] will be allocated.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that\n `mean_per_class_accuracy'\n should be added to.\n updates_collections: An optional list of collections `update_op` should be\n added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_accuracy: A `Tensor` representing the mean per class accuracy.\n update_op: An operation that updates the accuracy tensor.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Calculates the mean of the per-class accuracies.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_relative_error", "docs": "Computes the mean relative error by normalizing with the given values.\n\n The `mean_relative_error` function creates two local variables,\n `total` and `count` that are used to compute the mean relative absolute error.\n This average is weighted by `weights`, and it is ultimately returned as\n `mean_relative_error`: an idempotent operation that simply divides `total` by\n `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `mean_reative_error`. Internally, a `relative_errors` operation divides the\n absolute value of the differences between `predictions` and `labels` by the\n `normalizer`. Then `update_op` increments `total` with the reduced sum of the\n product of `weights` and `relative_errors`, and it increments `count` with the\n reduced sum of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of the same shape as `predictions`.\n predictions: A `Tensor` of arbitrary shape.\n normalizer: A `Tensor` of the same shape as `predictions`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that\n `mean_relative_error` should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_relative_error: A `Tensor` representing the current mean, the value of\n `total` divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `mean_relative_error`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the mean relative error by normalizing with the given values.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_squared_error", "docs": "Computes the mean squared error between the labels and predictions.\n\n The `mean_squared_error` function creates two local variables,\n `total` and `count` that are used to compute the mean squared error.\n This average is weighted by `weights`, and it is ultimately returned as\n `mean_squared_error`: an idempotent operation that simply divides `total` by\n `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `mean_squared_error`. Internally, a `squared_error` operation computes the\n element-wise square of the difference between `predictions` and `labels`. Then\n `update_op` increments `total` with the reduced sum of the product of\n `weights` and `squared_error`, and it increments `count` with the reduced sum\n of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of the same shape as `predictions`.\n predictions: A `Tensor` of arbitrary shape.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that\n `mean_squared_error` should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean_squared_error: A `Tensor` representing the current mean, the value of\n `total` divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `mean_squared_error`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the mean squared error between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.metrics.mean_tensor", "docs": "Computes the element-wise (weighted) mean of the given tensors.\n\n In contrast to the `mean` function which returns a scalar with the\n mean, this function returns an average tensor with the same shape as the\n input tensors.\n\n The `mean_tensor` function creates two local variables,\n `total_tensor` and `count_tensor` that are used to compute the average of\n `values`. This average is ultimately returned as `mean` which is an idempotent\n operation that simply divides `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the `mean`.\n `update_op` increments `total` with the reduced sum of the product of `values`\n and `weights`, and it increments `count` with the reduced sum of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n values: A `Tensor` of arbitrary dimensions.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `values`, and must be broadcastable to `values` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `values` dimension).\n metrics_collections: An optional list of collections that `mean`\n should be added to.\n updates_collections: An optional list of collections that `update_op`\n should be added to.\n name: An optional variable_scope name.\n\n Returns:\n mean: A float `Tensor` representing the current mean, the value of `total`\n divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `mean_value`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match `values`,\n or if either `metrics_collections` or `updates_collections` are not a list\n or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the element-wise (weighted) mean of the given tensors.", "type": "API"}, {"name": "tf.compat.v1.metrics.percentage_below", "docs": "Computes the percentage of values less than the given threshold.\n\n The `percentage_below` function creates two local variables,\n `total` and `count` that are used to compute the percentage of `values` that\n fall below `threshold`. This rate is weighted by `weights`, and it is\n ultimately returned as `percentage` which is an idempotent operation that\n simply divides `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `percentage`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n values: A numeric `Tensor` of arbitrary size.\n threshold: A scalar threshold.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `values`, and must be broadcastable to `values` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `values` dimension).\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n percentage: A `Tensor` representing the current mean, the value of `total`\n divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match `values`,\n or if either `metrics_collections` or `updates_collections` are not a list\n or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the percentage of values less than the given threshold.", "type": "API"}, {"name": "tf.compat.v1.metrics.precision", "docs": "Computes the precision of the predictions with respect to the labels.\n\n The `precision` function creates two local variables,\n `true_positives` and `false_positives`, that are used to compute the\n precision. This value is ultimately returned as `precision`, an idempotent\n operation that simply divides `true_positives` by the sum of `true_positives`\n and `false_positives`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `precision`. `update_op` weights each prediction by the corresponding value in\n `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `precision` should\n be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n precision: Scalar float `Tensor` with the value of `true_positives`\n divided by the sum of `true_positives` and `false_positives`.\n update_op: `Operation` that increments `true_positives` and\n `false_positives` variables appropriately and whose value matches\n `precision`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the precision of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.precision_at_k", "docs": "Computes precision@k of the predictions with respect to sparse labels.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is in the top-k highest\n `predictions`, and computing the fraction of them for which `class_id` is\n indeed a correct label.\n If `class_id` is not specified, we'll calculate precision as how often on\n average a class among the top-k classes with the highest predicted values\n of a batch entry is correct and can be found in the label for that entry.\n\n `precision_at_k` creates two local variables,\n `true_positive_at_` and `false_positive_at_`, that are used to compute\n the precision@k frequency. This frequency is ultimately returned as\n `precision_at_`: an idempotent operation that simply divides\n `true_positive_at_` by total (`true_positive_at_` +\n `false_positive_at_`).\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `precision_at_`. Internally, a `top_k` operation computes a `Tensor`\n indicating the top `k` `predictions`. Set operations applied to `top_k` and\n `labels` calculate the true positives and false positives weighted by\n `weights`. Then `update_op` increments `true_positive_at_` and\n `false_positive_at_` using these values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: `int64` `Tensor` or `SparseTensor` with shape\n [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies\n num_labels=1. N >= 1 and num_labels is the number of target classes for\n the associated prediction. Commonly, N=1 and `labels` has shape\n [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values\n should be in range [0, num_classes), where num_classes is the last\n dimension of `predictions`. Values outside this range are ignored.\n predictions: Float `Tensor` with shape [D1, ... DN, num_classes] where\n N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].\n The final dimension contains the logit values for each class. [D1, ... DN]\n must match `labels`.\n k: Integer, k for @k metric.\n class_id: Integer class ID for which we want binary metrics. This should be\n in range [0, num_classes], where num_classes is the last dimension of\n `predictions`. If `class_id` is outside this range, the method returns\n NAN.\n weights: `Tensor` whose rank is either 0, or n-1, where n is the rank of\n `labels`. If the latter, it must be broadcastable to `labels` (i.e., all\n dimensions must be either `1`, or the same as the corresponding `labels`\n dimension).\n metrics_collections: An optional list of collections that values should\n be added to.\n updates_collections: An optional list of collections that updates should\n be added to.\n name: Name of new update operation, and namespace for other dependent ops.\n\n Returns:\n precision: Scalar `float64` `Tensor` with the value of `true_positives`\n divided by the sum of `true_positives` and `false_positives`.\n update_op: `Operation` that increments `true_positives` and\n `false_positives` variables appropriately, and whose value matches\n `precision`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match\n `predictions`, or if either `metrics_collections` or `updates_collections`\n are not a list or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes precision@k of the predictions with respect to sparse labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.precision_at_thresholds", "docs": "Computes precision values for different `thresholds` on `predictions`.\n\n The `precision_at_thresholds` function creates four local variables,\n `true_positives`, `true_negatives`, `false_positives` and `false_negatives`\n for various values of thresholds. `precision[i]` is defined as the total\n weight of values in `predictions` above `thresholds[i]` whose corresponding\n entry in `labels` is `True`, divided by the total weight of values in\n `predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] +\n false_positives[i])`).\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `precision`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `auc` should be\n added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n precision: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that increments the `true_positives`,\n `true_negatives`, `false_positives` and `false_negatives` variables that\n are used in the computation of `precision`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes precision values for different `thresholds` on `predictions`.", "type": "API"}, {"name": "tf.compat.v1.metrics.precision_at_top_k", "docs": "Computes precision@k of the predictions with respect to sparse labels.\n\n Differs from `sparse_precision_at_k` in that predictions must be in the form\n of top `k` class indices, whereas `sparse_precision_at_k` expects logits.\n Refer to `sparse_precision_at_k` for more details.\n\n Args:\n labels: `int64` `Tensor` or `SparseTensor` with shape\n [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies\n num_labels=1. N >= 1 and num_labels is the number of target classes for\n the associated prediction. Commonly, N=1 and `labels` has shape\n [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values\n should be in range [0, num_classes), where num_classes is the last\n dimension of `predictions`. Values outside this range are ignored.\n predictions_idx: Integer `Tensor` with shape [D1, ... DN, k] where\n N >= 1. Commonly, N=1 and predictions has shape [batch size, k].\n The final dimension contains the top `k` predicted class indices.\n [D1, ... DN] must match `labels`.\n k: Integer, k for @k metric. Only used for the default op name.\n class_id: Integer class ID for which we want binary metrics. This should be\n in range [0, num_classes], where num_classes is the last dimension of\n `predictions`. If `class_id` is outside this range, the method returns\n NAN.\n weights: `Tensor` whose rank is either 0, or n-1, where n is the rank of\n `labels`. If the latter, it must be broadcastable to `labels` (i.e., all\n dimensions must be either `1`, or the same as the corresponding `labels`\n dimension).\n metrics_collections: An optional list of collections that values should\n be added to.\n updates_collections: An optional list of collections that updates should\n be added to.\n name: Name of new update operation, and namespace for other dependent ops.\n\n Returns:\n precision: Scalar `float64` `Tensor` with the value of `true_positives`\n divided by the sum of `true_positives` and `false_positives`.\n update_op: `Operation` that increments `true_positives` and\n `false_positives` variables appropriately, and whose value matches\n `precision`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match\n `predictions`, or if either `metrics_collections` or `updates_collections`\n are not a list or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes precision@k of the predictions with respect to sparse labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.recall", "docs": "Computes the recall of the predictions with respect to the labels.\n\n The `recall` function creates two local variables, `true_positives`\n and `false_negatives`, that are used to compute the recall. This value is\n ultimately returned as `recall`, an idempotent operation that simply divides\n `true_positives` by the sum of `true_positives` and `false_negatives`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` that updates these variables and returns the `recall`. `update_op`\n weights each prediction by the corresponding value in `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `recall` should\n be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n recall: Scalar float `Tensor` with the value of `true_positives` divided\n by the sum of `true_positives` and `false_negatives`.\n update_op: `Operation` that increments `true_positives` and\n `false_negatives` variables appropriately and whose value matches\n `recall`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the recall of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.recall_at_k", "docs": "Computes recall@k of the predictions with respect to sparse labels.\n\n If `class_id` is specified, we calculate recall by considering only the\n entries in the batch for which `class_id` is in the label, and computing\n the fraction of them for which `class_id` is in the top-k `predictions`.\n If `class_id` is not specified, we'll calculate recall as how often on\n average a class among the labels of a batch entry is in the top-k\n `predictions`.\n\n `sparse_recall_at_k` creates two local variables,\n `true_positive_at_` and `false_negative_at_`, that are used to compute\n the recall_at_k frequency. This frequency is ultimately returned as\n `recall_at_`: an idempotent operation that simply divides\n `true_positive_at_` by total (`true_positive_at_` +\n `false_negative_at_`).\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `recall_at_`. Internally, a `top_k` operation computes a `Tensor`\n indicating the top `k` `predictions`. Set operations applied to `top_k` and\n `labels` calculate the true positives and false negatives weighted by\n `weights`. Then `update_op` increments `true_positive_at_` and\n `false_negative_at_` using these values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: `int64` `Tensor` or `SparseTensor` with shape\n [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies\n num_labels=1. N >= 1 and num_labels is the number of target classes for\n the associated prediction. Commonly, N=1 and `labels` has shape\n [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values\n should be in range [0, num_classes), where num_classes is the last\n dimension of `predictions`. Values outside this range always count\n towards `false_negative_at_`.\n predictions: Float `Tensor` with shape [D1, ... DN, num_classes] where\n N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].\n The final dimension contains the logit values for each class. [D1, ... DN]\n must match `labels`.\n k: Integer, k for @k metric.\n class_id: Integer class ID for which we want binary metrics. This should be\n in range [0, num_classes), where num_classes is the last dimension of\n `predictions`. If class_id is outside this range, the method returns NAN.\n weights: `Tensor` whose rank is either 0, or n-1, where n is the rank of\n `labels`. If the latter, it must be broadcastable to `labels` (i.e., all\n dimensions must be either `1`, or the same as the corresponding `labels`\n dimension).\n metrics_collections: An optional list of collections that values should\n be added to.\n updates_collections: An optional list of collections that updates should\n be added to.\n name: Name of new update operation, and namespace for other dependent ops.\n\n Returns:\n recall: Scalar `float64` `Tensor` with the value of `true_positives` divided\n by the sum of `true_positives` and `false_negatives`.\n update_op: `Operation` that increments `true_positives` and\n `false_negatives` variables appropriately, and whose value matches\n `recall`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match\n `predictions`, or if either `metrics_collections` or `updates_collections`\n are not a list or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes recall@k of the predictions with respect to sparse labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.recall_at_thresholds", "docs": "Computes various recall values for different `thresholds` on `predictions`.\n\n The `recall_at_thresholds` function creates four local variables,\n `true_positives`, `true_negatives`, `false_positives` and `false_negatives`\n for various values of thresholds. `recall[i]` is defined as the total weight\n of values in `predictions` above `thresholds[i]` whose corresponding entry in\n `labels` is `True`, divided by the total weight of `True` values in `labels`\n (`true_positives[i] / (true_positives[i] + false_negatives[i])`).\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the `recall`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `recall` should be\n added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n recall: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that increments the `true_positives`,\n `true_negatives`, `false_positives` and `false_negatives` variables that\n are used in the computation of `recall`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes various recall values for different `thresholds` on `predictions`.", "type": "API"}, {"name": "tf.compat.v1.metrics.recall_at_top_k", "docs": "Computes recall@k of top-k predictions with respect to sparse labels.\n\n Differs from `recall_at_k` in that predictions must be in the form of top `k`\n class indices, whereas `recall_at_k` expects logits. Refer to `recall_at_k`\n for more details.\n\n Args:\n labels: `int64` `Tensor` or `SparseTensor` with shape\n [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies\n num_labels=1. N >= 1 and num_labels is the number of target classes for\n the associated prediction. Commonly, N=1 and `labels` has shape\n [batch_size, num_labels]. [D1, ... DN] must match `predictions`. Values\n should be in range [0, num_classes), where num_classes is the last\n dimension of `predictions`. Values outside this range always count\n towards `false_negative_at_`.\n predictions_idx: Integer `Tensor` with shape [D1, ... DN, k] where N >= 1.\n Commonly, N=1 and predictions has shape [batch size, k]. The final\n dimension contains the top `k` predicted class indices. [D1, ... DN] must\n match `labels`.\n k: Integer, k for @k metric. Only used for the default op name.\n class_id: Integer class ID for which we want binary metrics. This should be\n in range [0, num_classes), where num_classes is the last dimension of\n `predictions`. If class_id is outside this range, the method returns NAN.\n weights: `Tensor` whose rank is either 0, or n-1, where n is the rank of\n `labels`. If the latter, it must be broadcastable to `labels` (i.e., all\n dimensions must be either `1`, or the same as the corresponding `labels`\n dimension).\n metrics_collections: An optional list of collections that values should\n be added to.\n updates_collections: An optional list of collections that updates should\n be added to.\n name: Name of new update operation, and namespace for other dependent ops.\n\n Returns:\n recall: Scalar `float64` `Tensor` with the value of `true_positives` divided\n by the sum of `true_positives` and `false_negatives`.\n update_op: `Operation` that increments `true_positives` and\n `false_negatives` variables appropriately, and whose value matches\n `recall`.\n\n Raises:\n ValueError: If `weights` is not `None` and its shape doesn't match\n `predictions`, or if either `metrics_collections` or `updates_collections`\n are not a list or tuple.\n ", "desc": "Computes recall@k of top-k predictions with respect to sparse labels.", "type": "API"}, {"name": "tf.compat.v1.metrics.root_mean_squared_error", "docs": "Computes the root mean squared error between the labels and predictions.\n\n The `root_mean_squared_error` function creates two local variables,\n `total` and `count` that are used to compute the root mean squared error.\n This average is weighted by `weights`, and it is ultimately returned as\n `root_mean_squared_error`: an idempotent operation that takes the square root\n of the division of `total` by `count`.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `root_mean_squared_error`. Internally, a `squared_error` operation computes\n the element-wise square of the difference between `predictions` and `labels`.\n Then `update_op` increments `total` with the reduced sum of the product of\n `weights` and `squared_error`, and it increments `count` with the reduced sum\n of `weights`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` of the same shape as `predictions`.\n predictions: A `Tensor` of arbitrary shape.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that\n `root_mean_squared_error` should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n root_mean_squared_error: A `Tensor` representing the current mean, the value\n of `total` divided by `count`.\n update_op: An operation that increments the `total` and `count` variables\n appropriately and whose value matches `root_mean_squared_error`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the root mean squared error between the labels and predictions.", "type": "API"}, {"name": "tf.compat.v1.metrics.sensitivity_at_specificity", "docs": "Computes the specificity at a given sensitivity.\n\n The `sensitivity_at_specificity` function creates four local\n variables, `true_positives`, `true_negatives`, `false_positives` and\n `false_negatives` that are used to compute the sensitivity at the given\n specificity value. The threshold for the given specificity value is computed\n and used to evaluate the corresponding sensitivity.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `sensitivity`. `update_op` increments the `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` counts with the weight of each case\n found in the `predictions` and `labels`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n For additional information about specificity and sensitivity, see the\n following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n specificity: A scalar value in range `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n num_thresholds: The number of thresholds to use for matching the given\n specificity.\n metrics_collections: An optional list of collections that `sensitivity`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n sensitivity: A scalar `Tensor` representing the sensitivity at the given\n `specificity` value.\n update_op: An operation that increments the `true_positives`,\n `true_negatives`, `false_positives` and `false_negatives` variables\n appropriately and whose value matches `sensitivity`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n `specificity` is not between 0 and 1, or if either `metrics_collections`\n or `updates_collections` are not a list or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the specificity at a given sensitivity.", "type": "API"}, {"name": "tf.compat.v1.metrics.sparse_average_precision_at_k", "docs": "Renamed to `average_precision_at_k`, please use that method instead. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse average_precision_at_k instead", "desc": "Renamed to `average_precision_at_k`, please use that method instead. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.metrics.sparse_precision_at_k", "docs": "Renamed to `precision_at_k`, please use that method instead. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse precision_at_k instead", "desc": "Renamed to `precision_at_k`, please use that method instead. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.metrics.specificity_at_sensitivity", "docs": "Computes the specificity at a given sensitivity.\n\n The `specificity_at_sensitivity` function creates four local\n variables, `true_positives`, `true_negatives`, `false_positives` and\n `false_negatives` that are used to compute the specificity at the given\n sensitivity value. The threshold for the given sensitivity value is computed\n and used to evaluate the corresponding specificity.\n\n For estimation of the metric over a stream of data, the function creates an\n `update_op` operation that updates these variables and returns the\n `specificity`. `update_op` increments the `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` counts with the weight of each case\n found in the `predictions` and `labels`.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n For additional information about specificity and sensitivity, see the\n following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n sensitivity: A scalar value in range `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n num_thresholds: The number of thresholds to use for matching the given\n sensitivity.\n metrics_collections: An optional list of collections that `specificity`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n specificity: A scalar `Tensor` representing the specificity at the given\n `sensitivity` value.\n update_op: An operation that increments the `true_positives`,\n `true_negatives`, `false_positives` and `false_negatives` variables\n appropriately and whose value matches `specificity`.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n `sensitivity` is not between 0 and 1, or if either `metrics_collections`\n or `updates_collections` are not a list or tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes the specificity at a given sensitivity.", "type": "API"}, {"name": "tf.compat.v1.metrics.true_negatives", "docs": "Sum the weights of true_negatives.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n value_tensor: A `Tensor` representing the current value of the metric.\n update_op: An operation that accumulates the error from a batch of data.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Sum the weights of true_negatives.", "type": "API"}, {"name": "tf.compat.v1.metrics.true_negatives_at_thresholds", "docs": "Computes true negatives at provided threshold values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` whose shape matches `predictions`. Will be cast to\n `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `true_negatives`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n true_negatives: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that updates the `true_negatives` variable and\n returns its current value.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes true negatives at provided threshold values.", "type": "API"}, {"name": "tf.compat.v1.metrics.true_positives", "docs": "Sum the weights of true_positives.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: The ground truth values, a `Tensor` whose dimensions must match\n `predictions`. Will be cast to `bool`.\n predictions: The predicted values, a `Tensor` of arbitrary dimensions. Will\n be cast to `bool`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that the metric\n value variable should be added to.\n updates_collections: An optional list of collections that the metric update\n ops should be added to.\n name: An optional variable_scope name.\n\n Returns:\n value_tensor: A `Tensor` representing the current value of the metric.\n update_op: An operation that accumulates the error from a batch of data.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Sum the weights of true_positives.", "type": "API"}, {"name": "tf.compat.v1.metrics.true_positives_at_thresholds", "docs": "Computes true positives at provided threshold values.\n\n If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.\n\n Args:\n labels: A `Tensor` whose shape matches `predictions`. Will be cast to\n `bool`.\n predictions: A floating point `Tensor` of arbitrary shape and whose values\n are in the range `[0, 1]`.\n thresholds: A python list or tuple of float thresholds in `[0, 1]`.\n weights: Optional `Tensor` whose rank is either 0, or the same rank as\n `labels`, and must be broadcastable to `labels` (i.e., all dimensions must\n be either `1`, or the same as the corresponding `labels` dimension).\n metrics_collections: An optional list of collections that `true_positives`\n should be added to.\n updates_collections: An optional list of collections that `update_op` should\n be added to.\n name: An optional variable_scope name.\n\n Returns:\n true_positives: A float `Tensor` of shape `[len(thresholds)]`.\n update_op: An operation that updates the `true_positives` variable and\n returns its current value.\n\n Raises:\n ValueError: If `predictions` and `labels` have mismatched shapes, or if\n `weights` is not `None` and its shape doesn't match `predictions`, or if\n either `metrics_collections` or `updates_collections` are not a list or\n tuple.\n RuntimeError: If eager execution is enabled.\n ", "desc": "Computes true positives at provided threshold values.", "type": "API"}, {"name": "tf.compat.v1.min_max_variable_partitioner", "docs": "Partitioner to allocate minimum size per slice.\n\n Returns a partitioner that partitions the variable of given shape and dtype\n such that each partition has a minimum of `min_slice_size` slice of the\n variable. The maximum number of such partitions (upper bound) is given by\n `max_partitions`.\n\n Args:\n max_partitions: Upper bound on the number of partitions. Defaults to 1.\n axis: Axis along which to partition the variable. Defaults to 0.\n min_slice_size: Minimum size of the variable slice per partition. Defaults\n to 256K.\n bytes_per_string_element: If the `Variable` is of type string, this provides\n an estimate of how large each scalar in the `Variable` is.\n\n Returns:\n A partition function usable as the `partitioner` argument to\n `variable_scope` and `get_variable`.\n\n ", "desc": "Partitioner to allocate minimum size per slice.", "type": "API"}, {"name": "tf.compat.v1.minimum", "docs": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.\n\n Both inputs are number-type tensors (except complex). `minimum` expects that\n both tensors have the same `dtype`.\n\n Examples:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-5., -2., 0., 3.])\n >>> tf.math.minimum(x, y)\n \n\n Note that `minimum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.minimum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_min`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision", "docs": "Public API for tf.mixed_precision namespace.\n", "desc": "Public API for tf.mixed_precision namespace.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.disable_mixed_precision_graph_rewrite", "docs": "Disables the mixed precision graph rewrite.\n\n After this is called, the mixed precision graph rewrite will no longer run for\n new Sessions, and so float32 operations will no longer be converted to float16\n in such Sessions. However, any existing Sessions will continue to have the\n graph rewrite enabled if they were created after\n `enable_mixed_precision_graph_rewrite` was called but before\n `disable_mixed_precision_graph_rewrite` was called.\n\n This does not undo the effects of loss scaling. Any optimizers wrapped with a\n LossScaleOptimizer will continue to do loss scaling, although this loss\n scaling will no longer be useful if the optimizer is used in new Sessions, as\n the graph rewrite no longer converts the graph to use float16.\n\n This function is useful for unit testing. A unit tests can test using the\n mixed precision graph rewrite, then disable it so future unit tests continue\n using float32. If this is done, unit tests should not share a single session,\n as `enable_mixed_precision_graph_rewrite` and\n `disable_mixed_precision_graph_rewrite` have no effect on existing sessions.\n ", "desc": "Disables the mixed precision graph rewrite.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.DynamicLossScale", "docs": "Loss scale that dynamically adjusts itself.\n\n Dynamic loss scaling works by adjusting the loss scale as training progresses.\n The goal is to keep the loss scale as high as possible without overflowing the\n gradients. As long as the gradients do not overflow, raising the loss scale\n never hurts.\n\n The algorithm starts by setting the loss scale to an initial value. Every N\n steps that the gradients are finite, the loss scale is increased by some\n factor. However, if a NaN or Inf gradient is found, the gradients for that\n step are not applied, and the loss scale is decreased by the factor. This\n process tends to keep the loss scale as high as possible without gradients\n overflowing.\n ", "desc": "Loss scale that dynamically adjusts itself.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite", "docs": "Enable mixed precision via a graph rewrite.\n\n Mixed precision is the use of both float32 and float16 data types when\n training a model to improve performance. This is achieved via a graph rewrite\n operation and a loss-scale optimizer.\n\n Performing arithmetic operations in float16 takes advantage of specialized\n processing units, such as NVIDIA Tensor Cores, for much higher arithmetic\n throughput. However, due to the smaller representable range, performing the\n entire training with float16 can result in gradient underflow, that is, small\n gradient values becoming zeroes. Instead, performing only select arithmetic\n operations in float16 results in higher throughput and decreased training\n time when using compatible hardware accelerators while also reducing memory\n usage, typically without sacrificing model accuracy.\n\n Note: While the mixed precision rewrite changes the datatype of various\n layers throughout the model, the same accuracy reached in float32 is\n expected. If a `NaN` gradient occurs with dynamic loss scaling, the model\n update for that batch is skipped. In this case, the global step count is not\n incremented, and the `LossScaleOptimizer` attempts to decrease the loss\n scaling value to avoid `NaN` values in subsequent iterations. This approach\n has been shown to achieve the same accuracy as float32 and, in most cases,\n better training throughput.\n\n Example:\n\n ```python\n model = tf.keras.models.Sequential([\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(64, activation='softmax'),\n ])\n\n opt = tf.keras.optimizers.SGD()\n opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)\n model.compile(loss=\"mse\", optimizer=opt)\n\n x_train = np.random.random((1024, 64))\n y_train = np.random.random((1024, 64))\n model.fit(x_train, y_train)\n ```\n\n Calling `enable_mixed_precision_graph_rewrite(opt)` enables the graph rewrite\n operation before computing gradients. The function additionally returns an\n `Optimizer` (`opt`) wrapped with a `LossScaleOptimizer`. This prevents\n underflow in the float16 tensors during the backward pass. An optimizer of\n type `tf.train.Optimizer` or `tf.keras.optimizers.Optimizer` must be passed\n to this function, which will then be wrapped to use loss scaling.\n\n The graph rewrite operation changes the `dtype` of certain operations in the\n graph from float32 to float16. There are several categories of operations\n that are either included or excluded by this rewrite operation. The following\n categories of Ops are defined inside corresponding functions under the class\n `AutoMixedPrecisionLists` in\n \n auto_mixed_precision_lists.h:\n\n * `ClearList`: Ops that do not have numerically significant adverse effects.\n E.g. `ArgMax` and `Floor`.\n * `AllowList`: Ops that are considered numerically safe for execution in\n float16, and thus are always converted. E.g. `Conv2D`.\n * `DenyList`: Ops that are numerically unsafe to execute in float16 and\n can negatively affect downstream nodes. E.g. `Softmax`.\n * `GrayList`: Ops that are considered numerically safe for execution in\n float16 unless downstream from a DenyList Op. E.g. `Add` and `AvgPool`.\n\n When this function is used, gradients should only be computed and applied\n with the returned optimizer, either by calling `opt.minimize()` or\n `opt.compute_gradients()` followed by `opt.apply_gradients()`.\n Gradients should not be computed with `tf.gradients` or `tf.GradientTape`.\n This is because the returned optimizer will apply loss scaling, and\n `tf.gradients` or `tf.GradientTape` will not. If you do directly use\n `tf.gradients` or `tf.GradientTape`, your model may not converge due to\n float16 underflow problems.\n\n When eager execution is enabled, the mixed precision graph rewrite is only\n enabled within `tf.function`s, as outside `tf.function`s, there is no graph.\n\n For NVIDIA GPUs with Tensor cores, as a general performance guide, dimensions\n (such as batch size, input size, output size, and channel counts)\n should be powers of two if under 256, or otherwise divisible by 8 if above\n 256. For more information, check out the\n [NVIDIA Deep Learning Performance Guide](\n https://docs.nvidia.com/deeplearning/sdk/dl-performance-guide/index.html).\n\n Currently, mixed precision is only enabled on NVIDIA Tensor Core GPUs with\n Compute Capability 7.0 and above (Volta, Turing, or newer architectures). The\n parts of the graph on CPUs and TPUs are untouched by the graph rewrite.\n\n Raises:\n `ValueError`, if the `tf.keras.mixed_precision` API is also used by calling\n `tf.keras.mixed_precision.set_global_policy`. Only one mixed precision\n API can be used.\n\n Args:\n opt: An instance of a `tf.keras.optimizers.Optimizer` or a\n `tf.train.Optimizer`.\n loss_scale: Either an int/float, the string `\"dynamic\"`, or an instance of\n a `tf.mixed_precision.experimental.LossScale`. The loss scale to use. It\n is recommended to keep this as its default value of `\"dynamic\"`, which\n will adjust the scaling automatically to prevent `Inf` or `NaN` values.\n\n Returns:\n A version of `opt` that will use loss scaling to prevent underflow.\n ", "desc": "Enable mixed precision via a graph rewrite.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.experimental", "docs": "Public API for tf.mixed_precision.experimental namespace.\n", "desc": "Public API for tf.mixed_precision.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.experimental.DynamicLossScale", "docs": "Loss scale that dynamically adjusts itself.\n\n Dynamic loss scaling works by adjusting the loss scale as training progresses.\n The goal is to keep the loss scale as high as possible without overflowing the\n gradients. As long as the gradients do not overflow, raising the loss scale\n never hurts.\n\n The algorithm starts by setting the loss scale to an initial value. Every N\n steps that the gradients are finite, the loss scale is increased by some\n factor. However, if a NaN or Inf gradient is found, the gradients for that\n step are not applied, and the loss scale is decreased by the factor. This\n process tends to keep the loss scale as high as possible without gradients\n overflowing.\n ", "desc": "Loss scale that dynamically adjusts itself.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.experimental.FixedLossScale", "docs": "Loss scale with a fixed value.\n\n The loss scale is not updated for the lifetime of instances of this class.\n A given instance of this class always returns the same number when called.\n ", "desc": "Loss scale with a fixed value.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.experimental.LossScale", "docs": "Base class for all TF1 loss scales.\n\n This is an abstract base class, so you cannot instantiate it directly.\n Instead, use one of its concrete subclasses:\n * `tf.compat.v1.mixed_precision.DynamicLossScale`\n * `tf.compat.v1.mixed_precision.FixedLossScale`\n\n Loss scaling is a process that multiplies the loss by a multiplier called the\n loss scale, and divides each gradient by the same multiplier. The pseudocode\n for this process is:\n\n ```\n loss = ...\n loss *= loss_scale\n grads = gradients(loss, vars)\n grads /= loss_scale\n ```\n\n Mathematically, loss scaling has no effect, but can help avoid numerical\n underflow in intermediate gradients when float16 tensors are used for mixed\n precision training. By multiplying the loss, each intermediate gradient will\n have the same multiplier applied.\n\n Instances of this class represent a loss scale. Calling instances of this\n class returns the loss scale as a scalar float32 tensor, while method\n `update()` updates the loss scale depending on the values of the gradients.\n Optimizers use instances of this class to scale loss and gradients.\n\n In most functions that accept a LossScale, you can also pass an int (such as\n 8) to create a `FixedLossScale` or the string `\"dynamic\"` to create a dynamic\n loss scale.\n ", "desc": "Base class for all TF1 loss scales.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.FixedLossScale", "docs": "Loss scale with a fixed value.\n\n The loss scale is not updated for the lifetime of instances of this class.\n A given instance of this class always returns the same number when called.\n ", "desc": "Loss scale with a fixed value.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.LossScale", "docs": "Base class for all TF1 loss scales.\n\n This is an abstract base class, so you cannot instantiate it directly.\n Instead, use one of its concrete subclasses:\n * `tf.compat.v1.mixed_precision.DynamicLossScale`\n * `tf.compat.v1.mixed_precision.FixedLossScale`\n\n Loss scaling is a process that multiplies the loss by a multiplier called the\n loss scale, and divides each gradient by the same multiplier. The pseudocode\n for this process is:\n\n ```\n loss = ...\n loss *= loss_scale\n grads = gradients(loss, vars)\n grads /= loss_scale\n ```\n\n Mathematically, loss scaling has no effect, but can help avoid numerical\n underflow in intermediate gradients when float16 tensors are used for mixed\n precision training. By multiplying the loss, each intermediate gradient will\n have the same multiplier applied.\n\n Instances of this class represent a loss scale. Calling instances of this\n class returns the loss scale as a scalar float32 tensor, while method\n `update()` updates the loss scale depending on the values of the gradients.\n Optimizers use instances of this class to scale loss and gradients.\n\n In most functions that accept a LossScale, you can also pass an int (such as\n 8) to create a `FixedLossScale` or the string `\"dynamic\"` to create a dynamic\n loss scale.\n ", "desc": "Base class for all TF1 loss scales.", "type": "API"}, {"name": "tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer", "docs": "An optimizer that applies loss scaling.\n\n Loss scaling is a process that multiplies the loss by a multiplier called the\n loss scale, and divides each gradient by the same multiplier. The pseudocode\n for this process is:\n\n ```\n loss = ...\n loss *= loss_scale\n grads = gradients(loss, vars)\n grads /= loss_scale\n ```\n\n Mathematically, loss scaling has no effect, but can help avoid numerical\n underflow in intermediate gradients when float16 tensors are used for mixed\n precision training. By multiplying the loss, each intermediate gradient will\n have the same multiplier applied.\n\n The loss scale can either be a fixed constant, chosen by the user, or be\n dynamically determined. Dynamically determining the loss scale is convenient\n as a loss scale does not have to be explicitly chosen. However it reduces\n performance.\n\n This optimizer wraps another optimizer and applies loss scaling to it via a\n `LossScale`. Loss scaling is applied whenever gradients are\n computed, such as through `minimize()`.\n ", "desc": "An optimizer that applies loss scaling.", "type": "API"}, {"name": "tf.compat.v1.mlir", "docs": "Public API for tf.mlir namespace.\n", "desc": "Public API for tf.mlir namespace.", "type": "API"}, {"name": "tf.compat.v1.mlir.experimental", "docs": "Public API for tf.mlir.experimental namespace.\n", "desc": "Public API for tf.mlir.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.mlir.experimental.convert_function", "docs": "Import a ConcreteFunction and convert it to a textual MLIR module.\n\n This API is only intended for inspecting the internals of TensorFlow and the\n string returned is at the moment intended for debugging purposes.\n\n A [tf.function](https://www.tensorflow.org/api_docs/python/tf/function) can be\n imported and converted from TensorFlow to TensorFlow MLIR with this API by\n extracting its ConcreteFunction (eagerly-executing wrapper around a\n [tf.Graph](https://www.tensorflow.org/api_docs/python/tf/Graph)).\n\n For example:\n >>> @tf.function\n ... def add(a, b):\n ... return a + b\n\n >>> concrete_function = add.get_concrete_function(\n ... tf.TensorSpec(None, tf.dtypes.float32),\n ... tf.TensorSpec(None, tf.dtypes.float32))\n >>> tf.mlir.experimental.convert_function(concrete_function)\n '...module attributes {...} {...}...'\n\n Args:\n concrete_function: An object of type ConcreteFunction.\n pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the\n module, see MLIR documentation for the\n [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification).\n show_debug_info: Whether to include locations in the emitted textual form.\n\n Returns:\n A textual representation of the MLIR module corresponding to the\n ConcreteFunction.\n\n Raises:\n InvalidArgumentError: if concrete_function is invalid or cannot be converted\n to MLIR.\n\n ", "desc": "Import a ConcreteFunction and convert it to a textual MLIR module.", "type": "API"}, {"name": "tf.compat.v1.mlir.experimental.convert_graph_def", "docs": "Import a GraphDef and convert it to a textual MLIR module.\n\n This API is only intended for inspecting the internals of TensorFlow and the\n string returned is at the moment intended for debugging purposes.\n\n Args:\n graph_def: An object of type graph_pb2.GraphDef or a textual proto\n representation of a valid GraphDef.\n pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the\n module, see MLIR documentation for the\n [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification).\n show_debug_info: Whether to include locations in the emitted textual form.\n\n Returns:\n A textual representation of the MLIR module corresponding to the graphdef.\n\n Raises:\n InvalidArgumentError: if graph_def is invalid or cannot be converted to\n MLIR.\n\n ", "desc": "Import a GraphDef and convert it to a textual MLIR module.", "type": "API"}, {"name": "tf.compat.v1.mod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.compat.v1.model_variables", "docs": "Returns all variables in the MODEL_VARIABLES collection.\n\n Args:\n scope: (Optional.) A string. If supplied, the resulting list is filtered to\n include only items whose `name` attribute matches `scope` using\n `re.match`. Items without a `name` attribute are never returned if a scope\n is supplied. The choice of `re.match` means that a `scope` without special\n tokens filters by prefix.\n\n Returns:\n A list of local Variable objects.\n ", "desc": "Returns all variables in the MODEL_VARIABLES collection.", "type": "API"}, {"name": "tf.compat.v1.Module", "docs": "Base neural network module class.\n\n A module is a named container for `tf.Variable`s, other `tf.Module`s and\n functions which apply to user input. For example a dense layer in a neural\n network might be implemented as a `tf.Module`:\n\n >>> class Dense(tf.Module):\n ... def __init__(self, input_dim, output_size, name=None):\n ... super(Dense, self).__init__(name=name)\n ... self.w = tf.Variable(\n ... tf.random.normal([input_dim, output_size]), name='w')\n ... self.b = tf.Variable(tf.zeros([output_size]), name='b')\n ... def __call__(self, x):\n ... y = tf.matmul(x, self.w) + self.b\n ... return tf.nn.relu(y)\n\n You can use the Dense layer as you would expect:\n\n >>> d = Dense(input_dim=3, output_size=2)\n >>> d(tf.ones([1, 3]))\n \n\n\n By subclassing `tf.Module` instead of `object` any `tf.Variable` or\n `tf.Module` instances assigned to object properties can be collected using\n the `variables`, `trainable_variables` or `submodules` property:\n\n >>> d.variables\n (,\n )\n\n\n Subclasses of `tf.Module` can also take advantage of the `_flatten` method\n which can be used to implement tracking of any other types.\n\n All `tf.Module` classes have an associated `tf.name_scope` which can be used\n to group operations in TensorBoard and create hierarchies for variable names\n which can help with debugging. We suggest using the name scope when creating\n nested submodules/parameters or for forward methods whose graph you might want\n to inspect in TensorBoard. You can enter the name scope explicitly using\n `with self.name_scope:` or you can annotate methods (apart from `__init__`)\n with `@tf.Module.with_name_scope`.\n\n >>> class MLP(tf.Module):\n ... def __init__(self, input_size, sizes, name=None):\n ... super(MLP, self).__init__(name=name)\n ... self.layers = []\n ... with self.name_scope:\n ... for size in sizes:\n ... self.layers.append(Dense(input_dim=input_size, output_size=size))\n ... input_size = size\n ... @tf.Module.with_name_scope\n ... def __call__(self, x):\n ... for layer in self.layers:\n ... x = layer(x)\n ... return x\n\n >>> module = MLP(input_size=5, sizes=[5, 5])\n >>> module.variables\n (,\n ,\n ,\n )\n ", "desc": "Base neural network module class.", "type": "API"}, {"name": "tf.compat.v1.moving_average_variables", "docs": "Returns all variables that maintain their moving averages.\n\n If an `ExponentialMovingAverage` object is created and the `apply()`\n method is called on a list of variables, these variables will\n be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.\n This convenience function returns the contents of that collection.\n\n Args:\n scope: (Optional.) A string. If supplied, the resulting list is filtered to\n include only items whose `name` attribute matches `scope` using\n `re.match`. Items without a `name` attribute are never returned if a scope\n is supplied. The choice of `re.match` means that a `scope` without special\n tokens filters by prefix.\n\n Returns:\n A list of Variable objects.\n ", "desc": "Returns all variables that maintain their moving averages.", "type": "API"}, {"name": "tf.compat.v1.multinomial", "docs": "Draws samples from a multinomial distribution. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.random.categorical` instead.\n\nExample:\n\n```python\n# samples has shape [1, 5], where each value is either 0 or 1 with equal\n# probability.\nsamples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5)\n```\n\nArgs:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for behavior.\n name: Optional name for the operation.\n output_dtype: The integer type of the output: `int32` or `int64`. Defaults\n to `int64`.\n\nReturns:\n The drawn samples of shape `[batch_size, num_samples]`.", "desc": "Draws samples from a multinomial distribution. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.multiply", "docs": "Returns an element-wise x * y.\n\n For example:\n\n >>> x = tf.constant(([1, 2, 3, 4]))\n >>> tf.math.multiply(x, x)\n \n\n Since `tf.math.multiply` will convert its arguments to `Tensor`s, you can also\n pass in non-`Tensor` arguments:\n\n >>> tf.math.multiply(7,6)\n \n\n If `x.shape` is not the same as `y.shape`, they will be broadcast to a\n compatible shape. (More about broadcasting\n [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)\n\n For example:\n\n >>> x = tf.ones([1, 2]);\n >>> y = tf.ones([2, 1]);\n >>> x * y # Taking advantage of operator overriding\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_prod`\n\n Args:\n x: A Tensor. Must be one of the following types: `bfloat16`,\n `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`,\n `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n\n A `Tensor`. Has the same type as `x`.\n\n Raises:\n\n * InvalidArgumentError: When `x` and `y` have incompatible shapes or types.\n ", "desc": "Returns an element-wise x * y.", "type": "API"}, {"name": "tf.compat.v1.name_scope", "docs": "A context manager for use when defining a Python op.\n\n This context manager validates that the given `values` are from the\n same graph, makes that graph the default graph, and pushes a\n name scope in that graph (see\n `tf.Graph.name_scope`\n for more details on that).\n\n For example, to define a new Python op called `my_op`:\n\n ```python\n def my_op(a, b, c, name=None):\n with tf.name_scope(name, \"MyOp\", [a, b, c]) as scope:\n a = tf.convert_to_tensor(a, name=\"a\")\n b = tf.convert_to_tensor(b, name=\"b\")\n c = tf.convert_to_tensor(c, name=\"c\")\n # Define some computation that uses `a`, `b`, and `c`.\n return foo_op(..., name=scope)\n ```\n ", "desc": "A context manager for use when defining a Python op.", "type": "API"}, {"name": "tf.compat.v1.NameAttrList", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.NameAttrList.AttrEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.negative", "docs": "Computes numerical negative value element-wise.\n\n I.e., \\\\(y = -x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)`", "desc": "Computes numerical negative value element-wise.", "type": "API"}, {"name": "tf.compat.v1.nest", "docs": "Functions that work with structures.\n\nA structure is either:\n\n* one of the recognized Python collections, holding _nested structures_;\n* a value of any other type, typically a TensorFlow data type like Tensor,\n Variable, or of compatible types such as int, float, ndarray, etc. these are\n commonly referred to as _atoms_ of the structure.\n\nA structure of type `T` is a structure whose atomic items are of type `T`.\nFor example, a structure of `tf.Tensor` only contains `tf.Tensor` as its atoms.\n\nHistorically a _nested structure_ was called a _nested sequence_ in TensorFlow.\nA nested structure is sometimes called a _nest_ or a _tree_, but the formal\nname _nested structure_ is preferred.\n\nRefer to [Nesting Data Structures]\n(https://en.wikipedia.org/wiki/Nesting_(computing)#Data_structures).\n\nThe following collection types are recognized by `tf.nest` as nested\nstructures:\n\n* `collections.abc.Sequence` (except `string` and `bytes`).\n This includes `list`, `tuple`, and `namedtuple`.\n* `collections.abc.Mapping` (with sortable keys).\n This includes `dict` and `collections.OrderedDict`.\n* `collections.abc.MappingView` (with sortable keys).\n* [`attr.s` classes](https://www.attrs.org/).\n\nAny other values are considered **atoms**. Not all collection types are\nconsidered nested structures. For example, the following types are\nconsidered atoms:\n\n* `set`; `{\"a\", \"b\"}` is an atom, while `[\"a\", \"b\"]` is a nested structure.\n* [`dataclass` classes](https://docs.python.org/library/dataclasses.html)\n* `tf.Tensor`\n* `numpy.array`\n\n`tf.nest.is_nested` checks whether an object is a nested structure or an atom.\nFor example:\n\n >>> tf.nest.is_nested(\"1234\")\n False\n >>> tf.nest.is_nested([1, 3, [4, 5]])\n True\n >>> tf.nest.is_nested(((7, 8), (5, 6)))\n True\n >>> tf.nest.is_nested([])\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2})\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.keys())\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.values())\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.items())\n True\n >>> tf.nest.is_nested(set([1, 2]))\n False\n >>> ones = tf.ones([2, 3])\n >>> tf.nest.is_nested(ones)\n False\n\nNote: A proper structure shall form a tree. The user shall ensure there is no\ncyclic references within the items in the structure,\ni.e., no references in the structure of the input of these functions\nshould be recursive. The behavior is undefined if there is a cycle.\n\n\n", "desc": "Functions that work with structures.", "type": "API"}, {"name": "tf.compat.v1.nest.assert_same_structure", "docs": "Asserts that two structures are nested in the same way.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n Note the method does not check the types of atoms inside the structures.\n\n Examples:\n\n * These atom vs. atom comparisons will pass:\n\n >>> tf.nest.assert_same_structure(1.5, tf.Variable(1, tf.uint32))\n >>> tf.nest.assert_same_structure(\"abc\", np.array([1, 2]))\n\n * These nested structure vs. nested structure comparisons will pass:\n\n >>> structure1 = (((1, 2), 3), 4, (5, 6))\n >>> structure2 = (((\"foo1\", \"foo2\"), \"foo3\"), \"foo4\", (\"foo5\", \"foo6\"))\n >>> structure3 = [((\"a\", \"b\"), \"c\"), \"d\", [\"e\", \"f\"]]\n >>> tf.nest.assert_same_structure(structure1, structure2)\n >>> tf.nest.assert_same_structure(structure1, structure3, check_types=False)\n\n >>> import collections\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple(\"bar\", \"a b\")(1, 2),\n ... collections.namedtuple(\"foo\", \"a b\")(2, 3),\n ... check_types=False)\n\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple(\"bar\", \"a b\")(1, 2),\n ... { \"a\": 1, \"b\": 2 },\n ... check_types=False)\n\n >>> tf.nest.assert_same_structure(\n ... { \"a\": 1, \"b\": 2, \"c\": 3 },\n ... { \"c\": 6, \"b\": 5, \"a\": 4 })\n\n >>> ragged_tensor1 = tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... row_splits=[0, 4, 4, 7, 8, 8])\n >>> ragged_tensor2 = tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4],\n ... row_splits=[0, 3])\n >>> tf.nest.assert_same_structure(\n ... ragged_tensor1,\n ... ragged_tensor2,\n ... expand_composites=True)\n\n * These examples will raise exceptions:\n\n >>> tf.nest.assert_same_structure([0, 1], np.array([0, 1]))\n Traceback (most recent call last):\n ...\n ValueError: The two structures don't have the same nested structure\n\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple('bar', 'a b')(1, 2),\n ... collections.namedtuple('foo', 'a b')(2, 3))\n Traceback (most recent call last):\n ...\n TypeError: The two structures don't have the same nested structure\n\n Args:\n nest1: an atom or a nested structure.\n nest2: an atom or a nested structure.\n check_types: if `True` (default) types of structures are checked as well,\n including the keys of dictionaries. If set to `False`, for example a list\n and a tuple of objects will look the same if they have the same size. Note\n that namedtuples with identical name and fields are always considered to\n have the same shallow structure. Two types will also be considered the\n same if they are both list subtypes (which allows \"list\" and\n \"_ListWrapper\" from trackable dependency tracking to compare equal).\n `check_types=True` only checks type of sub-structures. The types of atoms\n are not checked.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Raises:\n ValueError: If the two structures do not have the same number of atoms or\n if the two structures are not nested in the same way.\n TypeError: If the two structures differ in the type of sequence in any of\n their substructures. Only possible if `check_types` is `True`.\n ", "desc": "Asserts that two structures are nested in the same way.", "type": "API"}, {"name": "tf.compat.v1.nest.flatten", "docs": "Returns a flat list from a given structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n If the structure is an atom, then returns a single-item list: [structure].\n\n This is the inverse of the `nest.pack_sequence_as` method that takes in a\n flattened list and re-packs it into the nested structure.\n\n In the case of dict instances, the sequence consists of the values, sorted by\n key to ensure deterministic behavior. This is true also for OrderedDict\n instances: their sequence order is ignored, the sorting order of keys is used\n instead. The same convention is followed in `nest.pack_sequence_as`. This\n correctly repacks dicts and OrderedDicts after they have been flattened, and\n also allows flattening an OrderedDict and then repacking it back using a\n corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys\n cannot be flattened.\n\n Users must not modify any collections used in nest while this function is\n running.\n\n Examples:\n\n 1. Python dict (ordered by key):\n\n >>> dict = { \"key3\": \"value3\", \"key1\": \"value1\", \"key2\": \"value2\" }\n >>> tf.nest.flatten(dict)\n ['value1', 'value2', 'value3']\n\n 2. For a nested python tuple:\n\n >>> tuple = ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0)\n >>> tf.nest.flatten(tuple)\n [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]\n\n 3. For a nested dictionary of dictionaries:\n\n >>> dict = { \"key3\": {\"c\": (1.0, 2.0), \"a\": (3.0)},\n ... \"key1\": {\"m\": \"val1\", \"g\": \"val2\"} }\n >>> tf.nest.flatten(dict)\n ['val2', 'val1', 3.0, 1.0, 2.0]\n\n 4. Numpy array (will not flatten):\n\n >>> array = np.array([[1, 2], [3, 4]])\n >>> tf.nest.flatten(array)\n [array([[1, 2],\n [3, 4]])]\n\n 5. `tf.Tensor` (will not flatten):\n\n >>> tensor = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> tf.nest.flatten(tensor)\n []\n\n 6. `tf.RaggedTensor`: This is a composite tensor thats representation consists\n of a flattened list of 'values' and a list of 'row_splits' which indicate how\n to chop up the flattened list into different rows. For more details on\n `tf.RaggedTensor`, please visit\n https://www.tensorflow.org/api_docs/python/tf/RaggedTensor.\n\n with `expand_composites=False`, we just return the RaggedTensor as is.\n\n >>> tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]])\n >>> tf.nest.flatten(tensor, expand_composites=False)\n []\n\n with `expand_composites=True`, we return the component Tensors that make up\n the RaggedTensor representation (the values and row_splits tensors)\n\n >>> tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]])\n >>> tf.nest.flatten(tensor, expand_composites=True)\n [,\n ]\n\n Args:\n structure: an atom or a nested structure. Note, numpy arrays are considered\n atoms and are not flattened.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Returns:\n A Python list, the flattened version of the input.\n\n Raises:\n TypeError: The nest is or contains a dict with non-sortable keys.\n ", "desc": "Returns a flat list from a given structure.", "type": "API"}, {"name": "tf.compat.v1.nest.is_nested", "docs": "Returns true if its input is a nested structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a nested structure.\n\n Args:\n seq: the value to test.\n\n Returns:\n True if the input is a nested structure.\n ", "desc": "Returns true if its input is a nested structure.", "type": "API"}, {"name": "tf.compat.v1.nest.map_structure", "docs": "Creates a new structure by applying `func` to each atom in `structure`.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n Applies `func(x[0], x[1], ...)` where x[i] enumerates all atoms in\n `structure[i]`. All items in `structure` must have the same arity,\n and the return value will contain results with the same structure layout.\n\n Examples:\n\n * A single Python dict:\n\n >>> a = {\"hello\": 24, \"world\": 76}\n >>> tf.nest.map_structure(lambda p: p * 2, a)\n {'hello': 48, 'world': 152}\n\n * Multiple Python dictionaries:\n\n >>> d1 = {\"hello\": 24, \"world\": 76}\n >>> d2 = {\"hello\": 36, \"world\": 14}\n >>> tf.nest.map_structure(lambda p1, p2: p1 + p2, d1, d2)\n {'hello': 60, 'world': 90}\n\n * A single Python list:\n\n >>> a = [24, 76, \"ab\"]\n >>> tf.nest.map_structure(lambda p: p * 2, a)\n [48, 152, 'abab']\n\n * Scalars:\n\n >>> tf.nest.map_structure(lambda x, y: x + y, 3, 4)\n 7\n\n * Empty structures:\n\n >>> tf.nest.map_structure(lambda x: x + 1, ())\n ()\n\n * Check the types of iterables:\n\n >>> s1 = (((1, 2), 3), 4, (5, 6))\n >>> s1_list = [[[1, 2], 3], 4, [5, 6]]\n >>> tf.nest.map_structure(lambda x, y: None, s1, s1_list)\n Traceback (most recent call last):\n ...\n TypeError: The two structures don't have the same nested structure\n\n * Type check is set to False:\n\n >>> s1 = (((1, 2), 3), 4, (5, 6))\n >>> s1_list = [[[1, 2], 3], 4, [5, 6]]\n >>> tf.nest.map_structure(lambda x, y: None, s1, s1_list, check_types=False)\n (((None, None), None), None, (None, None))\n\n Args:\n func: A callable that accepts as many arguments as there are structures.\n *structure: atom or nested structure.\n **kwargs: Valid keyword args are:\n * `check_types`: If set to `True` (default) the types of iterables within\n the structures have to be same (e.g. `map_structure(func, [1], (1,))`\n raises a `TypeError` exception). To allow this set this argument to\n `False`. Note that namedtuples with identical name and fields are always\n considered to have the same shallow structure.\n * `expand_composites`: If set to `True`, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors. If `False` (the default), then composite tensors are\n not expanded.\n\n Returns:\n A new structure with the same arity as `structure[0]`, whose atoms\n correspond to `func(x[0], x[1], ...)` where `x[i]` is the atom in the\n corresponding location in `structure[i]`. If there are different structure\n types and `check_types` is `False` the structure types of the first\n structure will be used.\n\n Raises:\n TypeError: If `func` is not callable or if the structures do not match\n each other by depth tree.\n ValueError: If no structure is provided or if the structures do not match\n each other by type.\n ValueError: If wrong keyword arguments are provided.\n ", "desc": "Creates a new structure by applying `func` to each atom in `structure`.", "type": "API"}, {"name": "tf.compat.v1.nest.pack_sequence_as", "docs": "Returns a given flattened sequence packed into a given structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n If `structure` is an atom, `flat_sequence` must be a single-item list;\n in this case the return value is `flat_sequence[0]`.\n\n If `structure` is or contains a dict instance, the keys will be sorted to\n pack the flat sequence in deterministic order. This is true also for\n `OrderedDict` instances: their sequence order is ignored, the sorting order of\n keys is used instead. The same convention is followed in `flatten`.\n This correctly repacks dicts and `OrderedDict`s after they have been\n flattened, and also allows flattening an `OrderedDict` and then repacking it\n back using a corresponding plain dict, or vice-versa.\n Dictionaries with non-sortable keys cannot be flattened.\n\n Examples:\n\n 1. Python dict:\n\n >>> structure = { \"key3\": \"\", \"key1\": \"\", \"key2\": \"\" }\n >>> flat_sequence = [\"value1\", \"value2\", \"value3\"]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n {'key3': 'value3', 'key1': 'value1', 'key2': 'value2'}\n\n 2. For a nested python tuple:\n\n >>> structure = (('a','b'), ('c','d','e'), 'f')\n >>> flat_sequence = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0)\n\n 3. For a nested dictionary of dictionaries:\n\n >>> structure = { \"key3\": {\"c\": ('alpha', 'beta'), \"a\": ('gamma')},\n ... \"key1\": {\"e\": \"val1\", \"d\": \"val2\"} }\n >>> flat_sequence = ['val2', 'val1', 3.0, 1.0, 2.0]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n {'key3': {'c': (1.0, 2.0), 'a': 3.0}, 'key1': {'e': 'val1', 'd': 'val2'}}\n\n 4. Numpy array (considered a scalar):\n\n >>> structure = ['a']\n >>> flat_sequence = [np.array([[1, 2], [3, 4]])]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n [array([[1, 2],\n [3, 4]])]\n\n 5. tf.Tensor (considered a scalar):\n\n >>> structure = ['a']\n >>> flat_sequence = [tf.constant([[1., 2., 3.], [4., 5., 6.]])]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n []\n\n 6. `tf.RaggedTensor`: This is a composite tensor thats representation consists\n of a flattened list of 'values' and a list of 'row_splits' which indicate how\n to chop up the flattened list into different rows. For more details on\n `tf.RaggedTensor`, please visit\n https://www.tensorflow.org/api_docs/python/tf/RaggedTensor.\n\n With `expand_composites=False`, we treat RaggedTensor as a scalar.\n\n >>> structure = { \"foo\": tf.ragged.constant([[1, 2], [3]]),\n ... \"bar\": tf.constant([[5]]) }\n >>> flat_sequence = [ \"one\", \"two\" ]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence,\n ... expand_composites=False)\n {'foo': 'two', 'bar': 'one'}\n\n With `expand_composites=True`, we expect that the flattened input contains\n the tensors making up the ragged tensor i.e. the values and row_splits\n tensors.\n\n >>> structure = { \"foo\": tf.ragged.constant([[1., 2.], [3.]]),\n ... \"bar\": tf.constant([[5.]]) }\n >>> tensors = tf.nest.flatten(structure, expand_composites=True)\n >>> print(tensors)\n [,\n ,\n ]\n >>> verified_tensors = [tf.debugging.check_numerics(t, 'invalid tensor: ')\n ... if t.dtype==tf.float32 else t\n ... for t in tensors]\n >>> tf.nest.pack_sequence_as(structure, verified_tensors,\n ... expand_composites=True)\n {'foo': ,\n 'bar': }\n\n Args:\n structure: Nested structure, whose structure is given by nested lists,\n tuples, and dicts. Note: numpy arrays and strings are considered\n scalars.\n flat_sequence: flat sequence to pack.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Returns:\n packed: `flat_sequence` converted to have the same recursive structure as\n `structure`.\n\n Raises:\n ValueError: If `flat_sequence` and `structure` have different\n atom counts.\n TypeError: `structure` is or contains a dict with non-sortable keys.\n ", "desc": "Returns a given flattened sequence packed into a given structure.", "type": "API"}, {"name": "tf.compat.v1.nn", "docs": "Primitive Neural Net (NN) Operations.\n\n## Notes on padding\n\nSeveral neural network operations, such as `tf.nn.conv2d` and\n`tf.nn.max_pool2d`, take a `padding` parameter, which controls how the input is\npadded before running the operation. The input is padded by inserting values\n(typically zeros) before and after the tensor in each spatial dimension. The\n`padding` parameter can either be the string `'VALID'`, which means use no\npadding, or `'SAME'` which adds padding according to a formula which is\ndescribed below. Certain ops also allow the amount of padding per dimension to\nbe explicitly specified by passing a list to `padding`.\n\nIn the case of convolutions, the input is padded with zeros. In case of pools,\nthe padded input values are ignored. For example, in a max pool, the sliding\nwindow ignores padded values, which is equivalent to the padded values being\n`-infinity`.\n\n### `'VALID'` padding\n\nPassing `padding='VALID'` to an op causes no padding to be used. This causes the\noutput size to typically be smaller than the input size, even when the stride is\none. In the 2D case, the output size is computed as:\n\n```python\nout_height = ceil((in_height - filter_height + 1) / stride_height)\nout_width = ceil((in_width - filter_width + 1) / stride_width)\n```\n\nThe 1D and 3D cases are similar. Note `filter_height` and `filter_width` refer\nto the filter size after dilations (if any) for convolutions, and refer to the\nwindow size for pools.\n\n### `'SAME'` padding\n\nWith `'SAME'` padding, padding is applied to each spatial dimension. When the\nstrides are 1, the input is padded such that the output size is the same as the\ninput size. In the 2D case, the output size is computed as:\n\n```python\nout_height = ceil(in_height / stride_height)\nout_width = ceil(in_width / stride_width)\n```\n\nThe amount of padding used is the smallest amount that results in the output\nsize. The formula for the total amount of padding per dimension is:\n\n```python\nif (in_height % strides[1] == 0):\n pad_along_height = max(filter_height - stride_height, 0)\nelse:\n pad_along_height = max(filter_height - (in_height % stride_height), 0)\nif (in_width % strides[2] == 0):\n pad_along_width = max(filter_width - stride_width, 0)\nelse:\n pad_along_width = max(filter_width - (in_width % stride_width), 0)\n```\n\nFinally, the padding on the top, bottom, left and right are:\n\n```python\npad_top = pad_along_height // 2\npad_bottom = pad_along_height - pad_top\npad_left = pad_along_width // 2\npad_right = pad_along_width - pad_left\n```\n\nNote that the division by 2 means that there might be cases when the padding on\nboth sides (top vs bottom, right vs left) are off by one. In this case, the\nbottom and right sides always get the one additional padded pixel. For example,\nwhen pad_along_height is 5, we pad 2 pixels at the top and 3 pixels at the\nbottom. Note that this is different from existing libraries such as PyTorch and\nCaffe, which explicitly specify the number of padded pixels and always pad the\nsame number of pixels on both sides.\n\nHere is an example of `'SAME'` padding:\n\n>>> in_height = 5\n>>> filter_height = 3\n>>> stride_height = 2\n>>>\n>>> in_width = 2\n>>> filter_width = 2\n>>> stride_width = 1\n>>>\n>>> inp = tf.ones((2, in_height, in_width, 2))\n>>> filter = tf.ones((filter_height, filter_width, 2, 2))\n>>> strides = [stride_height, stride_width]\n>>> output = tf.nn.conv2d(inp, filter, strides, padding='SAME')\n>>> output.shape[1] # output_height: ceil(5 / 2)\n3\n>>> output.shape[2] # output_width: ceil(2 / 1)\n2\n\n### Explicit padding\n\nCertain ops, like `tf.nn.conv2d`, also allow a list of explicit padding amounts\nto be passed to the `padding` parameter. This list is in the same format as what\nis passed to `tf.pad`, except the padding must be a nested list, not a tensor.\nFor example, in the 2D case, the list is in the format `[[0, 0], [pad_top,\npad_bottom], [pad_left, pad_right], [0, 0]]` when `data_format` is its default\nvalue of `'NHWC'`. The two `[0, 0]` pairs indicate the batch and channel\ndimensions have no padding, which is required, as only spatial dimensions can\nhave padding.\n\nFor example:\n\n>>> inp = tf.ones((1, 3, 3, 1))\n>>> filter = tf.ones((2, 2, 1, 1))\n>>> strides = [1, 1]\n>>> padding = [[0, 0], [1, 2], [0, 1], [0, 0]]\n>>> output = tf.nn.conv2d(inp, filter, strides, padding=padding)\n>>> tuple(output.shape)\n(1, 5, 3, 1)\n>>> # Equivalently, tf.pad can be used, since convolutions pad with zeros.\n>>> inp = tf.pad(inp, padding)\n>>> # 'VALID' means to use no padding in conv2d (we already padded inp)\n>>> output2 = tf.nn.conv2d(inp, filter, strides, padding='VALID')\n>>> tf.debugging.assert_equal(output, output2)\n\n", "desc": "Primitive Neural Net (NN) Operations.", "type": "API"}, {"name": "tf.compat.v1.nn.all_candidate_sampler", "docs": "Generate the set of all classes.\n\n Deterministically generates and returns the set of all possible classes.\n For testing purposes. There is no need to use this, since you might as\n well use full softmax or full logistic regression.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of possible classes.\n unique: A `bool`. Ignored.\n unique.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n This operation deterministically returns the entire range\n `[0, num_sampled]`.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`. All returned values are 1.0.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`. All returned values are 1.0.\n ", "desc": "Generate the set of all classes.", "type": "API"}, {"name": "tf.compat.v1.nn.atrous_conv2d", "docs": "Atrous convolution (a.k.a. convolution with holes or dilated convolution).\n\n This function is a simpler wrapper around the more general\n `tf.nn.convolution`, and exists only for backwards compatibility. You can\n use `tf.nn.convolution` to perform 1-D, 2-D, or 3-D atrous convolution.\n\n Computes a 2-D atrous convolution, also known as convolution with holes or\n dilated convolution, given 4-D `value` and `filters` tensors. If the `rate`\n parameter is equal to one, it performs regular 2-D convolution. If the `rate`\n parameter is greater than one, it performs convolution with holes, sampling\n the input values every `rate` pixels in the `height` and `width` dimensions.\n This is equivalent to convolving the input with a set of upsampled filters,\n produced by inserting `rate - 1` zeros between two consecutive values of the\n filters along the `height` and `width` dimensions, hence the name atrous\n convolution or convolution with holes (the French word trous means holes in\n English).\n\n More specifically:\n\n ```\n output[batch, height, width, out_channel] =\n sum_{dheight, dwidth, in_channel} (\n filters[dheight, dwidth, in_channel, out_channel] *\n value[batch, height + rate*dheight, width + rate*dwidth, in_channel]\n )\n ```\n\n Atrous convolution allows us to explicitly control how densely to compute\n feature responses in fully convolutional networks. Used in conjunction with\n bilinear interpolation, it offers an alternative to `conv2d_transpose` in\n dense prediction tasks such as semantic image segmentation, optical flow\n computation, or depth estimation. It also allows us to effectively enlarge\n the field of view of filters without increasing the number of parameters or\n the amount of computation.\n\n For a description of atrous convolution and how it can be used for dense\n feature extraction, please see: (Chen et al., 2015). The same operation is\n investigated further in (Yu et al., 2016). Previous works that effectively\n use atrous convolution in different ways are, among others,\n (Sermanet et al., 2014) and (Giusti et al., 2013).\n Atrous convolution is also closely related to the so-called noble identities\n in multi-rate signal processing.\n\n There are many different ways to implement atrous convolution (see the refs\n above). The implementation here reduces\n\n ```python\n atrous_conv2d(value, filters, rate, padding=padding)\n ```\n\n to the following three operations:\n\n ```python\n paddings = ...\n net = space_to_batch(value, paddings, block_size=rate)\n net = conv2d(net, filters, strides=[1, 1, 1, 1], padding=\"VALID\")\n crops = ...\n net = batch_to_space(net, crops, block_size=rate)\n ```\n\n Advanced usage. Note the following optimization: A sequence of `atrous_conv2d`\n operations with identical `rate` parameters, 'SAME' `padding`, and filters\n with odd heights/ widths:\n\n ```python\n net = atrous_conv2d(net, filters1, rate, padding=\"SAME\")\n net = atrous_conv2d(net, filters2, rate, padding=\"SAME\")\n ...\n net = atrous_conv2d(net, filtersK, rate, padding=\"SAME\")\n ```\n\n can be equivalently performed cheaper in terms of computation and memory as:\n\n ```python\n pad = ... # padding so that the input dims are multiples of rate\n net = space_to_batch(net, paddings=pad, block_size=rate)\n net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding=\"SAME\")\n net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding=\"SAME\")\n ...\n net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding=\"SAME\")\n net = batch_to_space(net, crops=pad, block_size=rate)\n ```\n\n because a pair of consecutive `space_to_batch` and `batch_to_space` ops with\n the same `block_size` cancel out when their respective `paddings` and `crops`\n inputs are identical.\n\n Args:\n value: A 4-D `Tensor` of type `float`. It needs to be in the default \"NHWC\"\n format. Its shape is `[batch, in_height, in_width, in_channels]`.\n filters: A 4-D `Tensor` with the same type as `value` and shape\n `[filter_height, filter_width, in_channels, out_channels]`. `filters`'\n `in_channels` dimension must match that of `value`. Atrous convolution is\n equivalent to standard convolution with upsampled filters with effective\n height `filter_height + (filter_height - 1) * (rate - 1)` and effective\n width `filter_width + (filter_width - 1) * (rate - 1)`, produced by\n inserting `rate - 1` zeros along consecutive elements across the\n `filters`' spatial dimensions.\n rate: A positive int32. The stride with which we sample input values across\n the `height` and `width` dimensions. Equivalently, the rate by which we\n upsample the filter values by inserting zeros across the `height` and\n `width` dimensions. In the literature, the same parameter is sometimes\n called `input stride` or `dilation`.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `value`.\n Output shape with `'VALID'` padding is:\n\n [batch, height - 2 * (filter_width - 1),\n width - 2 * (filter_height - 1), out_channels].\n\n Output shape with `'SAME'` padding is:\n\n [batch, height, width, out_channels].\n\n Raises:\n ValueError: If input/output depth does not match `filters`' shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n\n References:\n Multi-Scale Context Aggregation by Dilated Convolutions:\n [Yu et al., 2016](https://arxiv.org/abs/1511.07122)\n ([pdf](https://arxiv.org/pdf/1511.07122.pdf))\n Semantic Image Segmentation with Deep Convolutional Nets and Fully\n Connected CRFs:\n [Chen et al., 2015](http://arxiv.org/abs/1412.7062)\n ([pdf](https://arxiv.org/pdf/1412.7062))\n OverFeat - Integrated Recognition, Localization and Detection using\n Convolutional Networks:\n [Sermanet et al., 2014](https://arxiv.org/abs/1312.6229)\n ([pdf](https://arxiv.org/pdf/1312.6229.pdf))\n Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks:\n [Giusti et al., 2013]\n (https://ieeexplore.ieee.org/abstract/document/6738831)\n ([pdf](https://arxiv.org/pdf/1302.1700.pdf))\n ", "desc": "Atrous convolution (a.k.a. convolution with holes or dilated convolution).", "type": "API"}, {"name": "tf.compat.v1.nn.atrous_conv2d_transpose", "docs": "The transpose of `atrous_conv2d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of\n `atrous_conv2d` rather than an actual deconvolution.\n\n Args:\n value: A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC`\n format. Its shape is `[batch, in_height, in_width, in_channels]`.\n filters: A 4-D `Tensor` with the same type as `value` and shape\n `[filter_height, filter_width, out_channels, in_channels]`. `filters`'\n `in_channels` dimension must match that of `value`. Atrous convolution is\n equivalent to standard convolution with upsampled filters with effective\n height `filter_height + (filter_height - 1) * (rate - 1)` and effective\n width `filter_width + (filter_width - 1) * (rate - 1)`, produced by\n inserting `rate - 1` zeros along consecutive elements across the\n `filters`' spatial dimensions.\n output_shape: A 1-D `Tensor` of shape representing the output shape of the\n deconvolution op, of form `[batch, out_height, out_width, out_channels]`.\n rate: A positive int32. The stride with which we sample input values across\n the `height` and `width` dimensions. Equivalently, the rate by which we\n upsample the filter values by inserting zeros across the `height` and\n `width` dimensions. In the literature, the same parameter is sometimes\n called `input stride` or `dilation`.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError: If input/output depth does not match `filters`' shape, or if\n padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less\n than one, or if the output_shape is not a tensor with 4 elements.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `atrous_conv2d`.", "type": "API"}, {"name": "tf.compat.v1.nn.avg_pool", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n value: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type\n `float32`, `float64`, `qint8`, `quint8`, or `qint32`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.\n See the \"returns\" section of `tf.nn.convolution` for details.\n data_format: A string. 'NHWC' and 'NCHW' are supported.\n name: Optional name for the operation.\n input: Alias for value.\n\n Returns:\n A `Tensor` with the same type as `value`. The average pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.avg_pool_v2", "docs": "Performs the avg pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n input: Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape +\n [num_channels]` if `data_format` does not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n ksize: An int or list of `ints` that has length `1`, `N` or `N+2`. The size\n of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. Specifies the channel dimension. For N=1 it can be\n either \"NWC\" (default) or \"NCW\", for N=2 it can be either \"NHWC\" (default)\n or \"NCHW\" and for N=3 either \"NDHWC\" (default) or \"NCDHW\".\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The average pooled output tensor.\n ", "desc": "Performs the avg pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.avg_pool1d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Note internally this op reshapes and uses the underlying 2d operation.\n\n Args:\n input: A 3-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1` or `3`. The size of the\n window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1` or `3`. The stride of\n the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional string from: \"NWC\", \"NCW\". Defaults to \"NWC\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.avg_pool2d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n value: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type\n `float32`, `float64`, `qint8`, `quint8`, or `qint32`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.\n See the \"returns\" section of `tf.nn.convolution` for details.\n data_format: A string. 'NHWC' and 'NCHW' are supported.\n name: Optional name for the operation.\n input: Alias for value.\n\n Returns:\n A `Tensor` with the same type as `value`. The average pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.avg_pool3d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n input: A 5-D `Tensor` of shape `[batch, depth, height, width, channels]`\n and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.\n ksize: An int or list of `ints` that has length `1`, `3` or `5`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `3` or `5`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. 'NDHWC' and 'NCDHW' are supported.\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` with the same type as `value`. The average pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.batch_norm_with_global_normalization", "docs": "Batch normalization.\n\n This op is deprecated. See `tf.nn.batch_normalization`.\n\n Args:\n t: A 4D input Tensor.\n m: A 1D mean Tensor with size matching the last dimension of t.\n This is the first output from tf.nn.moments,\n or a saved moving average thereof.\n v: A 1D variance Tensor with size matching the last dimension of t.\n This is the second output from tf.nn.moments,\n or a saved moving average thereof.\n beta: A 1D beta Tensor with size matching the last dimension of t.\n An offset to be added to the normalized tensor.\n gamma: A 1D gamma Tensor with size matching the last dimension of t.\n If \"scale_after_normalization\" is true, this tensor will be multiplied\n with the normalized tensor.\n variance_epsilon: A small float number to avoid dividing by 0.\n scale_after_normalization: A bool indicating whether the resulted tensor\n needs to be multiplied with gamma.\n name: A name for this operation (optional).\n input: Alias for t.\n mean: Alias for m.\n variance: Alias for v.\n\n Returns:\n A batch-normalized `t`.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.compat.v1.nn.batch_normalization", "docs": "Batch normalization.\n\n Normalizes a tensor by `mean` and `variance`, and applies (optionally) a\n `scale` \\\\(\\gamma\\\\) to it, as well as an `offset` \\\\(\\beta\\\\):\n\n \\\\(\\frac{\\gamma(x-\\mu)}{\\sigma}+\\beta\\\\)\n\n `mean`, `variance`, `offset` and `scale` are all expected to be of one of two\n shapes:\n\n * In all generality, they can have the same number of dimensions as the\n input `x`, with identical sizes as `x` for the dimensions that are not\n normalized over (the 'depth' dimension(s)), and dimension 1 for the\n others which are being normalized over.\n `mean` and `variance` in this case would typically be the outputs of\n `tf.nn.moments(..., keepdims=True)` during training, or running averages\n thereof during inference.\n * In the common case where the 'depth' dimension is the last dimension in\n the input tensor `x`, they may be one dimensional tensors of the same\n size as the 'depth' dimension.\n This is the case for example for the common `[batch, depth]` layout of\n fully-connected layers, and `[batch, height, width, depth]` for\n convolutions.\n `mean` and `variance` in this case would typically be the outputs of\n `tf.nn.moments(..., keepdims=False)` during training, or running averages\n thereof during inference.\n\n See equation 11 in Algorithm 2 of source:\n [Batch Normalization: Accelerating Deep Network Training by\n Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy]\n (http://arxiv.org/abs/1502.03167).\n\n Args:\n x: Input `Tensor` of arbitrary dimensionality.\n mean: A mean `Tensor`.\n variance: A variance `Tensor`.\n offset: An offset `Tensor`, often denoted \\\\(\\beta\\\\) in equations, or\n None. If present, will be added to the normalized tensor.\n scale: A scale `Tensor`, often denoted \\\\(\\gamma\\\\) in equations, or\n `None`. If present, the scale is applied to the normalized tensor.\n variance_epsilon: A small float number to avoid dividing by 0.\n name: A name for this operation (optional).\n\n Returns:\n the normalized, scaled, offset tensor.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://arxiv.org/abs/1502.03167)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.compat.v1.nn.bias_add", "docs": "Adds `bias` to `value`.\n\n This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D.\n Broadcasting is supported, so `value` may have any number of dimensions.\n Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the\n case where both types are quantized.\n\n Args:\n value: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,\n `int16`, `int8`, `complex64`, or `complex128`.\n bias: A 1-D `Tensor` with size matching the channel dimension of `value`.\n Must be the same type as `value` unless `value` is a quantized type,\n in which case a different quantized type may be used.\n data_format: A string. 'N...C' and 'NC...' are supported. If `None` (the\n default) is specified then 'N..C' is assumed.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError if data format is unrecognized, if `value` has less than two\n dimensions when `data_format` is 'N..C'/`None` or `value` has less\n then three dimensions when `data_format` is `NC..`, if `bias` does not\n have exactly one dimension (is a vector), or if the size of `bias`\n does not match the size of the channel dimension of `value`.\n ", "desc": "Adds `bias` to `value`.", "type": "API"}, {"name": "tf.compat.v1.nn.bidirectional_dynamic_rnn", "docs": "Creates a dynamic version of bidirectional recurrent neural network. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API\n\nTakes input and builds independent forward and backward RNNs. The input_size\nof forward and backward cell must match. The initial state for both directions\nis zero by default (but can be set optionally) and no intermediate states are\never returned -- the network is fully unrolled for the given (passed in)\nlength(s) of the sequence(s) or completely unrolled if length(s) is not\ngiven.\n\nArgs:\n cell_fw: An instance of RNNCell, to be used for forward direction.\n cell_bw: An instance of RNNCell, to be used for backward direction.\n inputs: The RNN inputs.\n If time_major == False (default), this must be a tensor of shape:\n `[batch_size, max_time, ...]`, or a nested tuple of such elements.\n If time_major == True, this must be a tensor of shape: `[max_time,\n batch_size, ...]`, or a nested tuple of such elements.\n sequence_length: (optional) An int32/int64 vector, size `[batch_size]`,\n containing the actual lengths for each of the sequences in the batch. If\n not provided, all batch entries are assumed to be full sequences; and time\n reversal is applied from time `0` to `max_time` for each sequence.\n initial_state_fw: (optional) An initial state for the forward RNN. This must\n be a tensor of appropriate type and shape `[batch_size,\n cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a\n tuple of tensors having shapes `[batch_size, s] for s in\n cell_fw.state_size`.\n initial_state_bw: (optional) Same as for `initial_state_fw`, but using the\n corresponding properties of `cell_bw`.\n dtype: (optional) The data type for the initial states and expected output.\n Required if initial_states are not provided or RNN states have a\n heterogeneous dtype.\n parallel_iterations: (Default: 32). The number of iterations to run in\n parallel. Those operations which do not have any temporal dependency and\n can be run in parallel, will be. This parameter trades off time for\n space. Values >> 1 use more memory but take less time, while smaller\n values use less memory but computations take longer.\n swap_memory: Transparently swap the tensors produced in forward inference\n but needed for back prop from GPU to CPU. This allows training RNNs which\n would typically not fit on a single GPU, with very minimal (or no)\n performance penalty.\n time_major: The shape format of the `inputs` and `outputs` Tensors. If true,\n these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,\n these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using\n `time_major = True` is a bit more efficient because it avoids transposes\n at the beginning and end of the RNN calculation. However, most TensorFlow\n data is batch-major, so by default this function accepts input and emits\n output in batch-major form.\n scope: VariableScope for the created subgraph; defaults to\n \"bidirectional_rnn\"\n\nReturns:\n A tuple (outputs, output_states) where:\n outputs: A tuple (output_fw, output_bw) containing the forward and\n the backward rnn output `Tensor`.\n If time_major == False (default),\n output_fw will be a `Tensor` shaped:\n `[batch_size, max_time, cell_fw.output_size]`\n and output_bw will be a `Tensor` shaped:\n `[batch_size, max_time, cell_bw.output_size]`.\n If time_major == True,\n output_fw will be a `Tensor` shaped:\n `[max_time, batch_size, cell_fw.output_size]`\n and output_bw will be a `Tensor` shaped:\n `[max_time, batch_size, cell_bw.output_size]`.\n It returns a tuple instead of a single concatenated `Tensor`, unlike\n in the `bidirectional_rnn`. If the concatenated one is preferred,\n the forward and backward outputs can be concatenated as\n `tf.concat(outputs, 2)`.\n output_states: A tuple (output_state_fw, output_state_bw) containing\n the forward and the backward final states of bidirectional rnn.\n\nRaises:\n TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.", "desc": "Creates a dynamic version of bidirectional recurrent neural network. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.collapse_repeated", "docs": "Merge repeated labels into single labels.\n\n Args:\n labels: Tensor of shape [batch, max value in seq_length]\n seq_length: Tensor of shape [batch], sequence length of each batch element.\n name: A name for this `Op`. Defaults to \"collapse_repeated_labels\".\n\n Returns:\n A tuple `(collapsed_labels, new_seq_length)` where\n\n collapsed_labels: Tensor of shape [batch, max_seq_length] with repeated\n labels collapsed and padded to max_seq_length, eg:\n `[[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]`\n\n new_seq_length: int tensor of shape [batch] with new sequence lengths.\n ", "desc": "Merge repeated labels into single labels.", "type": "API"}, {"name": "tf.compat.v1.nn.compute_accidental_hits", "docs": "Compute the position ids in `sampled_candidates` matching `true_classes`.\n\n In Candidate Sampling, this operation facilitates virtually removing\n sampled classes which happen to match target classes. This is done\n in Sampled Softmax and Sampled Logistic.\n\n See our [Candidate Sampling Algorithms\n Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n\n We presuppose that the `sampled_candidates` are unique.\n\n We call it an 'accidental hit' when one of the target classes\n matches one of the sampled classes. This operation reports\n accidental hits as triples `(index, id, weight)`, where `index`\n represents the row number in `true_classes`, `id` represents the\n position in `sampled_candidates`, and weight is `-FLOAT_MAX`.\n\n The result of this op should be passed through a `sparse_to_dense`\n operation, then added to the logits of the sampled classes. This\n removes the contradictory effect of accidentally sampling the true\n target classes as noise classes for the same example.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled_candidates output of CandidateSampler.\n num_true: An `int`. The number of target classes per training example.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n indices: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.\n Values indicate rows in `true_classes`.\n ids: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.\n Values indicate positions in `sampled_candidates`.\n weights: A `Tensor` of type `float` and shape `[num_accidental_hits]`.\n Each value is `-FLOAT_MAX`.\n\n ", "desc": "Compute the position ids in `sampled_candidates` matching `true_classes`.", "type": "API"}, {"name": "tf.compat.v1.nn.compute_average_loss", "docs": "Scales per-example losses with sample_weights and computes their average.\n\n Usage with distribution strategy and custom training loop:\n\n ```python\n with strategy.scope():\n def compute_loss(labels, predictions, sample_weight=None):\n\n # If you are using a `Loss` class instead, set reduction to `NONE` so that\n # we can do the reduction afterwards and divide by global batch size.\n per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, predictions)\n\n # Compute loss that is scaled by sample_weight and by global batch size.\n return tf.nn.compute_average_loss(\n per_example_loss,\n sample_weight=sample_weight,\n global_batch_size=GLOBAL_BATCH_SIZE)\n ```\n\n Args:\n per_example_loss: Per-example loss.\n sample_weight: Optional weighting for each example.\n global_batch_size: Optional global batch size value. Defaults to (size of\n first dimension of `losses`) * (number of replicas).\n\n Returns:\n Scalar loss value.\n ", "desc": "Scales per-example losses with sample_weights and computes their average.", "type": "API"}, {"name": "tf.compat.v1.nn.conv_transpose", "docs": "The transpose of `convolution`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d`\n rather than an actual deconvolution.\n\n Args:\n input: An N+2 dimensional `Tensor` of shape\n `[batch_size] + input_spatial_shape + [in_channels]` if data_format does\n not start with \"NC\" (default), or\n `[batch_size, in_channels] + input_spatial_shape` if data_format starts\n with \"NC\". It must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n filters: An N+2 dimensional `Tensor` with the same type as `input` and\n shape `spatial_filter_shape + [in_channels, out_channels]`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the spatial dimensions. By default\n the `N` and `C` dimensions are set to 0. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n dilations: An int or list of `ints` that has length `1`, `N` or `N+2`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the spatial dimensions. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details.\n name: A name for the operation (optional). If not specified \"conv_transpose\"\n is used.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `convolution`.", "type": "API"}, {"name": "tf.compat.v1.nn.conv1d", "docs": "Computes a 1-D convolution of input with rank `>=3` and a `3-D` filter. (deprecated argument values) (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NCHW')`. They will be removed in a future version.\nInstructions for updating:\n`NCHW` for data_format is deprecated, use `NCW` instead\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(data_format='NHWC')`. They will be removed in a future version.\nInstructions for updating:\n`NHWC` for data_format is deprecated, use `NWC` instead\n\nGiven an input tensor of shape\n `batch_shape + [in_width, in_channels]`\nif `data_format` is `\"NWC\"`, or\n `batch_shape + [in_channels, in_width]`\nif `data_format` is `\"NCW\"`,\nand a filter / kernel tensor of shape\n`[filter_width, in_channels, out_channels]`, this op reshapes\nthe arguments to pass them to `conv2d` to perform the equivalent\nconvolution operation.\n\nInternally, this op reshapes the input tensors and invokes `tf.nn.conv2d`.\nFor example, if `data_format` does not start with \"NC\", a tensor of shape\n `batch_shape + [in_width, in_channels]`\nis reshaped to\n `batch_shape + [1, in_width, in_channels]`,\nand the filter is reshaped to\n `[1, filter_width, in_channels, out_channels]`.\nThe result is then reshaped back to\n `batch_shape + [out_width, out_channels]`\n\\(where out_width is a function of the stride and padding as in conv2d\\) and\nreturned to the caller.\n\nArgs:\n value: A Tensor of rank at least 3. Must be of type `float16`, `float32`, or\n `float64`.\n filters: A Tensor of rank at least 3. Must have the same type as `value`.\n stride: An int or list of `ints` that has length `1` or `3`. The number of\n entries by which the filter is moved right at each step.\n padding: 'SAME' or 'VALID'\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n data_format: An optional `string` from `\"NWC\", \"NCW\"`. Defaults to `\"NWC\"`,\n the data is stored in the order of `batch_shape + [in_width,\n in_channels]`. The `\"NCW\"` format stores data as `batch_shape +\n [in_channels, in_width]`.\n name: A name for the operation (optional).\n input: Alias for value.\n dilations: An int or list of `ints` that has length `1` or `3` which\n defaults to 1. The dilation factor for each dimension of input. If set to\n k > 1, there will be k-1 skipped cells between each filter element on that\n dimension. Dilations in the batch and depth dimensions must be 1.\n\nReturns:\n A `Tensor`. Has the same type as input.\n\nRaises:\n ValueError: if `data_format` is invalid.", "desc": "Computes a 1-D convolution of input with rank `>=3` and a `3-D` filter. (deprecated argument values) (deprecated argument values)", "type": "API"}, {"name": "tf.compat.v1.nn.conv1d_transpose", "docs": "The transpose of `conv1d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is actually the transpose (gradient) of `conv1d`\n rather than an actual deconvolution.\n\n Args:\n input: A 3-D `Tensor` of type `float` and shape\n `[batch, in_width, in_channels]` for `NWC` data format or\n `[batch, in_channels, in_width]` for `NCW` data format.\n filters: A 3-D `Tensor` with the same type as `input` and shape\n `[filter_width, output_channels, in_channels]`. `filter`'s\n `in_channels` dimension must match that of `input`.\n output_shape: A 1-D `Tensor`, containing three elements, representing the\n output shape of the deconvolution op.\n strides: An int or list of `ints` that has length `1` or `3`. The number of\n entries by which the filter is moved right at each step.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. `'NWC'` and `'NCW'` are supported.\n dilations: An int or list of `ints` that has length `1` or `3` which\n defaults to 1. The dilation factor for each dimension of input. If set to\n k > 1, there will be k-1 skipped cells between each filter element on that\n dimension. Dilations in the batch and depth dimensions must be 1.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `input`.\n\n Raises:\n ValueError: If input/output depth does not match `filter`'s shape, if\n `output_shape` is not at 3-element vector, if `padding` is other than\n `'VALID'` or `'SAME'`, or if `data_format` is invalid.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv1d`.", "type": "API"}, {"name": "tf.compat.v1.nn.conv2d", "docs": "Computes a 2-D convolution given 4-D `input` and `filter` tensors.\n\n Given an input tensor of shape `[batch, in_height, in_width, in_channels]`\n and a filter / kernel tensor of shape\n `[filter_height, filter_width, in_channels, out_channels]`, this op\n performs the following:\n\n 1. Flattens the filter to a 2-D matrix with shape\n `[filter_height * filter_width * in_channels, output_channels]`.\n 2. Extracts image patches from the input tensor to form a *virtual*\n tensor of shape `[batch, out_height, out_width,\n filter_height * filter_width * in_channels]`.\n 3. For each patch, right-multiplies the filter matrix and the image patch\n vector.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k] =\n sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q]\n * filter[di, dj, q, k]\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n A 4-D tensor. The dimension order is interpreted according to the value\n of `data_format`, see below for details.\n filter: A `Tensor`. Must have the same type as `input`.\n A 4-D tensor of shape\n `[filter_height, filter_width, in_channels, out_channels]`\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the `H` and `W` dimension. By default\n the `N` and `C` dimensions are set to 1. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. When explicit padding is used and\n data_format is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top,\n pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used\n and data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`.\n Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, height, width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An int or list of `ints` that has length `1`, `2` or `4`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `H` and `W` dimension. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 4-d tensor\n must be 1.\n name: A name for the operation (optional).\n filters: Alias for filter.\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 2-D convolution given 4-D `input` and `filter` tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.conv2d_backprop_filter", "docs": "Computes the gradients of convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape `[batch, in_height, in_width, in_channels]`.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 4-D\n `[filter_height, filter_width, in_channels, out_channels]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution. Must be in the same order as the dimension specified\n with format.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. When explicit padding is used and\n data_format is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top,\n pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used\n and data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`.\n Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by\n the value of `data_format`, see above for details. Dilations in the batch\n and depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of convolution with respect to the filter.", "type": "API"}, {"name": "tf.compat.v1.nn.conv2d_backprop_input", "docs": "Computes the gradients of convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`.\n An integer vector representing the shape of `input`,\n where `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape\n `[filter_height, filter_width, in_channels, out_channels]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`.\n 4-D with shape `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution. Must be in the same order as the dimension specified\n with format.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. When explicit padding is used and\n data_format is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top,\n pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used\n and data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`.\n Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by\n the value of `data_format`, see above for details. Dilations in the batch\n and depth dimensions must be 1.\n name: A name for the operation (optional).\n filters: Alias for filter.\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of convolution with respect to the input.", "type": "API"}, {"name": "tf.compat.v1.nn.conv2d_transpose", "docs": "The transpose of `conv2d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of `conv2d`\n rather than an actual deconvolution.\n\n Args:\n value: A 4-D `Tensor` of type `float` and shape\n `[batch, height, width, in_channels]` for `NHWC` data format or\n `[batch, in_channels, height, width]` for `NCHW` data format.\n filter: A 4-D `Tensor` with the same type as `value` and shape\n `[height, width, output_channels, in_channels]`. `filter`'s\n `in_channels` dimension must match that of `value`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the `H` and `W` dimension. By default\n the `N` and `C` dimensions are set to 0. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.\n See the \"returns\" section of `tf.nn.convolution` for details.\n data_format: A string. 'NHWC' and 'NCHW' are supported.\n name: Optional name for the returned tensor.\n input: Alias for value.\n filters: Alias for filter.\n dilations: An int or list of `ints` that has length `1`, `2` or `4`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `H` and `W` dimension. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 4-d tensor\n must be 1.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError: If input/output depth does not match `filter`'s shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv2d`.", "type": "API"}, {"name": "tf.compat.v1.nn.conv3d", "docs": "Computes a 3-D convolution given 5-D `input` and `filter` tensors.\n\n In signal processing, cross-correlation is a measure of similarity of\n two waveforms as a function of a time-lag applied to one of them. This\n is also known as a sliding dot product or sliding inner-product.\n\n Our Conv3D implements a form of cross-correlation.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, in_depth, in_height, in_width, in_channels]`.\n filter: A `Tensor`. Must have the same type as `input`.\n Shape `[filter_depth, filter_height, filter_width, in_channels,\n out_channels]`. `in_channels` must match between `input` and `filter`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 3-D convolution given 5-D `input` and `filter` tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.conv3d_backprop_filter", "docs": "Computes the gradients of 3-D convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, in_channels]`.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 5-D\n `[filter_depth, filter_height, filter_width, in_channels, out_channels]`\n tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the filter.", "type": "API"}, {"name": "tf.compat.v1.nn.conv3d_backprop_filter_v2", "docs": "Computes the gradients of 3-D convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, in_channels]`.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 5-D\n `[filter_depth, filter_height, filter_width, in_channels, out_channels]`\n tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the filter.", "type": "API"}, {"name": "tf.compat.v1.nn.conv3d_transpose", "docs": "The transpose of `conv3d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d`\n rather than an actual deconvolution.\n\n Args:\n value: A 5-D `Tensor` of type `float` and shape\n `[batch, depth, height, width, in_channels]`.\n filter: A 5-D `Tensor` with the same type as `value` and shape\n `[depth, height, width, output_channels, in_channels]`. `filter`'s\n `in_channels` dimension must match that of `value`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: A list of ints. The stride of the sliding window for each\n dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm.\n See the \"returns\" section of `tf.nn.convolution` for details.\n data_format: A string, either `'NDHWC'` or `'NCDHW`' specifying the layout\n of the input and output tensors. Defaults to `'NDHWC'`.\n name: Optional name for the returned tensor.\n input: Alias of value.\n filters: Alias of filter.\n dilations: An int or list of `ints` that has length `1`, `3` or `5`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `D`, `H` and `W` dimension.\n By default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 5-d tensor\n must be 1.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError: If input/output depth does not match `filter`'s shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv3d`.", "type": "API"}, {"name": "tf.compat.v1.nn.convolution", "docs": "Computes sums of N-D convolutions (actually cross-correlation).\n\n This also supports either output striding via the optional `strides` parameter\n or atrous convolution (also known as convolution with holes or dilated\n convolution, based on the French word \"trous\" meaning holes in English) via\n the optional `dilation_rate` parameter. Currently, however, output striding\n is not supported for atrous convolutions.\n\n Specifically, in the case that `data_format` does not start with \"NC\", given\n a rank (N+2) `input` Tensor of shape\n\n [num_batches,\n input_spatial_shape[0],\n ...,\n input_spatial_shape[N-1],\n num_input_channels],\n\n a rank (N+2) `filter` Tensor of shape\n\n [spatial_filter_shape[0],\n ...,\n spatial_filter_shape[N-1],\n num_input_channels,\n num_output_channels],\n\n an optional `dilation_rate` tensor of shape N (defaults to `[1]*N`) specifying\n the filter upsampling/input downsampling rate, and an optional list of N\n `strides` (defaults to `[1]*N`), this computes for each N-D spatial output\n position `(x[0], ..., x[N-1])`:\n\n ```\n output[b, x[0], ..., x[N-1], k] =\n sum_{z[0], ..., z[N-1], q}\n filter[z[0], ..., z[N-1], q, k] *\n padded_input[b,\n x[0]*strides[0] + dilation_rate[0]*z[0],\n ...,\n x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],\n q]\n ```\n\n where b is the index into the batch, k is the output channel number, q is the\n input channel number, and z is the N-D spatial offset within the filter. Here,\n `padded_input` is obtained by zero padding the input using an effective\n spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and\n output striding `strides`.\n\n In the case that `data_format` does start with `\"NC\"`, the `input` and output\n (but not the `filter`) are simply transposed as follows:\n\n ```python\n convolution(input, data_format, **kwargs) =\n tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]),\n **kwargs),\n [0, N+1] + range(1, N+1))\n ```\n\n It is required that 1 <= N <= 3.\n\n Args:\n input: An (N+2)-D `Tensor` of type `T`, of shape\n `[batch_size] + input_spatial_shape + [in_channels]` if data_format does\n not start with \"NC\" (default), or\n `[batch_size, in_channels] + input_spatial_shape` if data_format starts\n with \"NC\".\n filter: An (N+2)-D `Tensor` with the same type as `input` and shape\n `spatial_filter_shape + [in_channels, out_channels]`.\n padding: A string, either `\"VALID\"` or `\"SAME\"`. The padding algorithm.\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input when the strides are 1. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n strides: Optional. Sequence of N ints >= 1. Specifies the output stride.\n Defaults to `[1]*N`. If any value of strides is > 1, then all values of\n dilation_rate must be 1.\n dilation_rate: Optional. Sequence of N ints >= 1. Specifies the filter\n upsampling/input downsampling rate. In the literature, the same parameter\n is sometimes called `input stride` or `dilation`. The effective filter\n size used for the convolution will be `spatial_filter_shape +\n (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting\n (dilation_rate[i]-1) zeros between consecutive elements of the original\n filter in each spatial dimension i. If any value of dilation_rate is > 1,\n then all values of strides must be 1.\n name: Optional name for the returned tensor.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n\n Returns:\n A `Tensor` with the same type as `input` of shape\n\n `[batch_size] + output_spatial_shape + [out_channels]`\n\n if data_format is None or does not start with \"NC\", or\n\n `[batch_size, out_channels] + output_spatial_shape`\n\n if data_format starts with \"NC\",\n where `output_spatial_shape` depends on the value of `padding`.\n\n If padding == \"SAME\":\n output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])\n\n If padding == \"VALID\":\n output_spatial_shape[i] =\n ceil((input_spatial_shape[i] -\n (spatial_filter_shape[i]-1) * dilation_rate[i])\n / strides[i]).\n\n Raises:\n ValueError: If input/output depth does not match `filter` shape, if padding\n is other than `\"VALID\"` or `\"SAME\"`, or if data_format is invalid.\n\n ", "desc": "Computes sums of N-D convolutions (actually cross-correlation).", "type": "API"}, {"name": "tf.compat.v1.nn.crelu", "docs": "Computes Concatenated ReLU.\n\n Concatenates a ReLU which selects only the positive part of the activation\n with a ReLU which selects only the *negative* part of the activation.\n Note that as a result this non-linearity doubles the depth of the activations.\n Source: [Understanding and Improving Convolutional Neural Networks via\n Concatenated Rectified Linear Units. W. Shang, et\n al.](https://arxiv.org/abs/1603.05201)\n\n Args:\n features: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,\n `int16`, or `int8`.\n name: A name for the operation (optional).\n axis: The axis that the output values are concatenated along. Default is -1.\n\n Returns:\n A `Tensor` with the same type as `features`.\n\n References:\n Understanding and Improving Convolutional Neural Networks via Concatenated\n Rectified Linear Units:\n [Shang et al., 2016](http://proceedings.mlr.press/v48/shang16)\n ([pdf](http://proceedings.mlr.press/v48/shang16.pdf))\n ", "desc": "Computes Concatenated ReLU.", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_beam_search_decoder", "docs": "Performs beam search decoding on the logits given in input.\n\n **Note** Although in general greedy search is a special case of beam-search\n with `top_paths=1` and `beam_width=1`, `ctc_beam_search_decoder` differs\n from `ctc_greedy_decoder` in the treatment of blanks when computing the\n probability of a sequence:\n - `ctc_beam_search_decoder` treats blanks as sequence termination\n - `ctc_greedy_decoder` treats blanks as regular elements\n\n If `merge_repeated` is `True`, merge repeated classes in the output beams.\n This means that if consecutive entries in a beam are the same,\n only the first of these is emitted. That is, when the sequence is\n `A B B * B * B` (where '*' is the blank label), the return value is:\n\n * `A B` if `merge_repeated = True`.\n * `A B B B` if `merge_repeated = False`.\n\n Args:\n inputs: 3-D `float` `Tensor`, size `[max_time x batch_size x num_classes]`.\n The logits.\n sequence_length: 1-D `int32` vector containing sequence lengths, having size\n `[batch_size]`.\n beam_width: An int scalar >= 0 (beam search beam width).\n top_paths: An int scalar >= 0, <= beam_width (controls output size).\n merge_repeated: Boolean. Default: True.\n\n Returns:\n A tuple `(decoded, log_probabilities)` where\n\n decoded: A list of length top_paths, where `decoded[j]`\n is a `SparseTensor` containing the decoded outputs:\n\n `decoded[j].indices`: Indices matrix `(total_decoded_outputs[j] x 2)`\n The rows store: [batch, time].\n\n `decoded[j].values`: Values vector, size `(total_decoded_outputs[j])`.\n The vector stores the decoded classes for beam j.\n\n `decoded[j].dense_shape`: Shape vector, size `(2)`.\n The shape values are: `[batch_size, max_decoded_length[j]]`.\n\n log_probability: A `float` matrix `(batch_size x top_paths)` containing\n sequence log-probabilities.\n ", "desc": "Performs beam search decoding on the logits given in input.", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_beam_search_decoder_v2", "docs": "Performs beam search decoding on the logits given in input.\n\n **Note** Although in general greedy search is a special case of beam-search\n with `top_paths=1` and `beam_width=1`, `ctc_beam_search_decoder` differs\n from `ctc_greedy_decoder` in the treatment of blanks when computing the\n probability of a sequence:\n - `ctc_beam_search_decoder` treats blanks as sequence termination\n - `ctc_greedy_decoder` treats blanks as regular elements\n\n Args:\n inputs: 3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`.\n The logits.\n sequence_length: 1-D `int32` vector containing sequence lengths, having size\n `[batch_size]`.\n beam_width: An int scalar >= 0 (beam search beam width).\n top_paths: An int scalar >= 0, <= beam_width (controls output size).\n\n Returns:\n A tuple `(decoded, log_probabilities)` where\n\n decoded: A list of length top_paths, where `decoded[j]`\n is a `SparseTensor` containing the decoded outputs:\n\n `decoded[j].indices`: Indices matrix `[total_decoded_outputs[j], 2]`;\n The rows store: `[batch, time]`.\n\n `decoded[j].values`: Values vector, size `[total_decoded_outputs[j]]`.\n The vector stores the decoded classes for beam `j`.\n\n `decoded[j].dense_shape`: Shape vector, size `(2)`.\n The shape values are: `[batch_size, max_decoded_length[j]]`.\n\n log_probability: A `float` matrix `[batch_size, top_paths]` containing\n sequence log-probabilities.\n ", "desc": "Performs beam search decoding on the logits given in input.", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_greedy_decoder", "docs": "Performs greedy decoding on the logits given in input (best path).\n\n Given a tensor as `inputs`, the `blank_index` parameter defines the class\n index of the blank symbol.\n\n For example:\n\n If `blank_index` is equal to 1:\n\n >>> inf = float(\"inf\")\n >>> logits = tf.constant([[[ 0., -inf, -inf],\n ... [ -2.3, -inf, -0.1]],\n ... [[ -inf, -0.5, -inf],\n ... [ -inf, -inf, -0.1]],\n ... [[ -inf, -inf, -inf],\n ... [ -0.1, -inf, -2.3]]])\n >>> seq_lens = tf.constant([2, 3])\n >>> outputs = tf.nn.ctc_greedy_decoder(\n ... logits,\n ... seq_lens,\n ... blank_index=1)\n\n Notes:\n\n - Unlike `ctc_beam_search_decoder`, `ctc_greedy_decoder` considers blanks\n as regular elements when computing the probability of a sequence.\n - Default `blank_index` is `(num_classes - 1)`, unless overriden.\n\n If `merge_repeated` is `True`, merge repeated classes in output.\n This means that if consecutive logits' maximum indices are the same,\n only the first of these is emitted. The sequence `A B B * B * B` (where '*'\n is the blank label) becomes\n\n * `A B B B` if `merge_repeated=True`.\n * `A B B B B` if `merge_repeated=False`.\n\n Args:\n inputs: 3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`.\n The logits.\n sequence_length: 1-D `int32` vector containing sequence lengths, having size\n `[batch_size]`.\n merge_repeated: Boolean. Default: True.\n blank_index: (Optional). Default: `num_classes - 1`. Define the class index\n to use for the blank label. Negative values will start from num_classes,\n ie, -1 will reproduce the ctc_greedy_decoder behavior of using\n num_classes - 1 for the blank symbol, which corresponds to the default.\n\n Returns:\n A tuple `(decoded, neg_sum_logits)` where\n\n decoded: A single-element list. `decoded[0]`\n is an `SparseTensor` containing the decoded outputs s.t.:\n\n `decoded.indices`: Indices matrix `(total_decoded_outputs, 2)`.\n The rows store: `[batch, time]`.\n\n `decoded.values`: Values vector, size `(total_decoded_outputs)`.\n The vector stores the decoded classes.\n\n `decoded.dense_shape`: Shape vector, size `(2)`.\n The shape values are: `[batch_size, max_decoded_length]`\n\n neg_sum_logits: A `float` matrix `(batch_size x 1)` containing, for the\n sequence found, the negative of the sum of the greatest logit at each\n timeframe.\n ", "desc": "Performs greedy decoding on the logits given in input (best path).", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_loss", "docs": "Computes the CTC (Connectionist Temporal Classification) Loss.\n\n This op implements the CTC loss as presented in (Graves et al., 2006).\n\n Input requirements:\n\n ```\n sequence_length(b) <= time for all b\n\n max(labels.indices(labels.indices[:, 1] == b, 2))\n <= sequence_length(b) for all b.\n ```\n\n Notes:\n\n This class performs the softmax operation for you, so inputs should\n be e.g. linear projections of outputs by an LSTM.\n\n The `inputs` Tensor's innermost dimension size, `num_classes`, represents\n `num_labels + 1` classes, where num_labels is the number of true labels, and\n the largest value `(num_classes - 1)` is reserved for the blank label.\n\n For example, for a vocabulary containing 3 labels `[a, b, c]`,\n `num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.\n\n Regarding the arguments `preprocess_collapse_repeated` and\n `ctc_merge_repeated`:\n\n If `preprocess_collapse_repeated` is True, then a preprocessing step runs\n before loss calculation, wherein repeated labels passed to the loss\n are merged into single labels. This is useful if the training labels come\n from, e.g., forced alignments and therefore have unnecessary repetitions.\n\n If `ctc_merge_repeated` is set False, then deep within the CTC calculation,\n repeated non-blank labels will not be merged and are interpreted\n as individual labels. This is a simplified (non-standard) version of CTC.\n\n Here is a table of the (roughly) expected first order behavior:\n\n * `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`\n\n Classical CTC behavior: Outputs true repeated classes with blanks in\n between, and can also output repeated classes with no blanks in\n between that need to be collapsed by the decoder.\n\n * `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`\n\n Never learns to output repeated classes, as they are collapsed\n in the input labels before training.\n\n * `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`\n\n Outputs repeated classes with blanks in between, but generally does not\n require the decoder to collapse/merge repeated classes.\n\n * `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`\n\n Untested. Very likely will not learn to output repeated classes.\n\n The `ignore_longer_outputs_than_inputs` option allows to specify the behavior\n of the CTCLoss when dealing with sequences that have longer outputs than\n inputs. If true, the CTCLoss will simply return zero gradient for those\n items, otherwise an InvalidArgument error is returned, stopping training.\n\n Args:\n labels: An `int32` `SparseTensor`.\n `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id\n for (batch b, time t). `labels.values[i]` must take on values in `[0,\n num_labels)`. See `core/ops/ctc_ops.cc` for more details.\n inputs: 3-D `float` `Tensor`.\n If time_major == False, this will be a `Tensor` shaped: `[batch_size,\n max_time, num_classes]`.\n If time_major == True (default), this will be a `Tensor` shaped:\n `[max_time, batch_size, num_classes]`. The logits.\n sequence_length: 1-D `int32` vector, size `[batch_size]`. The sequence\n lengths.\n preprocess_collapse_repeated: Boolean. Default: False. If True, repeated\n labels are collapsed prior to the CTC calculation.\n ctc_merge_repeated: Boolean. Default: True.\n ignore_longer_outputs_than_inputs: Boolean. Default: False. If True,\n sequences with longer outputs than inputs will be ignored.\n time_major: The shape format of the `inputs` Tensors. If True, these\n `Tensors` must be shaped `[max_time, batch_size, num_classes]`. If False,\n these `Tensors` must be shaped `[batch_size, max_time, num_classes]`.\n Using `time_major = True` (default) is a bit more efficient because it\n avoids transposes at the beginning of the ctc_loss calculation. However,\n most TensorFlow data is batch-major, so by this function also accepts\n inputs in batch-major form.\n logits: Alias for inputs.\n\n Returns:\n A 1-D `float` `Tensor`, size `[batch]`, containing the negative log\n probabilities.\n\n Raises:\n TypeError: if labels is not a `SparseTensor`.\n\n References:\n Connectionist Temporal Classification - Labeling Unsegmented Sequence Data\n with Recurrent Neural Networks:\n [Graves et al., 2006](https://dl.acm.org/citation.cfm?id=1143891)\n ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf))\n ", "desc": "Computes the CTC (Connectionist Temporal Classification) Loss.", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_loss_v2", "docs": "Computes CTC (Connectionist Temporal Classification) loss.\n\n This op implements the CTC loss as presented in (Graves et al., 2006).\n\n Notes:\n\n - Same as the \"Classic CTC\" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss\n setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True\n - Labels may be supplied as either a dense, zero-padded tensor with a\n vector of label sequence lengths OR as a SparseTensor.\n - On TPU and GPU: Only dense padded labels are supported.\n - On CPU: Caller may use SparseTensor or dense padded labels but calling with\n a SparseTensor will be significantly faster.\n - Default blank label is 0 rather num_classes - 1, unless overridden by\n blank_index.\n\n Args:\n labels: tensor of shape [batch_size, max_label_seq_length] or SparseTensor\n logits: tensor of shape [frames, batch_size, num_labels], if\n logits_time_major == False, shape is [batch_size, frames, num_labels].\n label_length: tensor of shape [batch_size], None if labels is SparseTensor\n Length of reference label sequence in labels.\n logit_length: tensor of shape [batch_size] Length of input sequence in\n logits.\n logits_time_major: (optional) If True (default), logits is shaped [time,\n batch, logits]. If False, shape is [batch, time, logits]\n unique: (optional) Unique label indices as computed by\n ctc_unique_labels(labels). If supplied, enable a faster, memory efficient\n implementation on TPU.\n blank_index: (optional) Set the class index to use for the blank label.\n Negative values will start from num_classes, ie, -1 will reproduce the\n ctc_loss behavior of using num_classes - 1 for the blank symbol. There is\n some memory/performance overhead to switching from the default of 0 as an\n additional shifted copy of the logits may be created.\n name: A name for this `Op`. Defaults to \"ctc_loss_dense\".\n\n Returns:\n loss: tensor of shape [batch_size], negative log probabilities.\n\n References:\n Connectionist Temporal Classification - Labeling Unsegmented Sequence Data\n with Recurrent Neural Networks:\n [Graves et al., 2006](https://dl.acm.org/citation.cfm?id=1143891)\n ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf))\n ", "desc": "Computes CTC (Connectionist Temporal Classification) loss.", "type": "API"}, {"name": "tf.compat.v1.nn.ctc_unique_labels", "docs": "Get unique labels and indices for batched labels for `tf.nn.ctc_loss`.\n\n For use with `tf.nn.ctc_loss` optional argument `unique`: This op can be\n used to preprocess labels in input pipeline to for better speed/memory use\n computing the ctc loss on TPU.\n\n Example:\n ctc_unique_labels([[3, 4, 4, 3]]) ->\n unique labels padded with 0: [[3, 4, 0, 0]]\n indices of original labels in unique: [0, 1, 1, 0]\n\n Args:\n labels: tensor of shape [batch_size, max_label_length] padded with 0.\n name: A name for this `Op`. Defaults to \"ctc_unique_labels\".\n\n Returns:\n tuple of\n - unique labels, tensor of shape `[batch_size, max_label_length]`\n - indices into unique labels, shape `[batch_size, max_label_length]`\n ", "desc": "Get unique labels and indices for batched labels for `tf.nn.ctc_loss`.", "type": "API"}, {"name": "tf.compat.v1.nn.depth_to_space", "docs": "DepthToSpace for tensors of type T.\n\n Rearranges data from depth into blocks of spatial data.\n This is the reverse transformation of SpaceToDepth. More specifically,\n this op outputs a copy of the input tensor where values from the `depth`\n dimension are moved in spatial blocks to the `height` and `width` dimensions.\n The attr `block_size` indicates the input block size and how the data is moved.\n\n * Chunks of data of size `block_size * block_size` from depth are rearranged\n into non-overlapping blocks of size `block_size x block_size`\n * The width the output tensor is `input_depth * block_size`, whereas the\n height is `input_height * block_size`.\n * The Y, X coordinates within each block of the output image are determined\n by the high order component of the input channel index.\n * The depth of the input tensor must be divisible by\n `block_size * block_size`.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates\n within the input image, bX, bY means coordinates\n within the output block, oC means output channels).\n The output would be the input transposed to the following layout:\n n,iY,bY,iX,bX,oC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 1, 1, 4]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1, 2, 3, 4]]]]\n\n ```\n\n This operation will output a tensor of shape `[1, 2, 2, 1]`:\n\n ```\n [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`,\n the corresponding output will have 2x2 elements and will have a depth of\n 1 channel (1 = `4 / (block_size * block_size)`).\n The output element shape is `[2, 2, 1]`.\n\n For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.\n\n ```\n x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n This operation, for block size of 2, will return the following tensor of shape\n `[1, 2, 2, 3]`\n\n ```\n [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n\n ```\n\n Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 4 4 1]`:\n\n ```\n x = [[[ [1], [2], [5], [6]],\n [ [3], [4], [7], [8]],\n [ [9], [10], [13], [14]],\n [ [11], [12], [15], [16]]]]\n\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`.\n The size of the spatial block, same as in Space2Depth.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "DepthToSpace for tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d", "docs": "Depthwise 2-D convolution.\n\n Given a 4D input tensor ('NHWC' or 'NCHW' data formats)\n and a filter tensor of shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`\n containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d`\n applies a different filter to each input channel (expanding from 1 channel\n to `channel_multiplier` channels for each), then concatenates the results\n together. The output has `in_channels * channel_multiplier` channels.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}\n filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di,\n strides[2] * j + rate[1] * dj, k]\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the\n same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n If any value in `rate` is greater than 1, we perform atrous depthwise\n convolution, in which case all values in the `strides` tensor must be equal\n to 1.\n\n Usage Example:\n\n >>> x = np.array([\n ... [1., 2.],\n ... [3., 4.],\n ... [5., 6.]\n ... ], dtype=np.float32).reshape((1, 3, 2, 1))\n >>> kernel = np.array([\n ... [1., 2.],\n ... [3., 4]\n ... ], dtype=np.float32).reshape((2, 1, 1, 2))\n >>> tf.compat.v1.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],\n ... padding='VALID').numpy()\n array([[[[10., 14.],\n [14., 20.]],\n [[18., 26.],\n [22., 32.]]]], dtype=float32)\n\n >>> tf.compat.v1.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],\n ... padding=[[0, 0], [1, 0], [1, 0], [0, 0]]\n ... ).numpy()\n array([[[[ 0., 0.],\n [ 3., 4.],\n [ 6., 8.]],\n [[ 0., 0.],\n [10., 14.],\n [14., 20.]],\n [[ 0., 0.],\n [18., 26.],\n [22., 32.]]]], dtype=float32)\n\n Args:\n input: 4-D with shape according to `data_format`.\n filter: 4-D with shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`.\n strides: 1-D of size 4. The stride of the sliding window for each\n dimension of `input`.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. When explicit padding is used and data_format\n is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n rate: 1-D of size 2. The dilation rate in which we sample input values\n across the `height` and `width` dimensions in atrous convolution. If it is\n greater than 1, then all values of strides must be 1.\n name: A name for this operation (optional).\n data_format: The data format for input. Either \"NHWC\" (default) or \"NCHW\".\n dilations: Alias of rate.\n\n Returns:\n A 4-D `Tensor` with shape according to `data_format`. E.g., for\n \"NHWC\" format, shape is\n `[batch, out_height, out_width, in_channels * channel_multiplier].`\n ", "desc": "Depthwise 2-D convolution.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d_backprop_filter", "docs": "Computes the gradients of depthwise convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape based on `data_format`. For example,\n if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height,\n in_width, in_channels]` tensor.\n filter_sizes: A `Tensor` of type `int32`. An integer vector representing the\n tensor shape of `filter`, where `filter` is a 4-D `[filter_height,\n filter_width, in_channels, depthwise_multiplier]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`. 4-D with shape\n based on `data_format`. For example, if `data_format` is 'NHWC' then\n out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the filter.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d_backprop_input", "docs": "Computes the gradients of depthwise convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`. An integer vector representing the\n shape of `input`, based on `data_format`. For example, if `data_format`\n is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape `[filter_height, filter_width,\n in_channels, depthwise_multiplier]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`. 4-D with\n shape based on `data_format`. For example, if `data_format` is 'NHWC'\n then out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the input.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d_native", "docs": "Computes a 2-D depthwise convolution.\n\n Given an input tensor of shape `[batch, in_height, in_width, in_channels]`\n and a filter / kernel tensor of shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`, containing\n `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies\n a different filter to each input channel (expanding from 1 channel to\n `channel_multiplier` channels for each), then concatenates the results\n together. Thus, the output has `in_channels * channel_multiplier` channels.\n\n ```\n for k in 0..in_channels-1\n for q in 0..channel_multiplier-1\n output[b, i, j, k * channel_multiplier + q] =\n sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *\n filter[di, dj, k, q]\n ```\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertices strides, `strides = [1, stride, stride, 1]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`.\n filter: A `Tensor`. Must have the same type as `input`.\n strides: A list of `ints`. 1-D of length 4. The stride of the sliding\n window for each dimension of `input`.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. When explicit padding is used and data_format\n is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 2-D depthwise convolution.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter", "docs": "Computes the gradients of depthwise convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape based on `data_format`. For example,\n if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height,\n in_width, in_channels]` tensor.\n filter_sizes: A `Tensor` of type `int32`. An integer vector representing the\n tensor shape of `filter`, where `filter` is a 4-D `[filter_height,\n filter_width, in_channels, depthwise_multiplier]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`. 4-D with shape\n based on `data_format`. For example, if `data_format` is 'NHWC' then\n out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the filter.", "type": "API"}, {"name": "tf.compat.v1.nn.depthwise_conv2d_native_backprop_input", "docs": "Computes the gradients of depthwise convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`. An integer vector representing the\n shape of `input`, based on `data_format`. For example, if `data_format`\n is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape `[filter_height, filter_width,\n in_channels, depthwise_multiplier]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`. 4-D with\n shape based on `data_format`. For example, if `data_format` is 'NHWC'\n then out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the input.", "type": "API"}, {"name": "tf.compat.v1.nn.dilation2d", "docs": "Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.\n\n The `input` tensor has shape `[batch, in_height, in_width, depth]` and the\n `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each\n input channel is processed independently of the others with its own structuring\n function. The `output` tensor has shape\n `[batch, out_height, out_width, depth]`. The spatial dimensions of the output\n tensor depend on the `padding` algorithm. We currently only support the default\n \"NHWC\" `data_format`.\n\n In detail, the grayscale morphological 2-D dilation is the max-sum correlation\n (for consistency with `conv2d`, we use unmirrored filters):\n\n output[b, y, x, c] =\n max_{dy, dx} input[b,\n strides[1] * y + rates[1] * dy,\n strides[2] * x + rates[2] * dx,\n c] +\n filter[dy, dx, c]\n\n Max-pooling is a special case when the filter has size equal to the pooling\n kernel size and contains all zeros.\n\n Note on duality: The dilation of `input` by the `filter` is equal to the\n negation of the erosion of `-input` by the reflected `filter`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, in_height, in_width, depth]`.\n filter: A `Tensor`. Must have the same type as `input`.\n 3-D with shape `[filter_height, filter_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the input\n tensor. Must be: `[1, stride_height, stride_width, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n The input stride for atrous morphological dilation. Must be:\n `[1, rate_height, rate_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.dropout", "docs": "Computes dropout. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_prob)`. They will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n\nFor each element of `x`, with probability `rate`, outputs `0`, and otherwise\nscales up the input by `1 / (1-rate)`. The scaling is such that the expected\nsum is unchanged.\n\nBy default, each element is kept or dropped independently. If `noise_shape`\nis specified, it must be\n[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\nto the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`\nwill make independent decisions. For example, if `shape(x) = [k, l, m, n]`\nand `noise_shape = [k, 1, 1, n]`, each batch and channel component will be\nkept independently and each row and column will be kept or not kept together.\n\nArgs:\n x: A floating point tensor.\n keep_prob: (deprecated) A deprecated alias for `(1-rate)`.\n noise_shape: A 1-D integer `Tensor`, representing the\n shape for randomly generated keep/drop flags.\n seed: A Python integer. Used to create random seeds. See\n `tf.random.set_seed` for behavior.\n name: A name for this operation (optional).\n rate: A scalar `Tensor` with the same type as `x`. The probability that each\n element of `x` is discarded.\n\nReturns:\n A Tensor of the same shape of `x`.\n\nRaises:\n ValueError: If `rate` is not in `[0, 1)` or if `x` is not a floating\n point tensor.", "desc": "Computes dropout. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.nn.dynamic_rnn", "docs": "Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.RNN(cell)`, which is equivalent to this API\n\nPerforms fully dynamic unrolling of `inputs`.\n\nExample:\n\n```python\n# create a BasicRNNCell\nrnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)\n\n# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]\n\n# defining initial state\ninitial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)\n\n# 'state' is a tensor of shape [batch_size, cell_state_size]\noutputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,\n initial_state=initial_state,\n dtype=tf.float32)\n```\n\n```python\n# create 2 LSTMCells\nrnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]\n\n# create a RNN cell composed sequentially of a number of RNNCells\nmulti_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)\n\n# 'outputs' is a tensor of shape [batch_size, max_time, 256]\n# 'state' is a N-tuple where N is the number of LSTMCells containing a\n# tf.nn.rnn_cell.LSTMStateTuple for each cell\noutputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=data,\n dtype=tf.float32)\n```\n\n\nArgs:\n cell: An instance of RNNCell.\n inputs: The RNN inputs.\n If `time_major == False` (default), this must be a `Tensor` of shape:\n `[batch_size, max_time, ...]`, or a nested tuple of such elements.\n If `time_major == True`, this must be a `Tensor` of shape: `[max_time,\n batch_size, ...]`, or a nested tuple of such elements. This may also be\n a (possibly nested) tuple of Tensors satisfying this property. The\n first two dimensions must match across all the inputs, but otherwise the\n ranks and other shape components may differ. In this case, input to\n `cell` at each time-step will replicate the structure of these tuples,\n except for the time dimension (from which the time is taken). The input\n to `cell` at each time step will be a `Tensor` or (possibly nested)\n tuple of Tensors each with dimensions `[batch_size, ...]`.\n sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used\n to copy-through state and zero-out outputs when past a batch element's\n sequence length. This parameter enables users to extract the last valid\n state and properly padded outputs, so it is provided for correctness.\n initial_state: (optional) An initial state for the RNN. If `cell.state_size`\n is an integer, this must be a `Tensor` of appropriate type and shape\n `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this\n should be a tuple of tensors having shapes `[batch_size, s] for s in\n cell.state_size`.\n dtype: (optional) The data type for the initial state and expected output.\n Required if initial_state is not provided or RNN state has a heterogeneous\n dtype.\n parallel_iterations: (Default: 32). The number of iterations to run in\n parallel. Those operations which do not have any temporal dependency and\n can be run in parallel, will be. This parameter trades off time for\n space. Values >> 1 use more memory but take less time, while smaller\n values use less memory but computations take longer.\n swap_memory: Transparently swap the tensors produced in forward inference\n but needed for back prop from GPU to CPU. This allows training RNNs which\n would typically not fit on a single GPU, with very minimal (or no)\n performance penalty.\n time_major: The shape format of the `inputs` and `outputs` Tensors. If true,\n these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,\n these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using\n `time_major = True` is a bit more efficient because it avoids transposes\n at the beginning and end of the RNN calculation. However, most TensorFlow\n data is batch-major, so by default this function accepts input and emits\n output in batch-major form.\n scope: VariableScope for the created subgraph; defaults to \"rnn\".\n\nReturns:\n A pair (outputs, state) where:\n\n outputs: The RNN output `Tensor`.\n\n If time_major == False (default), this will be a `Tensor` shaped:\n `[batch_size, max_time, cell.output_size]`.\n\n If time_major == True, this will be a `Tensor` shaped:\n `[max_time, batch_size, cell.output_size]`.\n\n Note, if `cell.output_size` is a (possibly nested) tuple of integers\n or `TensorShape` objects, then `outputs` will be a tuple having the\n same structure as `cell.output_size`, containing Tensors having shapes\n corresponding to the shape data in `cell.output_size`.\n\n state: The final state. If `cell.state_size` is an int, this\n will be shaped `[batch_size, cell.state_size]`. If it is a\n `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.\n If it is a (possibly nested) tuple of ints or `TensorShape`, this will\n be a tuple having the corresponding shapes. If cells are `LSTMCells`\n `state` will be a tuple containing a `LSTMStateTuple` for each cell.\n\nRaises:\n TypeError: If `cell` is not an instance of RNNCell.\n ValueError: If inputs is None or an empty list.\n\n@compatibility(TF2)\n`tf.compat.v1.nn.dynamic_rnn` is not compatible with eager execution and\n`tf.function`. Please use `tf.keras.layers.RNN` instead for TF2 migration.\nTake LSTM as an example, you can instantiate a `tf.keras.layers.RNN` layer\nwith `tf.keras.layers.LSTMCell`, or directly via `tf.keras.layers.LSTM`. Once\nthe keras layer is created, you can get the output and states by calling\nthe layer with input and states. Please refer to [this\nguide](https://www.tensorflow.org/guide/keras/rnn) for more details about\nKeras RNN. You can also find more details about the difference and comparison\nbetween Keras RNN and TF compat v1 rnn in [this\ndocument](https://github.com/tensorflow/community/blob/master/rfcs/20180920-unify-rnn-interface.md)\n\n#### Structural Mapping to Native TF2\n\nBefore:\n\n```python\n# create 2 LSTMCells\nrnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]\n\n# create a RNN cell composed sequentially of a number of RNNCells\nmulti_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)\n\n# 'outputs' is a tensor of shape [batch_size, max_time, 256]\n# 'state' is a N-tuple where N is the number of LSTMCells containing a\n# tf.nn.rnn_cell.LSTMStateTuple for each cell\noutputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=data,\n dtype=tf.float32)\n```\n\nAfter:\n\n```python\n# RNN layer can take a list of cells, which will then stack them together.\n# By default, keras RNN will only return the last timestep output and will not\n# return states. If you need whole time sequence output as well as the states,\n# you can set `return_sequences` and `return_state` to True.\nrnn_layer = tf.keras.layers.RNN([tf.keras.layers.LSTMCell(128),\n tf.keras.layers.LSTMCell(256)],\n return_sequences=True,\n return_state=True)\noutputs, output_states = rnn_layer(inputs, states)\n```\n\n#### How to Map Arguments\n\n| TF1 Arg Name | TF2 Arg Name | Note |\n| :-------------------- | :-------------- | :------------------------------- |\n| `cell` | `cell` | In the RNN layer constructor |\n| `inputs` | `inputs` | In the RNN layer `__call__` |\n| `sequence_length` | Not used | Adding masking layer before RNN :\n: : : to achieve the same result. :\n| `initial_state` | `initial_state` | In the RNN layer `__call__` |\n| `dtype` | `dtype` | In the RNN layer constructor |\n| `parallel_iterations` | Not supported | |\n| `swap_memory` | Not supported | |\n| `time_major` | `time_major` | In the RNN layer constructor |\n| `scope` | Not supported | |\n@end_compatibility", "desc": "Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.elu", "docs": "Computes the exponential linear function.\n\n The ELU function is defined as:\n\n * $ e ^ x - 1 $ if $ x < 0 $\n * $ x $ if $ x >= 0 $\n\n Examples:\n\n >>> tf.nn.elu(1.0)\n \n >>> tf.nn.elu(0.0)\n \n >>> tf.nn.elu(-1000.0)\n \n\n See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)\n ](http://arxiv.org/abs/1511.07289)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes the exponential linear function.", "type": "API"}, {"name": "tf.compat.v1.nn.embedding_lookup", "docs": "Looks up embeddings for the given `ids` from a list of tensors.\n\n This function is used to perform parallel lookups on the list of tensors in\n `params`. It is a generalization of `tf.gather`, where `params` is\n interpreted as a partitioning of a large embedding tensor. `params` may be\n a `PartitionedVariable` as returned by using `tf.compat.v1.get_variable()`\n with a partitioner.\n\n If `len(params) > 1`, each element `id` of `ids` is partitioned between\n the elements of `params` according to the `partition_strategy`.\n In all strategies, if the id space does not evenly divide the number of\n partitions, each of the first `(max_id + 1) % len(params)` partitions will\n be assigned one more id.\n\n If `partition_strategy` is `\"mod\"`, we assign each id to partition\n `p = id % len(params)`. For instance,\n 13 ids are split across 5 partitions as:\n `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`\n\n If `partition_strategy` is `\"div\"`, we assign ids to partitions in a\n contiguous manner. In this case, 13 ids are split across 5 partitions as:\n `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`\n\n If the input ids are ragged tensors, partition variables are not supported and\n the partition strategy and the max_norm are ignored.\n The results of the lookup are concatenated into a dense\n tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.\n\n Args:\n params: A single tensor representing the complete embedding tensor, or a\n list of P tensors all of same shape except for the first dimension,\n representing sharded embedding tensors. Alternatively, a\n `PartitionedVariable`, created by partitioning along dimension 0. Each\n element must be appropriately sized for the given `partition_strategy`.\n ids: A `Tensor` or a 'RaggedTensor' with type `int32` or `int64` containing\n the ids to be looked up in `params`.\n partition_strategy: A string specifying the partitioning strategy, relevant\n if `len(params) > 1`. Currently `\"div\"` and `\"mod\"` are supported. Default\n is `\"mod\"`.\n name: A name for the operation (optional).\n validate_indices: DEPRECATED. If this operation is assigned to CPU, values\n in `indices` are always validated to be within range. If assigned to GPU,\n out-of-bound indices result in safe but unspecified behavior, which may\n include raising an error.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is larger\n than this value.\n\n Returns:\n A `Tensor` or a 'RaggedTensor', depending on the input, with the same type\n as the tensors in `params`.\n\n Raises:\n ValueError: If `params` is empty.\n ", "desc": "Looks up embeddings for the given `ids` from a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.embedding_lookup_sparse", "docs": "Looks up embeddings for the given ids and weights from a list of tensors.\n\n This op assumes that there is at least one id for each row in the dense tensor\n represented by sp_ids (i.e. there are no rows with empty features), and that\n all the indices of sp_ids are in canonical row-major order.\n\n `sp_ids` and `sp_weights` (if not None) are `SparseTensor`s with rank of 2.\n Embeddings are always aggregated along the last dimension.\n\n It also assumes that all id values lie in the range [0, p0), where p0\n is the sum of the size of params along dimension 0.\n\n Args:\n params: A single tensor representing the complete embedding tensor, or a\n list tensors all of same shape except for the first dimension,\n representing sharded embedding tensors. Alternatively, a\n `PartitionedVariable`, created by partitioning along dimension 0. Each\n element must be appropriately sized for the given `partition_strategy`.\n sp_ids: N x M `SparseTensor` of int64 ids where N is typically batch size\n and M is arbitrary.\n sp_weights: either a `SparseTensor` of float / double weights, or `None` to\n indicate all weights should be taken to be 1. If specified, `sp_weights`\n must have exactly the same shape and indices as `sp_ids`.\n partition_strategy: A string specifying the partitioning strategy, relevant\n if `len(params) > 1`. Currently `\"div\"` and `\"mod\"` are supported. Default\n is `\"mod\"`. See `tf.nn.embedding_lookup` for more details.\n name: Optional name for the op.\n combiner: A string specifying the reduction op. Currently \"mean\", \"sqrtn\"\n and \"sum\" are supported. \"sum\" computes the weighted sum of the embedding\n results for each row. \"mean\" is the weighted sum divided by the total\n weight. \"sqrtn\" is the weighted sum divided by the square root of the sum\n of the squares of the weights. Defaults to `mean`.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is larger\n than this value, before combining.\n\n Returns:\n A dense tensor representing the combined embeddings for the\n sparse ids. For each row in the dense tensor represented by `sp_ids`, the op\n looks up the embeddings for all ids in that row, multiplies them by the\n corresponding weight, and combines these embeddings as specified.\n\n In other words, if\n\n `shape(combined params) = [p0, p1, ..., pm]`\n\n and\n\n `shape(sp_ids) = shape(sp_weights) = [d0, d1]`\n\n then\n\n `shape(output) = [d0, p1, ..., pm]`.\n\n For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are\n\n ```python\n [0, 0]: id 1, weight 2.0\n [0, 1]: id 3, weight 0.5\n [1, 0]: id 0, weight 1.0\n [2, 3]: id 1, weight 3.0\n ```\n\n with `combiner`=\"mean\", then the output will be a 3x20 matrix where\n\n ```python\n output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)\n output[1, :] = (params[0, :] * 1.0) / 1.0\n output[2, :] = (params[1, :] * 3.0) / 3.0\n ```\n\n Raises:\n TypeError: If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is\n neither `None` nor `SparseTensor`.\n ValueError: If `combiner` is not one of {\"mean\", \"sqrtn\", \"sum\"}.\n ", "desc": "Looks up embeddings for the given ids and weights from a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.erosion2d", "docs": "Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.\n\n The `value` tensor has shape `[batch, in_height, in_width, depth]` and the\n `kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e.,\n each input channel is processed independently of the others with its own\n structuring function. The `output` tensor has shape\n `[batch, out_height, out_width, depth]`. The spatial dimensions of the\n output tensor depend on the `padding` algorithm. We currently only support the\n default \"NHWC\" `data_format`.\n\n In detail, the grayscale morphological 2-D erosion is given by:\n\n output[b, y, x, c] =\n min_{dy, dx} value[b,\n strides[1] * y - rates[1] * dy,\n strides[2] * x - rates[2] * dx,\n c] -\n kernel[dy, dx, c]\n\n Duality: The erosion of `value` by the `kernel` is equal to the negation of\n the dilation of `-value` by the reflected `kernel`.\n\n Args:\n value: A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.\n kernel: A `Tensor`. Must have the same type as `value`.\n 3-D with shape `[kernel_height, kernel_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The stride of the sliding window for each dimension of\n the input tensor. Must be: `[1, stride_height, stride_width, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The input stride for atrous morphological dilation.\n Must be: `[1, rate_height, rate_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional). If not specified \"erosion2d\"\n is used.\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n 4-D with shape `[batch, out_height, out_width, depth]`.\n Raises:\n ValueError: If the `value` depth does not match `kernel`' shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n ", "desc": "Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.fixed_unigram_candidate_sampler", "docs": "Samples a set of classes using the provided (fixed) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution is read from a file or passed in as an\n in-memory array. There is also an option to skew the distribution by\n applying a distortion power to the weights.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n vocab_file: Each valid line in this file (which should have a CSV-like\n format) corresponds to a valid word ID. IDs are in sequential order,\n starting from num_reserved_ids. The last entry in each line is expected\n to be a value corresponding to the count or relative probability. Exactly\n one of `vocab_file` and `unigrams` needs to be passed to this operation.\n distortion: The distortion is used to skew the unigram probability\n distribution. Each weight is first raised to the distortion's power\n before adding to the internal unigram distribution. As a result,\n `distortion = 1.0` gives regular unigram sampling (as defined by the vocab\n file), and `distortion = 0.0` gives a uniform distribution.\n num_reserved_ids: Optionally some reserved IDs can be added in the range\n `[0, num_reserved_ids)` by the users. One use case is that a special\n unknown word token is used as ID 0. These IDs will have a sampling\n probability of 0.\n num_shards: A sampler can be used to sample from a subset of the original\n range in order to speed up the whole computation through parallelism. This\n parameter (together with `shard`) indicates the number of partitions that\n are being used in the overall computation.\n shard: A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This\n parameter (together with `num_shards`) indicates the particular partition\n number of the operation, when partitioning is being used.\n unigrams: A list of unigram counts or probabilities, one per ID in\n sequential order. Exactly one of `vocab_file` and `unigrams` should be\n passed to this operation.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes using the provided (fixed) base distribution.", "type": "API"}, {"name": "tf.compat.v1.nn.fractional_avg_pool", "docs": "Performs fractional average pooling on the input. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n`seed2` and `deterministic` args are deprecated. Use fractional_avg_pool_v2.\n\nThis is a deprecated version of `fractional_avg_pool`.\n\nFractional average pooling is similar to Fractional max pooling in the pooling\nregion generation step. The only difference is that after pooling regions are\ngenerated, a mean operation is performed instead of a max operation in each\npooling region.\n\nArgs:\n value: A `Tensor`. 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: A list of `floats` that has length >= 4. Pooling ratio for\n each dimension of `value`, currently only supports row and col dimension\n and should be >= 1.0. For example, a valid pooling ratio looks like [1.0,\n 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't\n allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling\n ratio on height and width dimensions respectively.\n pseudo_random: An optional `bool`. Defaults to `False`. When set to `True`,\n generates the pooling sequence in a pseudorandom fashion, otherwise, in a\n random fashion. Check paper (Graham, 2015) for difference between\n pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`. When set to `True`,\n it means when pooling, the values at the boundary of adjacent pooling\n cells are used by both cells. For example:\n `index 0 1 2 3 4`\n `value 20 5 16 3 7`\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used\n twice. The result would be [20, 16] for fractional avg pooling.\n deterministic: An optional `bool`. Deprecated; use `fractional_avg_pool_v2`\n instead.\n seed: An optional `int`. Defaults to `0`. If set to be non-zero, the\n random number generator is seeded by the given seed. Otherwise it is\n seeded by a random seed.\n seed2: An optional `int`. Deprecated; use `fractional_avg_pool_v2` instead.\n name: A name for the operation (optional).\n\nReturns:\nA tuple of `Tensor` objects (`output`, `row_pooling_sequence`,\n`col_pooling_sequence`).\n output: Output `Tensor` after fractional avg pooling. Has the same type as\n `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n\nReferences:\n Fractional Max-Pooling:\n [Graham, 2015](https://arxiv.org/abs/1412.6071)\n ([pdf](https://arxiv.org/pdf/1412.6071.pdf))", "desc": "Performs fractional average pooling on the input. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.fractional_max_pool", "docs": "Performs fractional max pooling on the input. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n`seed2` and `deterministic` args are deprecated. Use fractional_max_pool_v2.\n\nThis is a deprecated version of `fractional_max_pool`.\n\nFractional max pooling is slightly different than regular max pooling. In\nregular max pooling, you downsize an input set by taking the maximum value of\nsmaller N x N subsections of the set (often 2x2), and try to reduce the set by\na factor of N, where N is an integer. Fractional max pooling, as you might\nexpect from the word \"fractional\", means that the overall reduction ratio N\ndoes not have to be an integer.\n\nThe sizes of the pooling regions are generated randomly but are fairly\nuniform. For example, let's look at the height dimension, and the constraints\non the list of rows that will be pool boundaries.\n\nFirst we define the following:\n\n1. input_row_length : the number of rows from the input set\n2. output_row_length : which will be smaller than the input\n3. alpha = input_row_length / output_row_length : our reduction ratio\n4. K = floor(alpha)\n5. row_pooling_sequence : this is the result list of pool boundary rows\n\nThen, row_pooling_sequence should satisfy:\n\n1. a[0] = 0 : the first value of the sequence is 0\n2. a[end] = input_row_length : the last value of the sequence is the size\n3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size\n4. length(row_pooling_sequence) = output_row_length+1\n\nArgs:\n value: A `Tensor`. 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: A list of `floats` that has length >= 4. Pooling ratio for\n each dimension of `value`, currently only supports row and col dimension\n and should be >= 1.0. For example, a valid pooling ratio looks like [1.0,\n 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't\n allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling\n ratio on height and width dimensions respectively.\n pseudo_random: An optional `bool`. Defaults to `False`. When set to `True`,\n generates the pooling sequence in a pseudorandom fashion, otherwise, in a\n random fashion. Check (Graham, 2015) for difference between\n pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`. When set to `True`,\n it means when pooling, the values at the boundary of adjacent pooling\n cells are used by both cells. For example:\n `index 0 1 2 3 4`\n `value 20 5 16 3 7`\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used\n twice. The result would be [20, 16] for fractional max pooling.\n deterministic: An optional `bool`. Deprecated; use `fractional_max_pool_v2`\n instead.\n seed: An optional `int`. Defaults to `0`. If set to be non-zero, the\n random number generator is seeded by the given seed. Otherwise it is\n seeded by a random seed.\n seed2: An optional `int`. Deprecated; use `fractional_max_pool_v2` instead.\n name: A name for the operation (optional).\n\nReturns:\nA tuple of `Tensor` objects (`output`, `row_pooling_sequence`,\n`col_pooling_sequence`).\n output: Output `Tensor` after fractional max pooling. Has the same type as\n `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n\nRaises:\n ValueError: If op determinism is enabled and either the seeds are not set or\n the \"deterministic\" argument is False.\n\nReferences:\n Fractional Max-Pooling:\n [Graham, 2015](https://arxiv.org/abs/1412.6071)\n ([pdf](https://arxiv.org/pdf/1412.6071.pdf))", "desc": "Performs fractional max pooling on the input. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.fused_batch_norm", "docs": "Batch normalization.\n\n\n See Source: [Batch Normalization: Accelerating Deep Network Training by\n Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy]\n (http://arxiv.org/abs/1502.03167).\n\n Args:\n x: Input `Tensor` of 4 or 5 dimensions.\n scale: A `Tensor` of 1 dimension for scaling.\n offset: A `Tensor` of 1 dimension for bias.\n mean: A `Tensor` of 1 dimension for population mean. The shape and meaning\n of this argument depends on the value of is_training and\n exponential_avg_factor as follows:\n is_training==False (inference):\n Mean must be a `Tensor` of the same shape as scale containing the\n estimated population mean computed during training.\n is_training==True and exponential_avg_factor == 1.0:\n Mean must be None.\n is_training==True and exponential_avg_factor != 1.0:\n Mean must be a `Tensor` of the same shape as scale containing the\n exponential running mean.\n variance: A `Tensor` of 1 dimension for population variance. The shape and\n meaning of this argument depends on the value of is_training and\n exponential_avg_factor as follows:\n is_training==False (inference):\n Variance must be a `Tensor` of the same shape as scale containing\n the estimated population variance computed during training.\n is_training==True and exponential_avg_factor == 1.0:\n Variance must be None.\n is_training==True and exponential_avg_factor != 1.0:\n Variance must be a `Tensor` of the same shape as scale containing\n the exponential running variance.\n epsilon: A small float number added to the variance of x.\n data_format: The data format for x. Support \"NHWC\" (default) or \"NCHW\" for\n 4D tenors and \"NDHWC\" or \"NCDHW\" for 5D tensors.\n is_training: A bool value to specify if the operation is used for\n training or inference.\n name: A name for this operation (optional).\n exponential_avg_factor: A float number (usually between 0 and 1) used\n for controlling the decay of the running\n population average of mean and variance.\n If set to 1.0, the current batch average is\n returned.\n\n Returns:\n y: A 4D or 5D Tensor for the normalized, scaled, offsetted x.\n running_mean: A 1D Tensor for the exponential running mean of x.\n The output value is (1 - exponential_avg_factor) * mean +\n exponential_avg_factor * batch_mean), where batch_mean\n is the mean of the current batch in x.\n running_var: A 1D Tensor for the exponential running variance\n The output value is (1 - exponential_avg_factor) * variance +\n exponential_avg_factor * batch_variance), where batch_variance\n is the variance of the current batch in x.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.compat.v1.nn.in_top_k", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is finite (not inf, -inf, or nan) and among\n the top `k` predictions among all predictions for example `i`. Note that the\n behavior of `InTopK` differs from the `TopK` op in its handling of ties; if\n multiple classes have the same prediction value and straddle the top-`k`\n boundary, all of those classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: An `int`. Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.compat.v1.nn.l2_loss", "docs": "L2 Loss.\n\n Computes half the L2 norm of a tensor without the `sqrt`:\n\n output = sum(t ** 2) / 2\n\n Args:\n t: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Typically 2-D, but may have any dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "L2 Loss.", "type": "API"}, {"name": "tf.compat.v1.nn.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.nn.leaky_relu", "docs": "Compute the Leaky ReLU activation function.\n\n Source: [Rectifier Nonlinearities Improve Neural Network Acoustic Models.\n AL Maas, AY Hannun, AY Ng - Proc. ICML, 2013]\n (https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf).\n\n Args:\n features: A `Tensor` representing preactivation values. Must be one of\n the following types: `float16`, `float32`, `float64`, `int32`, `int64`.\n alpha: Slope of the activation function at x < 0.\n name: A name for the operation (optional).\n\n Returns:\n The activation value.\n\n References:\n Rectifier Nonlinearities Improve Neural Network Acoustic Models:\n [Maas et al., 2013]\n (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.693.1422)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.1422&rep=rep1&type=pdf))\n ", "desc": "Compute the Leaky ReLU activation function.", "type": "API"}, {"name": "tf.compat.v1.nn.learned_unigram_candidate_sampler", "docs": "Samples a set of classes from a distribution learned during training.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is constructed on the fly\n during training. It is a unigram distribution over the target\n classes seen so far during training. Every integer in `[0, range_max)`\n begins with a weight of 1, and is incremented by 1 each time it is\n seen as a target class. The base distribution is not saved to checkpoints,\n so it is reset when the model is reloaded.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes from a distribution learned during training.", "type": "API"}, {"name": "tf.compat.v1.nn.local_response_normalization", "docs": "Local Response Normalization.\n\n The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last\n dimension), and each vector is normalized independently. Within a given vector,\n each component is divided by the weighted, squared sum of inputs within\n `depth_radius`. In detail,\n\n sqr_sum[a, b, c, d] =\n sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)\n output = input / (bias + alpha * sqr_sum) ** beta\n\n For details, see [Krizhevsky et al., ImageNet classification with deep\n convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D.\n depth_radius: An optional `int`. Defaults to `5`.\n 0-D. Half-width of the 1-D normalization window.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually positive to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Local Response Normalization.", "type": "API"}, {"name": "tf.compat.v1.nn.log_poisson_loss", "docs": "Computes log Poisson loss given `log_input`.\n\n Gives the log-likelihood loss between the prediction and the target under the\n assumption that the target has a Poisson distribution.\n Caveat: By default, this is not the exact loss, but the loss minus a\n constant term [log(z!)]. That has no effect for optimization, but\n does not play well with relative loss comparisons. To compute an\n approximation of the log factorial term, specify\n compute_full_loss=True to enable Stirling's Approximation.\n\n For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson\n loss is\n\n -log(exp(-x) * (x^z) / z!)\n = -log(exp(-x) * (x^z)) + log(z!)\n ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n [ Note the second term is the Stirling's Approximation for log(z!).\n It is invariant to x and does not affect optimization, though\n important for correct relative loss comparisons. It is only\n computed when compute_full_loss == True. ]\n = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n\n Args:\n targets: A `Tensor` of the same type and shape as `log_input`.\n log_input: A `Tensor` of type `float32` or `float64`.\n compute_full_loss: whether to compute the full loss. If false, a constant\n term is dropped in favor of more efficient optimization.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `log_input` with the componentwise\n logistic losses.\n\n Raises:\n ValueError: If `log_input` and `targets` do not have the same shape.\n ", "desc": "Computes log Poisson loss given `log_input`.", "type": "API"}, {"name": "tf.compat.v1.nn.log_softmax", "docs": "Computes log softmax activations. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor each batch `i` and class `j` we have\n\n logsoftmax = logits - log(reduce_sum(exp(logits), axis))\n\nArgs:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n dim: Deprecated alias for `axis`.\n\nReturns:\n A `Tensor`. Has the same type as `logits`. Same shape as `logits`.\n\nRaises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.", "desc": "Computes log softmax activations. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.nn.log_uniform_candidate_sampler", "docs": "Samples a set of classes using a log-uniform (Zipfian) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is an approximately log-uniform\n or Zipfian distribution:\n\n `P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`\n\n This sampler is useful when the target classes approximately follow such\n a distribution - for example, if the classes represent words in a lexicon\n sorted in decreasing order of frequency. If your classes are not ordered by\n decreasing frequency, do not use this op.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a log-uniform (Zipfian) base distribution.", "type": "API"}, {"name": "tf.compat.v1.nn.lrn", "docs": "Local Response Normalization.\n\n The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last\n dimension), and each vector is normalized independently. Within a given vector,\n each component is divided by the weighted, squared sum of inputs within\n `depth_radius`. In detail,\n\n sqr_sum[a, b, c, d] =\n sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)\n output = input / (bias + alpha * sqr_sum) ** beta\n\n For details, see [Krizhevsky et al., ImageNet classification with deep\n convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D.\n depth_radius: An optional `int`. Defaults to `5`.\n 0-D. Half-width of the 1-D normalization window.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually positive to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Local Response Normalization.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool", "docs": "Performs the max pooling on the input.\n\n Args:\n value: A 4-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`.\n The size of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `2` or `4`.\n The stride of the sliding window for each dimension of the input tensor.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. When explicit padding is used and\n data_format is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top,\n pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding used\n and data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit\n padding, the size of the paddings cannot be greater than the sliding\n window size.\n data_format: A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.\n name: Optional name for the operation.\n input: Alias for value.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the max pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool_v2", "docs": "Performs max pooling on the input.\n\n For a given window of `ksize`, takes the maximum value within that window.\n Used for reducing computation and preventing overfitting.\n\n Consider an example of pooling with 2x2, non-overlapping windows:\n\n >>> matrix = tf.constant([\n ... [0, 0, 1, 7],\n ... [0, 2, 0, 0],\n ... [5, 2, 0, 0],\n ... [0, 0, 9, 8],\n ... ])\n >>> reshaped = tf.reshape(matrix, (1, 4, 4, 1))\n >>> tf.nn.max_pool(reshaped, ksize=2, strides=2, padding=\"SAME\")\n \n\n We can adjust the window size using the `ksize` parameter. For example, if we\n were to expand the window to 3:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=2, padding=\"SAME\")\n \n\n We've now picked up two additional large numbers (5 and 9) in two of the\n pooled spots.\n\n Note that our windows are now overlapping, since we're still moving by 2 units\n on each iteration. This is causing us to see the same 9 repeated twice, since\n it is part of two overlapping windows.\n\n We can adjust how far we move our window with each iteration using the\n `strides` parameter. Updating this to the same value as our window size\n eliminates the overlap:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=3, padding=\"SAME\")\n \n\n Because the window does not neatly fit into our input, padding is added around\n the edges, giving us the same result as when we used a 2x2 window. We can skip\n padding altogether and simply drop the windows that do not fully fit into our\n input by instead passing `\"VALID\"` to the `padding` argument:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=3, padding=\"VALID\")\n \n\n Now we've grabbed the largest value in the 3x3 window starting from the upper-\n left corner. Since no other windows fit in our input, they are dropped.\n\n Args:\n input: Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape +\n [num_channels]` if `data_format` does not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n ksize: An int or list of `ints` that has length `1`, `N` or `N+2`. The size\n of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit\n padding, the size of the paddings cannot be greater than the sliding\n window size.\n data_format: A string. Specifies the channel dimension. For N=1 it can be\n either \"NWC\" (default) or \"NCW\", for N=2 it can be either \"NHWC\" (default)\n or \"NCHW\" and for N=3 either \"NDHWC\" (default) or \"NCDHW\".\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs max pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool_with_argmax", "docs": "Performs max pooling on the input and outputs both max values and indices.\n\n The indices in `argmax` are flattened, so that a maximum value at position\n `[b, y, x, c]` becomes flattened index:\n `(y * width + x) * channels + c` if `include_batch_in_index` is False;\n `((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True.\n\n The indices returned are always in `[0, height) x [0, width)` before flattening,\n even if padding is involved and the mathematically correct answer is outside\n (either negative or too large). This is a bug, but fixing it is difficult to do\n in a safe backwards compatible way, especially due to flattening.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, height, width, channels]`. Input to pool over.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n Targmax: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n include_batch_in_index: An optional `bool`. Defaults to `False`.\n Whether to include batch dimension in flattened index of `argmax`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, argmax).\n\n output: A `Tensor`. Has the same type as `input`.\n argmax: A `Tensor` of type `Targmax`.\n ", "desc": "Performs max pooling on the input and outputs both max values and indices.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool1d", "docs": "Performs the max pooling on the input.\n\n Note internally this op reshapes and uses the underlying 2d operation.\n\n Args:\n input: A 3-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1` or `3`. The size of the\n window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1` or `3`. The stride of\n the sliding window for each dimension of the input tensor.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NWC\"`, this should be in the form `[[0, 0], [pad_left, pad_right], [0,\n 0]]`. When explicit padding used and data_format is `\"NCW\"`, this should\n be in the form `[[0, 0], [0, 0], [pad_left, pad_right]]`. When using\n explicit padding, the size of the paddings cannot be greater than the\n sliding window size.\n data_format: An optional string from: \"NWC\", \"NCW\". Defaults to \"NWC\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the max pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool2d", "docs": "Performs max pooling on 2D spatial data such as images.\n\n This is a more specific version of `tf.nn.max_pool` where the input tensor\n is 4D, representing 2D spatial data such as images. Using these APIs are\n equivalent\n\n Downsamples the input images along theirs spatial dimensions (height and\n width) by taking its maximum over an input window defined by `ksize`.\n The window is shifted by `strides` along each dimension.\n\n For example, for `strides=(2, 2)` and `padding=VALID` windows that extend\n outside of the input are not included in the output:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> # Add the `batch` and `channels` dimensions.\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding=\"VALID\")\n >>> result[0, :, :, 0]\n \n\n With `padding=SAME`, we get:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding='SAME')\n >>> result[0, :, :, 0]\n \n\n We can also specify padding explicitly. The following example adds width-1\n padding on all sides (top, bottom, left, right):\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding=[[0, 0], [1, 1], [1, 1], [0, 0]])\n >>> result[0, :, :, 0]\n \n\n For more examples and detail, see `tf.nn.max_pool`.\n\n Args:\n input: A 4-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`. The size of\n the window for each dimension of the input tensor. If only one integer is\n specified, then we apply the same window for all 4 dims. If two are\n provided then we use those for H, W dimensions and keep N, C dimension\n window size = 1.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of the input tensor. If\n only one integer is specified, we apply the same stride to all 4 dims. If\n two are provided we use those for the H, W dimensions and keep N, C of\n stride = 1.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit\n padding, the size of the paddings cannot be greater than the sliding\n window size.\n data_format: A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs max pooling on 2D spatial data such as images.", "type": "API"}, {"name": "tf.compat.v1.nn.max_pool3d", "docs": "Performs the max pooling on the input.\n\n Args:\n input: A 5-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1`, `3` or `5`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `3` or `5`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional string from: \"NDHWC\", \"NCDHW\". Defaults to \"NDHWC\".\n The data format of the input and output data. With the default format\n \"NDHWC\", the data is stored in the order of: [batch, in_depth, in_height,\n in_width, in_channels]. Alternatively, the format could be \"NCDHW\", the\n data storage order is: [batch, in_channels, in_depth, in_height,\n in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the max pooling on the input.", "type": "API"}, {"name": "tf.compat.v1.nn.moments", "docs": "Calculate the mean and variance of `x`.\n\n The mean and variance are calculated by aggregating the contents of `x`\n across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean\n and variance of a vector.\n\n Note: shift is currently not used; the true mean is computed and used.\n\n When using these moments for batch normalization (see\n `tf.nn.batch_normalization`):\n\n * for so-called \"global normalization\", used with convolutional filters with\n shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.\n * for simple batch normalization pass `axes=[0]` (batch only).\n\n Args:\n x: A `Tensor`.\n axes: Array of ints. Axes along which to compute mean and\n variance.\n shift: Not used in the current implementation\n name: Name used to scope the operations that compute the moments.\n keep_dims: produce moments with the same dimensionality as the input.\n keepdims: Alias to keep_dims.\n\n Returns:\n Two `Tensor` objects: `mean` and `variance`.\n ", "desc": "Calculate the mean and variance of `x`.", "type": "API"}, {"name": "tf.compat.v1.nn.nce_loss", "docs": "Computes and returns the noise-contrastive estimation training loss.\n\n A common use case is to use this method for training, and calculate the full\n sigmoid loss for evaluation or inference. In this case, you must set\n `partition_strategy=\"div\"` for the two losses to be consistent, as in the\n following example:\n\n ```python\n if mode == \"train\":\n loss = tf.nn.nce_loss(\n weights=weights,\n biases=biases,\n labels=labels,\n inputs=inputs,\n ...,\n partition_strategy=\"div\")\n elif mode == \"eval\":\n logits = tf.matmul(inputs, tf.transpose(weights))\n logits = tf.nn.bias_add(logits, biases)\n labels_one_hot = tf.one_hot(labels, n_classes)\n loss = tf.nn.sigmoid_cross_entropy_with_logits(\n labels=labels_one_hot,\n logits=logits)\n loss = tf.reduce_sum(loss, axis=1)\n ```\n\n Note: By default this uses a log-uniform (Zipfian) distribution for sampling,\n so your labels must be sorted in order of decreasing frequency to achieve\n good results. For more details, see\n `tf.random.log_uniform_candidate_sampler`.\n\n Note: In the case where `num_true` > 1, we assign to each target class\n the target probability 1 / `num_true` so that the target probabilities\n sum to 1 per-example.\n\n Note: It would be useful to allow a variable number of target classes per\n example. We hope to provide this functionality in a future release.\n For now, if you have a variable number of target classes, you can pad them\n out to a constant number by either repeating them or by padding\n with an otherwise unused class.\n\n Args:\n weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`\n objects whose concatenation along dimension 0 has shape\n [num_classes, dim]. The (possibly-partitioned) class embeddings.\n biases: A `Tensor` of shape `[num_classes]`. The class biases.\n labels: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n inputs: A `Tensor` of shape `[batch_size, dim]`. The forward\n activations of the input network.\n num_sampled: An `int`. The number of negative classes to randomly sample\n per batch. This single sample of negative classes is evaluated for each\n element in the batch.\n num_classes: An `int`. The number of possible classes.\n num_true: An `int`. The number of target classes per training example.\n sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`,\n `sampled_expected_count`) returned by a `*_candidate_sampler` function.\n (if None, we default to `log_uniform_candidate_sampler`)\n remove_accidental_hits: A `bool`. Whether to remove \"accidental hits\"\n where a sampled class equals one of the target classes. If set to\n `True`, this is a \"Sampled Logistic\" loss instead of NCE, and we are\n learning to generate log-odds instead of log probabilities. See\n our Candidate Sampling Algorithms Reference\n ([pdf](https://www.tensorflow.org/extras/candidate_sampling.pdf)).\n Default is False.\n partition_strategy: A string specifying the partitioning strategy, relevant\n if `len(weights) > 1`. Currently `\"div\"` and `\"mod\"` are supported.\n Default is `\"mod\"`. See `tf.nn.embedding_lookup` for more details.\n name: A name for the operation (optional).\n\n Returns:\n A `batch_size` 1-D tensor of per-example NCE losses.\n\n References:\n Noise-contrastive estimation - A new estimation principle for unnormalized\n statistical models:\n [Gutmann et al., 2010](http://proceedings.mlr.press/v9/gutmann10a)\n ([pdf](http://proceedings.mlr.press/v9/gutmann10a/gutmann10a.pdf))\n ", "desc": "Computes and returns the noise-contrastive estimation training loss.", "type": "API"}, {"name": "tf.compat.v1.nn.normalize_moments", "docs": "Calculate the mean and variance of based on the sufficient statistics.\n\n Args:\n counts: A `Tensor` containing the total count of the data (one value).\n mean_ss: A `Tensor` containing the mean sufficient statistics: the (possibly\n shifted) sum of the elements to average over.\n variance_ss: A `Tensor` containing the variance sufficient statistics: the\n (possibly shifted) squared sum of the data to compute the variance over.\n shift: A `Tensor` containing the value by which the data is shifted for\n numerical stability, or `None` if no shift was performed.\n name: Name used to scope the operations that compute the moments.\n\n Returns:\n Two `Tensor` objects: `mean` and `variance`.\n ", "desc": "Calculate the mean and variance of based on the sufficient statistics.", "type": "API"}, {"name": "tf.compat.v1.nn.pool", "docs": "Performs an N-D pooling operation.\n\n In the case that `data_format` does not start with \"NC\", computes for\n 0 <= b < batch_size,\n 0 <= x[i] < output_spatial_shape[i],\n 0 <= c < num_channels:\n\n ```\n output[b, x[0], ..., x[N-1], c] =\n REDUCE_{z[0], ..., z[N-1]}\n input[b,\n x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],\n ...\n x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1],\n c],\n ```\n\n where the reduction function REDUCE depends on the value of `pooling_type`,\n and pad_before is defined based on the value of `padding` as described in\n the \"returns\" section of `tf.nn.convolution` for details.\n The reduction never includes out-of-bounds positions.\n\n In the case that `data_format` starts with `\"NC\"`, the `input` and output are\n simply transposed as follows:\n\n ```python\n pool(input, data_format, **kwargs) =\n tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]),\n **kwargs),\n [0, N+1] + range(1, N+1))\n ```\n\n Args:\n input: Tensor of rank N+2, of shape\n `[batch_size] + input_spatial_shape + [num_channels]` if data_format does\n not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n window_shape: Sequence of N ints >= 1.\n pooling_type: Specifies pooling operation, must be \"AVG\" or \"MAX\".\n padding: The padding algorithm, must be \"SAME\" or \"VALID\".\n See the \"returns\" section of `tf.nn.convolution` for details.\n dilation_rate: Optional. Dilation rate. List of N ints >= 1.\n Defaults to `[1]*N`. If any value of dilation_rate is > 1, then all\n values of strides must be 1.\n strides: Optional. Sequence of N ints >= 1. Defaults to `[1]*N`.\n If any value of strides is > 1, then all values of dilation_rate must be\n 1.\n name: Optional. Name of the op.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n dilations: Alias for dilation_rate\n\n Returns:\n Tensor of rank N+2, of shape\n [batch_size] + output_spatial_shape + [num_channels]\n\n if data_format is None or does not start with \"NC\", or\n\n [batch_size, num_channels] + output_spatial_shape\n\n if data_format starts with \"NC\",\n where `output_spatial_shape` depends on the value of padding:\n\n If padding = \"SAME\":\n output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])\n\n If padding = \"VALID\":\n output_spatial_shape[i] =\n ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i])\n / strides[i]).\n\n Raises:\n ValueError: if arguments are invalid.\n\n ", "desc": "Performs an N-D pooling operation.", "type": "API"}, {"name": "tf.compat.v1.nn.quantized_avg_pool", "docs": "Produces the average pool of the input tensor for quantized types.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n 4-D with shape `[batch, height, width, channels]`.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n ksize: A list of `ints`.\n The size of the window for each dimension of the input tensor.\n The length must be 4 to match the number of dimensions of the input.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor. The length must be 4 to match the number of dimensions of the input.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor`. Has the same type as `input`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Produces the average pool of the input tensor for quantized types.", "type": "API"}, {"name": "tf.compat.v1.nn.quantized_conv2d", "docs": "Computes a 2D convolution given quantized 4D input and filter tensors.\n\n The inputs are quantized tensors where the lowest value represents the real\n number of the associated minimum, and the highest represents the maximum.\n This means that you can only interpret the quantized output in the same way, by\n taking the returned minimum and maximum values into account.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter's input_depth dimension must match input's depth dimensions.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the lowest quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the highest quantized filter value represents.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes a 2D convolution given quantized 4D input and filter tensors.", "type": "API"}, {"name": "tf.compat.v1.nn.quantized_max_pool", "docs": "Produces the max pool of the input tensor for quantized types.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The 4D (batch x rows x cols x depth) Tensor to MaxReduce over.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n ksize: A list of `ints`.\n The size of the window for each dimension of the input tensor.\n The length must be 4 to match the number of dimensions of the input.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor. The length must be 4 to match the number of dimensions of the input.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor`. Has the same type as `input`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Produces the max pool of the input tensor for quantized types.", "type": "API"}, {"name": "tf.compat.v1.nn.quantized_relu_x", "docs": "Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`\n\n Args:\n features: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n max_value: A `Tensor` of type `float32`.\n min_features: A `Tensor` of type `float32`.\n The float value that the lowest quantized value represents.\n max_features: A `Tensor` of type `float32`.\n The float value that the highest quantized value represents.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (activations, min_activations, max_activations).\n\n activations: A `Tensor` of type `out_type`.\n min_activations: A `Tensor` of type `float32`.\n max_activations: A `Tensor` of type `float32`.\n ", "desc": "Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`", "type": "API"}, {"name": "tf.compat.v1.nn.raw_rnn", "docs": "Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.\n\n **NOTE: This method is still in testing, and the API may change.**\n\n This function is a more primitive version of `dynamic_rnn` that provides\n more direct access to the inputs each iteration. It also provides more\n control over when to start and finish reading the sequence, and\n what to emit for the output.\n\n For example, it can be used to implement the dynamic decoder of a seq2seq\n model.\n\n Instead of working with `Tensor` objects, most operations work with\n `TensorArray` objects directly.\n\n The operation of `raw_rnn`, in pseudo-code, is basically the following:\n\n ```python\n time = tf.constant(0, dtype=tf.int32)\n (finished, next_input, initial_state, emit_structure, loop_state) = loop_fn(\n time=time, cell_output=None, cell_state=None, loop_state=None)\n emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)\n state = initial_state\n while not all(finished):\n (output, cell_state) = cell(next_input, state)\n (next_finished, next_input, next_state, emit, loop_state) = loop_fn(\n time=time + 1, cell_output=output, cell_state=cell_state,\n loop_state=loop_state)\n # Emit zeros and copy forward state for minibatch entries that are finished.\n state = tf.where(finished, state, next_state)\n emit = tf.where(finished, tf.zeros_like(emit_structure), emit)\n emit_ta = emit_ta.write(time, emit)\n # If any new minibatch entries are marked as finished, mark these.\n finished = tf.logical_or(finished, next_finished)\n time += 1\n return (emit_ta, state, loop_state)\n ```\n\n with the additional properties that output and state may be (possibly nested)\n tuples, as determined by `cell.output_size` and `cell.state_size`, and\n as a result the final `state` and `emit_ta` may themselves be tuples.\n\n A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this:\n\n ```python\n inputs = tf.compat.v1.placeholder(shape=(max_time, batch_size, input_depth),\n dtype=tf.float32)\n sequence_length = tf.compat.v1.placeholder(shape=(batch_size,),\n dtype=tf.int32)\n inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)\n inputs_ta = inputs_ta.unstack(inputs)\n\n cell = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units)\n\n def loop_fn(time, cell_output, cell_state, loop_state):\n emit_output = cell_output # == None for time == 0\n if cell_output is None: # time == 0\n next_cell_state = cell.zero_state(batch_size, tf.float32)\n else:\n next_cell_state = cell_state\n elements_finished = (time >= sequence_length)\n finished = tf.reduce_all(elements_finished)\n next_input = tf.cond(\n finished,\n lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),\n lambda: inputs_ta.read(time))\n next_loop_state = None\n return (elements_finished, next_input, next_cell_state,\n emit_output, next_loop_state)\n\n outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)\n outputs = outputs_ta.stack()\n ```\n\n Args:\n cell: An instance of RNNCell.\n loop_fn: A callable that takes inputs `(time, cell_output, cell_state,\n loop_state)` and returns the tuple `(finished, next_input,\n next_cell_state, emit_output, next_loop_state)`. Here `time` is an int32\n scalar `Tensor`, `cell_output` is a `Tensor` or (possibly nested) tuple of\n tensors as determined by `cell.output_size`, and `cell_state` is a\n `Tensor` or (possibly nested) tuple of tensors, as determined by the\n `loop_fn` on its first call (and should match `cell.state_size`).\n The outputs are: `finished`, a boolean `Tensor` of\n shape `[batch_size]`, `next_input`: the next input to feed to `cell`,\n `next_cell_state`: the next state to feed to `cell`,\n and `emit_output`: the output to store for this iteration. Note that\n `emit_output` should be a `Tensor` or (possibly nested) tuple of tensors\n which is aggregated in the `emit_ta` inside the `while_loop`. For the\n first call to `loop_fn`, the `emit_output` corresponds to the\n `emit_structure` which is then used to determine the size of the\n `zero_tensor` for the `emit_ta` (defaults to `cell.output_size`). For\n the subsequent calls to the `loop_fn`, the `emit_output` corresponds to\n the actual output tensor that is to be aggregated in the `emit_ta`. The\n parameter `cell_state` and output `next_cell_state` may be either a\n single or (possibly nested) tuple of tensors. The parameter\n `loop_state` and output `next_loop_state` may be either a single or\n (possibly nested) tuple of `Tensor` and `TensorArray` objects. This\n last parameter may be ignored by `loop_fn` and the return value may be\n `None`. If it is not `None`, then the `loop_state` will be propagated\n through the RNN loop, for use purely by `loop_fn` to keep track of its\n own state. The `next_loop_state` parameter returned may be `None`. The\n first call to `loop_fn` will be `time = 0`, `cell_output = None`,\n `cell_state = None`, and `loop_state = None`. For this call: The\n `next_cell_state` value should be the value with which to initialize the\n cell's state. It may be a final state from a previous RNN or it may be\n the output of `cell.zero_state()`. It should be a (possibly nested)\n tuple structure of tensors. If `cell.state_size` is an integer, this\n must be a `Tensor` of appropriate type and shape `[batch_size,\n cell.state_size]`. If `cell.state_size` is a `TensorShape`, this must be\n a `Tensor` of appropriate type and shape `[batch_size] +\n cell.state_size`. If `cell.state_size` is a (possibly nested) tuple of\n ints or `TensorShape`, this will be a tuple having the corresponding\n shapes. The `emit_output` value may be either `None` or a (possibly\n nested) tuple structure of tensors, e.g., `(tf.zeros(shape_0,\n dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`. If this first\n `emit_output` return value is `None`, then the `emit_ta` result of\n `raw_rnn` will have the same structure and dtypes as `cell.output_size`.\n Otherwise `emit_ta` will have the same structure, shapes (prepended with\n a `batch_size` dimension), and dtypes as `emit_output`. The actual\n values returned for `emit_output` at this initializing call are ignored.\n Note, this emit structure must be consistent across all time steps.\n parallel_iterations: (Default: 32). The number of iterations to run in\n parallel. Those operations which do not have any temporal dependency and\n can be run in parallel, will be. This parameter trades off time for\n space. Values >> 1 use more memory but take less time, while smaller\n values use less memory but computations take longer.\n swap_memory: Transparently swap the tensors produced in forward inference\n but needed for back prop from GPU to CPU. This allows training RNNs which\n would typically not fit on a single GPU, with very minimal (or no)\n performance penalty.\n scope: VariableScope for the created subgraph; defaults to \"rnn\".\n\n Returns:\n A tuple `(emit_ta, final_state, final_loop_state)` where:\n\n `emit_ta`: The RNN output `TensorArray`.\n If `loop_fn` returns a (possibly nested) set of Tensors for\n `emit_output` during initialization, (inputs `time = 0`,\n `cell_output = None`, and `loop_state = None`), then `emit_ta` will\n have the same structure, dtypes, and shapes as `emit_output` instead.\n If `loop_fn` returns `emit_output = None` during this call,\n the structure of `cell.output_size` is used:\n If `cell.output_size` is a (possibly nested) tuple of integers\n or `TensorShape` objects, then `emit_ta` will be a tuple having the\n same structure as `cell.output_size`, containing TensorArrays whose\n elements' shapes correspond to the shape data in `cell.output_size`.\n\n `final_state`: The final cell state. If `cell.state_size` is an int, this\n will be shaped `[batch_size, cell.state_size]`. If it is a\n `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.\n If it is a (possibly nested) tuple of ints or `TensorShape`, this will\n be a tuple having the corresponding shapes.\n\n `final_loop_state`: The final loop state as returned by `loop_fn`.\n\n Raises:\n TypeError: If `cell` is not an instance of RNNCell, or `loop_fn` is not\n a `callable`.\n ", "desc": "Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.", "type": "API"}, {"name": "tf.compat.v1.nn.relu", "docs": "Computes rectified linear: `max(features, 0)`.\n\n See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)\n Example usage:\n >>> tf.nn.relu([-2., 0., 3.]).numpy()\n array([0., 0., 3.], dtype=float32)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes rectified linear: `max(features, 0)`.", "type": "API"}, {"name": "tf.compat.v1.nn.relu_layer", "docs": "Computes Relu(x * weight + biases).\n\n Args:\n x: a 2D tensor. Dimensions typically: batch, in_units\n weights: a 2D tensor. Dimensions typically: in_units, out_units\n biases: a 1D tensor. Dimensions: out_units\n name: A name for the operation (optional). If not specified\n \"nn_relu_layer\" is used.\n\n Returns:\n A 2-D Tensor computing relu(matmul(x, weights) + biases).\n Dimensions typically: batch, out_units.\n ", "desc": "Computes Relu(x * weight + biases).", "type": "API"}, {"name": "tf.compat.v1.nn.relu6", "docs": "Computes Rectified Linear 6: `min(max(features, 0), 6)`.\n\n In comparison with `tf.nn.relu`, relu6 activation functions have shown to\n empirically perform better under low-precision conditions (e.g. fixed point\n inference) by encouraging the model to learn sparse features earlier.\n Source: [Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al.,\n 2010](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf).\n\n For example:\n\n >>> x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)\n >>> y = tf.nn.relu6(x)\n >>> y.numpy()\n array([0., 0., 0., 6., 6.], dtype=float32)\n\n Args:\n features: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,\n `int16`, or `int8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `features`.\n\n References:\n Convolutional Deep Belief Networks on CIFAR-10:\n Krizhevsky et al., 2010\n ([pdf](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf))\n ", "desc": "Computes Rectified Linear 6: `min(max(features, 0), 6)`.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.BasicLSTMCell", "docs": "DEPRECATED: Please use `tf.compat.v1.nn.rnn_cell.LSTMCell` instead.\n\n Basic LSTM recurrent network cell.\n\n The implementation is based on\n\n We add forget_bias (default: 1) to the biases of the forget gate in order to\n reduce the scale of forgetting in the beginning of the training.\n\n It does not allow cell clipping, a projection layer, and does not\n use peep-hole connections: it is the basic baseline.\n\n For advanced models, please use the full `tf.compat.v1.nn.rnn_cell.LSTMCell`\n that follows.\n\n Note that this cell is not optimized for performance. Please use\n `tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or\n `tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for\n better performance on CPU.\n ", "desc": "DEPRECATED: Please use `tf.compat.v1.nn.rnn_cell.LSTMCell` instead.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.BasicRNNCell", "docs": "The most basic RNN cell.\n\n Note that this cell is not optimized for performance. Please use\n `tf.contrib.cudnn_rnn.CudnnRNNTanh` for better performance on GPU.\n\n Args:\n num_units: int, The number of units in the RNN cell.\n activation: Nonlinearity to use. Default: `tanh`. It could also be string\n that is within Keras activation function names.\n reuse: (optional) Python boolean describing whether to reuse variables in an\n existing scope. If not `True`, and the existing scope already has the\n given variables, an error is raised.\n name: String, the name of the layer. Layers with the same name will share\n weights, but to avoid mistakes we require reuse=True in such cases.\n dtype: Default dtype of the layer (default of `None` means use the type of\n the first input). Required when `build` is called before `call`.\n **kwargs: Dict, keyword named properties for common layer attributes, like\n `trainable` etc when constructing the cell from configs of get_config().\n ", "desc": "The most basic RNN cell.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.DeviceWrapper", "docs": "Operator that ensures an RNNCell runs on a particular device.", "desc": "Operator that ensures an RNNCell runs on a particular device.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.DropoutWrapper", "docs": "Operator adding dropout to inputs and outputs of the given cell.", "desc": "Operator adding dropout to inputs and outputs of the given cell.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.GRUCell", "docs": "Gated Recurrent Unit cell.\n\n Note that this cell is not optimized for performance. Please use\n `tf.contrib.cudnn_rnn.CudnnGRU` for better performance on GPU, or\n `tf.contrib.rnn.GRUBlockCellV2` for better performance on CPU.\n\n Args:\n num_units: int, The number of units in the GRU cell.\n activation: Nonlinearity to use. Default: `tanh`.\n reuse: (optional) Python boolean describing whether to reuse variables in an\n existing scope. If not `True`, and the existing scope already has the\n given variables, an error is raised.\n kernel_initializer: (optional) The initializer to use for the weight and\n projection matrices.\n bias_initializer: (optional) The initializer to use for the bias.\n name: String, the name of the layer. Layers with the same name will share\n weights, but to avoid mistakes we require reuse=True in such cases.\n dtype: Default dtype of the layer (default of `None` means use the type of\n the first input). Required when `build` is called before `call`.\n **kwargs: Dict, keyword named properties for common layer attributes, like\n `trainable` etc when constructing the cell from configs of get_config().\n\n References:\n Learning Phrase Representations using RNN Encoder Decoder for Statistical\n Machine Translation:\n [Cho et al., 2014]\n (https://aclanthology.coli.uni-saarland.de/papers/D14-1179/d14-1179)\n ([pdf](http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf))\n ", "desc": "Gated Recurrent Unit cell.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.LSTMCell", "docs": "Long short-term memory unit (LSTM) recurrent network cell.\n\n The default non-peephole implementation is based on (Gers et al., 1999).\n The peephole implementation is based on (Sak et al., 2014).\n\n The class uses optional peep-hole connections, optional cell clipping, and\n an optional projection layer.\n\n Note that this cell is not optimized for performance. Please use\n `tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or\n `tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for\n better performance on CPU.\n References:\n Long short-term memory recurrent neural network architectures for large\n scale acoustic modeling:\n [Sak et al., 2014]\n (https://www.isca-speech.org/archive/interspeech_2014/i14_0338.html)\n ([pdf]\n (https://www.isca-speech.org/archive/archive_papers/interspeech_2014/i14_0338.pdf))\n Learning to forget:\n [Gers et al., 1999]\n (http://digital-library.theiet.org/content/conferences/10.1049/cp_19991218)\n ([pdf](https://arxiv.org/pdf/1409.2329.pdf))\n Long Short-Term Memory:\n [Hochreiter et al., 1997]\n (https://www.mitpressjournals.org/doi/abs/10.1162/neco.1997.9.8.1735)\n ([pdf](http://ml.jku.at/publications/older/3504.pdf))\n ", "desc": "Long short-term memory unit (LSTM) recurrent network cell.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.LSTMStateTuple", "docs": "Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.\n\n Stores two elements: `(c, h)`, in that order. Where `c` is the hidden state\n and `h` is the output.\n\n Only used when `state_is_tuple=True`.\n ", "desc": "Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.MultiRNNCell", "docs": "RNN cell composed sequentially of multiple simple cells.\n\n Example:\n\n ```python\n num_units = [128, 64]\n cells = [BasicLSTMCell(num_units=n) for n in num_units]\n stacked_rnn_cell = MultiRNNCell(cells)\n ```\n ", "desc": "RNN cell composed sequentially of multiple simple cells.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.ResidualWrapper", "docs": "RNNCell wrapper that ensures cell inputs are added to the outputs.", "desc": "RNNCell wrapper that ensures cell inputs are added to the outputs.", "type": "API"}, {"name": "tf.compat.v1.nn.rnn_cell.RNNCell", "docs": "Abstract object representing an RNN cell.\n\n Every `RNNCell` must have the properties below and implement `call` with\n the signature `(output, next_state) = call(input, state)`. The optional\n third input argument, `scope`, is allowed for backwards compatibility\n purposes; but should be left off for new subclasses.\n\n This definition of cell differs from the definition used in the literature.\n In the literature, 'cell' refers to an object with a single scalar output.\n This definition refers to a horizontal array of such units.\n\n An RNN cell, in the most abstract setting, is anything that has\n a state and performs some operation that takes a matrix of inputs.\n This operation results in an output matrix with `self.output_size` columns.\n If `self.state_size` is an integer, this operation also results in a new\n state matrix with `self.state_size` columns. If `self.state_size` is a\n (possibly nested tuple of) TensorShape object(s), then it should return a\n matching structure of Tensors having shape `[batch_size].concatenate(s)`\n for each `s` in `self.batch_size`.\n ", "desc": "Abstract object representing an RNN cell.", "type": "API"}, {"name": "tf.compat.v1.nn.safe_embedding_lookup_sparse", "docs": "Lookup embedding results, accounting for invalid IDs and empty features.\n\n The partitioned embedding in `embedding_weights` must all be the same shape\n except for the first dimension. The first dimension is allowed to vary as the\n vocabulary size is not necessarily a multiple of `P`. `embedding_weights`\n may be a `PartitionedVariable` as returned by using\n `tf.compat.v1.get_variable()` with a\n partitioner.\n\n Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs\n with non-positive weight. For an entry with no features, the embedding vector\n for `default_id` is returned, or the 0-vector if `default_id` is not supplied.\n\n The ids and weights may be multi-dimensional. Embeddings are always aggregated\n along the last dimension.\n\n Args:\n embedding_weights: A single tensor representing the complete embedding\n tensor, or a list tensors all of same shape except for the first\n dimension, representing sharded embedding tensors. Alternatively, a\n `PartitionedVariable`, created by partitioning along dimension 0. Each\n element must be appropriately sized for the given `partition_strategy`.\n sparse_ids: `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the\n ids. `d_0` is typically batch size.\n sparse_weights: `SparseTensor` of same shape as `sparse_ids`, containing\n float weights corresponding to `sparse_ids`, or `None` if all weights are\n be assumed to be 1.0.\n combiner: A string specifying how to combine embedding results for each\n entry. Currently \"mean\", \"sqrtn\" and \"sum\" are supported, with \"mean\" the\n default.\n default_id: The id to use for an entry with no features.\n name: A name for this operation (optional).\n partition_strategy: A string specifying the partitioning strategy. Currently\n `\"div\"` and `\"mod\"` are supported. Default is `\"div\"`.\n max_norm: If not `None`, all embeddings are l2-normalized to max_norm before\n combining.\n\n Returns:\n A dense tensor representing the combined embeddings for the\n sparse ids. For each row in the dense tensor represented by `sp_ids`, the op\n looks up the embeddings for all ids in that row, multiplies them by the\n corresponding weight, and combines these embeddings as specified.\n\n In other words, if\n\n `shape(combined embedding_weights) = [p0, p1, ..., pm]`\n\n and\n\n `shape(sparse_ids) = shape(sparse_weights) = [d0, d1, ..., dn]`\n\n then\n\n `shape(output) = [d0, d1, ... dn-1, p1, ..., pm]`.\n\n For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are\n\n ```python\n [0, 0]: id 1, weight 2.0\n [0, 1]: id 3, weight 0.5\n [1, 0]: id -1, weight 1.0\n [2, 3]: id 1, weight 3.0\n ```\n\n `default_id` is 0.\n\n with `combiner`=\"mean\", then the output will be a 3x20 matrix where\n\n ```python\n output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)\n output[1, :] = (params[0, :] * 1.0) / 1.0\n output[2, :] = (params[1, :] * 3.0) / 3.0\n ```\n\n Raises:\n ValueError: if `embedding_weights` is empty.\n ", "desc": "Lookup embedding results, accounting for invalid IDs and empty features.", "type": "API"}, {"name": "tf.compat.v1.nn.sampled_softmax_loss", "docs": "Computes and returns the sampled softmax training loss.\n\n This is a faster way to train a softmax classifier over a huge number of\n classes.\n\n This operation is for training only. It is generally an underestimate of\n the full softmax loss.\n\n A common use case is to use this method for training, and calculate the full\n softmax loss for evaluation or inference. In this case, you must set\n `partition_strategy=\"div\"` for the two losses to be consistent, as in the\n following example:\n\n ```python\n if mode == \"train\":\n loss = tf.nn.sampled_softmax_loss(\n weights=weights,\n biases=biases,\n labels=labels,\n inputs=inputs,\n ...,\n partition_strategy=\"div\")\n elif mode == \"eval\":\n logits = tf.matmul(inputs, tf.transpose(weights))\n logits = tf.nn.bias_add(logits, biases)\n labels_one_hot = tf.one_hot(labels, n_classes)\n loss = tf.nn.softmax_cross_entropy_with_logits(\n labels=labels_one_hot,\n logits=logits)\n ```\n\n See our Candidate Sampling Algorithms Reference\n ([pdf](https://www.tensorflow.org/extras/candidate_sampling.pdf)).\n Also see Section 3 of (Jean et al., 2014) for the math.\n\n Args:\n weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`\n objects whose concatenation along dimension 0 has shape\n [num_classes, dim]. The (possibly-sharded) class embeddings.\n biases: A `Tensor` of shape `[num_classes]`. The class biases.\n labels: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes. Note that this format differs from\n the `labels` argument of `nn.softmax_cross_entropy_with_logits`.\n inputs: A `Tensor` of shape `[batch_size, dim]`. The forward\n activations of the input network.\n num_sampled: An `int`. The number of classes to randomly sample per batch.\n num_classes: An `int`. The number of possible classes.\n num_true: An `int`. The number of target classes per training example.\n sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`,\n `sampled_expected_count`) returned by a `*_candidate_sampler` function.\n (if None, we default to `log_uniform_candidate_sampler`)\n remove_accidental_hits: A `bool`. whether to remove \"accidental hits\"\n where a sampled class equals one of the target classes. Default is\n True.\n partition_strategy: A string specifying the partitioning strategy, relevant\n if `len(weights) > 1`. Currently `\"div\"` and `\"mod\"` are supported.\n Default is `\"mod\"`. See `tf.nn.embedding_lookup` for more details.\n name: A name for the operation (optional).\n seed: random seed for candidate sampling. Default to None, which doesn't set\n the op-level random seed for candidate sampling.\n\n Returns:\n A `batch_size` 1-D tensor of per-example sampled softmax losses.\n\n References:\n On Using Very Large Target Vocabulary for Neural Machine Translation:\n [Jean et al., 2014]\n (https://aclanthology.coli.uni-saarland.de/papers/P15-1001/p15-1001)\n ([pdf](http://aclweb.org/anthology/P15-1001))\n ", "desc": "Computes and returns the sampled softmax training loss.", "type": "API"}, {"name": "tf.compat.v1.nn.scale_regularization_loss", "docs": "Scales the sum of the given regularization losses by number of replicas.\n\n Usage with distribution strategy and custom training loop:\n\n ```python\n with strategy.scope():\n def compute_loss(self, label, predictions):\n per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, predictions)\n\n # Compute loss that is scaled by sample_weight and by global batch size.\n loss = tf.nn.compute_average_loss(\n per_example_loss,\n sample_weight=sample_weight,\n global_batch_size=GLOBAL_BATCH_SIZE)\n\n # Add scaled regularization losses.\n loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))\n return loss\n ```\n\n Args:\n regularization_loss: Regularization loss.\n\n Returns:\n Scalar loss value.\n ", "desc": "Scales the sum of the given regularization losses by number of replicas.", "type": "API"}, {"name": "tf.compat.v1.nn.selu", "docs": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`\n\n if < 0, `scale * features` otherwise.\n\n To be used together with\n `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`.\n For correct dropout, use `tf.contrib.nn.alpha_dropout`.\n\n See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`", "type": "API"}, {"name": "tf.compat.v1.nn.separable_conv2d", "docs": "2-D convolution with separable filters.\n\n Performs a depthwise convolution that acts separately on channels followed by\n a pointwise convolution that mixes channels. Note that this is separability\n between dimensions `[1, 2]` and `3`, not spatial separability between\n dimensions `1` and `2`.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k] = sum_{di, dj, q, r}\n input[b, strides[1] * i + di, strides[2] * j + dj, q] *\n depthwise_filter[di, dj, q, r] *\n pointwise_filter[0, 0, q * channel_multiplier + r, k]\n\n `strides` controls the strides for the depthwise convolution only, since\n the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have\n `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n If any value in `rate` is greater than 1, we perform atrous depthwise\n convolution, in which case all values in the `strides` tensor must be equal\n to 1.\n\n Args:\n input: 4-D `Tensor` with shape according to `data_format`.\n depthwise_filter: 4-D `Tensor` with shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`.\n Contains `in_channels` convolutional filters of depth 1.\n pointwise_filter: 4-D `Tensor` with shape\n `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise\n filter to mix channels after `depthwise_filter` has convolved spatially.\n strides: 1-D of size 4. The strides for the depthwise convolution for\n each dimension of `input`.\n padding: Controls how to pad the image before applying the depthwise\n convolution. Can be the string `\"SAME\"` or `\"VALID\"` indicating the type\n of padding algorithm to use, or a Python list indicating the explicit\n paddings at the start and end of each dimension. When explicit padding is\n used and data_format is `\"NHWC\"`, this should be in the form `[[0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit\n padding used and data_format is `\"NCHW\"`, this should be in the form\n `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.\n rate: 1-D of size 2. The dilation rate in which we sample input values\n across the `height` and `width` dimensions in atrous convolution. If it is\n greater than 1, then all values of strides must be 1.\n name: A name for this operation (optional).\n data_format: The data format for input. Either \"NHWC\" (default) or \"NCHW\".\n dilations: Alias of rate.\n\n Returns:\n A 4-D `Tensor` with shape according to 'data_format'. For\n example, with data_format=\"NHWC\", shape is [batch, out_height,\n out_width, out_channels].\n ", "desc": "2-D convolution with separable filters.", "type": "API"}, {"name": "tf.compat.v1.nn.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.nn.sigmoid_cross_entropy_with_logits", "docs": "Computes sigmoid cross entropy given `logits`.\n\n Measures the probability error in tasks with two outcomes in which each\n outcome is independent and need not have a fully certain label. For instance,\n one could perform a regression where the probability of an event happening is\n known and used as a label. This loss may also be used for binary\n classification, where labels are either zero or one.\n\n For brevity, let `x = logits`, `z = labels`. The logistic loss is\n\n z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))\n = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))\n = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))\n = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))\n = (1 - z) * x + log(1 + exp(-x))\n = x - x * z + log(1 + exp(-x))\n\n For x < 0, to avoid overflow in exp(-x), we reformulate the above\n\n x - x * z + log(1 + exp(-x))\n = log(exp(x)) - x * z + log(1 + exp(-x))\n = - x * z + log(1 + exp(x))\n\n Hence, to ensure stability and avoid overflow, the implementation uses this\n equivalent formulation\n\n max(x, 0) - x * z + log(1 + exp(-abs(x)))\n\n `logits` and `labels` must have the same type and shape.\n\n >>> logits = tf.constant([1., -1., 0., 1., -1., 0., 0.])\n >>> labels = tf.constant([0., 0., 0., 1., 1., 1., 0.5])\n >>> tf.nn.sigmoid_cross_entropy_with_logits(\n ... labels=labels, logits=logits).numpy()\n array([1.3132617, 0.3132617, 0.6931472, 0.3132617, 1.3132617, 0.6931472,\n 0.6931472], dtype=float32)\n\n Compared to the losses which handle multiple outcomes,\n `tf.nn.softmax_cross_entropy_with_logits` for general multi-class\n classification and `tf.nn.sparse_softmax_cross_entropy_with_logits` for more\n efficient multi-class classification with hard labels,\n `sigmoid_cross_entropy_with_logits` is a slight simplification for binary\n classification:\n\n sigmoid(x) = softmax([x, 0])[0]\n\n $$\\frac{1}{1 + e^{-x}} = \\frac{e^x}{e^x + e^0}$$\n\n While `sigmoid_cross_entropy_with_logits` works for soft binary labels\n (probabilities between 0 and 1), it can also be used for binary classification\n where the labels are hard. There is an equivalence between all three symbols\n in this case, with a probability 0 indicating the second class or 1 indicating\n the first class:\n\n >>> sigmoid_logits = tf.constant([1., -1., 0.])\n >>> softmax_logits = tf.stack([sigmoid_logits, tf.zeros_like(sigmoid_logits)],\n ... axis=-1)\n >>> soft_binary_labels = tf.constant([1., 1., 0.])\n >>> soft_multiclass_labels = tf.stack(\n ... [soft_binary_labels, 1. - soft_binary_labels], axis=-1)\n >>> hard_labels = tf.constant([0, 0, 1])\n >>> tf.nn.sparse_softmax_cross_entropy_with_logits(\n ... labels=hard_labels, logits=softmax_logits).numpy()\n array([0.31326166, 1.3132616 , 0.6931472 ], dtype=float32)\n >>> tf.nn.softmax_cross_entropy_with_logits(\n ... labels=soft_multiclass_labels, logits=softmax_logits).numpy()\n array([0.31326166, 1.3132616, 0.6931472], dtype=float32)\n >>> tf.nn.sigmoid_cross_entropy_with_logits(\n ... labels=soft_binary_labels, logits=sigmoid_logits).numpy()\n array([0.31326166, 1.3132616, 0.6931472], dtype=float32)\n\n Args:\n labels: A `Tensor` of the same type and shape as `logits`. Between 0 and 1,\n inclusive.\n logits: A `Tensor` of type `float32` or `float64`. Any real number.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `logits` with the componentwise\n logistic losses.\n\n Raises:\n ValueError: If `logits` and `labels` do not have the same shape.\n ", "desc": "Computes sigmoid cross entropy given `logits`.", "type": "API"}, {"name": "tf.compat.v1.nn.silu", "docs": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.\n\n beta : Hyperparameter for Swish activation function. Default value 1.0.\n\n The SiLU activation function was introduced in \"Gaussian Error Linear Units\n (GELUs)\" [Hendrycks et al. 2016](https://arxiv.org/abs/1606.08415) and\n \"Sigmoid-Weighted Linear Units for Neural Network Function Approximation in\n Reinforcement Learning\"\n [Elfwing et al. 2017](https://arxiv.org/abs/1702.03118) and was independently\n discovered (and called swish) in \"Searching for Activation Functions\"\n [Ramachandran et al. 2017](https://arxiv.org/abs/1710.05941)\n\n Args:\n features: A `Tensor` representing preactivation values.\n beta: A 'Tensor' representing value of beta hyperparameter.\n\n Returns:\n The activation value.\n ", "desc": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.", "type": "API"}, {"name": "tf.compat.v1.nn.softmax", "docs": "Computes softmax activations.\n\n Used for multi-class predictions. The sum of all outputs generated by softmax\n is 1.\n\n This function performs the equivalent of\n\n ```python\n softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)\n ```\n Example usage:\n\n >>> softmax = tf.nn.softmax([-1, 0., 1.])\n >>> softmax\n \n >>> sum(softmax)\n \n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type and shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes softmax activations.", "type": "API"}, {"name": "tf.compat.v1.nn.softmax_cross_entropy_with_logits", "docs": "Computes softmax cross entropy between `logits` and `labels`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n\nFuture major versions of TensorFlow will allow gradients to flow\ninto the labels input on backprop by default.\n\nSee `tf.nn.softmax_cross_entropy_with_logits_v2`.\n\n\nMeasures the probability error in discrete classification tasks in which the\nclasses are mutually exclusive (each entry is in exactly one class). For\nexample, each CIFAR-10 image is labeled with one and only one label: an image\ncan be a dog or a truck, but not both.\n\n**NOTE:** While the classes are mutually exclusive, their probabilities\nneed not be. All that is required is that each row of `labels` is\na valid probability distribution. If they are not, the computation of the\ngradient will be incorrect.\n\nIf using exclusive `labels` (wherein one and only\none class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.\n\n**WARNING:** This op expects unscaled logits, since it performs a `softmax`\non `logits` internally for efficiency. Do not call this op with the\noutput of `softmax`, as it will produce incorrect results.\n\nA common use case is to have logits and labels of shape\n`[batch_size, num_classes]`, but higher dimensions are supported, with\nthe `dim` argument specifying the class dimension.\n\nBackpropagation will happen only into `logits`. To calculate a cross entropy\nloss that allows backpropagation into both `logits` and `labels`, see\n`tf.nn.softmax_cross_entropy_with_logits_v2`.\n\n**Note that to avoid confusion, it is required to pass only named arguments to\nthis function.**\n\nArgs:\n _sentinel: Used to prevent positional parameters. Internal, do not use.\n labels: Each vector along the class dimension should hold a valid\n probability distribution e.g. for the case in which labels are of shape\n `[batch_size, num_classes]`, each row of `labels[i]` must be a valid\n probability distribution.\n logits: Per-label activations, typically a linear output. These activation\n energies are interpreted as unnormalized log probabilities.\n dim: The class dimension. Defaulted to -1 which is the last dimension.\n name: A name for the operation (optional).\n axis: Alias for dim.\n\nReturns:\n A `Tensor` that contains the softmax cross entropy loss. Its type is the\n same as `logits` and its shape is the same as `labels` except that it does\n not have the last dimension of `labels`.", "desc": "Computes softmax cross entropy between `logits` and `labels`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2", "docs": "Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nMeasures the probability error in discrete classification tasks in which the\nclasses are mutually exclusive (each entry is in exactly one class). For\nexample, each CIFAR-10 image is labeled with one and only one label: an image\ncan be a dog or a truck, but not both.\n\n**NOTE:** While the classes are mutually exclusive, their probabilities\nneed not be. All that is required is that each row of `labels` is\na valid probability distribution. If they are not, the computation of the\ngradient will be incorrect.\n\nIf using exclusive `labels` (wherein one and only\none class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.\n\n**WARNING:** This op expects unscaled logits, since it performs a `softmax`\non `logits` internally for efficiency. Do not call this op with the\noutput of `softmax`, as it will produce incorrect results.\n\nA common use case is to have logits and labels of shape\n`[batch_size, num_classes]`, but higher dimensions are supported, with\nthe `axis` argument specifying the class dimension.\n\n`logits` and `labels` must have the same dtype (either `float16`, `float32`,\nor `float64`).\n\nBackpropagation will happen into both `logits` and `labels`. To disallow\nbackpropagation into `labels`, pass label tensors through `tf.stop_gradient`\nbefore feeding it to this function.\n\n**Note that to avoid confusion, it is required to pass only named arguments to\nthis function.**\n\nArgs:\n labels: Each vector along the class dimension should hold a valid\n probability distribution e.g. for the case in which labels are of shape\n `[batch_size, num_classes]`, each row of `labels[i]` must be a valid\n probability distribution.\n logits: Unscaled log probabilities.\n axis: The class dimension. Defaulted to -1 which is the last dimension.\n name: A name for the operation (optional).\n dim: Deprecated alias for axis.\n\nReturns:\n A `Tensor` that contains the softmax cross entropy loss. Its type is the\n same as `logits` and its shape is the same as `labels` except that it does\n not have the last dimension of `labels`.", "desc": "Computes softmax cross entropy between `logits` and `labels`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.nn.softplus", "docs": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.\n\n `softplus` is a smooth approximation of `relu`. Like `relu`, `softplus` always\n takes on positive values.\n\n \n\n Example:\n\n >>> import tensorflow as tf\n >>> tf.math.softplus(tf.range(0, 2, dtype=tf.float32)).numpy()\n array([0.6931472, 1.3132616], dtype=float32)\n\n Args:\n features: `Tensor`\n name: Optional: name to associate with this operation.\n Returns:\n `Tensor`\n ", "desc": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.nn.softsign", "docs": "Computes softsign: `features / (abs(features) + 1)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softsign: `features / (abs(features) + 1)`.", "type": "API"}, {"name": "tf.compat.v1.nn.space_to_batch", "docs": "SpaceToBatch for 4-D tensors of type T.\n\n This is a legacy version of the more general SpaceToBatchND.\n\n Zero-pads and then rearranges (permutes) blocks of spatial data into batch.\n More specifically, this op outputs a copy of the input tensor where values from\n the `height` and `width` dimensions are moved to the `batch` dimension. After\n the zero-padding, both `height` and `width` of the input must be divisible by the\n block size.\n\n The attr `block_size` must be greater than one. It indicates the block size.\n\n * Non-overlapping blocks of size `block_size x block size` in the height and\n width dimensions are rearranged into the batch dimension at each location.\n * The batch of the output tensor is `batch * block_size * block_size`.\n * Both height_pad and width_pad must be divisible by block_size.\n\n The shape of the output will be:\n\n [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,\n depth]\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],\n [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies\n the padding of the input with zeros across the spatial dimensions as follows:\n\n paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]\n\n The effective spatial dimensions of the zero-padded input tensor will be:\n\n height_pad = pad_top + height + pad_bottom\n width_pad = pad_left + width + pad_right\n block_size: An `int` that is `>= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for 4-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.nn.space_to_depth", "docs": "SpaceToDepth for tensors of type T.\n\n Rearranges blocks of spatial data, into depth. More specifically,\n this op outputs a copy of the input tensor where values from the `height`\n and `width` dimensions are moved to the `depth` dimension.\n The attr `block_size` indicates the input block size.\n\n * Non-overlapping blocks of size `block_size x block size` are rearranged\n into depth at each location.\n * The depth of the output tensor is `block_size * block_size * input_depth`.\n * The Y, X coordinates within each block of the input become the high order\n component of the output channel index.\n * The input tensor's height and width must be divisible by block_size.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates\n within the output image, bX, bY means coordinates\n within the input block, iC means input channels).\n The output would be a transpose to the following layout:\n n,oY,oX,bY,bX,iC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 2, 2, 1]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n This operation will output a tensor of shape `[1, 1, 1, 4]`:\n\n ```\n [[[[1, 2, 3, 4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`,\n the corresponding output will have a single element (i.e. width and height are\n both 1) and will have a depth of 4 channels (1 * block_size * block_size).\n The output element shape is `[1, 1, 4]`.\n\n For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n This operation, for block_size of 2, will return the following tensor of shape\n `[1, 1, 1, 12]`\n\n ```\n [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:\n\n ```\n x = [[[[1], [2], [5], [6]],\n [[3], [4], [7], [8]],\n [[9], [10], [13], [14]],\n [[11], [12], [15], [16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 2 2 4]`:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`. The size of the spatial block.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToDepth for tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits", "docs": "Computes sparse softmax cross entropy between `logits` and `labels`.\n\n Measures the probability error in discrete classification tasks in which the\n classes are mutually exclusive (each entry is in exactly one class). For\n example, each CIFAR-10 image is labeled with one and only one label: an image\n can be a dog or a truck, but not both.\n\n **NOTE:** For this operation, the probability of a given label is considered\n exclusive. That is, soft classes are not allowed, and the `labels` vector\n must provide a single specific index for the true class for each row of\n `logits` (each minibatch entry). For soft softmax classification with\n a probability distribution for each entry, see\n `softmax_cross_entropy_with_logits_v2`.\n\n **WARNING:** This op expects unscaled logits, since it performs a `softmax`\n on `logits` internally for efficiency. Do not call this op with the\n output of `softmax`, as it will produce incorrect results.\n\n A common use case is to have logits of shape\n `[batch_size, num_classes]` and have labels of shape\n `[batch_size]`, but higher dimensions are supported, in which\n case the `dim`-th dimension is assumed to be of size `num_classes`.\n `logits` must have the dtype of `float16`, `float32`, or `float64`, and\n `labels` must have the dtype of `int32` or `int64`.\n\n **Note that to avoid confusion, it is required to pass only named arguments to\n this function.**\n\n Args:\n _sentinel: Used to prevent positional parameters. Internal, do not use.\n labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of\n `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`\n must be an index in `[0, num_classes)`. Other values will raise an\n exception when this op is run on CPU, and return `NaN` for corresponding\n loss and gradient rows on GPU.\n logits: Per-label activations (typically a linear output) of shape\n `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float16`, `float32`, or\n `float64`. These activation energies are interpreted as unnormalized log\n probabilities.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `labels` and of the same type as `logits`\n with the softmax cross entropy loss.\n\n Raises:\n ValueError: If logits are scalars (need to have rank >= 1) or if the rank\n of the labels is not equal to the rank of the logits minus one.\n ", "desc": "Computes sparse softmax cross entropy between `logits` and `labels`.", "type": "API"}, {"name": "tf.compat.v1.nn.static_bidirectional_rnn", "docs": "Creates a bidirectional recurrent neural network. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.Bidirectional(keras.layers.RNN(cell, unroll=True))`, which is equivalent to this API\n\nSimilar to the unidirectional case above (rnn) but takes input and builds\nindependent forward and backward RNNs with the final forward and backward\noutputs depth-concatenated, such that the output will have the format\n[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of\nforward and backward cell must match. The initial state for both directions\nis zero by default (but can be set optionally) and no intermediate states are\never returned -- the network is fully unrolled for the given (passed in)\nlength(s) of the sequence(s) or completely unrolled if length(s) is not given.\n\nArgs:\n cell_fw: An instance of RNNCell, to be used for forward direction.\n cell_bw: An instance of RNNCell, to be used for backward direction.\n inputs: A length T list of inputs, each a tensor of shape [batch_size,\n input_size], or a nested tuple of such elements.\n initial_state_fw: (optional) An initial state for the forward RNN. This must\n be a tensor of appropriate type and shape `[batch_size,\n cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a\n tuple of tensors having shapes `[batch_size, s] for s in\n cell_fw.state_size`.\n initial_state_bw: (optional) Same as for `initial_state_fw`, but using the\n corresponding properties of `cell_bw`.\n dtype: (optional) The data type for the initial state. Required if either\n of the initial states are not provided.\n sequence_length: (optional) An int32/int64 vector, size `[batch_size]`,\n containing the actual lengths for each of the sequences.\n scope: VariableScope for the created subgraph; defaults to\n \"bidirectional_rnn\"\n\nReturns:\n A tuple (outputs, output_state_fw, output_state_bw) where:\n outputs is a length `T` list of outputs (one for each input), which\n are depth-concatenated forward and backward outputs.\n output_state_fw is the final state of the forward rnn.\n output_state_bw is the final state of the backward rnn.\n\nRaises:\n TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.\n ValueError: If inputs is None or an empty list.", "desc": "Creates a bidirectional recurrent neural network. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.static_rnn", "docs": "Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.RNN(cell, unroll=True)`, which is equivalent to this API\n\nThe simplest form of RNN network generated is:\n\n```python\n state = cell.zero_state(...)\n outputs = []\n for input_ in inputs:\n output, state = cell(input_, state)\n outputs.append(output)\n return (outputs, state)\n```\nHowever, a few other options are available:\n\nAn initial state can be provided.\nIf the sequence_length vector is provided, dynamic calculation is performed.\nThis method of calculation does not compute the RNN steps past the maximum\nsequence length of the minibatch (thus saving computational time),\nand properly propagates the state at an example's sequence length\nto the final state output.\n\nThe dynamic calculation performed is, at time `t` for batch row `b`,\n\n```python\n (output, state)(b, t) =\n (t >= sequence_length(b))\n ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))\n : cell(input(b, t), state(b, t - 1))\n```\n\nArgs:\n cell: An instance of RNNCell.\n inputs: A length T list of inputs, each a `Tensor` of shape `[batch_size,\n input_size]`, or a nested tuple of such elements.\n initial_state: (optional) An initial state for the RNN. If `cell.state_size`\n is an integer, this must be a `Tensor` of appropriate type and shape\n `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this\n should be a tuple of tensors having shapes `[batch_size, s] for s in\n cell.state_size`.\n dtype: (optional) The data type for the initial state and expected output.\n Required if initial_state is not provided or RNN state has a heterogeneous\n dtype.\n sequence_length: Specifies the length of each sequence in inputs. An int32\n or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.\n scope: VariableScope for the created subgraph; defaults to \"rnn\".\n\nReturns:\n A pair (outputs, state) where:\n\n - outputs is a length T list of outputs (one for each input), or a nested\n tuple of such elements.\n - state is the final state\n\nRaises:\n TypeError: If `cell` is not an instance of RNNCell.\n ValueError: If `inputs` is `None` or an empty list, or if the input depth\n (column size) cannot be inferred from inputs via shape inference.", "desc": "Creates a recurrent neural network specified by RNNCell `cell`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.static_state_saving_rnn", "docs": "RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.RNN(cell, stateful=True)`, which is equivalent to this API\n\nArgs:\n cell: An instance of `RNNCell`.\n inputs: A length T list of inputs, each a `Tensor` of shape `[batch_size,\n input_size]`.\n state_saver: A state saver object with methods `state` and `save_state`.\n state_name: Python string or tuple of strings. The name to use with the\n state_saver. If the cell returns tuples of states (i.e., `cell.state_size`\n is a tuple) then `state_name` should be a tuple of strings having the same\n length as `cell.state_size`. Otherwise it should be a single string.\n sequence_length: (optional) An int32/int64 vector size [batch_size]. See the\n documentation for rnn() for more details about sequence_length.\n scope: VariableScope for the created subgraph; defaults to \"rnn\".\n\nReturns:\n A pair (outputs, state) where:\n outputs is a length T list of outputs (one for each input)\n states is the final state\n\nRaises:\n TypeError: If `cell` is not an instance of RNNCell.\n ValueError: If `inputs` is `None` or an empty list, or if the arity and\n type of `state_name` does not match that of `cell.state_size`.", "desc": "RNN that accepts a state saver for time-truncated RNN calculation. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.nn.sufficient_statistics", "docs": "Calculate the sufficient statistics for the mean and variance of `x`.\n\n These sufficient statistics are computed using the one pass algorithm on\n an input that's optionally shifted. See:\n https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data\n\n For example:\n >>> t = [[1, 2, 3], [4, 5, 6]]\n >>> sufficient_statistics(t, [1])\n (, , , None)\n >>> sufficient_statistics(t, [-1])\n (, , , None)\n\n Args:\n x: A `Tensor`.\n axes: Array of ints. Axes along which to compute mean and variance. As in\n Python, the axes can also be negative numbers. A negative axis is\n interpreted as counting from the end of the rank, i.e., axis +\n rank(values)-th dimension.\n shift: A `Tensor` containing the value by which to shift the data for\n numerical stability, or `None` if no shift is to be performed. A shift\n close to the true mean provides the most numerically stable results.\n keep_dims: produce statistics with the same dimensionality as the input.\n name: Name used to scope the operations that compute the sufficient stats.\n keepdims: Alias for keep_dims.\n\n Returns:\n Four `Tensor` objects of the same type as `x`:\n\n * the count (number of elements to average over).\n * the (possibly shifted) sum of the elements in the array.\n * the (possibly shifted) sum of squares of the elements in the array.\n * the shift by which the mean must be corrected or None if `shift` is None.\n ", "desc": "Calculate the sufficient statistics for the mean and variance of `x`.", "type": "API"}, {"name": "tf.compat.v1.nn.swish", "docs": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.\n\n beta : Hyperparameter for Swish activation function. Default value 1.0.\n\n The SiLU activation function was introduced in \"Gaussian Error Linear Units\n (GELUs)\" [Hendrycks et al. 2016](https://arxiv.org/abs/1606.08415) and\n \"Sigmoid-Weighted Linear Units for Neural Network Function Approximation in\n Reinforcement Learning\"\n [Elfwing et al. 2017](https://arxiv.org/abs/1702.03118) and was independently\n discovered (and called swish) in \"Searching for Activation Functions\"\n [Ramachandran et al. 2017](https://arxiv.org/abs/1710.05941)\n\n Args:\n features: A `Tensor` representing preactivation values.\n beta: A 'Tensor' representing value of beta hyperparameter.\n\n Returns:\n The activation value.\n ", "desc": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.", "type": "API"}, {"name": "tf.compat.v1.nn.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.nn.top_k", "docs": "Finds values and indices of the `k` largest entries for the last dimension.\n\n If the input is a vector (rank=1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n >>> result = tf.math.top_k([1, 2, 98, 1, 1, 99, 3, 1, 3, 96, 4, 1],\n ... k=3)\n >>> result.values.numpy()\n array([99, 98, 96], dtype=int32)\n >>> result.indices.numpy()\n array([5, 2, 9], dtype=int32)\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n >>> input = tf.random.normal(shape=(3,4,5,6))\n >>> k = 2\n >>> values, indices = tf.math.top_k(input, k=k)\n >>> values.shape.as_list()\n [3, 4, 5, 2]\n >>>\n >>> values.shape == indices.shape == input.shape[:-1] + [k]\n True\n\n The indices can be used to `gather` from a tensor who's shape matches `input`.\n\n >>> gathered_values = tf.gather(input, indices, batch_dims=-1)\n >>> assert tf.reduce_all(gathered_values == values)\n\n If two elements are equal, the lower-index element appears first.\n\n >>> result = tf.math.top_k([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],\n ... k=3)\n >>> result.indices.numpy()\n array([0, 1, 3], dtype=int32)\n\n Args:\n input: 1-D or higher `Tensor` with last dimension at least `k`.\n k: 0-D `int32` `Tensor`. Number of top elements to look for along the last\n dimension (along each row for matrices).\n sorted: If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: Optional name for the operation.\n\n Returns:\n A tuple with two named fields:\n values: The `k` largest elements along each last dimensional slice.\n indices: The indices of `values` within the last dimension of `input`.\n ", "desc": "Finds values and indices of the `k` largest entries for the last dimension.", "type": "API"}, {"name": "tf.compat.v1.nn.uniform_candidate_sampler", "docs": "Samples a set of classes using a uniform base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is the uniform distribution\n over the range of integers `[0, range_max)`.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample. The\n `sampled_candidates` return value will have shape `[num_sampled]`. If\n `unique=True`, `num_sampled` must be less than or equal to `range_max`.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. The\n sampled classes, either with possible duplicates (`unique=False`) or all\n unique (`unique=True`). In either case, `sampled_candidates` is\n independent of the true classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a uniform base distribution.", "type": "API"}, {"name": "tf.compat.v1.nn.weighted_cross_entropy_with_logits", "docs": "Computes a weighted cross entropy. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(targets)`. They will be removed in a future version.\nInstructions for updating:\ntargets is deprecated, use labels instead\n\nThis is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`,\nallows one to trade off recall and precision by up- or down-weighting the\ncost of a positive error relative to a negative error.\n\nThe usual cross-entropy cost is defined as:\n\n labels * -log(sigmoid(logits)) +\n (1 - labels) * -log(1 - sigmoid(logits))\n\nA value `pos_weight > 1` decreases the false negative count, hence increasing\nthe recall.\nConversely setting `pos_weight < 1` decreases the false positive count and\nincreases the precision.\nThis can be seen from the fact that `pos_weight` is introduced as a\nmultiplicative coefficient for the positive labels term\nin the loss expression:\n\n labels * -log(sigmoid(logits)) * pos_weight +\n (1 - labels) * -log(1 - sigmoid(logits))\n\nFor brevity, let `x = logits`, `z = labels`, `q = pos_weight`.\nThe loss is:\n\n qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))\n = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))\n = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))\n = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))\n\nSetting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,\nthe implementation uses\n\n (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))\n\n`logits` and `labels` must have the same type and shape.\n\nArgs:\n labels: A `Tensor` of the same type and shape as `logits`.\n logits: A `Tensor` of type `float32` or `float64`.\n pos_weight: A coefficient to use on the positive examples.\n name: A name for the operation (optional).\n targets: Deprecated alias for labels.\n\nReturns:\n A `Tensor` of the same shape as `logits` with the componentwise\n weighted logistic losses.\n\nRaises:\n ValueError: If `logits` and `labels` do not have the same shape.", "desc": "Computes a weighted cross entropy. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.nn.weighted_moments", "docs": "Returns the frequency-weighted mean and variance of `x`.\n\n Args:\n x: A tensor.\n axes: 1-d tensor of int32 values; these are the axes along which\n to compute mean and variance.\n frequency_weights: A tensor of positive weights which can be\n broadcast with x.\n name: Name used to scope the operation.\n keep_dims: Produce moments with the same dimensionality as the input.\n keepdims: Alias of keep_dims.\n\n Returns:\n Two tensors: `weighted_mean` and `weighted_variance`.\n ", "desc": "Returns the frequency-weighted mean and variance of `x`.", "type": "API"}, {"name": "tf.compat.v1.nn.with_space_to_batch", "docs": "Performs `op` on the space-to-batch representation of `input`.\n\n This has the effect of transforming sliding window operations into the\n corresponding \"atrous\" operation in which the input is sampled at the\n specified `dilation_rate`.\n\n In the special case that `dilation_rate` is uniformly 1, this simply returns:\n\n op(input, num_spatial_dims, padding)\n\n Otherwise, it returns:\n\n batch_to_space_nd(\n op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),\n num_spatial_dims,\n \"VALID\")\n adjusted_dilation_rate,\n adjusted_crops),\n\n where:\n\n adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],\n adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]\n\n defined as follows:\n\n We first define two int64 tensors `paddings` and `crops` of shape\n `[num_spatial_dims, 2]` based on the value of `padding` and the spatial\n dimensions of the `input`:\n\n If `padding = \"VALID\"`, then:\n\n paddings, crops = required_space_to_batch_paddings(\n input_shape[spatial_dims],\n dilation_rate)\n\n If `padding = \"SAME\"`, then:\n\n dilated_filter_shape =\n filter_shape + (filter_shape - 1) * (dilation_rate - 1)\n\n paddings, crops = required_space_to_batch_paddings(\n input_shape[spatial_dims],\n dilation_rate,\n [(dilated_filter_shape - 1) // 2,\n dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])\n\n Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial\n dimensions are contiguous starting at the second dimension, but the specified\n `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and\n `crops` in order to be usable with these operations. For a given dimension,\n if the block size is 1, and both the starting and ending padding and crop\n amounts are 0, then space_to_batch_nd effectively leaves that dimension alone,\n which is what is needed for dimensions not part of `spatial_dims`.\n Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case\n efficiently for any number of leading and trailing dimensions.\n\n For 0 <= i < len(spatial_dims), we assign:\n\n adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]\n adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]\n adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]\n\n All unassigned values of `adjusted_dilation_rate` default to 1, while all\n unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.\n\n Note in the case that `dilation_rate` is not uniformly 1, specifying \"VALID\"\n padding is equivalent to specifying `padding = \"SAME\"` with a filter_shape of\n `[1]*N`.\n\n Advanced usage. Note the following optimization: A sequence of\n `with_space_to_batch` operations with identical (not uniformly 1)\n `dilation_rate` parameters and \"VALID\" padding\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", op_1)\n ...\n net = with_space_to_batch(net, dilation_rate, \"VALID\", op_k)\n\n can be combined into a single `with_space_to_batch` operation as follows:\n\n def combined_op(converted_input, num_spatial_dims, _):\n result = op_1(converted_input, num_spatial_dims, \"VALID\")\n ...\n result = op_k(result, num_spatial_dims, \"VALID\")\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", combined_op)\n\n This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and\n `batch_to_space_nd`.\n\n Similarly, a sequence of `with_space_to_batch` operations with identical (not\n uniformly 1) `dilation_rate` parameters, \"SAME\" padding, and odd filter\n dimensions\n\n net = with_space_to_batch(net, dilation_rate, \"SAME\", op_1, filter_shape_1)\n ...\n net = with_space_to_batch(net, dilation_rate, \"SAME\", op_k, filter_shape_k)\n\n can be combined into a single `with_space_to_batch` operation as follows:\n\n def combined_op(converted_input, num_spatial_dims, _):\n result = op_1(converted_input, num_spatial_dims, \"SAME\")\n ...\n result = op_k(result, num_spatial_dims, \"SAME\")\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", combined_op)\n\n Args:\n input: Tensor of rank > max(spatial_dims).\n dilation_rate: int32 Tensor of *known* shape [num_spatial_dims].\n padding: str constant equal to \"VALID\" or \"SAME\"\n op: Function that maps (input, num_spatial_dims, padding) -> output\n filter_shape: If padding = \"SAME\", specifies the shape of the convolution\n kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims].\n If padding = \"VALID\", filter_shape is ignored and need not be specified.\n spatial_dims: Monotonically increasing sequence of `num_spatial_dims`\n integers (which are >= 1) specifying the spatial dimensions of `input`\n and output. Defaults to: `range(1, num_spatial_dims+1)`.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n\n Returns:\n The output Tensor as described above, dimensions will vary based on the op\n provided.\n\n Raises:\n ValueError: if `padding` is invalid or the arguments are incompatible.\n ValueError: if `spatial_dims` are invalid.\n ", "desc": "Performs `op` on the space-to-batch representation of `input`.", "type": "API"}, {"name": "tf.compat.v1.nn.xw_plus_b", "docs": "Computes matmul(x, weights) + biases.\n\n Args:\n x: a 2D tensor. Dimensions typically: batch, in_units\n weights: a 2D tensor. Dimensions typically: in_units, out_units\n biases: a 1D tensor. Dimensions: out_units\n name: A name for the operation (optional). If not specified\n \"xw_plus_b\" is used.\n\n Returns:\n A 2-D Tensor computing matmul(x, weights) + biases.\n Dimensions typically: batch, out_units.\n ", "desc": "Computes matmul(x, weights) + biases.", "type": "API"}, {"name": "tf.compat.v1.nn.zero_fraction", "docs": "Returns the fraction of zeros in `value`.\n\n If `value` is empty, the result is `nan`.\n\n This is useful in summaries to measure and report sparsity. For example,\n\n ```python\n z = tf.nn.relu(...)\n summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))\n ```\n\n Args:\n value: A tensor of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n The fraction of zeros in `value`, with type `float32`.\n ", "desc": "Returns the fraction of zeros in `value`.", "type": "API"}, {"name": "tf.compat.v1.no_gradient", "docs": "Specifies that ops of type `op_type` is not differentiable.\n\n This function should *not* be used for operations that have a\n well-defined gradient that is not yet implemented.\n\n This function is only used when defining a new op type. It may be\n used for ops such as `tf.size()` that are not differentiable. For\n example:\n\n ```python\n tf.no_gradient(\"Size\")\n ```\n\n The gradient computed for 'op_type' will then propagate zeros.\n\n For ops that have a well-defined gradient but are not yet implemented,\n no declaration should be made, and an error *must* be thrown if\n an attempt to request its gradient is made.\n\n Args:\n op_type: The string type of an operation. This corresponds to the\n `OpDef.name` field for the proto that defines the operation.\n\n Raises:\n TypeError: If `op_type` is not a string.\n\n ", "desc": "Specifies that ops of type `op_type` is not differentiable.", "type": "API"}, {"name": "tf.compat.v1.no_op", "docs": "Does nothing. Only useful as a placeholder for control edges.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Does nothing. Only useful as a placeholder for control edges.", "type": "API"}, {"name": "tf.compat.v1.no_regularizer", "docs": "Use this function to prevent regularization of variables.", "desc": "Use this function to prevent regularization of variables.", "type": "API"}, {"name": "tf.compat.v1.NodeDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.NodeDef.AttrEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.NodeDef.ExperimentalDebugInfo", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.NoGradient", "docs": "Specifies that ops of type `op_type` is not differentiable.\n\n This function should *not* be used for operations that have a\n well-defined gradient that is not yet implemented.\n\n This function is only used when defining a new op type. It may be\n used for ops such as `tf.size()` that are not differentiable. For\n example:\n\n ```python\n tf.no_gradient(\"Size\")\n ```\n\n The gradient computed for 'op_type' will then propagate zeros.\n\n For ops that have a well-defined gradient but are not yet implemented,\n no declaration should be made, and an error *must* be thrown if\n an attempt to request its gradient is made.\n\n Args:\n op_type: The string type of an operation. This corresponds to the\n `OpDef.name` field for the proto that defines the operation.\n\n Raises:\n TypeError: If `op_type` is not a string.\n\n ", "desc": "Specifies that ops of type `op_type` is not differentiable.", "type": "API"}, {"name": "tf.compat.v1.nondifferentiable_batch_function", "docs": "Batches the computation done by the decorated function.\n\n So, for example, in the following code\n\n ```python\n @batch_function(1, 2, 3)\n def layer(a):\n return tf.matmul(a, a)\n\n b = layer(w)\n ```\n\n if more than one session.run call is simultaneously trying to compute `b`\n the values of `w` will be gathered, non-deterministically concatenated\n along the first axis, and only one thread will run the computation. See the\n documentation of the `Batch` op for more details.\n\n Assumes that all arguments of the decorated function are Tensors which will\n be batched along their first dimension.\n\n SparseTensor is not supported. The return value of the decorated function\n must be a Tensor or a list/tuple of Tensors.\n\n Args:\n num_batch_threads: Number of scheduling threads for processing batches\n of work. Determines the number of batches processed in parallel.\n max_batch_size: Batch sizes will never be bigger than this.\n batch_timeout_micros: Maximum number of microseconds to wait before\n outputting an incomplete batch.\n allowed_batch_sizes: Optional list of allowed batch sizes. If left empty,\n does nothing. Otherwise, supplies a list of batch sizes, causing the op\n to pad batches up to one of those sizes. The entries must increase\n monotonically, and the final entry must equal max_batch_size.\n max_enqueued_batches: The maximum depth of the batch queue. Defaults to 10.\n autograph: Whether to use autograph to compile python and eager style code\n for efficient graph-mode execution.\n enable_large_batch_splitting: The value of this option doesn't affect\n processing output given the same input; it affects implementation details\n as stated below: 1. Improve batching efficiency by eliminating unnecessary\n adding. 2.`max_batch_size` specifies the limit of input and\n `allowed_batch_sizes` specifies the limit of a task to be processed. API\n user can give an input of size 128 when 'max_execution_batch_size'\n is 32 -> implementation can split input of 128 into 4 x 32, schedule\n concurrent processing, and then return concatenated results corresponding\n to 128.\n\n Returns:\n The decorated function will return the unbatched computation output Tensors.\n ", "desc": "Batches the computation done by the decorated function.", "type": "API"}, {"name": "tf.compat.v1.norm", "docs": "Computes the norm of vectors, matrices, and tensors. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis function can compute several different vector norms (the 1-norm, the\nEuclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\nmatrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\nArgs:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are 'fro', 'euclidean',\n `1`, `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply:\n a) The Frobenius norm `fro` is not defined for vectors,\n b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', `1`,\n `2`, `np.inf` are supported.\n See the description of `axis` on how to compute norms for a batch of\n vectors or matrices stored in a tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`.\n If `axis` is a Python integer, the input is considered a batch of vectors,\n and `axis` determines the axis in `tensor` over which to compute vector\n norms.\n If `axis` is a 2-tuple of Python integers it is considered a batch of\n matrices and `axis` determines the axes in `tensor` over which to compute\n a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n keepdims: If True, the axis indicated in `axis` are kept with size 1.\n Otherwise, the dimensions in `axis` are removed from the output shape.\n name: The name of the op.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n output: A `Tensor` of the same type as tensor, containing the vector or\n matrix norms. If `keepdims` is True then the rank of output is equal to\n the rank of `tensor`. Otherwise, if `axis` is none the output is a scalar,\n if `axis` is an integer, the rank of `output` is one less than the rank\n of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less\n than the rank of `tensor`.\n\nRaises:\n ValueError: If `ord` or `axis` is invalid.\n\n@compatibility(numpy)\nMostly equivalent to numpy.linalg.norm.\nNot supported: ord <= 0, 2-norm for matrices, nuclear norm.\nOther differences:\n a) If axis is `None`, treats the flattened `tensor` as a vector\n regardless of rank.\n b) Explicitly supports 'euclidean' norm as the default, including for\n higher order tensors.\n@end_compatibility", "desc": "Computes the norm of vectors, matrices, and tensors. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.not_equal", "docs": "Returns the truth value of (x != y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise inequality comparison, returning a Tensor\n of boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.not_equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.not_equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x != y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.NotDifferentiable", "docs": "Specifies that ops of type `op_type` is not differentiable.\n\n This function should *not* be used for operations that have a\n well-defined gradient that is not yet implemented.\n\n This function is only used when defining a new op type. It may be\n used for ops such as `tf.size()` that are not differentiable. For\n example:\n\n ```python\n tf.no_gradient(\"Size\")\n ```\n\n The gradient computed for 'op_type' will then propagate zeros.\n\n For ops that have a well-defined gradient but are not yet implemented,\n no declaration should be made, and an error *must* be thrown if\n an attempt to request its gradient is made.\n\n Args:\n op_type: The string type of an operation. This corresponds to the\n `OpDef.name` field for the proto that defines the operation.\n\n Raises:\n TypeError: If `op_type` is not a string.\n\n ", "desc": "Specifies that ops of type `op_type` is not differentiable.", "type": "API"}, {"name": "tf.compat.v1.numpy_function", "docs": "Wraps a python function and uses it as a TensorFlow op.\n\n Given a python function `func` wrap this function as an operation in a\n TensorFlow function. `func` must take numpy arrays as its arguments and\n return numpy arrays as its outputs.\n\n The following example creates a TensorFlow graph with `np.sinh()` as an\n operation in the graph:\n\n >>> def my_numpy_func(x):\n ... # x will be a numpy array with the contents of the input to the\n ... # tf.function\n ... return np.sinh(x)\n >>> @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])\n ... def tf_function(input):\n ... y = tf.numpy_function(my_numpy_func, [input], tf.float32)\n ... return y * y\n >>> tf_function(tf.constant(1.))\n \n\n Comparison to `tf.py_function`:\n `tf.py_function` and `tf.numpy_function` are very similar, except that\n `tf.numpy_function` takes numpy arrays, and not `tf.Tensor`s. If you want the\n function to contain `tf.Tensors`, and have any TensorFlow operations executed\n in the function be differentiable, please use `tf.py_function`.\n\n Note: We recommend to avoid using `tf.numpy_function` outside of\n prototyping and experimentation due to the following known limitations:\n\n * Calling `tf.numpy_function` will acquire the Python Global Interpreter Lock\n (GIL) that allows only one thread to run at any point in time. This will\n preclude efficient parallelization and distribution of the execution of the\n program. Therefore, you are discouraged to use `tf.numpy_function` outside\n of prototyping and experimentation.\n\n * The body of the function (i.e. `func`) will not be serialized in a\n `tf.SavedModel`. Therefore, you should not use this function if you need to\n serialize your model and restore it in a different environment.\n\n * The operation must run in the same address space as the Python program\n that calls `tf.numpy_function()`. If you are using distributed\n TensorFlow, you must run a `tf.distribute.Server` in the same process as the\n program that calls `tf.numpy_function` you must pin the created\n operation to a device in that server (e.g. using `with tf.device():`).\n\n * Currently `tf.numpy_function` is not compatible with XLA. Calling\n `tf.numpy_function` inside `tf.function(jit_comiple=True)` will raise an\n error.\n\n * Since the function takes numpy arrays, you cannot take gradients\n through a numpy_function. If you require something that is differentiable,\n please consider using tf.py_function.\n\n Args:\n func: A Python function, which accepts `numpy.ndarray` objects as arguments\n and returns a list of `numpy.ndarray` objects (or a single\n `numpy.ndarray`). This function must accept as many arguments as there are\n tensors in `inp`, and these argument types will match the corresponding\n `tf.Tensor` objects in `inp`. The returns `numpy.ndarray`s must match the\n number and types defined `Tout`.\n Important Note: Input and output `numpy.ndarray`s of `func` are not\n guaranteed to be copies. In some cases their underlying memory will be\n shared with the corresponding TensorFlow tensors. In-place modification\n or storing `func` input or return values in python datastructures\n without explicit (np.)copy can have non-deterministic consequences.\n inp: A list of `tf.Tensor` objects.\n Tout: A list or tuple of tensorflow data types or a single tensorflow data\n type if there is only one, indicating what `func` returns.\n stateful: (Boolean.) Setting this argument to False tells the runtime to\n treat the function as stateless, which enables certain optimizations.\n A function is stateless when given the same input it will return the\n same output and have no side effects; its only purpose is to have a\n return value.\n The behavior for a stateful function with the `stateful` argument False\n is undefined. In particular, caution should be taken when\n mutating the input arguments as this is a stateful operation.\n name: (Optional) A name for the operation.\n\n Returns:\n Single or list of `tf.Tensor` which `func` computes.\n ", "desc": "Wraps a python function and uses it as a TensorFlow op.", "type": "API"}, {"name": "tf.compat.v1.one_hot", "docs": "Returns a one-hot tensor.\n\n See also `tf.fill`, `tf.eye`.\n\n The locations represented by indices in `indices` take value `on_value`,\n while all other locations take value `off_value`.\n\n `on_value` and `off_value` must have matching data types. If `dtype` is also\n provided, they must be the same data type as specified by `dtype`.\n\n If `on_value` is not provided, it will default to the value `1` with type\n `dtype`\n\n If `off_value` is not provided, it will default to the value `0` with type\n `dtype`\n\n If the input `indices` is rank `N`, the output will have rank `N+1`. The\n new axis is created at dimension `axis` (default: the new axis is appended\n at the end).\n\n If `indices` is a scalar the output shape will be a vector of length `depth`\n\n If `indices` is a vector of length `features`, the output shape will be:\n\n ```\n features x depth if axis == -1\n depth x features if axis == 0\n ```\n\n If `indices` is a matrix (batch) with shape `[batch, features]`, the output\n shape will be:\n\n ```\n batch x features x depth if axis == -1\n batch x depth x features if axis == 1\n depth x batch x features if axis == 0\n ```\n\n If `indices` is a RaggedTensor, the 'axis' argument must be positive and refer\n to a non-ragged axis. The output will be equivalent to applying 'one_hot' on\n the values of the RaggedTensor, and creating a new RaggedTensor from the\n result.\n\n If `dtype` is not provided, it will attempt to assume the data type of\n `on_value` or `off_value`, if one or both are passed in. If none of\n `on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the\n value `tf.float32`.\n\n Note: If a non-numeric data type output is desired (`tf.string`, `tf.bool`,\n etc.), both `on_value` and `off_value` _must_ be provided to `one_hot`.\n\n For example:\n\n ```python\n indices = [0, 1, 2]\n depth = 3\n tf.one_hot(indices, depth) # output: [3 x 3]\n # [[1., 0., 0.],\n # [0., 1., 0.],\n # [0., 0., 1.]]\n\n indices = [0, 2, -1, 1]\n depth = 3\n tf.one_hot(indices, depth,\n on_value=5.0, off_value=0.0,\n axis=-1) # output: [4 x 3]\n # [[5.0, 0.0, 0.0], # one_hot(0)\n # [0.0, 0.0, 5.0], # one_hot(2)\n # [0.0, 0.0, 0.0], # one_hot(-1)\n # [0.0, 5.0, 0.0]] # one_hot(1)\n\n indices = [[0, 2], [1, -1]]\n depth = 3\n tf.one_hot(indices, depth,\n on_value=1.0, off_value=0.0,\n axis=-1) # output: [2 x 2 x 3]\n # [[[1.0, 0.0, 0.0], # one_hot(0)\n # [0.0, 0.0, 1.0]], # one_hot(2)\n # [[0.0, 1.0, 0.0], # one_hot(1)\n # [0.0, 0.0, 0.0]]] # one_hot(-1)\n\n indices = tf.ragged.constant([[0, 1], [2]])\n depth = 3\n tf.one_hot(indices, depth) # output: [2 x None x 3]\n # [[[1., 0., 0.],\n # [0., 1., 0.]],\n # [[0., 0., 1.]]]\n ```\n\n Args:\n indices: A `Tensor` of indices.\n depth: A scalar defining the depth of the one hot dimension.\n on_value: A scalar defining the value to fill in output when `indices[j]\n = i`. (default: 1)\n off_value: A scalar defining the value to fill in output when `indices[j]\n != i`. (default: 0)\n axis: The axis to fill (default: -1, a new inner-most axis).\n dtype: The data type of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n output: The one-hot tensor.\n\n Raises:\n TypeError: If dtype of either `on_value` or `off_value` don't match `dtype`\n TypeError: If dtype of `on_value` and `off_value` don't match one another\n ", "desc": "Returns a one-hot tensor.", "type": "API"}, {"name": "tf.compat.v1.ones", "docs": "Creates a tensor with all elements set to one (1).\n\n See also `tf.ones_like`, `tf.zeros`, `tf.fill`, `tf.eye`.\n\n This operation returns a tensor of type `dtype` with shape `shape` and\n all elements set to one.\n\n >>> tf.ones([3, 4], tf.int32)\n \n\n Args:\n shape: A `list` of integers, a `tuple` of integers, or\n a 1-D `Tensor` of type `int32`.\n dtype: Optional DType of an element in the resulting `Tensor`. Default is\n `tf.float32`.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor` with all elements set to one (1).\n ", "desc": "Creates a tensor with all elements set to one (1).", "type": "API"}, {"name": "tf.compat.v1.ones_initializer", "docs": "Initializer that generates tensors initialized to 1.\n\n @compatibility(TF2)\n This API is compatible with TF2 behavior and `tf.function`, and can be\n migrated immediately with `tf.keras.initializers.ones`.\n\n Before:\n >>> initializer = tf.compat.v1.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n After:\n >>> initializer = tf.keras.initializers.ones()\n >>> initializer((1, 1))\n \n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 1.", "type": "API"}, {"name": "tf.compat.v1.ones_like", "docs": "Creates a tensor with all elements set to 1.\n\n See also `tf.ones`.\n\n Given a single tensor (`tensor`), this operation returns a tensor of the same\n type and shape as `tensor` with all elements set to 1. Optionally, you can\n specify a new type (`dtype`) for the returned tensor.\n\n For example:\n\n ```python\n tensor = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.ones_like(tensor) # [[1, 1, 1], [1, 1, 1]]\n ```\n\n Args:\n tensor: A `Tensor`.\n dtype: A type for the returned `Tensor`. Must be `float32`, `float64`,\n `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `complex64`,\n `complex128` or `bool`.\n name: A name for the operation (optional).\n optimize: if true, attempt to statically determine the shape of 'tensor' and\n encode it as a constant.\n\n Returns:\n A `Tensor` with all elements set to 1.\n ", "desc": "Creates a tensor with all elements set to 1.", "type": "API"}, {"name": "tf.compat.v1.op_scope", "docs": "DEPRECATED. Same as name_scope above, just different argument order.", "desc": "DEPRECATED. Same as name_scope above, just different argument order.", "type": "API"}, {"name": "tf.compat.v1.Operation", "docs": "Represents a graph node that performs computation on tensors.\n\n An `Operation` is a node in a `tf.Graph` that takes zero or more `Tensor`\n objects as input, and produces zero or more `Tensor` objects as output.\n Objects of type `Operation` are created by calling a Python op constructor\n (such as `tf.matmul`) within a `tf.function` or under a `tf.Graph.as_default`\n context manager.\n\n For example, within a `tf.function`, `c = tf.matmul(a, b)` creates an\n `Operation` of type \"MatMul\" that takes tensors `a` and `b` as input, and\n produces `c` as output.\n\n If a `tf.compat.v1.Session` is used, an `Operation` of a `tf.Graph` can be\n executed by passing it to `tf.Session.run`. `op.run()` is a shortcut for\n calling `tf.compat.v1.get_default_session().run(op)`.\n ", "desc": "Represents a graph node that performs computation on tensors.", "type": "API"}, {"name": "tf.compat.v1.OpError", "docs": "The base class for TensorFlow exceptions.\n\n Usually, TensorFlow will raise a more specific subclass of `OpError` from the\n `tf.errors` module.\n ", "desc": "The base class for TensorFlow exceptions.", "type": "API"}, {"name": "tf.compat.v1.OptimizerOptions", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.OptionalSpec", "docs": "Type specification for `tf.experimental.Optional`.\n\n For instance, `tf.OptionalSpec` can be used to define a tf.function that takes\n `tf.experimental.Optional` as an input argument:\n\n >>> @tf.function(input_signature=[tf.OptionalSpec(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])\n ... def maybe_square(optional):\n ... if optional.has_value():\n ... x = optional.get_value()\n ... return x * x\n ... return -1\n >>> optional = tf.experimental.Optional.from_value(5)\n >>> print(maybe_square(optional))\n tf.Tensor(25, shape=(), dtype=int32)\n\n Attributes:\n element_spec: A (nested) structure of `TypeSpec` objects that represents the\n type specification of the optional element.\n ", "desc": "Type specification for `tf.experimental.Optional`.", "type": "API"}, {"name": "tf.compat.v1.orthogonal_initializer", "docs": "Initializer that generates an orthogonal matrix.\n\n If the shape of the tensor to initialize is two-dimensional, it is initialized\n with an orthogonal matrix obtained from the QR decomposition of a matrix of\n random numbers drawn from a normal distribution.\n If the matrix has fewer rows than columns then the output will have orthogonal\n rows. Otherwise, the output will have orthogonal columns.\n\n If the shape of the tensor to initialize is more than two-dimensional,\n a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`\n is initialized, where `n` is the length of the shape vector.\n The matrix is subsequently reshaped to give a tensor of the desired shape.\n\n Args:\n gain: multiplicative factor to apply to the orthogonal matrix\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C)\n ([pdf](https://arxiv.org/pdf/1312.6120.pdf))\n ", "desc": "Initializer that generates an orthogonal matrix.", "type": "API"}, {"name": "tf.compat.v1.pad", "docs": "Pads a tensor.\n\n This operation pads a `tensor` according to the `paddings` you specify.\n `paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of\n `tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how\n many values to add before the contents of `tensor` in that dimension, and\n `paddings[D, 1]` indicates how many values to add after the contents of\n `tensor` in that dimension. If `mode` is \"REFLECT\" then both `paddings[D, 0]`\n and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If\n `mode` is \"SYMMETRIC\" then both `paddings[D, 0]` and `paddings[D, 1]` must be\n no greater than `tensor.dim_size(D)`.\n\n The padded size of each dimension D of the output is:\n\n `paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]`\n\n For example:\n\n ```python\n t = tf.constant([[1, 2, 3], [4, 5, 6]])\n paddings = tf.constant([[1, 1,], [2, 2]])\n # 'constant_values' is 0.\n # rank of 't' is 2.\n tf.pad(t, paddings, \"CONSTANT\") # [[0, 0, 0, 0, 0, 0, 0],\n # [0, 0, 1, 2, 3, 0, 0],\n # [0, 0, 4, 5, 6, 0, 0],\n # [0, 0, 0, 0, 0, 0, 0]]\n\n tf.pad(t, paddings, \"REFLECT\") # [[6, 5, 4, 5, 6, 5, 4],\n # [3, 2, 1, 2, 3, 2, 1],\n # [6, 5, 4, 5, 6, 5, 4],\n # [3, 2, 1, 2, 3, 2, 1]]\n\n tf.pad(t, paddings, \"SYMMETRIC\") # [[2, 1, 1, 2, 3, 3, 2],\n # [2, 1, 1, 2, 3, 3, 2],\n # [5, 4, 4, 5, 6, 6, 5],\n # [5, 4, 4, 5, 6, 6, 5]]\n ```\n\n Args:\n tensor: A `Tensor`.\n paddings: A `Tensor` of type `int32`.\n mode: One of \"CONSTANT\", \"REFLECT\", or \"SYMMETRIC\" (case-insensitive)\n name: A name for the operation (optional).\n constant_values: In \"CONSTANT\" mode, the scalar pad value to use. Must be\n same type as `tensor`.\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n\n Raises:\n ValueError: When mode is not one of \"CONSTANT\", \"REFLECT\", or \"SYMMETRIC\".\n ", "desc": "Pads a tensor.", "type": "API"}, {"name": "tf.compat.v1.PaddingFIFOQueue", "docs": "A FIFOQueue that supports batching variable-sized tensors by padding.\n\n A `PaddingFIFOQueue` may contain components with dynamic shape, while also\n supporting `dequeue_many`. See the constructor for more details.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A FIFOQueue that supports batching variable-sized tensors by padding.", "type": "API"}, {"name": "tf.compat.v1.parallel_stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.\n\n Requires that the shape of inputs be known at graph construction time.\n\n Packs the list of tensors in `values` into a tensor with rank one higher than\n each tensor in `values`, by packing them along the first dimension.\n Given a list of length `N` of tensors of shape `(A, B, C)`; the `output`\n tensor will have the shape `(N, A, B, C)`.\n\n For example:\n\n ```python\n x = tf.constant([1, 4])\n y = tf.constant([2, 5])\n z = tf.constant([3, 6])\n tf.parallel_stack([x, y, z]) # [[1, 4], [2, 5], [3, 6]]\n ```\n\n The difference between `stack` and `parallel_stack` is that `stack` requires\n all the inputs be computed before the operation will begin but doesn't require\n that the input shapes be known during graph construction.\n\n `parallel_stack` will copy pieces of the input into the output as they become\n available, in some situations this can provide a performance benefit.\n\n Unlike `stack`, `parallel_stack` does NOT support backpropagation.\n\n This is the opposite of unstack. The numpy equivalent is\n\n tf.parallel_stack([x, y, z]) = np.asarray([x, y, z])\n\n @compatibility(eager)\n parallel_stack is not compatible with eager execution.\n @end_compatibility\n\n Args:\n values: A list of `Tensor` objects with the same shape and type.\n name: A name for this operation (optional).\n\n Returns:\n output: A stacked `Tensor` with the same type as `values`.\n\n Raises:\n RuntimeError: if executed in eager mode.\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.", "type": "API"}, {"name": "tf.compat.v1.parse_example", "docs": "Parses `Example` protos into a `dict` of tensors.\n\n Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n protos given in `serialized`. We refer to `serialized` as a batch with\n `batch_size` many entries of individual `Example` protos.\n\n `example_names` may contain descriptive names for the corresponding serialized\n protos. These may be useful for debugging purposes, but they have no effect on\n the output. If not `None`, `example_names` must be the same length as\n `serialized`.\n\n This op parses serialized examples into a dictionary mapping keys to `Tensor`\n `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to\n `VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature`\n objects. Each `VarLenFeature` and `SparseFeature` is mapped to a\n `SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each\n `RaggedFeature` is mapped to a `RaggedTensor`.\n\n Each `VarLenFeature` maps to a `SparseTensor` of the specified type\n representing a ragged matrix. Its indices are `[batch, index]` where `batch`\n identifies the example in `serialized`, and `index` is the value's index in\n the list of values associated with that feature and example.\n\n Each `SparseFeature` maps to a `SparseTensor` of the specified type\n representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`.\n Its `values` come from the feature in the examples with key `value_key`.\n A `values[i]` comes from a position `k` in the feature of an example at batch\n entry `batch`. This positional information is recorded in `indices[i]` as\n `[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of\n the feature in the example at with key `SparseFeature.index_key[j]`.\n In other words, we split the indices (except the first index indicating the\n batch entry) of a `SparseTensor` by dimension into different features of the\n `Example`. Due to its complexity a `VarLenFeature` should be preferred over a\n `SparseFeature` whenever possible.\n\n Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or\n `tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.\n\n `FixedLenFeature` entries with a `default_value` are optional. With no default\n value, we will fail if that `Feature` is missing from any example in\n `serialized`.\n\n Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type\n (or `tf.float32` if not specified) and shape\n `(serialized.size(), None) + df.shape`.\n All examples in `serialized` will be padded with `default_value` along the\n second dimension.\n\n Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It\n is formed by stacking the `RaggedTensor` for each example, where the\n `RaggedTensor` for each individual example is constructed using the tensors\n specified by `RaggedTensor.values_key` and `RaggedTensor.partition`. See\n the `tf.io.RaggedFeature` documentation for details and examples.\n\n Examples:\n\n For example, if one expects a `tf.float32` `VarLenFeature` `ft` and three\n serialized `Example`s are provided:\n\n ```\n serialized = [\n features\n { feature { key: \"ft\" value { float_list { value: [1.0, 2.0] } } } },\n features\n { feature []},\n features\n { feature { key: \"ft\" value { float_list { value: [3.0] } } }\n ]\n ```\n\n then the output will look like:\n\n ```python\n {\"ft\": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],\n values=[1.0, 2.0, 3.0],\n dense_shape=(3, 2)) }\n ```\n\n If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and\n `shape=[]` is used then the output will look like:\n\n ```python\n {\"ft\": [[1.0, 2.0], [3.0, -1.0]]}\n ```\n\n Given two `Example` input protos in `serialized`:\n\n ```\n [\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"knit\", \"big\" ] } } }\n feature { key: \"gps\" value { float_list { value: [] } } }\n },\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"emmy\" ] } } }\n feature { key: \"dank\" value { int64_list { value: [ 42 ] } } }\n feature { key: \"gps\" value { } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"kw\": VarLenFeature(tf.string),\n \"dank\": VarLenFeature(tf.int64),\n \"gps\": VarLenFeature(tf.float32),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"kw\": SparseTensor(\n indices=[[0, 0], [0, 1], [1, 0]],\n values=[\"knit\", \"big\", \"emmy\"]\n dense_shape=[2, 2]),\n \"dank\": SparseTensor(\n indices=[[1, 0]],\n values=[42],\n dense_shape=[2, 1]),\n \"gps\": SparseTensor(\n indices=[],\n values=[],\n dense_shape=[2, 0]),\n }\n ```\n\n For dense results in two serialized `Example`s:\n\n ```\n [\n features {\n feature { key: \"age\" value { int64_list { value: [ 0 ] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n },\n features {\n feature { key: \"age\" value { int64_list { value: [] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n }\n ]\n ```\n\n We can use arguments:\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"age\": FixedLenFeature([], dtype=tf.int64, default_value=-1),\n \"gender\": FixedLenFeature([], dtype=tf.string),\n }\n ```\n\n And the expected output is:\n\n ```python\n {\n \"age\": [[0], [-1]],\n \"gender\": [[\"f\"], [\"f\"]],\n }\n ```\n\n An alternative to `VarLenFeature` to obtain a `SparseTensor` is\n `SparseFeature`. For example, given two `Example` input protos in\n `serialized`:\n\n ```\n [\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 3, 20 ] } } }\n },\n features {\n feature { key: \"val\" value { float_list { value: [ 0.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 42 ] } } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"sparse\": SparseFeature(\n index_key=\"ix\", value_key=\"val\", dtype=tf.float32, size=100),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"sparse\": SparseTensor(\n indices=[[0, 3], [0, 20], [1, 42]],\n values=[0.5, -1.0, 0.0]\n dense_shape=[2, 100]),\n }\n ```\n\n See the `tf.io.RaggedFeature` documentation for examples showing how\n `RaggedFeature` can be used to obtain `RaggedTensor`s.\n\n Args:\n serialized: A vector (1-D Tensor) of strings, a batch of binary\n serialized `Example` protos.\n features: A `dict` mapping feature keys to `FixedLenFeature`,\n `VarLenFeature`, `SparseFeature`, and `RaggedFeature` values.\n example_names: A vector (1-D Tensor) of strings (optional), the names of\n the serialized protos in the batch.\n name: A name for this operation (optional).\n\n Returns:\n A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and\n `RaggedTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses `Example` protos into a `dict` of tensors.", "type": "API"}, {"name": "tf.compat.v1.parse_single_example", "docs": "Parses a single `Example` proto.\n\n Similar to `parse_example`, except:\n\n For dense tensors, the returned `Tensor` is identical to the output of\n `parse_example`, except there is no batch dimension, the output shape is the\n same as the shape given in `dense_shape`.\n\n For `SparseTensor`s, the first (batch) column of the indices matrix is removed\n (the indices matrix is a column vector), the values vector is unchanged, and\n the first (`batch_size`) entry of the shape vector is removed (it is now a\n single element vector).\n\n One might see performance advantages by batching `Example` protos with\n `parse_example` instead of using this function directly.\n\n Args:\n serialized: A scalar string Tensor, a single serialized Example.\n features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` values.\n name: A name for this operation (optional).\n example_names: (Optional) A scalar string Tensor, the associated name.\n\n Returns:\n A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `Example` proto.", "type": "API"}, {"name": "tf.compat.v1.parse_single_sequence_example", "docs": "Parses a single `SequenceExample` proto.\n\n Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n proto given in `serialized`.\n\n This op parses a serialized sequence example into a tuple of dictionaries,\n each mapping keys to `Tensor` and `SparseTensor` objects.\n The first dictionary contains mappings for keys appearing in\n `context_features`, and the second dictionary contains mappings for keys\n appearing in `sequence_features`.\n\n At least one of `context_features` and `sequence_features` must be provided\n and non-empty.\n\n The `context_features` keys are associated with a `SequenceExample` as a\n whole, independent of time / frame. In contrast, the `sequence_features` keys\n provide a way to access variable-length data within the `FeatureList` section\n of the `SequenceExample` proto. While the shapes of `context_features` values\n are fixed with respect to frame, the frame dimension (the first dimension)\n of `sequence_features` values may vary between `SequenceExample` protos,\n and even between `feature_list` keys within the same `SequenceExample`.\n\n `context_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`;\n each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature`\n is mapped to a `Tensor`, of the specified type, shape, and default value.\n\n `sequence_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type.\n The shape will be `(T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`,\n where `T` is the length of the associated `FeatureList` in the\n `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar\n 1-D `Tensor` of static shape `[None]` and dynamic shape `[T]`, while\n `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor`\n of static shape `[None, k]` and dynamic shape `[T, k]`.\n\n Each `SparseTensor` corresponding to `sequence_features` represents a ragged\n vector. Its indices are `[time, index]`, where `time` is the `FeatureList`\n entry and `index` is the value's index in the list of values associated with\n that time.\n\n `FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`\n entries with `allow_missing=True` are optional; otherwise, we will fail if\n that `Feature` or `FeatureList` is missing from any example in `serialized`.\n\n `example_name` may contain a descriptive name for the corresponding serialized\n proto. This may be useful for debugging purposes, but it has no effect on the\n output. If not `None`, `example_name` must be a scalar.\n\n Note that the batch version of this function, `tf.parse_sequence_example`,\n is written for better memory efficiency and will be faster on large\n `SequenceExample`s.\n\n Args:\n serialized: A scalar (0-D Tensor) of type string, a single binary\n serialized `SequenceExample` proto.\n context_features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` or `RaggedFeature` values. These features are associated\n with a `SequenceExample` as a whole.\n sequence_features: A `dict` mapping feature keys to\n `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values.\n These features are associated with data within the `FeatureList` section\n of the `SequenceExample` proto.\n example_name: A scalar (0-D Tensor) of strings (optional), the name of\n the serialized proto.\n name: A name for this operation (optional).\n\n Returns:\n A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s\n and `RaggedTensor`s.\n\n * The first dict contains the context key/values.\n * The second dict contains the feature_list key/values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `SequenceExample` proto.", "type": "API"}, {"name": "tf.compat.v1.parse_tensor", "docs": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar string containing a serialized TensorProto proto.\n out_type: A `tf.DType`.\n The type of the serialized tensor. The provided type must match the\n type of the serialized tensor and no implicit conversion will take place.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.", "type": "API"}, {"name": "tf.compat.v1.placeholder", "docs": "Inserts a placeholder for a tensor that will be always fed.\n\n **Important**: This tensor will produce an error if evaluated. Its value must\n be fed using the `feed_dict` optional argument to `Session.run()`,\n `Tensor.eval()`, or `Operation.run()`.\n\n For example:\n\n ```python\n x = tf.compat.v1.placeholder(tf.float32, shape=(1024, 1024))\n y = tf.matmul(x, x)\n\n with tf.compat.v1.Session() as sess:\n print(sess.run(y)) # ERROR: will fail because x was not fed.\n\n rand_array = np.random.rand(1024, 1024)\n print(sess.run(y, feed_dict={x: rand_array})) # Will succeed.\n ```\n\n Args:\n dtype: The type of elements in the tensor to be fed.\n shape: The shape of the tensor to be fed (optional). If the shape is not\n specified, you can feed a tensor of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that may be used as a handle for feeding a value, but not\n evaluated directly.\n\n Raises:\n RuntimeError: if eager execution is enabled\n\n @compatibility(TF2)\n This API is not compatible with eager execution and `tf.function`. To migrate\n to TF2, rewrite the code to be compatible with eager execution. Check the\n [migration\n guide](https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls)\n on replacing `Session.run` calls. In TF2, you can just pass tensors directly\n into ops and layers. If you want to explicitly set up your inputs, also see\n [Keras functional API](https://www.tensorflow.org/guide/keras/functional) on\n how to use `tf.keras.Input` to replace `tf.compat.v1.placeholder`.\n `tf.function` arguments also do the job of `tf.compat.v1.placeholder`.\n For more details please read [Better\n performance with tf.function](https://www.tensorflow.org/guide/function).\n @end_compatibility\n ", "desc": "Inserts a placeholder for a tensor that will be always fed.", "type": "API"}, {"name": "tf.compat.v1.placeholder_with_default", "docs": "A placeholder op that passes through `input` when its output is not fed.\n\n @compatibility(TF2)\n This API is strongly discouraged for use with eager execution and\n `tf.function`. The primary use of this API is for testing computation wrapped\n within a `tf.function` where the input tensors might not have statically known\n fully-defined shapes. The same can be achieved by creating a\n [concrete function](\n https://www.tensorflow.org/guide/function#obtaining_concrete_functions)\n from the `tf.function` with a `tf.TensorSpec` input which has partially\n defined shapes. For example, the code\n\n >>> @tf.function\n ... def f():\n ... x = tf.compat.v1.placeholder_with_default(\n ... tf.constant([[1., 2., 3.], [4., 5., 6.]]), [None, 3])\n ... y = tf.constant([[1.],[2.], [3.]])\n ... z = tf.matmul(x, y)\n ... assert z.shape[0] == None\n ... assert z.shape[1] == 1\n\n >>> f()\n\n can easily be replaced by\n\n >>> @tf.function\n ... def f(x):\n ... y = tf.constant([[1.],[2.], [3.]])\n ... z = tf.matmul(x, y)\n ... assert z.shape[0] == None\n ... assert z.shape[1] == 1\n\n >>> g = f.get_concrete_function(tf.TensorSpec([None, 3]))\n\n You can learn more about `tf.function` at [Better\n performance with tf.function](https://www.tensorflow.org/guide/function).\n @end_compatibility\n\n Args:\n input: A `Tensor`. The default value to produce when output is not fed.\n shape: A `tf.TensorShape` or list of `int`s. The (possibly partial) shape of\n the tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "A placeholder op that passes through `input` when its output is not fed.", "type": "API"}, {"name": "tf.compat.v1.polygamma", "docs": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).\n\n The polygamma function is defined as:\n\n\n \\\\(\\psi^{(a)}(x) = \\frac{d^a}{dx^a} \\psi(x)\\\\)\n\n where \\\\(\\psi(x)\\\\) is the digamma function.\n The polygamma function is defined only for non-negative integer orders \\\\a\\\\.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).", "type": "API"}, {"name": "tf.compat.v1.pow", "docs": "Computes the power of one value to another.\n\n Given a tensor `x` and a tensor `y`, this operation computes \\\\(x^y\\\\) for\n corresponding elements in `x` and `y`. For example:\n\n ```python\n x = tf.constant([[2, 2], [3, 3]])\n y = tf.constant([[8, 16], [2, 3]])\n tf.pow(x, y) # [[256, 65536], [9, 27]]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n y: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`.\n ", "desc": "Computes the power of one value to another.", "type": "API"}, {"name": "tf.compat.v1.Print", "docs": "Prints a list of tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-08-20.\nInstructions for updating:\nUse tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:\n\n\nThis is an identity op (behaves like `tf.identity`) with the side effect\nof printing `data` when evaluating.\n\nNote: This op prints to the standard error. It is not currently compatible\n with jupyter notebook (printing to the notebook *server's* output, not into\n the notebook).\n\n@compatibility(TF2)\nThis API is deprecated. Use `tf.print` instead. `tf.print` does not need the\n`input_` argument.\n\n`tf.print` works in TF2 when executing eagerly and inside a `tf.function`.\n\nIn TF1-styled sessions, an explicit control dependency declaration is needed\nto execute the `tf.print` operation. Refer to the documentation of\n`tf.print` for more details.\n@end_compatibility\n\nArgs:\n input_: A tensor passed through this op.\n data: A list of tensors to print out when op is evaluated.\n message: A string, prefix of the error message.\n first_n: Only log `first_n` number of times. Negative numbers log always;\n this is the default.\n summarize: Only print this many entries of each tensor. If None, then a\n maximum of 3 elements are printed per input tensor.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor`. Has the same type and contents as `input_`.\n\n ```python\n sess = tf.compat.v1.Session()\n with sess.as_default():\n tensor = tf.range(10)\n print_op = tf.print(tensor)\n with tf.control_dependencies([print_op]):\n out = tf.add(tensor, tensor)\n sess.run(out)\n ```", "desc": "Prints a list of tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.PriorityQueue", "docs": "A queue implementation that dequeues elements in prioritized order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in prioritized order.", "type": "API"}, {"name": "tf.compat.v1.profiler", "docs": "Public API for tf.profiler namespace.\n", "desc": "Public API for tf.profiler namespace.", "type": "API"}, {"name": "tf.compat.v1.profiler.AdviceProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.AdviceProto.Checker", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.AdviceProto.CheckersEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.advise", "docs": "Auto profile and advise.\n\n Builds profiles and automatically check anomalies of various\n aspects. For more details:\n https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/README.md\n\n Args:\n graph: tf.Graph. If None and eager execution is not enabled, use default\n graph.\n run_meta: optional tensorflow.RunMetadata proto. It is necessary to to\n support run time information profiling, such as time and memory.\n options: see ALL_ADVICE example above. Default checks everything.\n\n Returns:\n Returns AdviceProto proto\n ", "desc": "Auto profile and advise.", "type": "API"}, {"name": "tf.compat.v1.profiler.GraphNodeProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.GraphNodeProto.InputShapesEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.MultiGraphNodeProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.OpLogProto", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.OpLogProto.IdToStringEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.profiler.profile", "docs": "Profile model.\n\n Tutorials and examples can be found in:\n https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/profiler/g3doc/python_api.md\n\n Args:\n graph: tf.Graph. If None and eager execution is not enabled, use default\n graph.\n run_meta: optional tensorflow.RunMetadata proto. It is necessary to to\n support run time information profiling, such as time and memory.\n op_log: tensorflow.tfprof.OpLogProto proto. User can assign \"types\" to graph\n nodes with op_log. \"types\" allow user to flexibly group and account\n profiles using options['accounted_type_regexes'].\n cmd: string. Either 'op', 'scope', 'graph' or 'code'. 'op' view organizes\n profile using operation type. (e.g. MatMul) 'scope' view organizes profile\n using graph node name scope. 'graph' view organizes profile using graph\n node inputs/outputs. 'code' view organizes profile using Python call\n stack.\n options: A dict of options. See core/profiler/g3doc/options.md.\n\n Returns:\n If cmd is 'scope' or 'graph', returns GraphNodeProto proto.\n If cmd is 'op' or 'code', returns MultiGraphNodeProto proto.\n Side effect: stdout/file/timeline.json depending on options['output']\n ", "desc": "Profile model.", "type": "API"}, {"name": "tf.compat.v1.profiler.ProfileOptionBuilder", "docs": "Option Builder for Profiling API.\n\n For tutorial on the options, see\n https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler/g3doc/options.md\n\n ```python\n # Users can use pre-built options:\n opts = (\n tf.profiler.ProfileOptionBuilder.trainable_variables_parameter())\n\n # Or, build your own options:\n opts = (tf.compat.v1.profiler.ProfileOptionBuilder()\n .with_max_depth(10)\n .with_min_micros(1000)\n .select(['accelerator_micros'])\n .with_stdout_output()\n .build()\n\n # Or customize the pre-built options:\n opts = (tf.compat.v1.profiler.ProfileOptionBuilder(\n tf.profiler.ProfileOptionBuilder.time_and_memory())\n .with_displaying_options(show_name_regexes=['.*rnn.*'])\n .build())\n\n # Finally, profiling with the options:\n _ = tf.compat.v1.profiler.profile(tf.compat.v1.get_default_graph(),\n run_meta=run_meta,\n cmd='scope',\n options=opts)\n ```\n ", "desc": "Option Builder for Profiling API.", "type": "API"}, {"name": "tf.compat.v1.profiler.Profiler", "docs": "TensorFlow multi-step profiler.\n\n\n ```python\n Typical use case:\n # Currently we are only allowed to create 1 profiler per process.\n profiler = Profiler(sess.graph)\n\n for i in range(total_steps):\n if i % 10000 == 0:\n run_meta = tf.compat.v1.RunMetadata()\n _ = sess.run(...,\n options=tf.compat.v1.RunOptions(\n trace_level=tf.RunOptions.FULL_TRACE),\n run_metadata=run_meta)\n profiler.add_step(i, run_meta)\n\n # Profile the parameters of your model.\n profiler.profile_name_scope(options=(option_builder.ProfileOptionBuilder\n .trainable_variables_parameter()))\n\n # Or profile the timing of your model operations.\n opts = option_builder.ProfileOptionBuilder.time_and_memory()\n profiler.profile_operations(options=opts)\n\n # Or you can generate a timeline:\n opts = (option_builder.ProfileOptionBuilder(\n option_builder.ProfileOptionBuilder.time_and_memory())\n .with_step(i)\n .with_timeline_output(filename).build())\n profiler.profile_graph(options=opts)\n else:\n _ = sess.run(...)\n # Auto detect problems and generate advice.\n profiler.advise()\n ```\n ", "desc": "TensorFlow multi-step profiler.", "type": "API"}, {"name": "tf.compat.v1.profiler.write_op_log", "docs": "Log provided 'op_log', and add additional model information below.\n\n The API also assigns ops in tf.compat.v1.trainable_variables() an op type\n called '_trainable_variables'.\n The API also logs 'flops' statistics for ops with op.RegisterStatistics()\n defined. flops calculation depends on Tensor shapes defined in 'graph',\n which might not be complete. 'run_meta', if provided, completes the shape\n information with best effort.\n\n Args:\n graph: tf.Graph. If None and eager execution is not enabled, use\n default graph.\n log_dir: directory to write the log file.\n op_log: (Optional) OpLogProto proto to be written. If not provided, an new\n one is created.\n run_meta: (Optional) RunMetadata proto that helps flops computation using\n run time shape information.\n add_trace: Whether to add python code trace information.\n Used to support \"code\" view.\n ", "desc": "Log provided 'op_log', and add additional model information below.", "type": "API"}, {"name": "tf.compat.v1.py_func", "docs": "Wraps a python function and uses it as a TensorFlow op.\n\n Given a python function `func`, which takes numpy arrays as its\n arguments and returns numpy arrays as its outputs, wrap this function as an\n operation in a TensorFlow graph. The following snippet constructs a simple\n TensorFlow graph that invokes the `np.sinh()` NumPy function as a operation\n in the graph:\n\n ```python\n def my_func(x):\n # x will be a numpy array with the contents of the placeholder below\n return np.sinh(x)\n input = tf.compat.v1.placeholder(tf.float32)\n y = tf.compat.v1.py_func(my_func, [input], tf.float32)\n ```\n\n **N.B.** The `tf.compat.v1.py_func()` operation has the following known\n limitations:\n\n * The body of the function (i.e. `func`) will not be serialized in a\n `GraphDef`. Therefore, you should not use this function if you need to\n serialize your model and restore it in a different environment.\n\n * The operation must run in the same address space as the Python program\n that calls `tf.compat.v1.py_func()`. If you are using distributed\n TensorFlow, you\n must run a `tf.distribute.Server` in the same process as the program that\n calls\n `tf.compat.v1.py_func()` and you must pin the created operation to a device\n in that\n server (e.g. using `with tf.device():`).\n\n Note: It produces tensors of unknown shape and rank as shape inference\n does not work on arbitrary Python code.\n If you need the shape, you need to set it based on statically\n available information.\n\n E.g.\n ```python\n import tensorflow as tf\n import numpy as np\n\n def make_synthetic_data(i):\n return np.cast[np.uint8](i) * np.ones([20,256,256,3],\n dtype=np.float32) / 10.\n\n def preprocess_fn(i):\n ones = tf.py_function(make_synthetic_data,[i],tf.float32)\n ones.set_shape(tf.TensorShape([None, None, None, None]))\n ones = tf.image.resize(ones, [224,224])\n return ones\n\n ds = tf.data.Dataset.range(10)\n ds = ds.map(preprocess_fn)\n ```\n\n Args:\n func: A Python function, which accepts `ndarray` objects as arguments and\n returns a list of `ndarray` objects (or a single `ndarray`). This function\n must accept as many arguments as there are tensors in `inp`, and these\n argument types will match the corresponding `tf.Tensor` objects in `inp`.\n The returns `ndarray`s must match the number and types defined `Tout`.\n Important Note: Input and output numpy `ndarray`s of `func` are not\n guaranteed to be copies. In some cases their underlying memory will be\n shared with the corresponding TensorFlow tensors. In-place modification\n or storing `func` input or return values in python datastructures\n without explicit (np.)copy can have non-deterministic consequences.\n inp: A list of `Tensor` objects.\n Tout: A list or tuple of tensorflow data types or a single tensorflow data\n type if there is only one, indicating what `func` returns.\n stateful: (Boolean.) If True, the function should be considered stateful. If\n a function is stateless, when given the same input it will return the same\n output and have no observable side effects. Optimizations such as common\n subexpression elimination are only performed on stateless operations.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` or a single `Tensor` which `func` computes.\n\n @compatibility(TF2)\n\n This name was deprecated and removed in TF2, but `tf.numpy_function` is a\n near-exact replacement, just drop the `stateful` argument (all\n `tf.numpy_function` calls are considered stateful). It is compatible with\n eager execution and `tf.function`.\n\n `tf.py_function` is a close but not an exact replacement, passing TensorFlow\n tensors to the wrapped function instead of NumPy arrays, which provides\n gradients and can take advantage of accelerators.\n\n Before:\n\n >>> def fn_using_numpy(x):\n ... x[0] = 0.\n ... return x\n >>> tf.compat.v1.py_func(fn_using_numpy, inp=[tf.constant([1., 2.])],\n ... Tout=tf.float32, stateful=False)\n \n\n After:\n\n >>> tf.numpy_function(fn_using_numpy, inp=[tf.constant([1., 2.])],\n ... Tout=tf.float32)\n \n\n @end_compatibility\n\n ", "desc": "Wraps a python function and uses it as a TensorFlow op.", "type": "API"}, {"name": "tf.compat.v1.py_function", "docs": "Wraps a python function into a TensorFlow op that executes it eagerly.\n\n This function allows expressing computations in a TensorFlow graph as\n Python functions. In particular, it wraps a Python function `func`\n in a once-differentiable TensorFlow operation that executes it with eager\n execution enabled. As a consequence, `tf.py_function` makes it\n possible to express control flow using Python constructs (`if`, `while`,\n `for`, etc.), instead of TensorFlow control flow constructs (`tf.cond`,\n `tf.while_loop`). For example, you might use `tf.py_function` to\n implement the log huber function:\n\n ```python\n def log_huber(x, m):\n if tf.abs(x) <= m:\n return x**2\n else:\n return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))\n\n x = tf.constant(1.0)\n m = tf.constant(2.0)\n\n with tf.GradientTape() as t:\n t.watch([x, m])\n y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)\n\n dy_dx = t.gradient(y, x)\n assert dy_dx.numpy() == 2.0\n ```\n\n You can also use `tf.py_function` to debug your models at runtime\n using Python tools, i.e., you can isolate portions of your code that\n you want to debug, wrap them in Python functions and insert `pdb` tracepoints\n or print statements as desired, and wrap those functions in\n `tf.py_function`.\n\n For more information on eager execution, see the\n [Eager guide](https://tensorflow.org/guide/eager).\n\n `tf.py_function` is similar in spirit to `tf.compat.v1.py_func`, but unlike\n the latter, the former lets you use TensorFlow operations in the wrapped\n Python function. In particular, while `tf.compat.v1.py_func` only runs on CPUs\n and wraps functions that take NumPy arrays as inputs and return NumPy arrays\n as outputs, `tf.py_function` can be placed on GPUs and wraps functions\n that take Tensors as inputs, execute TensorFlow operations in their bodies,\n and return Tensors as outputs.\n\n Note: We recommend to avoid using `tf.py_function` outside of prototyping\n and experimentation due to the following known limitations:\n\n * Calling `tf.py_function` will acquire the Python Global Interpreter Lock\n (GIL) that allows only one thread to run at any point in time. This will\n preclude efficient parallelization and distribution of the execution of the\n program.\n\n * The body of the function (i.e. `func`) will not be serialized in a\n `GraphDef`. Therefore, you should not use this function if you need to\n serialize your model and restore it in a different environment.\n\n * The operation must run in the same address space as the Python program\n that calls `tf.py_function()`. If you are using distributed\n TensorFlow, you must run a `tf.distribute.Server` in the same process as the\n program that calls `tf.py_function()` and you must pin the created\n operation to a device in that server (e.g. using `with tf.device():`).\n\n * Currently `tf.py_function` is not compatible with XLA. Calling\n `tf.py_function` inside `tf.function(jit_comiple=True)` will raise an\n error.\n\n Args:\n func: A Python function that accepts `inp` as arguments, and returns a\n value (or list of values) whose type is described by `Tout`.\n\n inp: Input arguments for `func`. A list whose elements are `Tensor`s or\n `CompositeTensors` (such as `tf.RaggedTensor`); or a single `Tensor` or\n `CompositeTensor`.\n\n Tout: The type(s) of the value(s) returned by `func`. One of the\n following.\n\n * If `func` returns a `Tensor` (or a value that can be converted to a\n Tensor): the `tf.DType` for that value.\n * If `func` returns a `CompositeTensor`: The `tf.TypeSpec` for that value.\n * If `func` returns `None`: the empty list (`[]`).\n * If `func` returns a list of `Tensor` and `CompositeTensor` values:\n a corresponding list of `tf.DType`s and `tf.TypeSpec`s for each value.\n\n name: A name for the operation (optional).\n\n Returns:\n The value(s) computed by `func`: a `Tensor`, `CompositeTensor`, or list of\n `Tensor` and `CompositeTensor`; or an empty list if `func` returns `None`.\n ", "desc": "Wraps a python function into a TensorFlow op that executes it eagerly.", "type": "API"}, {"name": "tf.compat.v1.python_io", "docs": "Python functions for directly manipulating TFRecord-formatted files.\n", "desc": "Python functions for directly manipulating TFRecord-formatted files.", "type": "API"}, {"name": "tf.compat.v1.python_io.tf_record_iterator", "docs": "An iterator that read the records from a TFRecords file. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse eager execution and: \n`tf.data.TFRecordDataset(path)`\n\nArgs:\n path: The path to the TFRecords file.\n options: (optional) A TFRecordOptions object.\n\nReturns:\n An iterator of serialized TFRecords.\n\nRaises:\n IOError: If `path` cannot be opened for reading.", "desc": "An iterator that read the records from a TFRecords file. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.python_io.TFRecordCompressionType", "docs": "The type of compression for the record.", "desc": "The type of compression for the record.", "type": "API"}, {"name": "tf.compat.v1.python_io.TFRecordOptions", "docs": "Options used for manipulating TFRecord files.", "desc": "Options used for manipulating TFRecord files.", "type": "API"}, {"name": "tf.compat.v1.python_io.TFRecordWriter", "docs": "A class to write records to a TFRecords file.\n\n [TFRecords tutorial](https://www.tensorflow.org/tutorials/load_data/tfrecord)\n\n TFRecords is a binary format which is optimized for high throughput data\n retrieval, generally in conjunction with `tf.data`. `TFRecordWriter` is used\n to write serialized examples to a file for later consumption. The key steps\n are:\n\n Ahead of time:\n\n - [Convert data into a serialized format](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfexample)\n - [Write the serialized data to one or more files](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfrecord_files_in_python)\n\n During training or evaluation:\n\n - [Read serialized examples into memory](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n - [Parse (deserialize) examples](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n\n A minimal example is given below:\n\n >>> import tempfile\n >>> example_path = os.path.join(tempfile.gettempdir(), \"example.tfrecords\")\n >>> np.random.seed(0)\n\n >>> # Write the records to a file.\n ... with tf.io.TFRecordWriter(example_path) as file_writer:\n ... for _ in range(4):\n ... x, y = np.random.random(), np.random.random()\n ...\n ... record_bytes = tf.train.Example(features=tf.train.Features(feature={\n ... \"x\": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),\n ... \"y\": tf.train.Feature(float_list=tf.train.FloatList(value=[y])),\n ... })).SerializeToString()\n ... file_writer.write(record_bytes)\n\n >>> # Read the data back out.\n >>> def decode_fn(record_bytes):\n ... return tf.io.parse_single_example(\n ... # Data\n ... record_bytes,\n ...\n ... # Schema\n ... {\"x\": tf.io.FixedLenFeature([], dtype=tf.float32),\n ... \"y\": tf.io.FixedLenFeature([], dtype=tf.float32)}\n ... )\n\n >>> for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn):\n ... print(\"x = {x:.4f}, y = {y:.4f}\".format(**batch))\n x = 0.5488, y = 0.7152\n x = 0.6028, y = 0.5449\n x = 0.4237, y = 0.6459\n x = 0.4376, y = 0.8918\n\n This class implements `__enter__` and `__exit__`, and can be used\n in `with` blocks like a normal file. (See the usage example above.)\n ", "desc": "A class to write records to a TFRecords file.", "type": "API"}, {"name": "tf.compat.v1.qr", "docs": "Computes the QR decompositions of one or more matrices.\n\n Computes the QR decomposition of each inner matrix in `tensor` such that\n `tensor[..., :, :] = q[..., :, :] * r[..., :,:])`\n\n Currently, the gradient for the QR decomposition is well-defined only when\n the first `P` columns of the inner matrix are linearly independent, where\n `P` is the minimum of `M` and `N`, the 2 inner-most dimmensions of `tensor`.\n\n ```python\n # a is a tensor.\n # q is a tensor of orthonormal matrices.\n # r is a tensor of upper triangular matrices.\n q, r = qr(a)\n q_full, r_full = qr(a, full_matrices=True)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.\n full_matrices: An optional `bool`. Defaults to `False`.\n If true, compute full-sized `q` and `r`. If false\n (the default), compute only the leading `P` columns of `q`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (q, r).\n\n q: A `Tensor`. Has the same type as `input`.\n r: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the QR decompositions of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.quantization", "docs": "Public API for tf.quantization namespace.\n", "desc": "Public API for tf.quantization namespace.", "type": "API"}, {"name": "tf.compat.v1.quantization.dequantize", "docs": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.\n\n [min_range, max_range] are scalar floats that specify the range for\n the output. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n if T == qint8: in[i] += (range(T) + 1)/ 2.0\n out[i] = min_range + (in[i]* (max_range - min_range) / range(T))\n ```\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n If the input comes from a QuantizedRelu6, the output type is\n quint8 (range of 0-255) but the possible range of QuantizedRelu6 is\n 0-6. The min_range and max_range values are therefore 0.0 and 6.0.\n Dequantize on quint8 will take each value, cast to float, and multiply\n by 6 / 255.\n Note that if quantizedtype is qint8, the operation will additionally add\n each value by 128 prior to casting.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```c++\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = range / num_discrete_values\n const double offset_input = static_cast(input) - lowest_quantized;\n result = range_min + ((input - numeric_limits::min()) * range_scale)\n ```\n\n If the mode is `SCALED`, dequantization is performed by multiplying each\n input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).\n\n The scaling_factor is determined from `min_range`, `max_range`, and\n `narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}`\n and `QuantizeV2`, using the following algorithm:\n\n ```c++\n\n const int min_expected_T = std::numeric_limits::min() +\n (narrow_range ? 1 : 0);\n const int max_expected_T = std::numeric_limits::max();\n const float max_expected_T = std::numeric_limits::max();\n\n const float scale_factor =\n (std::numeric_limits::min() == 0) ? (max_range / max_expected_T)\n : std::max(min_range / min_expected_T,\n max_range / max_expected_T);\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_range: A `Tensor` of type `float32`.\n The minimum scalar value possibly produced for the input.\n max_range: A `Tensor` of type `float32`.\n The maximum scalar value possibly produced for the input.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n dtype: An optional `tf.DType` from: `tf.bfloat16, tf.float32`. Defaults to `tf.float32`.\n Type of the output tensor. Currently Dequantize supports float and bfloat16.\n If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_args", "docs": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n Quantization is called fake since the output is still in floating point.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_args_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxArgs operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxArgs operation.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxArgs operation.", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_vars", "docs": "Fake-quantize the 'inputs' tensor of type float via global float scalars\n\n Fake-quantize the `inputs` tensor of type float via global float scalars\n `min` and `max` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via global float scalars", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVars operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation.\n min, max: Quantization interval, scalar floats.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 8, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVars operation.", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel", "docs": "Fake-quantize the 'inputs' tensor of type float via per-channel floats\n\n Fake-quantize the `inputs` tensor of type float per-channel and one of the\n shapes: `[d]`, `[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max`\n of shape `[d]` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via per-channel floats", "type": "API"}, {"name": "tf.compat.v1.quantization.fake_quant_with_min_max_vars_per_channel_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation,\n shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape\n same as `gradients`.\n min, max: Quantization interval, floats of shape `[d]`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 16, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.", "type": "API"}, {"name": "tf.compat.v1.quantization.quantize", "docs": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.\n\n [min_range, max_range] are scalar floats that specify the range for\n the 'input' data. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents. The\n 'round_mode' attribute controls which rounding tie-breaking algorithm is used\n when rounding float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)\n if T == qint8: out[i] -= (range(T) + 1) / 2.0\n ```\n\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n Assume the input is type float and has a possible range of [0.0, 6.0] and the\n output type is quint8 ([0, 255]). The min_range and max_range values should be\n specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each\n value of the input by 255/6 and cast to quint8.\n\n If the output type was qint8 ([-128, 127]), the operation will additionally\n subtract each value by 128 prior to casting, so that the range of values aligns\n with the range of qint8.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = num_discrete_values / range\n quantized = round(input * range_scale) - round(range_min * range_scale) +\n numeric_limits::min()\n quantized = max(quantized, numeric_limits::min())\n quantized = min(quantized, numeric_limits::max())\n ```\n\n The biggest difference between this and MIN_COMBINED is that the minimum range\n is rounded first, before it's subtracted from the rounded value. With\n MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing\n and dequantizing will introduce a larger and larger error.\n\n *SCALED mode Example*\n\n `SCALED` mode matches the quantization approach used in\n `QuantizeAndDequantize{V2|V3}`.\n\n If the mode is `SCALED`, the quantization is performed by multiplying each\n input value by a scaling_factor.\n The scaling_factor is determined from `min_range` and `max_range` to be as large\n as possible such that the range from `min_range` to `max_range` is representable\n within values of type T.\n\n ```c++\n\n const int min_T = std::numeric_limits::min();\n const int max_T = std::numeric_limits::max();\n const float max_float = std::numeric_limits::max();\n\n const float scale_factor_from_min_side =\n (min_T * min_range > 0) ? min_T / min_range : max_float;\n const float scale_factor_from_max_side =\n (max_T * max_range > 0) ? max_T / max_range : max_float;\n\n const float scale_factor = std::min(scale_factor_from_min_side,\n scale_factor_from_max_side);\n ```\n\n We next use the scale_factor to adjust min_range and max_range as follows:\n\n ```c++\n min_range = min_T / scale_factor;\n max_range = max_T / scale_factor;\n ```\n\n\n e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would\n compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8\n In this case, min_range would remain -10, but max_range would be adjusted to\n 127 / 12.8 = 9.921875\n\n So we will quantize input values in the range (-10, 9.921875) to (-128, 127).\n\n The input tensor can now be quantized by clipping values to the range\n `min_range` to `max_range`, then multiplying by scale_factor as follows:\n\n ```c++\n result = round(min(max_range, max(min_range, input)) * scale_factor)\n ```\n\n The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of\n this operation. These outputs should be used as the range for any further\n calculations.\n\n\n *narrow_range (bool) attribute*\n\n If true, we do not use the minimum quantized value.\n i.e. for int8 the quantized output, it would be restricted to the range\n -127..127 instead of the full -128..127 range.\n This is provided for compatibility with certain inference backends.\n (Only applies to SCALED mode)\n\n\n *axis (int) attribute*\n\n An optional `axis` attribute can specify a dimension index of the input tensor,\n such that quantization ranges will be calculated and applied separately for each\n slice of the tensor along that dimension. This is useful for per-channel\n quantization.\n\n If axis is specified, min_range and max_range\n\n if `axis`=None, per-tensor quantization is performed as normal.\n\n\n *ensure_minimum_range (float) attribute*\n\n Ensures the minimum quantization range is at least this value.\n The legacy default value for this is 0.01, but it is strongly suggested to\n set it to 0 for new uses.\n\n Args:\n input: A `Tensor` of type `float32`.\n min_range: A `Tensor` of type `float32`.\n The minimum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_min`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n max_range: A `Tensor` of type `float32`.\n The maximum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_max`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n T: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n round_mode: An optional `string` from: `\"HALF_AWAY_FROM_ZERO\", \"HALF_TO_EVEN\"`. Defaults to `\"HALF_AWAY_FROM_ZERO\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n ensure_minimum_range: An optional `float`. Defaults to `0.01`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `T`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.", "type": "API"}, {"name": "tf.compat.v1.quantization.quantize_and_dequantize", "docs": "Quantizes then dequantizes a tensor. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis Op has been deprecated, use`quantize_and_dequantize_v2` instead. To To simulate the V1 the behavior of tf.quantization.quantize_and_dequantize(...) use tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).\n\nArgs:\n input: A `Tensor` to quantize and dequantize.\n input_min: If range_given=True, the minimum input value, that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of minimum values for each slice along axis.\n input_max: If range_given=True, the maximum input value that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of maximum values for each slice along axis.\n signed_input: True if the quantization is signed or unsigned.\n num_bits: The bitwidth of the quantization.\n range_given: If true use `input_min` and `input_max` for the range of the\n input, otherwise determine min and max from the input `Tensor`.\n round_mode: Rounding mode when rounding from float values to quantized ones.\n one of ['HALF_TO_EVEN', 'HALF_UP']\n name: Optional name for the operation.\n narrow_range: If true, then the absolute value of the quantized minimum\n value is the same as the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: Integer. If specified, refers to a dimension of the input tensor, such\n that quantization will be per slice along that dimension.\n\nReturns:\n A `Tensor`. Each element is the result of quantizing and dequantizing the\n corresponding element of `input`.", "desc": "Quantizes then dequantizes a tensor. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.quantization.quantize_and_dequantize_v2", "docs": "Quantizes then dequantizes a tensor.\n\n Updates the gradient definition for quantization that is outside the range to\n be 0.To simulate the V1 the behavior of\n tf.quantization.quantize_and_dequantize(...) use\n tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).\n\n Example usage:\n\n ```python\n def getQuantizeOp(input):\n input_tensor = tf.placeholder(tf.float32, shape=[4, 4])\n net = tf.quantization.quantize_and_dequantize(input,\n input_min=min_threshold,\n input_max=max_threshold,\n range_given=True)\n\n To simulate v1 behavior:\n\n def testDecomposeQuantizeDequantize(self):\n def f(input_tensor):\n return tf.quantization.quantize_and_dequantize_v2(input_tensor,\n input_min = 5.0,\n input_max= -10.0,\n range_given=True)\n input_tensor = tf.placeholder(tf.float32, shape=[4, 4])\n net = tf.grad_pass_through(f)(input_tensor)\n ```\n\n Args:\n input: A `Tensor` to quantize and dequantize.\n input_min: If range_given=True, the minimum input value, that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of minimum values for each slice along axis.\n input_max: If range_given=True, the maximum input value that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of maximum values for each slice along axis.\n signed_input: True if the quantization is signed or unsigned.\n num_bits: The bitwidth of the quantization.\n range_given: If true use `input_min` and `input_max` for the range of the\n input, otherwise determine min and max from the input `Tensor`.\n round_mode: Rounding mode when rounding from float values to quantized ones.\n one of ['HALF_TO_EVEN', 'HALF_UP']\n name: Optional name for the operation.\n narrow_range: If true, then the absolute value of the quantized minimum\n value is the same as the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: Integer. If specified, refers to a dimension of the input tensor, such\n that quantization will be per slice along that dimension.\n\n Returns:\n A `Tensor`. Each element is the result of quantizing and dequantizing the\n corresponding element of `input`.\n ", "desc": "Quantizes then dequantizes a tensor.", "type": "API"}, {"name": "tf.compat.v1.quantization.quantized_concat", "docs": "Concatenates quantized tensors along one dimension.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [0, rank(values)).\n values: A list of at least 2 `Tensor` objects with the same type.\n The `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n input_mins: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The minimum scalar values for each of the input tensors.\n input_maxes: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The maximum scalar values for each of the input tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor`. Has the same type as `values`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Concatenates quantized tensors along one dimension.", "type": "API"}, {"name": "tf.compat.v1.quantize", "docs": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.\n\n [min_range, max_range] are scalar floats that specify the range for\n the 'input' data. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents. The\n 'round_mode' attribute controls which rounding tie-breaking algorithm is used\n when rounding float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)\n if T == qint8: out[i] -= (range(T) + 1) / 2.0\n ```\n\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n Assume the input is type float and has a possible range of [0.0, 6.0] and the\n output type is quint8 ([0, 255]). The min_range and max_range values should be\n specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each\n value of the input by 255/6 and cast to quint8.\n\n If the output type was qint8 ([-128, 127]), the operation will additionally\n subtract each value by 128 prior to casting, so that the range of values aligns\n with the range of qint8.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = num_discrete_values / range\n quantized = round(input * range_scale) - round(range_min * range_scale) +\n numeric_limits::min()\n quantized = max(quantized, numeric_limits::min())\n quantized = min(quantized, numeric_limits::max())\n ```\n\n The biggest difference between this and MIN_COMBINED is that the minimum range\n is rounded first, before it's subtracted from the rounded value. With\n MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing\n and dequantizing will introduce a larger and larger error.\n\n *SCALED mode Example*\n\n `SCALED` mode matches the quantization approach used in\n `QuantizeAndDequantize{V2|V3}`.\n\n If the mode is `SCALED`, the quantization is performed by multiplying each\n input value by a scaling_factor.\n The scaling_factor is determined from `min_range` and `max_range` to be as large\n as possible such that the range from `min_range` to `max_range` is representable\n within values of type T.\n\n ```c++\n\n const int min_T = std::numeric_limits::min();\n const int max_T = std::numeric_limits::max();\n const float max_float = std::numeric_limits::max();\n\n const float scale_factor_from_min_side =\n (min_T * min_range > 0) ? min_T / min_range : max_float;\n const float scale_factor_from_max_side =\n (max_T * max_range > 0) ? max_T / max_range : max_float;\n\n const float scale_factor = std::min(scale_factor_from_min_side,\n scale_factor_from_max_side);\n ```\n\n We next use the scale_factor to adjust min_range and max_range as follows:\n\n ```c++\n min_range = min_T / scale_factor;\n max_range = max_T / scale_factor;\n ```\n\n\n e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would\n compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8\n In this case, min_range would remain -10, but max_range would be adjusted to\n 127 / 12.8 = 9.921875\n\n So we will quantize input values in the range (-10, 9.921875) to (-128, 127).\n\n The input tensor can now be quantized by clipping values to the range\n `min_range` to `max_range`, then multiplying by scale_factor as follows:\n\n ```c++\n result = round(min(max_range, max(min_range, input)) * scale_factor)\n ```\n\n The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of\n this operation. These outputs should be used as the range for any further\n calculations.\n\n\n *narrow_range (bool) attribute*\n\n If true, we do not use the minimum quantized value.\n i.e. for int8 the quantized output, it would be restricted to the range\n -127..127 instead of the full -128..127 range.\n This is provided for compatibility with certain inference backends.\n (Only applies to SCALED mode)\n\n\n *axis (int) attribute*\n\n An optional `axis` attribute can specify a dimension index of the input tensor,\n such that quantization ranges will be calculated and applied separately for each\n slice of the tensor along that dimension. This is useful for per-channel\n quantization.\n\n If axis is specified, min_range and max_range\n\n if `axis`=None, per-tensor quantization is performed as normal.\n\n\n *ensure_minimum_range (float) attribute*\n\n Ensures the minimum quantization range is at least this value.\n The legacy default value for this is 0.01, but it is strongly suggested to\n set it to 0 for new uses.\n\n Args:\n input: A `Tensor` of type `float32`.\n min_range: A `Tensor` of type `float32`.\n The minimum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_min`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n max_range: A `Tensor` of type `float32`.\n The maximum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_max`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n T: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n round_mode: An optional `string` from: `\"HALF_AWAY_FROM_ZERO\", \"HALF_TO_EVEN\"`. Defaults to `\"HALF_AWAY_FROM_ZERO\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n ensure_minimum_range: An optional `float`. Defaults to `0.01`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `T`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.", "type": "API"}, {"name": "tf.compat.v1.quantize_v2", "docs": "Please use `tf.quantization.quantize` instead.", "desc": "Please use `tf.quantization.quantize` instead.", "type": "API"}, {"name": "tf.compat.v1.quantized_concat", "docs": "Concatenates quantized tensors along one dimension.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [0, rank(values)).\n values: A list of at least 2 `Tensor` objects with the same type.\n The `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n input_mins: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The minimum scalar values for each of the input tensors.\n input_maxes: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The maximum scalar values for each of the input tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor`. Has the same type as `values`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Concatenates quantized tensors along one dimension.", "type": "API"}, {"name": "tf.compat.v1.queue", "docs": "Public API for tf.queue namespace.\n", "desc": "Public API for tf.queue namespace.", "type": "API"}, {"name": "tf.compat.v1.queue.FIFOQueue", "docs": "A queue implementation that dequeues elements in first-in first-out order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in first-in first-out order.", "type": "API"}, {"name": "tf.compat.v1.queue.PaddingFIFOQueue", "docs": "A FIFOQueue that supports batching variable-sized tensors by padding.\n\n A `PaddingFIFOQueue` may contain components with dynamic shape, while also\n supporting `dequeue_many`. See the constructor for more details.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A FIFOQueue that supports batching variable-sized tensors by padding.", "type": "API"}, {"name": "tf.compat.v1.queue.PriorityQueue", "docs": "A queue implementation that dequeues elements in prioritized order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in prioritized order.", "type": "API"}, {"name": "tf.compat.v1.queue.QueueBase", "docs": "Base class for queue implementations.\n\n A queue is a TensorFlow data structure that stores tensors across\n multiple steps, and exposes operations that enqueue and dequeue\n tensors.\n\n Each queue element is a tuple of one or more tensors, where each\n tuple component has a static dtype, and may have a static shape. The\n queue implementations support versions of enqueue and dequeue that\n handle single elements, versions that support enqueuing and\n dequeuing a batch of elements at once.\n\n See `tf.queue.FIFOQueue` and\n `tf.queue.RandomShuffleQueue` for concrete\n implementations of this class, and instructions on how to create\n them.\n ", "desc": "Base class for queue implementations.", "type": "API"}, {"name": "tf.compat.v1.queue.RandomShuffleQueue", "docs": "A queue implementation that dequeues elements in a random order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in a random order.", "type": "API"}, {"name": "tf.compat.v1.QueueBase", "docs": "Base class for queue implementations.\n\n A queue is a TensorFlow data structure that stores tensors across\n multiple steps, and exposes operations that enqueue and dequeue\n tensors.\n\n Each queue element is a tuple of one or more tensors, where each\n tuple component has a static dtype, and may have a static shape. The\n queue implementations support versions of enqueue and dequeue that\n handle single elements, versions that support enqueuing and\n dequeuing a batch of elements at once.\n\n See `tf.queue.FIFOQueue` and\n `tf.queue.RandomShuffleQueue` for concrete\n implementations of this class, and instructions on how to create\n them.\n ", "desc": "Base class for queue implementations.", "type": "API"}, {"name": "tf.compat.v1.ragged", "docs": "Ragged Tensors.\n\nThis package defines ops for manipulating ragged tensors (`tf.RaggedTensor`),\nwhich are tensors with non-uniform shapes. In particular, each `RaggedTensor`\nhas one or more *ragged dimensions*, which are dimensions whose slices may have\ndifferent lengths. For example, the inner (column) dimension of\n`rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, since the column slices\n(`rt[0, :]`, ..., `rt[4, :]`) have different lengths. For a more detailed\ndescription of ragged tensors, see the `tf.RaggedTensor` class documentation\nand the [Ragged Tensor Guide](/guide/ragged_tensor).\n\n\n### Additional ops that support `RaggedTensor`\n\nArguments that accept `RaggedTensor`s are marked in **bold**.\n\n* `tf.__operators__.eq`(**self**, **other**)\n* `tf.__operators__.ne`(**self**, **other**)\n* `tf.bitcast`(**input**, type, name=`None`)\n* `tf.bitwise.bitwise_and`(**x**, **y**, name=`None`)\n* `tf.bitwise.bitwise_or`(**x**, **y**, name=`None`)\n* `tf.bitwise.bitwise_xor`(**x**, **y**, name=`None`)\n* `tf.bitwise.invert`(**x**, name=`None`)\n* `tf.bitwise.left_shift`(**x**, **y**, name=`None`)\n* `tf.bitwise.right_shift`(**x**, **y**, name=`None`)\n* `tf.broadcast_to`(**input**, **shape**, name=`None`)\n* `tf.cast`(**x**, dtype, name=`None`)\n* `tf.clip_by_value`(**t**, clip_value_min, clip_value_max, name=`None`)\n* `tf.concat`(**values**, axis, name=`'concat'`)\n* `tf.debugging.check_numerics`(**tensor**, message, name=`None`)\n* `tf.dtypes.complex`(**real**, **imag**, name=`None`)\n* `tf.dtypes.saturate_cast`(**value**, dtype, name=`None`)\n* `tf.dynamic_partition`(**data**, **partitions**, num_partitions, name=`None`)\n* `tf.expand_dims`(**input**, axis, name=`None`)\n* `tf.gather_nd`(**params**, **indices**, batch_dims=`0`, name=`None`)\n* `tf.gather`(**params**, **indices**, validate_indices=`None`, axis=`None`, batch_dims=`0`, name=`None`)\n* `tf.image.adjust_brightness`(**image**, delta)\n* `tf.image.adjust_gamma`(**image**, gamma=`1`, gain=`1`)\n* `tf.image.convert_image_dtype`(**image**, dtype, saturate=`False`, name=`None`)\n* `tf.image.random_brightness`(**image**, max_delta, seed=`None`)\n* `tf.image.resize`(**images**, size, method=`'bilinear'`, preserve_aspect_ratio=`False`, antialias=`False`, name=`None`)\n* `tf.image.stateless_random_brightness`(**image**, max_delta, seed)\n* `tf.io.decode_base64`(**input**, name=`None`)\n* `tf.io.decode_compressed`(**bytes**, compression_type=`''`, name=`None`)\n* `tf.io.encode_base64`(**input**, pad=`False`, name=`None`)\n* `tf.linalg.matmul`(**a**, **b**, transpose_a=`False`, transpose_b=`False`, adjoint_a=`False`, adjoint_b=`False`, a_is_sparse=`False`, b_is_sparse=`False`, output_type=`None`, name=`None`)\n* `tf.math.abs`(**x**, name=`None`)\n* `tf.math.acos`(**x**, name=`None`)\n* `tf.math.acosh`(**x**, name=`None`)\n* `tf.math.add_n`(**inputs**, name=`None`)\n* `tf.math.add`(**x**, **y**, name=`None`)\n* `tf.math.angle`(**input**, name=`None`)\n* `tf.math.asin`(**x**, name=`None`)\n* `tf.math.asinh`(**x**, name=`None`)\n* `tf.math.atan2`(**y**, **x**, name=`None`)\n* `tf.math.atan`(**x**, name=`None`)\n* `tf.math.atanh`(**x**, name=`None`)\n* `tf.math.bessel_i0`(**x**, name=`None`)\n* `tf.math.bessel_i0e`(**x**, name=`None`)\n* `tf.math.bessel_i1`(**x**, name=`None`)\n* `tf.math.bessel_i1e`(**x**, name=`None`)\n* `tf.math.ceil`(**x**, name=`None`)\n* `tf.math.conj`(**x**, name=`None`)\n* `tf.math.cos`(**x**, name=`None`)\n* `tf.math.cosh`(**x**, name=`None`)\n* `tf.math.digamma`(**x**, name=`None`)\n* `tf.math.divide_no_nan`(**x**, **y**, name=`None`)\n* `tf.math.divide`(**x**, **y**, name=`None`)\n* `tf.math.equal`(**x**, **y**, name=`None`)\n* `tf.math.erf`(**x**, name=`None`)\n* `tf.math.erfc`(**x**, name=`None`)\n* `tf.math.erfcinv`(**x**, name=`None`)\n* `tf.math.erfinv`(**x**, name=`None`)\n* `tf.math.exp`(**x**, name=`None`)\n* `tf.math.expm1`(**x**, name=`None`)\n* `tf.math.floor`(**x**, name=`None`)\n* `tf.math.floordiv`(**x**, **y**, name=`None`)\n* `tf.math.floormod`(**x**, **y**, name=`None`)\n* `tf.math.greater_equal`(**x**, **y**, name=`None`)\n* `tf.math.greater`(**x**, **y**, name=`None`)\n* `tf.math.imag`(**input**, name=`None`)\n* `tf.math.is_finite`(**x**, name=`None`)\n* `tf.math.is_inf`(**x**, name=`None`)\n* `tf.math.is_nan`(**x**, name=`None`)\n* `tf.math.less_equal`(**x**, **y**, name=`None`)\n* `tf.math.less`(**x**, **y**, name=`None`)\n* `tf.math.lgamma`(**x**, name=`None`)\n* `tf.math.log1p`(**x**, name=`None`)\n* `tf.math.log_sigmoid`(**x**, name=`None`)\n* `tf.math.log`(**x**, name=`None`)\n* `tf.math.logical_and`(**x**, **y**, name=`None`)\n* `tf.math.logical_not`(**x**, name=`None`)\n* `tf.math.logical_or`(**x**, **y**, name=`None`)\n* `tf.math.logical_xor`(**x**, **y**, name=`'LogicalXor'`)\n* `tf.math.maximum`(**x**, **y**, name=`None`)\n* `tf.math.minimum`(**x**, **y**, name=`None`)\n* `tf.math.multiply_no_nan`(**x**, **y**, name=`None`)\n* `tf.math.multiply`(**x**, **y**, name=`None`)\n* `tf.math.ndtri`(**x**, name=`None`)\n* `tf.math.negative`(**x**, name=`None`)\n* `tf.math.nextafter`(**x1**, x2, name=`None`)\n* `tf.math.not_equal`(**x**, **y**, name=`None`)\n* `tf.math.pow`(**x**, **y**, name=`None`)\n* `tf.math.real`(**input**, name=`None`)\n* `tf.math.reciprocal_no_nan`(**x**, name=`None`)\n* `tf.math.reciprocal`(**x**, name=`None`)\n* `tf.math.reduce_all`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_any`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_max`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_mean`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_min`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_prod`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_std`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_sum`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_variance`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.rint`(**x**, name=`None`)\n* `tf.math.round`(**x**, name=`None`)\n* `tf.math.rsqrt`(**x**, name=`None`)\n* `tf.math.scalar_mul`(**scalar**, **x**, name=`None`)\n* `tf.math.sigmoid`(**x**, name=`None`)\n* `tf.math.sign`(**x**, name=`None`)\n* `tf.math.sin`(**x**, name=`None`)\n* `tf.math.sinh`(**x**, name=`None`)\n* `tf.math.softplus`(**features**, name=`None`)\n* `tf.math.special.bessel_j0`(**x**, name=`None`)\n* `tf.math.special.bessel_j1`(**x**, name=`None`)\n* `tf.math.special.bessel_k0`(**x**, name=`None`)\n* `tf.math.special.bessel_k0e`(**x**, name=`None`)\n* `tf.math.special.bessel_k1`(**x**, name=`None`)\n* `tf.math.special.bessel_k1e`(**x**, name=`None`)\n* `tf.math.special.bessel_y0`(**x**, name=`None`)\n* `tf.math.special.bessel_y1`(**x**, name=`None`)\n* `tf.math.special.dawsn`(**x**, name=`None`)\n* `tf.math.special.expint`(**x**, name=`None`)\n* `tf.math.special.fresnel_cos`(**x**, name=`None`)\n* `tf.math.special.fresnel_sin`(**x**, name=`None`)\n* `tf.math.special.spence`(**x**, name=`None`)\n* `tf.math.sqrt`(**x**, name=`None`)\n* `tf.math.square`(**x**, name=`None`)\n* `tf.math.squared_difference`(**x**, **y**, name=`None`)\n* `tf.math.subtract`(**x**, **y**, name=`None`)\n* `tf.math.tan`(**x**, name=`None`)\n* `tf.math.tanh`(**x**, name=`None`)\n* `tf.math.truediv`(**x**, **y**, name=`None`)\n* `tf.math.unsorted_segment_max`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_mean`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_min`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_prod`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_sqrt_n`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_sum`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.xdivy`(**x**, **y**, name=`None`)\n* `tf.math.xlog1py`(**x**, **y**, name=`None`)\n* `tf.math.xlogy`(**x**, **y**, name=`None`)\n* `tf.math.zeta`(**x**, **q**, name=`None`)\n* `tf.nn.dropout`(**x**, rate, noise_shape=`None`, seed=`None`, name=`None`)\n* `tf.nn.elu`(**features**, name=`None`)\n* `tf.nn.gelu`(**features**, approximate=`False`, name=`None`)\n* `tf.nn.leaky_relu`(**features**, alpha=`0.2`, name=`None`)\n* `tf.nn.relu6`(**features**, name=`None`)\n* `tf.nn.relu`(**features**, name=`None`)\n* `tf.nn.selu`(**features**, name=`None`)\n* `tf.nn.sigmoid_cross_entropy_with_logits`(**labels**=`None`, **logits**=`None`, name=`None`)\n* `tf.nn.silu`(**features**, beta=`1.0`)\n* `tf.nn.softmax`(**logits**, axis=`None`, name=`None`)\n* `tf.nn.softsign`(**features**, name=`None`)\n* `tf.one_hot`(**indices**, depth, on_value=`None`, off_value=`None`, axis=`None`, dtype=`None`, name=`None`)\n* `tf.ones_like`(**input**, dtype=`None`, name=`None`)\n* `tf.print`(***inputs**, **kwargs)\n* `tf.rank`(**input**, name=`None`)\n* `tf.realdiv`(**x**, **y**, name=`None`)\n* `tf.reshape`(**tensor**, **shape**, name=`None`)\n* `tf.reverse`(**tensor**, axis, name=`None`)\n* `tf.size`(**input**, out_type=`tf.int32`, name=`None`)\n* `tf.split`(**value**, num_or_size_splits, axis=`0`, num=`None`, name=`'split'`)\n* `tf.squeeze`(**input**, axis=`None`, name=`None`)\n* `tf.stack`(**values**, axis=`0`, name=`'stack'`)\n* `tf.strings.as_string`(**input**, precision=`-1`, scientific=`False`, shortest=`False`, width=`-1`, fill=`''`, name=`None`)\n* `tf.strings.format`(**template**, **inputs**, placeholder=`'{}'`, summarize=`3`, name=`None`)\n* `tf.strings.join`(**inputs**, separator=`''`, name=`None`)\n* `tf.strings.length`(**input**, unit=`'BYTE'`, name=`None`)\n* `tf.strings.lower`(**input**, encoding=`''`, name=`None`)\n* `tf.strings.reduce_join`(**inputs**, axis=`None`, keepdims=`False`, separator=`''`, name=`None`)\n* `tf.strings.regex_full_match`(**input**, pattern, name=`None`)\n* `tf.strings.regex_replace`(**input**, pattern, rewrite, replace_global=`True`, name=`None`)\n* `tf.strings.strip`(**input**, name=`None`)\n* `tf.strings.substr`(**input**, pos, len, unit=`'BYTE'`, name=`None`)\n* `tf.strings.to_hash_bucket_fast`(**input**, num_buckets, name=`None`)\n* `tf.strings.to_hash_bucket_strong`(**input**, num_buckets, key, name=`None`)\n* `tf.strings.to_hash_bucket`(**input**, num_buckets, name=`None`)\n* `tf.strings.to_number`(**input**, out_type=`tf.float32`, name=`None`)\n* `tf.strings.unicode_script`(**input**, name=`None`)\n* `tf.strings.unicode_transcode`(**input**, input_encoding, output_encoding, errors=`'replace'`, replacement_char=`65533`, replace_control_characters=`False`, name=`None`)\n* `tf.strings.upper`(**input**, encoding=`''`, name=`None`)\n* `tf.tile`(**input**, multiples, name=`None`)\n* `tf.truncatediv`(**x**, **y**, name=`None`)\n* `tf.truncatemod`(**x**, **y**, name=`None`)\n* `tf.where`(**condition**, **x**=`None`, **y**=`None`, name=`None`)\n* `tf.zeros_like`(**input**, dtype=`None`, name=`None`)n\n", "desc": "Ragged Tensors.", "type": "API"}, {"name": "tf.compat.v1.ragged.boolean_mask", "docs": "Applies a boolean mask to `data` without flattening the mask dimensions.\n\n Returns a potentially ragged tensor that is formed by retaining the elements\n in `data` where the corresponding value in `mask` is `True`.\n\n * `output[a1...aA, i, b1...bB] = data[a1...aA, j, b1...bB]`\n\n Where `j` is the `i`th `True` entry of `mask[a1...aA]`.\n\n Note that `output` preserves the mask dimensions `a1...aA`; this differs\n from `tf.boolean_mask`, which flattens those dimensions.\n\n Args:\n data: A potentially ragged tensor.\n mask: A potentially ragged boolean tensor. `mask`'s shape must be a prefix\n of `data`'s shape. `rank(mask)` must be known statically.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A potentially ragged tensor that is formed by retaining the elements in\n `data` where the corresponding value in `mask` is `True`.\n\n * `rank(output) = rank(data)`.\n * `output.ragged_rank = max(data.ragged_rank, rank(mask) - 1)`.\n\n Raises:\n ValueError: if `rank(mask)` is not known statically; or if `mask.shape` is\n not a prefix of `data.shape`.\n\n #### Examples:\n\n >>> # Aliases for True & False so data and mask line up.\n >>> T, F = (True, False)\n\n >>> tf.ragged.boolean_mask( # Mask a 2D Tensor.\n ... data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]],\n ... mask=[[T, F, T], [F, F, F], [T, F, F]]).to_list()\n [[1, 3], [], [7]]\n\n >>> tf.ragged.boolean_mask( # Mask a 2D RaggedTensor.\n ... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),\n ... tf.ragged.constant([[F, F, T], [F], [T, T]])).to_list()\n [[3], [], [5, 6]]\n\n >>> tf.ragged.boolean_mask( # Mask rows of a 2D RaggedTensor.\n ... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),\n ... tf.ragged.constant([True, False, True])).to_list()\n [[1, 2, 3], [5, 6]]\n ", "desc": "Applies a boolean mask to `data` without flattening the mask dimensions.", "type": "API"}, {"name": "tf.compat.v1.ragged.constant", "docs": "Constructs a constant RaggedTensor from a nested Python list.\n\n Example:\n\n >>> tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n \n\n All scalar values in `pylist` must have the same nesting depth `K`, and the\n returned `RaggedTensor` will have rank `K`. If `pylist` contains no scalar\n values, then `K` is one greater than the maximum depth of empty lists in\n `pylist`. All scalar values in `pylist` must be compatible with `dtype`.\n\n Args:\n pylist: A nested `list`, `tuple` or `np.ndarray`. Any nested element that\n is not a `list`, `tuple` or `np.ndarray` must be a scalar value\n compatible with `dtype`.\n dtype: The type of elements for the returned `RaggedTensor`. If not\n specified, then a default is chosen based on the scalar values in\n `pylist`.\n ragged_rank: An integer specifying the ragged rank of the returned\n `RaggedTensor`. Must be nonnegative and less than `K`. Defaults to\n `max(0, K - 1)` if `inner_shape` is not specified. Defaults to\n `max(0, K - 1 - len(inner_shape))` if `inner_shape` is specified.\n inner_shape: A tuple of integers specifying the shape for individual inner\n values in the returned `RaggedTensor`. Defaults to `()` if `ragged_rank`\n is not specified. If `ragged_rank` is specified, then a default is chosen\n based on the contents of `pylist`.\n name: A name prefix for the returned tensor (optional).\n row_splits_dtype: data type for the constructed `RaggedTensor`'s row_splits.\n One of `tf.int32` or `tf.int64`.\n\n Returns:\n A potentially ragged tensor with rank `K` and the specified `ragged_rank`,\n containing the values from `pylist`.\n\n Raises:\n ValueError: If the scalar values in `pylist` have inconsistent nesting\n depth; or if ragged_rank or inner_shape are incompatible with `pylist`.\n ", "desc": "Constructs a constant RaggedTensor from a nested Python list.", "type": "API"}, {"name": "tf.compat.v1.ragged.constant_value", "docs": "Constructs a RaggedTensorValue from a nested Python list.\n\n Warning: This function returns a `RaggedTensorValue`, not a `RaggedTensor`.\n If you wish to construct a constant `RaggedTensor`, use\n [`ragged.constant(...)`](constant.md) instead.\n\n Example:\n\n >>> tf.compat.v1.ragged.constant_value([[1, 2], [3], [4, 5, 6]])\n tf.RaggedTensorValue(values=array([1, 2, 3, 4, 5, 6]),\n row_splits=array([0, 2, 3, 6]))\n\n All scalar values in `pylist` must have the same nesting depth `K`, and the\n returned `RaggedTensorValue` will have rank `K`. If `pylist` contains no\n scalar values, then `K` is one greater than the maximum depth of empty lists\n in `pylist`. All scalar values in `pylist` must be compatible with `dtype`.\n\n Args:\n pylist: A nested `list`, `tuple` or `np.ndarray`. Any nested element that\n is not a `list` or `tuple` must be a scalar value compatible with `dtype`.\n dtype: `numpy.dtype`. The type of elements for the returned `RaggedTensor`.\n If not specified, then a default is chosen based on the scalar values in\n `pylist`.\n ragged_rank: An integer specifying the ragged rank of the returned\n `RaggedTensorValue`. Must be nonnegative and less than `K`. Defaults to\n `max(0, K - 1)` if `inner_shape` is not specified. Defaults to `max(0, K\n - 1 - len(inner_shape))` if `inner_shape` is specified.\n inner_shape: A tuple of integers specifying the shape for individual inner\n values in the returned `RaggedTensorValue`. Defaults to `()` if\n `ragged_rank` is not specified. If `ragged_rank` is specified, then a\n default is chosen based on the contents of `pylist`.\n row_splits_dtype: data type for the constructed `RaggedTensorValue`'s\n row_splits. One of `numpy.int32` or `numpy.int64`.\n\n Returns:\n A `tf.RaggedTensorValue` or `numpy.array` with rank `K` and the specified\n `ragged_rank`, containing the values from `pylist`.\n\n Raises:\n ValueError: If the scalar values in `pylist` have inconsistent nesting\n depth; or if ragged_rank or inner_shape are incompatible with `pylist`.\n ", "desc": "Constructs a RaggedTensorValue from a nested Python list.", "type": "API"}, {"name": "tf.compat.v1.ragged.cross", "docs": "Generates feature cross from a list of tensors.\n\n The input tensors must have `rank=2`, and must all have the same number of\n rows. The result is a `RaggedTensor` with the same number of rows as the\n inputs, where `result[row]` contains a list of all combinations of values\n formed by taking a single value from each input's corresponding row\n (`inputs[i][row]`). Values are combined by joining their strings with '_X_'.\n E.g.:\n\n >>> tf.ragged.cross([tf.ragged.constant([['a'], ['b', 'c']]),\n ... tf.ragged.constant([['d'], ['e']]),\n ... tf.ragged.constant([['f'], ['g']])])\n \n\n Args:\n inputs: A list of `RaggedTensor` or `Tensor` or `SparseTensor`.\n name: Optional name for the op.\n\n Returns:\n A 2D `RaggedTensor` of type `string`.\n ", "desc": "Generates feature cross from a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.ragged.cross_hashed", "docs": "Generates hashed feature cross from a list of tensors.\n\n The input tensors must have `rank=2`, and must all have the same number of\n rows. The result is a `RaggedTensor` with the same number of rows as the\n inputs, where `result[row]` contains a list of all combinations of values\n formed by taking a single value from each input's corresponding row\n (`inputs[i][row]`). Values are combined by hashing together their\n fingerprints. E.g.:\n\n >>> tf.ragged.cross_hashed([tf.ragged.constant([['a'], ['b', 'c']]),\n ... tf.ragged.constant([['d'], ['e']]),\n ... tf.ragged.constant([['f'], ['g']])],\n ... num_buckets=100)\n \n\n Args:\n inputs: A list of `RaggedTensor` or `Tensor` or `SparseTensor`.\n num_buckets: A non-negative `int` that used to bucket the hashed values. If\n `num_buckets != 0`, then `output = hashed_value % num_buckets`.\n hash_key: Integer hash_key that will be used by the `FingerprintCat64`\n function. If not given, a default key is used.\n name: Optional name for the op.\n\n Returns:\n A 2D `RaggedTensor` of type `int64`.\n ", "desc": "Generates hashed feature cross from a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.ragged.map_flat_values", "docs": "Applies `op` to the `flat_values` of one or more RaggedTensors.\n\n Replaces any `RaggedTensor` in `args` or `kwargs` with its `flat_values`\n tensor (which collapses all ragged dimensions), and then calls `op`. Returns\n a `RaggedTensor` that is constructed from the input `RaggedTensor`s'\n `nested_row_splits` and the value returned by the `op`.\n\n If the input arguments contain multiple `RaggedTensor`s, then they must have\n identical `nested_row_splits`.\n\n This operation is generally used to apply elementwise operations to each value\n in a `RaggedTensor`.\n\n Warning: `tf.ragged.map_flat_values` does *not* apply `op` to each row of a\n ragged tensor. This difference is important for non-elementwise operations,\n such as `tf.reduce_sum`. If you wish to apply a non-elementwise operation to\n each row of a ragged tensor, use `tf.map_fn` instead. (You may need to\n specify an `output_signature` when using `tf.map_fn` with ragged tensors.)\n\n Examples:\n\n >>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n >>> tf.ragged.map_flat_values(tf.ones_like, rt)\n \n >>> tf.ragged.map_flat_values(tf.multiply, rt, rt)\n \n >>> tf.ragged.map_flat_values(tf.add, rt, 5)\n \n\n Example with a non-elementwise operation (note that `map_flat_values` and\n `map_fn` return different results):\n\n >>> rt = tf.ragged.constant([[1.0, 3.0], [], [3.0, 6.0, 3.0]])\n >>> def normalized(x):\n ... return x / tf.reduce_sum(x)\n >>> tf.ragged.map_flat_values(normalized, rt)\n \n >>> tf.map_fn(normalized, rt)\n \n\n Args:\n op: The operation that should be applied to the RaggedTensor `flat_values`.\n `op` is typically an element-wise operation (such as math_ops.add), but\n any operation that preserves the size of the outermost dimension can be\n used. I.e., `shape[0]` of the value returned by `op` must match\n `shape[0]` of the `RaggedTensor`s' `flat_values` tensors.\n *args: Arguments for `op`.\n **kwargs: Keyword arguments for `op`.\n\n Returns:\n A `RaggedTensor` whose `ragged_rank` matches the `ragged_rank` of all\n input `RaggedTensor`s.\n Raises:\n ValueError: If args contains no `RaggedTensors`, or if the `nested_splits`\n of the input `RaggedTensor`s are not identical.\n ", "desc": "Applies `op` to the `flat_values` of one or more RaggedTensors.", "type": "API"}, {"name": "tf.compat.v1.ragged.placeholder", "docs": "Creates a placeholder for a `tf.RaggedTensor` that will always be fed.\n\n **Important**: This ragged tensor will produce an error if evaluated.\n Its value must be fed using the `feed_dict` optional argument to\n `Session.run()`, `Tensor.eval()`, or `Operation.run()`.\n\n @compatibility{eager} Placeholders are not compatible with eager execution.\n\n Args:\n dtype: The data type for the `RaggedTensor`.\n ragged_rank: The ragged rank for the `RaggedTensor`\n value_shape: The shape for individual flat values in the `RaggedTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `RaggedTensor` that may be used as a handle for feeding a value, but\n not evaluated directly.\n\n Raises:\n RuntimeError: if eager execution is enabled\n ", "desc": "Creates a placeholder for a `tf.RaggedTensor` that will always be fed.", "type": "API"}, {"name": "tf.compat.v1.ragged.RaggedTensorValue", "docs": "Represents the value of a `RaggedTensor`.\n\n Warning: `RaggedTensorValue` should only be used in graph mode; in\n eager mode, the `tf.RaggedTensor` class contains its value directly.\n\n See `tf.RaggedTensor` for a description of ragged tensors.\n ", "desc": "Represents the value of a `RaggedTensor`.", "type": "API"}, {"name": "tf.compat.v1.ragged.range", "docs": "Returns a `RaggedTensor` containing the specified sequences of numbers.\n\n Each row of the returned `RaggedTensor` contains a single sequence:\n\n ```python\n ragged.range(starts, limits, deltas)[i] ==\n tf.range(starts[i], limits[i], deltas[i])\n ```\n\n If `start[i] < limits[i] and deltas[i] > 0`, then `output[i]` will be an\n empty list. Similarly, if `start[i] > limits[i] and deltas[i] < 0`, then\n `output[i]` will be an empty list. This behavior is consistent with the\n Python `range` function, but differs from the `tf.range` op, which returns\n an error for these cases.\n\n Examples:\n\n >>> tf.ragged.range([3, 5, 2]).to_list()\n [[0, 1, 2], [0, 1, 2, 3, 4], [0, 1]]\n >>> tf.ragged.range([0, 5, 8], [3, 3, 12]).to_list()\n [[0, 1, 2], [], [8, 9, 10, 11]]\n >>> tf.ragged.range([0, 5, 8], [3, 3, 12], 2).to_list()\n [[0, 2], [], [8, 10]]\n\n The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors.\n The vector inputs must all have the same size. Scalar inputs are broadcast\n to match the size of the vector inputs.\n\n Args:\n starts: Vector or scalar `Tensor`. Specifies the first entry for each range\n if `limits` is not `None`; otherwise, specifies the range limits, and the\n first entries default to `0`.\n limits: Vector or scalar `Tensor`. Specifies the exclusive upper limits for\n each range.\n deltas: Vector or scalar `Tensor`. Specifies the increment for each range.\n Defaults to `1`.\n dtype: The type of the elements of the resulting tensor. If not specified,\n then a value is chosen based on the other args.\n name: A name for the operation.\n row_splits_dtype: `dtype` for the returned `RaggedTensor`'s `row_splits`\n tensor. One of `tf.int32` or `tf.int64`.\n\n Returns:\n A `RaggedTensor` of type `dtype` with `ragged_rank=1`.\n ", "desc": "Returns a `RaggedTensor` containing the specified sequences of numbers.", "type": "API"}, {"name": "tf.compat.v1.ragged.row_splits_to_segment_ids", "docs": "Generates the segmentation corresponding to a RaggedTensor `row_splits`.\n\n Returns an integer vector `segment_ids`, where `segment_ids[i] == j` if\n `splits[j] <= i < splits[j+1]`. Example:\n\n >>> print(tf.ragged.row_splits_to_segment_ids([0, 3, 3, 5, 6, 9]))\n tf.Tensor([0 0 0 2 2 3 4 4 4], shape=(9,), dtype=int64)\n\n Args:\n splits: A sorted 1-D integer Tensor. `splits[0]` must be zero.\n name: A name prefix for the returned tensor (optional).\n out_type: The dtype for the return value. Defaults to `splits.dtype`,\n or `tf.int64` if `splits` does not have a dtype.\n\n Returns:\n A sorted 1-D integer Tensor, with `shape=[splits[-1]]`\n\n Raises:\n ValueError: If `splits` is invalid.\n ", "desc": "Generates the segmentation corresponding to a RaggedTensor `row_splits`.", "type": "API"}, {"name": "tf.compat.v1.ragged.segment_ids_to_row_splits", "docs": "Generates the RaggedTensor `row_splits` corresponding to a segmentation.\n\n Returns an integer vector `splits`, where `splits[0] = 0` and\n `splits[i] = splits[i-1] + count(segment_ids==i)`. Example:\n\n >>> print(tf.ragged.segment_ids_to_row_splits([0, 0, 0, 2, 2, 3, 4, 4, 4]))\n tf.Tensor([0 3 3 5 6 9], shape=(6,), dtype=int64)\n\n Args:\n segment_ids: A 1-D integer Tensor.\n num_segments: A scalar integer indicating the number of segments. Defaults\n to `max(segment_ids) + 1` (or zero if `segment_ids` is empty).\n out_type: The dtype for the return value. Defaults to `segment_ids.dtype`,\n or `tf.int64` if `segment_ids` does not have a dtype.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A sorted 1-D integer Tensor, with `shape=[num_segments + 1]`.\n ", "desc": "Generates the RaggedTensor `row_splits` corresponding to a segmentation.", "type": "API"}, {"name": "tf.compat.v1.ragged.stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`.\n\n Given a list of tensors or ragged tensors with the same rank `R`\n (`R >= axis`), returns a rank-`R+1` `RaggedTensor` `result` such that\n `result[i0...iaxis]` is `[value[i0...iaxis] for value in values]`.\n\n #### Examples:\n\n >>> # Stacking two ragged tensors.\n >>> t1 = tf.ragged.constant([[1, 2], [3, 4, 5]])\n >>> t2 = tf.ragged.constant([[6], [7, 8, 9]])\n >>> tf.ragged.stack([t1, t2], axis=0)\n \n >>> tf.ragged.stack([t1, t2], axis=1)\n \n\n >>> # Stacking two dense tensors with different sizes.\n >>> t3 = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> t4 = tf.constant([[5], [6], [7]])\n >>> tf.ragged.stack([t3, t4], axis=0)\n \n\n Args:\n values: A list of `tf.Tensor` or `tf.RaggedTensor`. May not be empty. All\n `values` must have the same rank and the same dtype; but unlike\n `tf.stack`, they can have arbitrary dimension sizes.\n axis: A python integer, indicating the dimension along which to stack.\n (Note: Unlike `tf.stack`, the `axis` parameter must be statically known.)\n Negative values are supported only if the rank of at least one\n `values` value is statically known.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A `RaggedTensor` with rank `R+1` (if `R>0`).\n If `R==0`, then the result will be returned as a 1D `Tensor`, since\n `RaggedTensor` can only be used when `rank>1`.\n `result.ragged_rank=1+max(axis, max(rt.ragged_rank for rt in values]))`.\n\n Raises:\n ValueError: If `values` is empty, if `axis` is out of bounds or if\n the input tensors have different ranks.\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`.", "type": "API"}, {"name": "tf.compat.v1.ragged.stack_dynamic_partitions", "docs": "Stacks dynamic partitions of a Tensor or RaggedTensor.\n\n Returns a RaggedTensor `output` with `num_partitions` rows, where the row\n `output[i]` is formed by stacking all slices `data[j1...jN]` such that\n `partitions[j1...jN] = i`. Slices of `data` are stacked in row-major\n order.\n\n If `num_partitions` is an `int` (not a `Tensor`), then this is equivalent to\n `tf.ragged.stack(tf.dynamic_partition(data, partitions, num_partitions))`.\n\n #### Example:\n\n >>> data = ['a', 'b', 'c', 'd', 'e']\n >>> partitions = [ 3, 0, 2, 2, 3]\n >>> num_partitions = 5\n >>> tf.ragged.stack_dynamic_partitions(data, partitions, num_partitions)\n \n\n Args:\n data: A `Tensor` or `RaggedTensor` containing the values to stack.\n partitions: An `int32` or `int64` `Tensor` or `RaggedTensor` specifying the\n partition that each slice of `data` should be added to. `partitions.shape`\n must be a prefix of `data.shape`. Values must be greater than or equal to\n zero, and less than `num_partitions`. `partitions` is not required to be\n sorted.\n num_partitions: An `int32` or `int64` scalar specifying the number of\n partitions to output. This determines the number of rows in `output`.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A `RaggedTensor` containing the stacked partitions. The returned tensor\n has the same dtype as `data`, and its shape is\n `[num_partitions, (D)] + data.shape[partitions.rank:]`, where `(D)` is a\n ragged dimension whose length is the number of data slices stacked for\n each `partition`.\n ", "desc": "Stacks dynamic partitions of a Tensor or RaggedTensor.", "type": "API"}, {"name": "tf.compat.v1.RaggedTensor", "docs": "Represents a ragged tensor.\n\n A `RaggedTensor` is a tensor with one or more *ragged dimensions*, which are\n dimensions whose slices may have different lengths. For example, the inner\n (column) dimension of `rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged,\n since the column slices (`rt[0, :]`, ..., `rt[4, :]`) have different lengths.\n Dimensions whose slices all have the same length are called *uniform\n dimensions*. The outermost dimension of a `RaggedTensor` is always uniform,\n since it consists of a single slice (and so there is no possibility for\n differing slice lengths).\n\n The total number of dimensions in a `RaggedTensor` is called its *rank*,\n and the number of ragged dimensions in a `RaggedTensor` is called its\n *ragged-rank*. A `RaggedTensor`'s ragged-rank is fixed at graph creation\n time: it can't depend on the runtime values of `Tensor`s, and can't vary\n dynamically for different session runs.\n\n Note that the `__init__` constructor is private. Please use one of the\n following methods to construct a `RaggedTensor`:\n\n * `tf.RaggedTensor.from_row_lengths`\n * `tf.RaggedTensor.from_value_rowids`\n * `tf.RaggedTensor.from_row_splits`\n * `tf.RaggedTensor.from_row_starts`\n * `tf.RaggedTensor.from_row_limits`\n * `tf.RaggedTensor.from_nested_row_splits`\n * `tf.RaggedTensor.from_nested_row_lengths`\n * `tf.RaggedTensor.from_nested_value_rowids`\n\n ### Potentially Ragged Tensors\n\n Many ops support both `Tensor`s and `RaggedTensor`s\n (see [tf.ragged](https://www.tensorflow.org/api_docs/python/tf/ragged) for a\n full listing). The term \"potentially ragged tensor\" may be used to refer to a\n tensor that might be either a `Tensor` or a `RaggedTensor`. The ragged-rank\n of a `Tensor` is zero.\n\n ### Documenting RaggedTensor Shapes\n\n When documenting the shape of a RaggedTensor, ragged dimensions can be\n indicated by enclosing them in parentheses. For example, the shape of\n a 3-D `RaggedTensor` that stores the fixed-size word embedding for each\n word in a sentence, for each sentence in a batch, could be written as\n `[num_sentences, (num_words), embedding_size]`. The parentheses around\n `(num_words)` indicate that dimension is ragged, and that the length\n of each element list in that dimension may vary for each item.\n\n ### Component Tensors\n\n Internally, a `RaggedTensor` consists of a concatenated list of values that\n are partitioned into variable-length rows. In particular, each `RaggedTensor`\n consists of:\n\n * A `values` tensor, which concatenates the variable-length rows into a\n flattened list. For example, the `values` tensor for\n `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is `[3, 1, 4, 1, 5, 9, 2, 6]`.\n\n * A `row_splits` vector, which indicates how those flattened values are\n divided into rows. In particular, the values for row `rt[i]` are stored\n in the slice `rt.values[rt.row_splits[i]:rt.row_splits[i+1]]`.\n\n Example:\n\n >>> print(tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... row_splits=[0, 4, 4, 7, 8, 8]))\n \n\n ### Alternative Row-Partitioning Schemes\n\n In addition to `row_splits`, ragged tensors provide support for five other\n row-partitioning schemes:\n\n * `row_lengths`: a vector with shape `[nrows]`, which specifies the length\n of each row.\n\n * `value_rowids` and `nrows`: `value_rowids` is a vector with shape\n `[nvals]`, corresponding one-to-one with `values`, which specifies\n each value's row index. In particular, the row `rt[row]` consists of the\n values `rt.values[j]` where `value_rowids[j]==row`. `nrows` is an\n integer scalar that specifies the number of rows in the\n `RaggedTensor`. (`nrows` is used to indicate trailing empty rows.)\n\n * `row_starts`: a vector with shape `[nrows]`, which specifies the start\n offset of each row. Equivalent to `row_splits[:-1]`.\n\n * `row_limits`: a vector with shape `[nrows]`, which specifies the stop\n offset of each row. Equivalent to `row_splits[1:]`.\n\n * `uniform_row_length`: A scalar tensor, specifying the length of every\n row. This row-partitioning scheme may only be used if all rows have\n the same length.\n\n Example: The following ragged tensors are equivalent, and all represent the\n nested list `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]`.\n\n >>> values = [3, 1, 4, 1, 5, 9, 2, 6]\n >>> RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])\n \n >>> RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])\n \n >>> RaggedTensor.from_value_rowids(\n ... values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)\n \n >>> RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])\n \n >>> RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])\n \n >>> RaggedTensor.from_uniform_row_length(values, uniform_row_length=2)\n \n\n ### Multiple Ragged Dimensions\n\n `RaggedTensor`s with multiple ragged dimensions can be defined by using\n a nested `RaggedTensor` for the `values` tensor. Each nested `RaggedTensor`\n adds a single ragged dimension.\n\n >>> inner_rt = RaggedTensor.from_row_splits( # =rt1 from above\n ... values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])\n >>> outer_rt = RaggedTensor.from_row_splits(\n ... values=inner_rt, row_splits=[0, 3, 3, 5])\n >>> print(outer_rt.to_list())\n [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]\n >>> print(outer_rt.ragged_rank)\n 2\n\n The factory function `RaggedTensor.from_nested_row_splits` may be used to\n construct a `RaggedTensor` with multiple ragged dimensions directly, by\n providing a list of `row_splits` tensors:\n\n >>> RaggedTensor.from_nested_row_splits(\n ... flat_values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list()\n [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]\n\n ### Uniform Inner Dimensions\n\n `RaggedTensor`s with uniform inner dimensions can be defined\n by using a multidimensional `Tensor` for `values`.\n\n >>> rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32),\n ... row_splits=[0, 2, 5])\n >>> print(rt.to_list())\n [[[1, 1, 1], [1, 1, 1]],\n [[1, 1, 1], [1, 1, 1], [1, 1, 1]]]\n >>> print(rt.shape)\n (2, None, 3)\n\n ### Uniform Outer Dimensions\n\n `RaggedTensor`s with uniform outer dimensions can be defined by using\n one or more `RaggedTensor` with a `uniform_row_length` row-partitioning\n tensor. For example, a `RaggedTensor` with shape `[2, 2, None]` can be\n constructed with this method from a `RaggedTensor` values with shape\n `[4, None]`:\n\n >>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])\n >>> print(values.shape)\n (4, None)\n >>> rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2)\n >>> print(rt6)\n \n >>> print(rt6.shape)\n (2, 2, None)\n\n Note that `rt6` only contains one ragged dimension (the innermost\n dimension). In contrast, if `from_row_splits` is used to construct a similar\n `RaggedTensor`, then that `RaggedTensor` will have two ragged dimensions:\n\n >>> rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])\n >>> print(rt7.shape)\n (2, None, None)\n\n Uniform and ragged outer dimensions may be interleaved, meaning that a\n tensor with any combination of ragged and uniform dimensions may be created.\n For example, a RaggedTensor `t4` with shape `[3, None, 4, 8, None, 2]` could\n be constructed as follows:\n\n ```python\n t0 = tf.zeros([1000, 2]) # Shape: [1000, 2]\n t1 = RaggedTensor.from_row_lengths(t0, [...]) # [160, None, 2]\n t2 = RaggedTensor.from_uniform_row_length(t1, 8) # [20, 8, None, 2]\n t3 = RaggedTensor.from_uniform_row_length(t2, 4) # [5, 4, 8, None, 2]\n t4 = RaggedTensor.from_row_lengths(t3, [...]) # [3, None, 4, 8, None, 2]\n ```\n\n ", "desc": "Represents a ragged tensor.", "type": "API"}, {"name": "tf.compat.v1.RaggedTensorSpec", "docs": "Type specification for a `tf.RaggedTensor`.", "desc": "Type specification for a `tf.RaggedTensor`.", "type": "API"}, {"name": "tf.compat.v1.random", "docs": "Public API for tf.random namespace.\n", "desc": "Public API for tf.random namespace.", "type": "API"}, {"name": "tf.compat.v1.random.Algorithm", "docs": "An enumeration.", "desc": "An enumeration.", "type": "API"}, {"name": "tf.compat.v1.random.all_candidate_sampler", "docs": "Generate the set of all classes.\n\n Deterministically generates and returns the set of all possible classes.\n For testing purposes. There is no need to use this, since you might as\n well use full softmax or full logistic regression.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of possible classes.\n unique: A `bool`. Ignored.\n unique.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n This operation deterministically returns the entire range\n `[0, num_sampled]`.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`. All returned values are 1.0.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`. All returned values are 1.0.\n ", "desc": "Generate the set of all classes.", "type": "API"}, {"name": "tf.compat.v1.random.categorical", "docs": "Draws samples from a categorical distribution.\n\n Example:\n\n ```python\n # samples has shape [1, 5], where each value is either 0 or 1 with equal\n # probability.\n samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5)\n ```\n\n Args:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n dtype: The integer type of the output: `int32` or `int64`. Defaults to\n `int64`.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for behavior.\n name: Optional name for the operation.\n\n Returns:\n The drawn samples of shape `[batch_size, num_samples]`.\n ", "desc": "Draws samples from a categorical distribution.", "type": "API"}, {"name": "tf.compat.v1.random.create_rng_state", "docs": "Creates a RNG state from an integer or a vector.\n\n Example:\n\n >>> tf.random.create_rng_state(\n ... 1234, \"philox\")\n \n >>> tf.random.create_rng_state(\n ... [12, 34], \"threefry\")\n \n\n Args:\n seed: an integer or 1-D numpy array.\n alg: the RNG algorithm. Can be a string, an `Algorithm` or an integer.\n\n Returns:\n a 1-D numpy array whose size depends on the algorithm.\n ", "desc": "Creates a RNG state from an integer or a vector.", "type": "API"}, {"name": "tf.compat.v1.random.experimental", "docs": "Public API for tf.random.experimental namespace.\n", "desc": "Public API for tf.random.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.Algorithm", "docs": "An enumeration.", "desc": "An enumeration.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.create_rng_state", "docs": "Creates a RNG state from an integer or a vector.\n\n Example:\n\n >>> tf.random.create_rng_state(\n ... 1234, \"philox\")\n \n >>> tf.random.create_rng_state(\n ... [12, 34], \"threefry\")\n \n\n Args:\n seed: an integer or 1-D numpy array.\n alg: the RNG algorithm. Can be a string, an `Algorithm` or an integer.\n\n Returns:\n a 1-D numpy array whose size depends on the algorithm.\n ", "desc": "Creates a RNG state from an integer or a vector.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.Generator", "docs": "Random-number generator.\n\n Example:\n\n Creating a generator from a seed:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.normal(shape=(2, 3))\n \n\n Creating a generator from a non-deterministic state:\n\n >>> g = tf.random.Generator.from_non_deterministic_state()\n >>> g.normal(shape=(2, 3))\n \n\n All the constructors allow explicitly choosing an Random-Number-Generation\n (RNG) algorithm. Supported algorithms are `\"philox\"` and `\"threefry\"`. For\n example:\n\n >>> g = tf.random.Generator.from_seed(123, alg=\"philox\")\n >>> g.normal(shape=(2, 3))\n \n\n CPU, GPU and TPU with the same algorithm and seed will generate the same\n integer random numbers. Float-point results (such as the output of `normal`)\n may have small numerical discrepancies between different devices.\n\n This class uses a `tf.Variable` to manage its internal state. Every time\n random numbers are generated, the state of the generator will change. For\n example:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.state\n \n >>> g.normal(shape=(2, 3))\n <...>\n >>> g.state\n \n\n The shape of the state is algorithm-specific.\n\n There is also a global generator:\n\n >>> g = tf.random.get_global_generator()\n >>> g.normal(shape=(2, 3))\n \n\n When creating a generator inside a `tf.distribute.Strategy` scope, each\n replica will get a different stream of random numbers.\n\n For example, in this code:\n\n ```\n strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n with strat.scope():\n g = tf.random.Generator.from_seed(1)\n def f():\n return g.normal([])\n results = strat.run(f).values\n ```\n\n `results[0]` and `results[1]` will have different values.\n\n If the generator is seeded (e.g. created via `Generator.from_seed`), the\n random numbers will be determined by the seed, even though different replicas\n get different numbers. One can think of a random number generated on a\n replica as a hash of the replica ID and a \"master\" random number that may be\n common to all replicas. Hence, the whole system is still deterministic.\n\n (Note that the random numbers on different replicas are not correlated, even\n if they are deterministically determined by the same seed. They are not\n correlated in the sense that no matter what statistics one calculates on them,\n there won't be any discernable correlation.)\n\n Generators can be freely saved and restored using `tf.train.Checkpoint`. The\n checkpoint can be restored in a distribution strategy with a different number\n of replicas than the original strategy. If a replica ID is present in both the\n original and the new distribution strategy, its state will be properly\n restored (i.e. the random-number stream from the restored point will be the\n same as that from the saving point) unless the replicas have already diverged\n in their RNG call traces before saving (e.g. one replica has made one RNG call\n while another has made two RNG calls). We don't have such guarantee if the\n generator is saved in a strategy scope and restored outside of any strategy\n scope, or vice versa.\n\n When a generator is created within the scope of\n `tf.distribute.experimental.ParameterServerStrategy`, the workers\n will share the generator's state (placed on one of the parameter\n servers). In this way the workers will still get different\n random-number streams, as stated above. (This is similar to replicas\n in a `tf.distribute.MirroredStrategy` sequentially accessing a\n generator created outside the strategy.) Each RNG call on a worker\n will incur a round-trip to a parameter server, which may have\n performance impacts. When creating a\n `tf.distribute.experimental.ParameterServerStrategy`, please make\n sure that the `variable_partitioner` argument won't shard small\n variables of shape `[2]` or `[3]` (because generator states must not\n be sharded). Ways to avoid sharding small variables include setting\n `variable_partitioner` to `None` or to\n `tf.distribute.experimental.partitioners.MinSizePartitioner` with a\n large enough `min_shard_bytes` (see\n `tf.distribute.experimental.ParameterServerStrategy`'s documentation\n for more details).\n ", "desc": "Random-number generator.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.get_global_generator", "docs": "Retrieves the global generator.\n\n This function will create the global generator the first time it is called,\n and the generator will be placed at the default device at that time, so one\n needs to be careful when this function is first called. Using a generator\n placed on a less-ideal device will incur performance regression.\n\n Returns:\n The global `tf.random.Generator` object.\n ", "desc": "Retrieves the global generator.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.set_global_generator", "docs": "Replaces the global generator with another `Generator` object.\n\n This function replaces the global generator with the provided `generator`\n object.\n A random number generator utilizes a `tf.Variable` object to store its state.\n The user shall be aware of caveats how `set_global_generator` interacts with\n `tf.function`:\n\n - tf.function puts restrictions on Variable creation thus one cannot freely\n create a new random generator instance inside `tf.function`.\n To call `set_global_generator` inside `tf.function`, the generator instance\n must have already been created eagerly.\n - tf.function captures the Variable during trace-compilation, thus a compiled\n f.function will not be affected `set_global_generator` as demonstrated by\n random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun .\n\n For most use cases, avoid calling `set_global_generator` after program\n initialization, and prefer to reset the state of the existing global generator\n instead, such as,\n\n >>> rng = tf.random.get_global_generator()\n >>> rng.reset_from_seed(30)\n\n\n Args:\n generator: the new `Generator` object.\n ", "desc": "Replaces the global generator with another `Generator` object.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.stateless_fold_in", "docs": "Folds in data to an RNG seed to form a new RNG seed.\n\n For example, in a distributed-training setting, suppose we have a master seed\n and a replica ID. We want to fold the replica ID into the master seed to\n form a \"replica seed\" to be used by that replica later on, so that different\n replicas will generate different random numbers but the reproducibility of the\n whole system can still be controlled by the master seed:\n\n >>> master_seed = [1, 2]\n >>> replica_id = 3\n >>> replica_seed = tf.random.experimental.stateless_fold_in(\n ... master_seed, replica_id)\n >>> print(replica_seed)\n tf.Tensor([1105988140 3], shape=(2,), dtype=int32)\n >>> tf.random.stateless_normal(shape=[3], seed=replica_seed)\n \n\n Args:\n seed: an RNG seed (a tensor with shape [2] and dtype `int32` or\n `int64`). (When using XLA, only `int32` is allowed.)\n data: an `int32` or `int64` scalar representing data to be folded in to the\n seed.\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A new RNG seed that is a deterministic function of the inputs and is\n statistically safe for producing a stream of new pseudo-random values. It\n will have the same dtype as `data` (if `data` doesn't have an explict dtype,\n the dtype will be determined by `tf.convert_to_tensor`).\n ", "desc": "Folds in data to an RNG seed to form a new RNG seed.", "type": "API"}, {"name": "tf.compat.v1.random.experimental.stateless_split", "docs": "Splits an RNG seed into `num` new seeds by adding a leading axis.\n\n Example:\n\n >>> seed = [1, 2]\n >>> new_seeds = tf.random.experimental.stateless_split(seed, num=3)\n >>> print(new_seeds)\n tf.Tensor(\n [[1105988140 1738052849]\n [-335576002 370444179]\n [ 10670227 -246211131]], shape=(3, 2), dtype=int32)\n >>> tf.random.stateless_normal(shape=[3], seed=new_seeds[0, :])\n \n\n Args:\n seed: an RNG seed (a tensor with shape [2] and dtype `int32` or\n `int64`). (When using XLA, only `int32` is allowed.)\n num: optional, a positive integer or scalar tensor indicating the number of\n seeds to produce (default 2).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor with shape [num, 2] representing `num` new seeds. It will have the\n same dtype as `seed` (if `seed` doesn't have an explict dtype, the dtype\n will be determined by `tf.convert_to_tensor`).\n ", "desc": "Splits an RNG seed into `num` new seeds by adding a leading axis.", "type": "API"}, {"name": "tf.compat.v1.random.fixed_unigram_candidate_sampler", "docs": "Samples a set of classes using the provided (fixed) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution is read from a file or passed in as an\n in-memory array. There is also an option to skew the distribution by\n applying a distortion power to the weights.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n vocab_file: Each valid line in this file (which should have a CSV-like\n format) corresponds to a valid word ID. IDs are in sequential order,\n starting from num_reserved_ids. The last entry in each line is expected\n to be a value corresponding to the count or relative probability. Exactly\n one of `vocab_file` and `unigrams` needs to be passed to this operation.\n distortion: The distortion is used to skew the unigram probability\n distribution. Each weight is first raised to the distortion's power\n before adding to the internal unigram distribution. As a result,\n `distortion = 1.0` gives regular unigram sampling (as defined by the vocab\n file), and `distortion = 0.0` gives a uniform distribution.\n num_reserved_ids: Optionally some reserved IDs can be added in the range\n `[0, num_reserved_ids)` by the users. One use case is that a special\n unknown word token is used as ID 0. These IDs will have a sampling\n probability of 0.\n num_shards: A sampler can be used to sample from a subset of the original\n range in order to speed up the whole computation through parallelism. This\n parameter (together with `shard`) indicates the number of partitions that\n are being used in the overall computation.\n shard: A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This\n parameter (together with `num_shards`) indicates the particular partition\n number of the operation, when partitioning is being used.\n unigrams: A list of unigram counts or probabilities, one per ID in\n sequential order. Exactly one of `vocab_file` and `unigrams` should be\n passed to this operation.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes using the provided (fixed) base distribution.", "type": "API"}, {"name": "tf.compat.v1.random.gamma", "docs": "Draws `shape` samples from each of the given Gamma distribution(s).\n\n `alpha` is the shape parameter describing the distribution(s), and `beta` is\n the inverse scale parameter(s).\n\n Note: Because internal calculations are done using `float64` and casting has\n `floor` semantics, we must manually map zero outcomes to the smallest\n possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This\n means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise\n should. This bias can only happen for small values of `alpha`, i.e.,\n `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.\n\n The samples are differentiable w.r.t. alpha and beta.\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n Example:\n\n ```python\n samples = tf.random.gamma([10], [0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.gamma([7, 5], [0.5, 1.5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n alpha = tf.constant([[1.],[3.],[5.]])\n beta = tf.constant([[3., 4.]])\n samples = tf.random.gamma([30], alpha=alpha, beta=beta)\n # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.\n\n loss = tf.reduce_mean(tf.square(samples))\n dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta])\n # unbiased stochastic derivatives of the loss function\n alpha.shape == dloss_dalpha.shape # True\n beta.shape == dloss_dbeta.shape # True\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per alpha/beta-parameterized distribution.\n alpha: A Tensor or Python value or N-D array of type `dtype`. `alpha`\n provides the shape parameter(s) describing the gamma distribution(s) to\n sample. Must be broadcastable with `beta`.\n beta: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.\n `beta` provides the inverse scale parameter(s) of the gamma\n distribution(s) to sample. Must be broadcastable with `alpha`.\n dtype: The type of alpha, beta, and the output: `float16`, `float32`, or\n `float64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape\n `tf.concat([shape, tf.shape(alpha + beta)], axis=0)` with values of type\n `dtype`.\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Draws `shape` samples from each of the given Gamma distribution(s).", "type": "API"}, {"name": "tf.compat.v1.random.Generator", "docs": "Random-number generator.\n\n Example:\n\n Creating a generator from a seed:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.normal(shape=(2, 3))\n \n\n Creating a generator from a non-deterministic state:\n\n >>> g = tf.random.Generator.from_non_deterministic_state()\n >>> g.normal(shape=(2, 3))\n \n\n All the constructors allow explicitly choosing an Random-Number-Generation\n (RNG) algorithm. Supported algorithms are `\"philox\"` and `\"threefry\"`. For\n example:\n\n >>> g = tf.random.Generator.from_seed(123, alg=\"philox\")\n >>> g.normal(shape=(2, 3))\n \n\n CPU, GPU and TPU with the same algorithm and seed will generate the same\n integer random numbers. Float-point results (such as the output of `normal`)\n may have small numerical discrepancies between different devices.\n\n This class uses a `tf.Variable` to manage its internal state. Every time\n random numbers are generated, the state of the generator will change. For\n example:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.state\n \n >>> g.normal(shape=(2, 3))\n <...>\n >>> g.state\n \n\n The shape of the state is algorithm-specific.\n\n There is also a global generator:\n\n >>> g = tf.random.get_global_generator()\n >>> g.normal(shape=(2, 3))\n \n\n When creating a generator inside a `tf.distribute.Strategy` scope, each\n replica will get a different stream of random numbers.\n\n For example, in this code:\n\n ```\n strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n with strat.scope():\n g = tf.random.Generator.from_seed(1)\n def f():\n return g.normal([])\n results = strat.run(f).values\n ```\n\n `results[0]` and `results[1]` will have different values.\n\n If the generator is seeded (e.g. created via `Generator.from_seed`), the\n random numbers will be determined by the seed, even though different replicas\n get different numbers. One can think of a random number generated on a\n replica as a hash of the replica ID and a \"master\" random number that may be\n common to all replicas. Hence, the whole system is still deterministic.\n\n (Note that the random numbers on different replicas are not correlated, even\n if they are deterministically determined by the same seed. They are not\n correlated in the sense that no matter what statistics one calculates on them,\n there won't be any discernable correlation.)\n\n Generators can be freely saved and restored using `tf.train.Checkpoint`. The\n checkpoint can be restored in a distribution strategy with a different number\n of replicas than the original strategy. If a replica ID is present in both the\n original and the new distribution strategy, its state will be properly\n restored (i.e. the random-number stream from the restored point will be the\n same as that from the saving point) unless the replicas have already diverged\n in their RNG call traces before saving (e.g. one replica has made one RNG call\n while another has made two RNG calls). We don't have such guarantee if the\n generator is saved in a strategy scope and restored outside of any strategy\n scope, or vice versa.\n\n When a generator is created within the scope of\n `tf.distribute.experimental.ParameterServerStrategy`, the workers\n will share the generator's state (placed on one of the parameter\n servers). In this way the workers will still get different\n random-number streams, as stated above. (This is similar to replicas\n in a `tf.distribute.MirroredStrategy` sequentially accessing a\n generator created outside the strategy.) Each RNG call on a worker\n will incur a round-trip to a parameter server, which may have\n performance impacts. When creating a\n `tf.distribute.experimental.ParameterServerStrategy`, please make\n sure that the `variable_partitioner` argument won't shard small\n variables of shape `[2]` or `[3]` (because generator states must not\n be sharded). Ways to avoid sharding small variables include setting\n `variable_partitioner` to `None` or to\n `tf.distribute.experimental.partitioners.MinSizePartitioner` with a\n large enough `min_shard_bytes` (see\n `tf.distribute.experimental.ParameterServerStrategy`'s documentation\n for more details).\n ", "desc": "Random-number generator.", "type": "API"}, {"name": "tf.compat.v1.random.get_global_generator", "docs": "Retrieves the global generator.\n\n This function will create the global generator the first time it is called,\n and the generator will be placed at the default device at that time, so one\n needs to be careful when this function is first called. Using a generator\n placed on a less-ideal device will incur performance regression.\n\n Returns:\n The global `tf.random.Generator` object.\n ", "desc": "Retrieves the global generator.", "type": "API"}, {"name": "tf.compat.v1.random.get_seed", "docs": "Returns the local seeds an operation should use given an op-specific seed.\n\n Given operation-specific seed, `op_seed`, this helper function returns two\n seeds derived from graph-level and op-level seeds. Many random operations\n internally use the two seeds to allow user to change the seed globally for a\n graph, or for only specific operations.\n\n For details on how the graph-level seed interacts with op seeds, see\n `tf.compat.v1.random.set_random_seed`.\n\n Args:\n op_seed: integer.\n\n Returns:\n A tuple of two integers that should be used for the local seed of this\n operation.\n ", "desc": "Returns the local seeds an operation should use given an op-specific seed.", "type": "API"}, {"name": "tf.compat.v1.random.learned_unigram_candidate_sampler", "docs": "Samples a set of classes from a distribution learned during training.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is constructed on the fly\n during training. It is a unigram distribution over the target\n classes seen so far during training. Every integer in `[0, range_max)`\n begins with a weight of 1, and is incremented by 1 each time it is\n seen as a target class. The base distribution is not saved to checkpoints,\n so it is reset when the model is reloaded.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes from a distribution learned during training.", "type": "API"}, {"name": "tf.compat.v1.random.log_uniform_candidate_sampler", "docs": "Samples a set of classes using a log-uniform (Zipfian) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is an approximately log-uniform\n or Zipfian distribution:\n\n `P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`\n\n This sampler is useful when the target classes approximately follow such\n a distribution - for example, if the classes represent words in a lexicon\n sorted in decreasing order of frequency. If your classes are not ordered by\n decreasing frequency, do not use this op.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a log-uniform (Zipfian) base distribution.", "type": "API"}, {"name": "tf.compat.v1.random.multinomial", "docs": "Draws samples from a multinomial distribution. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.random.categorical` instead.\n\nExample:\n\n```python\n# samples has shape [1, 5], where each value is either 0 or 1 with equal\n# probability.\nsamples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5)\n```\n\nArgs:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for behavior.\n name: Optional name for the operation.\n output_dtype: The integer type of the output: `int32` or `int64`. Defaults\n to `int64`.\n\nReturns:\n The drawn samples of shape `[batch_size, num_samples]`.", "desc": "Draws samples from a multinomial distribution. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.random.normal", "docs": "Outputs random values from a normal distribution.\n\n Example that generates a new set of random values every time:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([4], 0, 1, tf.float32)\n \n\n Example that outputs a reproducible result:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([2,2], 0, 1, tf.float32, seed=1)\n \n\n In this case, we are setting both the global and operation-level seed to\n ensure this result is reproducible. See `tf.random.set_seed` for more\n information.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A Tensor or Python value of type `dtype`, broadcastable with `stddev`.\n The mean of the normal distribution.\n stddev: A Tensor or Python value of type `dtype`, broadcastable with `mean`.\n The standard deviation of the normal distribution.\n dtype: The float type of the output: `float16`, `bfloat16`, `float32`,\n `float64`. Defaults to `float32`.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random normal values.\n ", "desc": "Outputs random values from a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random.poisson", "docs": "Draws `shape` samples from each of the given Poisson distribution(s).\n\n `lam` is the rate parameter describing the distribution(s).\n\n Example:\n\n ```python\n samples = tf.random.poisson([0.5, 1.5], [10])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.poisson([12.2, 3.3], [7, 5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n ```\n\n Args:\n lam: A Tensor or Python value or N-D array of type `dtype`.\n `lam` provides the rate parameter(s) describing the poisson\n distribution(s) to sample.\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per \"rate\"-parameterized distribution.\n dtype: The type of the output: `float16`, `float32`, `float64`, `int32` or\n `int64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)`\n with values of type `dtype`.\n ", "desc": "Draws `shape` samples from each of the given Poisson distribution(s).", "type": "API"}, {"name": "tf.compat.v1.random.set_global_generator", "docs": "Replaces the global generator with another `Generator` object.\n\n This function replaces the global generator with the provided `generator`\n object.\n A random number generator utilizes a `tf.Variable` object to store its state.\n The user shall be aware of caveats how `set_global_generator` interacts with\n `tf.function`:\n\n - tf.function puts restrictions on Variable creation thus one cannot freely\n create a new random generator instance inside `tf.function`.\n To call `set_global_generator` inside `tf.function`, the generator instance\n must have already been created eagerly.\n - tf.function captures the Variable during trace-compilation, thus a compiled\n f.function will not be affected `set_global_generator` as demonstrated by\n random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun .\n\n For most use cases, avoid calling `set_global_generator` after program\n initialization, and prefer to reset the state of the existing global generator\n instead, such as,\n\n >>> rng = tf.random.get_global_generator()\n >>> rng.reset_from_seed(30)\n\n\n Args:\n generator: the new `Generator` object.\n ", "desc": "Replaces the global generator with another `Generator` object.", "type": "API"}, {"name": "tf.compat.v1.random.set_random_seed", "docs": "Sets the graph-level random seed for the default graph.\n\n Operations that rely on a random seed actually derive it from two seeds:\n the graph-level and operation-level seeds. This sets the graph-level seed.\n\n Its interactions with operation-level seeds is as follows:\n\n 1. If neither the graph-level nor the operation seed is set:\n A random seed is used for this op.\n 2. If the graph-level seed is set, but the operation seed is not:\n The system deterministically picks an operation seed in conjunction with\n the graph-level seed so that it gets a unique random sequence. Within the\n same version of tensorflow and user code, this sequence is deterministic.\n However across different versions, this sequence might change. If the\n code depends on particular seeds to work, specify both graph-level\n and operation-level seeds explicitly.\n 3. If the graph-level seed is not set, but the operation seed is set:\n A default graph-level seed and the specified operation seed are used to\n determine the random sequence.\n 4. If both the graph-level and the operation seed are set:\n Both seeds are used in conjunction to determine the random sequence.\n\n To illustrate the user-visible effects, consider these examples:\n\n To generate different sequences across sessions, set neither\n graph-level nor op-level seeds:\n\n ```python\n a = tf.random.uniform([1])\n b = tf.random.normal([1])\n\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A3'\n print(sess2.run(a)) # generates 'A4'\n print(sess2.run(b)) # generates 'B3'\n print(sess2.run(b)) # generates 'B4'\n ```\n\n To generate the same repeatable sequence for an op across sessions, set the\n seed for the op:\n\n ```python\n a = tf.random.uniform([1], seed=1)\n b = tf.random.normal([1])\n\n # Repeatedly running this block with the same graph will generate the same\n # sequence of values for 'a', but different sequences of values for 'b'.\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A1'\n print(sess2.run(a)) # generates 'A2'\n print(sess2.run(b)) # generates 'B3'\n print(sess2.run(b)) # generates 'B4'\n ```\n\n To make the random sequences generated by all ops be repeatable across\n sessions, set a graph-level seed:\n\n ```python\n tf.compat.v1.random.set_random_seed(1234)\n a = tf.random.uniform([1])\n b = tf.random.normal([1])\n\n # Repeatedly running this block with the same graph will generate the same\n # sequences of 'a' and 'b'.\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A1'\n print(sess2.run(a)) # generates 'A2'\n print(sess2.run(b)) # generates 'B1'\n print(sess2.run(b)) # generates 'B2'\n ```\n\n @compatibility(TF2)\n 'tf.compat.v1.set_random_seed' is compatible with eager mode. However,\n in eager mode this API will set the global seed instead of the\n graph-level seed of the default graph. In TF2 this API is changed to\n [tf.random.set_seed]\n (https://www.tensorflow.org/api_docs/python/tf/random/set_seed).\n @end_compatibility\n\n Args:\n seed: integer.\n ", "desc": "Sets the graph-level random seed for the default graph.", "type": "API"}, {"name": "tf.compat.v1.random.shuffle", "docs": "Randomly shuffles a tensor along its first dimension.\n\n The tensor is shuffled along dimension 0, such that each `value[j]` is mapped\n to one and only one `output[i]`. For example, a mapping that might occur for a\n 3x2 tensor is:\n\n ```python\n [[1, 2], [[5, 6],\n [3, 4], ==> [1, 2],\n [5, 6]] [3, 4]]\n ```\n\n Args:\n value: A Tensor to be shuffled.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of same shape and type as `value`, shuffled along its first\n dimension.\n ", "desc": "Randomly shuffles a tensor along its first dimension.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_binomial", "docs": "Outputs deterministic pseudorandom values from a binomial distribution.\n\n The generated values follow a binomial distribution with specified count and\n probability of success parameters.\n\n This is a stateless version of `tf.random.Generator.binomial`: if run twice\n with the same seeds and shapes, it will produce the same pseudorandom numbers.\n The output is consistent across multiple runs on the same hardware (and\n between CPU and GPU), but may change between versions of TensorFlow or on\n non-CPU/GPU hardware.\n\n Example:\n\n ```python\n counts = [10., 20.]\n # Probability of success.\n probs = [0.8]\n\n binomial_samples = tf.random.stateless_binomial(\n shape=[2], seed=[123, 456], counts=counts, probs=probs)\n\n counts = ... # Shape [3, 1, 2]\n probs = ... # Shape [1, 4, 2]\n shape = [3, 4, 3, 4, 2]\n # Sample shape will be [3, 4, 3, 4, 2]\n binomial_samples = tf.random.stateless_binomial(\n shape=shape, seed=[123, 456], counts=counts, probs=probs)\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n counts: Tensor. The counts of the binomial distribution. Must be\n broadcastable with `probs`, and broadcastable with the rightmost\n dimensions of `shape`.\n probs: Tensor. The probability of success for the binomial distribution.\n Must be broadcastable with `counts` and broadcastable with the rightmost\n dimensions of `shape`.\n output_dtype: The type of the output. Default: tf.int32\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random binomial\n values. For each i, each samples[..., i] is an independent draw from\n the binomial distribution on counts[i] trials with probability of\n success probs[i].\n\n ", "desc": "Outputs deterministic pseudorandom values from a binomial distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_categorical", "docs": "Draws deterministic pseudorandom samples from a categorical distribution.\n\n This is a stateless version of `tf.categorical`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n\n Example:\n\n ```python\n # samples has shape [1, 5], where each value is either 0 or 1 with equal\n # probability.\n samples = tf.random.stateless_categorical(\n tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])\n ```\n\n Args:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n dtype: The integer type of the output: `int32` or `int64`. Defaults to\n `int64`.\n name: Optional name for the operation.\n\n Returns:\n The drawn samples of shape `[batch_size, num_samples]`.\n ", "desc": "Draws deterministic pseudorandom samples from a categorical distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_gamma", "docs": "Outputs deterministic pseudorandom values from a gamma distribution.\n\n The generated values follow a gamma distribution with specified concentration\n (`alpha`) and inverse scale (`beta`) parameters.\n\n This is a stateless version of `tf.random.gamma`: if run twice with the same\n seeds and shapes, it will produce the same pseudorandom numbers. The output is\n consistent across multiple runs on the same hardware (and between CPU and\n GPU),\n but may change between versions of TensorFlow or on non-CPU/GPU hardware.\n\n A slight difference exists in the interpretation of the `shape` parameter\n between `stateless_gamma` and `gamma`: in `gamma`, the `shape` is always\n prepended to the shape of the broadcast of `alpha` with `beta`; whereas in\n `stateless_gamma` the `shape` parameter must always encompass the shapes of\n each of `alpha` and `beta` (which must broadcast together to match the\n trailing dimensions of `shape`).\n\n Note: Because internal calculations are done using `float64` and casting has\n `floor` semantics, we must manually map zero outcomes to the smallest\n possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This\n means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise\n should. This bias can only happen for small values of `alpha`, i.e.,\n `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.\n\n The samples are differentiable w.r.t. alpha and beta.\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n Example:\n\n ```python\n samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n alpha = tf.constant([[1.], [3.], [5.]])\n beta = tf.constant([[3., 4.]])\n samples = tf.random.stateless_gamma(\n [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)\n # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.\n\n with tf.GradientTape() as tape:\n tape.watch([alpha, beta])\n loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma(\n [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)))\n dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta])\n # unbiased stochastic derivatives of the loss function\n alpha.shape == dloss_dalpha.shape # True\n beta.shape == dloss_dbeta.shape # True\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n alpha: Tensor. The concentration parameter of the gamma distribution. Must\n be broadcastable with `beta`, and broadcastable with the rightmost\n dimensions of `shape`.\n beta: Tensor. The inverse scale parameter of the gamma distribution. Must be\n broadcastable with `alpha` and broadcastable with the rightmost dimensions\n of `shape`.\n dtype: Floating point dtype of `alpha`, `beta`, and the output.\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random gamma values.\n For each i, each `samples[..., i] is an independent draw from the gamma\n distribution with concentration alpha[i] and scale beta[i].\n\n ", "desc": "Outputs deterministic pseudorandom values from a gamma distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_multinomial", "docs": "Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.random.stateless_categorical` instead.\n\nThis is a stateless version of `tf.random.categorical`: if run twice with the\nsame seeds and shapes, it will produce the same pseudorandom numbers. The\noutput is consistent across multiple runs on the same hardware (and between\nCPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\nhardware.\n\nExample:\n\n```python\n# samples has shape [1, 5], where each value is either 0 or 1 with equal\n# probability.\nsamples = tf.random.stateless_categorical(\n tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])\n```\n\nArgs:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n output_dtype: The integer type of the output: `int32` or `int64`. Defaults\n to `int64`.\n name: Optional name for the operation.\n\nReturns:\n The drawn samples of shape `[batch_size, num_samples]`.", "desc": "Draws deterministic pseudorandom samples from a multinomial distribution. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.random.stateless_normal", "docs": "Outputs deterministic pseudorandom values from a normal distribution.\n\n This is a stateless version of `tf.random.normal`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal\n distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution.\n dtype: The float type of the output: `float16`, `bfloat16`, `float32`,\n `float64`. Defaults to `float32`.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor of the specified shape filled with random normal values.\n ", "desc": "Outputs deterministic pseudorandom values from a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_parameterized_truncated_normal", "docs": "Outputs random values from a truncated normal distribution.\n\n The generated values follow a normal distribution with specified mean and\n standard deviation, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n\n Examples:\n\n Sample from a Truncated normal, with deferring shape parameters that\n broadcast.\n\n >>> means = 0.\n >>> stddevs = tf.math.exp(tf.random.uniform(shape=[2, 3]))\n >>> minvals = [-1., -2., -1000.]\n >>> maxvals = [[10000.], [1.]]\n >>> y = tf.random.stateless_parameterized_truncated_normal(\n ... shape=[10, 2, 3], seed=[7, 17],\n ... means=means, stddevs=stddevs, minvals=minvals, maxvals=maxvals)\n >>> y.shape\n TensorShape([10, 2, 3])\n\n Args:\n shape: A 1-D integer `Tensor` or Python array. The shape of the output\n tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n means: A `Tensor` or Python value of type `dtype`. The mean of the truncated\n normal distribution. This must broadcast with `stddevs`, `minvals` and\n `maxvals`, and the broadcasted shape must be dominated by `shape`.\n stddevs: A `Tensor` or Python value of type `dtype`. The standard deviation\n of the truncated normal distribution. This must broadcast with `means`,\n `minvals` and `maxvals`, and the broadcasted shape must be dominated by\n `shape`.\n minvals: A `Tensor` or Python value of type `dtype`. The minimum value of\n the truncated normal distribution. This must broadcast with `means`,\n `stddevs` and `maxvals`, and the broadcasted shape must be dominated by\n `shape`.\n maxvals: A `Tensor` or Python value of type `dtype`. The maximum value of\n the truncated normal distribution. This must broadcast with `means`,\n `stddevs` and `minvals`, and the broadcasted shape must be dominated by\n `shape`.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_poisson", "docs": "Outputs deterministic pseudorandom values from a Poisson distribution.\n\n The generated values follow a Poisson distribution with specified rate\n parameter.\n\n This is a stateless version of `tf.random.poisson`: if run twice with the same\n seeds and shapes, it will produce the same pseudorandom numbers. The output is\n consistent across multiple runs on the same hardware, but may change between\n versions of TensorFlow or on non-CPU/GPU hardware.\n\n A slight difference exists in the interpretation of the `shape` parameter\n between `stateless_poisson` and `poisson`: in `poisson`, the `shape` is always\n prepended to the shape of `lam`; whereas in `stateless_poisson` the shape of\n `lam` must match the trailing dimensions of `shape`.\n\n Example:\n\n ```python\n samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n rate = tf.constant([[1.], [3.], [5.]])\n samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate)\n # samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions.\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n lam: Tensor. The rate parameter \"lambda\" of the Poisson distribution. Shape\n must match the rightmost dimensions of `shape`.\n dtype: Dtype of the samples (int or float dtypes are permissible, as samples\n are discrete). Default: int32.\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random Poisson values.\n For each i, each `samples[..., i]` is an independent draw from the Poisson\n distribution with rate `lam[i]`.\n\n ", "desc": "Outputs deterministic pseudorandom values from a Poisson distribution.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_truncated_normal", "docs": "Outputs deterministic pseudorandom values, truncated normally distributed.\n\n This is a stateless version of `tf.random.truncated_normal`: if run twice with\n the same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n The generated values follow a normal distribution with specified mean and\n standard deviation, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the\n truncated normal distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution, before truncation.\n dtype: The type of the output.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs deterministic pseudorandom values, truncated normally distributed.", "type": "API"}, {"name": "tf.compat.v1.random.stateless_uniform", "docs": "Outputs deterministic pseudorandom values from a uniform distribution.\n\n This is a stateless version of `tf.random.uniform`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n The generated values follow a uniform distribution in the range\n `[minval, maxval)`. The lower bound `minval` is included in the range, while\n the upper bound `maxval` is excluded.\n\n For floats, the default range is `[0, 1)`. For ints, at least `maxval` must\n be specified explicitly.\n\n In the integer case, the random integers are slightly biased unless\n `maxval - minval` is an exact power of two. The bias is small for values of\n `maxval - minval` significantly smaller than the range of the output (either\n `2**32` or `2**64`).\n\n For full-range (i.e. inclusive of both max and min) random integers, pass\n `minval=None` and `maxval=None` with an integer `dtype`. For an integer dtype\n either both `minval` and `maxval` must be `None` or neither may be `None`. For\n example:\n ```python\n ints = tf.random.stateless_uniform(\n [10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32)\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n minval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The lower bound on the range of random values to\n generate. Pass `None` for full-range integers. Defaults to 0.\n maxval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The upper bound on the range of random values to generate.\n Defaults to 1 if `dtype` is floating point. Pass `None` for full-range\n integers.\n dtype: The type of the output: `float16`, `bfloat16`, `float32`, `float64`,\n `int32`, or `int64`. For unbounded uniform ints (`minval`, `maxval` both\n `None`), `uint32` and `uint64` may be used. Defaults to `float32`.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. Valid\n choices are `\"philox\"` for [the Philox\n algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf),\n `\"threefry\"` for [the ThreeFry\n algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf),\n and `\"auto_select\"` (default) for the system to automatically\n select an algorithm based the device type. Values of\n `tf.random.Algorithm` can also be used. Note that with\n `\"auto_select\"`, the outputs of this function may change when\n it is running on a different device.\n\n Returns:\n A tensor of the specified shape filled with random uniform values.\n\n Raises:\n ValueError: If `dtype` is integral and only one of `minval` or `maxval` is\n specified.\n ", "desc": "Outputs deterministic pseudorandom values from a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.random.truncated_normal", "docs": "Outputs random values from a truncated normal distribution.\n\n The values are drawn from a normal distribution with specified mean and\n standard deviation, discarding and re-drawing any samples that are more than\n two standard deviations from the mean.\n\n Examples:\n\n >>> tf.random.truncated_normal(shape=[2])\n \n\n >>> tf.random.truncated_normal(shape=[2], mean=3, stddev=1, dtype=tf.float32)\n \n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the\n truncated normal distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution, before truncation.\n dtype: The type of the output. Restricted to floating-point types:\n `tf.half`, `tf.float`, `tf.double`, etc.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for more information.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random.uniform", "docs": "Outputs random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range\n `[minval, maxval)`. The lower bound `minval` is included in the range, while\n the upper bound `maxval` is excluded.\n\n For floats, the default range is `[0, 1)`. For ints, at least `maxval` must\n be specified explicitly.\n\n In the integer case, the random integers are slightly biased unless\n `maxval - minval` is an exact power of two. The bias is small for values of\n `maxval - minval` significantly smaller than the range of the output (either\n `2**32` or `2**64`).\n\n Examples:\n\n >>> tf.random.uniform(shape=[2])\n \n >>> tf.random.uniform(shape=[], minval=-1., maxval=0.)\n \n >>> tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64)\n \n\n The `seed` argument produces a deterministic sequence of tensors across\n multiple calls. To repeat that sequence, use `tf.random.set_seed`:\n\n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n\n Without `tf.random.set_seed` but with a `seed` argument is specified, small\n changes to function graphs or previously executed operations will change the\n returned value. See `tf.random.set_seed` for details.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n minval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The lower bound on the range of random values to generate\n (inclusive). Defaults to 0.\n maxval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The upper bound on the range of random values to generate\n (exclusive). Defaults to 1 if `dtype` is floating point.\n dtype: The type of the output: `float16`, `bfloat16`, `float32`, `float64`,\n `int32`, or `int64`. Defaults to `float32`.\n seed: A Python integer. Used in combination with `tf.random.set_seed` to\n create a reproducible sequence of tensors across multiple calls.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random uniform values.\n\n Raises:\n ValueError: If `dtype` is integral and `maxval` is not specified.\n ", "desc": "Outputs random values from a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.random.uniform_candidate_sampler", "docs": "Samples a set of classes using a uniform base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is the uniform distribution\n over the range of integers `[0, range_max)`.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample. The\n `sampled_candidates` return value will have shape `[num_sampled]`. If\n `unique=True`, `num_sampled` must be less than or equal to `range_max`.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. The\n sampled classes, either with possible duplicates (`unique=False`) or all\n unique (`unique=True`). In either case, `sampled_candidates` is\n independent of the true classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a uniform base distribution.", "type": "API"}, {"name": "tf.compat.v1.random_crop", "docs": "Randomly crops a tensor to a given size.\n\n Slices a shape `size` portion out of `value` at a uniformly chosen offset.\n Requires `value.shape >= size`.\n\n If a dimension should not be cropped, pass the full size of that dimension.\n For example, RGB images can be cropped with\n `size = [crop_height, crop_width, 3]`.\n\n Example usage:\n\n >>> image = [[1, 2, 3], [4, 5, 6]]\n >>> result = tf.image.random_crop(value=image, size=(1, 3))\n >>> result.shape.as_list()\n [1, 3]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_crop`. Unlike using the `seed` param with\n `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same\n results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n value: Input tensor to crop.\n size: 1-D tensor with size the rank of `value`.\n seed: Python integer. Used to create a random seed. See\n `tf.random.set_seed`\n for behavior.\n name: A name for this operation (optional).\n\n Returns:\n A cropped tensor of the same rank as `value` and shape `size`.\n ", "desc": "Randomly crops a tensor to a given size.", "type": "API"}, {"name": "tf.compat.v1.random_gamma", "docs": "Draws `shape` samples from each of the given Gamma distribution(s).\n\n `alpha` is the shape parameter describing the distribution(s), and `beta` is\n the inverse scale parameter(s).\n\n Note: Because internal calculations are done using `float64` and casting has\n `floor` semantics, we must manually map zero outcomes to the smallest\n possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This\n means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise\n should. This bias can only happen for small values of `alpha`, i.e.,\n `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.\n\n The samples are differentiable w.r.t. alpha and beta.\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n Example:\n\n ```python\n samples = tf.random.gamma([10], [0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.gamma([7, 5], [0.5, 1.5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n alpha = tf.constant([[1.],[3.],[5.]])\n beta = tf.constant([[3., 4.]])\n samples = tf.random.gamma([30], alpha=alpha, beta=beta)\n # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.\n\n loss = tf.reduce_mean(tf.square(samples))\n dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta])\n # unbiased stochastic derivatives of the loss function\n alpha.shape == dloss_dalpha.shape # True\n beta.shape == dloss_dbeta.shape # True\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per alpha/beta-parameterized distribution.\n alpha: A Tensor or Python value or N-D array of type `dtype`. `alpha`\n provides the shape parameter(s) describing the gamma distribution(s) to\n sample. Must be broadcastable with `beta`.\n beta: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.\n `beta` provides the inverse scale parameter(s) of the gamma\n distribution(s) to sample. Must be broadcastable with `alpha`.\n dtype: The type of alpha, beta, and the output: `float16`, `float32`, or\n `float64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape\n `tf.concat([shape, tf.shape(alpha + beta)], axis=0)` with values of type\n `dtype`.\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Draws `shape` samples from each of the given Gamma distribution(s).", "type": "API"}, {"name": "tf.compat.v1.random_normal", "docs": "Outputs random values from a normal distribution.\n\n Example that generates a new set of random values every time:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([4], 0, 1, tf.float32)\n \n\n Example that outputs a reproducible result:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([2,2], 0, 1, tf.float32, seed=1)\n \n\n In this case, we are setting both the global and operation-level seed to\n ensure this result is reproducible. See `tf.random.set_seed` for more\n information.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A Tensor or Python value of type `dtype`, broadcastable with `stddev`.\n The mean of the normal distribution.\n stddev: A Tensor or Python value of type `dtype`, broadcastable with `mean`.\n The standard deviation of the normal distribution.\n dtype: The float type of the output: `float16`, `bfloat16`, `float32`,\n `float64`. Defaults to `float32`.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random normal values.\n ", "desc": "Outputs random values from a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random_normal_initializer", "docs": "Initializer that generates tensors with a normal distribution.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.RandomNormal` or `tf.keras.initializers.RandomNormal`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default stddev and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.random_normal_initializer(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.RandomNormal(\n mean=mean,\n seed=seed,\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :----------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it as a |\n : : : `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported. |\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.compat.v1.random_poisson", "docs": "Draws `shape` samples from each of the given Poisson distribution(s).\n\n `lam` is the rate parameter describing the distribution(s).\n\n Example:\n\n ```python\n samples = tf.random.poisson([0.5, 1.5], [10])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.poisson([12.2, 3.3], [7, 5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n ```\n\n Args:\n lam: A Tensor or Python value or N-D array of type `dtype`.\n `lam` provides the rate parameter(s) describing the poisson\n distribution(s) to sample.\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per \"rate\"-parameterized distribution.\n dtype: The type of the output: `float16`, `float32`, `float64`, `int32` or\n `int64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)`\n with values of type `dtype`.\n ", "desc": "Draws `shape` samples from each of the given Poisson distribution(s).", "type": "API"}, {"name": "tf.compat.v1.random_shuffle", "docs": "Randomly shuffles a tensor along its first dimension.\n\n The tensor is shuffled along dimension 0, such that each `value[j]` is mapped\n to one and only one `output[i]`. For example, a mapping that might occur for a\n 3x2 tensor is:\n\n ```python\n [[1, 2], [[5, 6],\n [3, 4], ==> [1, 2],\n [5, 6]] [3, 4]]\n ```\n\n Args:\n value: A Tensor to be shuffled.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of same shape and type as `value`, shuffled along its first\n dimension.\n ", "desc": "Randomly shuffles a tensor along its first dimension.", "type": "API"}, {"name": "tf.compat.v1.random_uniform", "docs": "Outputs random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range\n `[minval, maxval)`. The lower bound `minval` is included in the range, while\n the upper bound `maxval` is excluded.\n\n For floats, the default range is `[0, 1)`. For ints, at least `maxval` must\n be specified explicitly.\n\n In the integer case, the random integers are slightly biased unless\n `maxval - minval` is an exact power of two. The bias is small for values of\n `maxval - minval` significantly smaller than the range of the output (either\n `2**32` or `2**64`).\n\n Examples:\n\n >>> tf.random.uniform(shape=[2])\n \n >>> tf.random.uniform(shape=[], minval=-1., maxval=0.)\n \n >>> tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64)\n \n\n The `seed` argument produces a deterministic sequence of tensors across\n multiple calls. To repeat that sequence, use `tf.random.set_seed`:\n\n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n\n Without `tf.random.set_seed` but with a `seed` argument is specified, small\n changes to function graphs or previously executed operations will change the\n returned value. See `tf.random.set_seed` for details.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n minval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The lower bound on the range of random values to generate\n (inclusive). Defaults to 0.\n maxval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The upper bound on the range of random values to generate\n (exclusive). Defaults to 1 if `dtype` is floating point.\n dtype: The type of the output: `float16`, `bfloat16`, `float32`, `float64`,\n `int32`, or `int64`. Defaults to `float32`.\n seed: A Python integer. Used in combination with `tf.random.set_seed` to\n create a reproducible sequence of tensors across multiple calls.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random uniform values.\n\n Raises:\n ValueError: If `dtype` is integral and `maxval` is not specified.\n ", "desc": "Outputs random values from a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.random_uniform_initializer", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate.\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate. Defaults to 1 for float types.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer.\n\n @compatibility(TF2)\n Although it is a legacy compat.v1 API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.RandomUniform` or `tf.keras.initializers.RandomUniform`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default minval, maxval and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.random_uniform_initializer(\n minval=minval,\n maxval=maxval,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.RandomUniform(\n minval=minval,\n maxval=maxval,\n seed=seed)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `minval` | `minval` | Default changes from 0 to -0.05 |\n | `maxval` | `maxval` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.compat.v1.RandomShuffleQueue", "docs": "A queue implementation that dequeues elements in a random order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in a random order.", "type": "API"}, {"name": "tf.compat.v1.range", "docs": "Creates a sequence of numbers.\n\n Creates a sequence of numbers that begins at `start` and extends by\n increments of `delta` up to but not including `limit`.\n\n The dtype of the resulting tensor is inferred from the inputs unless\n it is provided explicitly.\n\n Like the Python builtin `range`, `start` defaults to 0, so that\n `range(n) = range(0, n)`.\n\n For example:\n\n >>> start = 3\n >>> limit = 18\n >>> delta = 3\n >>> tf.range(start, limit, delta)\n \n\n >>> start = 3\n >>> limit = 1\n >>> delta = -0.5\n >>> tf.range(start, limit, delta)\n \n\n >>> limit = 5\n >>> tf.range(limit)\n \n\n Args:\n start: A 0-D `Tensor` (scalar). Acts as first entry in the range if `limit`\n is not None; otherwise, acts as range limit and first entry defaults to 0.\n limit: A 0-D `Tensor` (scalar). Upper limit of sequence, exclusive. If None,\n defaults to the value of `start` while the first entry of the range\n defaults to 0.\n delta: A 0-D `Tensor` (scalar). Number that increments `start`. Defaults to\n 1.\n dtype: The type of the elements of the resulting tensor.\n name: A name for the operation. Defaults to \"range\".\n\n Returns:\n An 1-D `Tensor` of type `dtype`.\n\n @compatibility(numpy)\n Equivalent to np.arange\n @end_compatibility\n ", "desc": "Creates a sequence of numbers.", "type": "API"}, {"name": "tf.compat.v1.rank", "docs": "Returns the rank of a tensor.\n\n See also `tf.shape`.\n\n Returns a 0-D `int32` `Tensor` representing the rank of `input`.\n\n For example:\n\n ```python\n # shape of tensor 't' is [2, 2, 3]\n t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n tf.rank(t) # 3\n ```\n\n **Note**: The rank of a tensor is not the same as the rank of a matrix. The\n rank of a tensor is the number of indices required to uniquely select each\n element of the tensor. Rank is also known as \"order\", \"degree\", or \"ndims.\"\n\n Args:\n input: A `Tensor` or `SparseTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n\n @compatibility(numpy)\n Equivalent to np.ndim\n @end_compatibility\n ", "desc": "Returns the rank of a tensor.", "type": "API"}, {"name": "tf.compat.v1.read_file", "docs": "Reads the contents of file.\n\n This operation returns a tensor with the entire contents of the input\n filename. It does not do any parsing, it just returns the contents as\n they are. Usually, this is the first step in the input pipeline.\n\n Example:\n\n >>> with open(\"/tmp/file.txt\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.read_file(\"/tmp/file.txt\")\n \n\n Example of using the op in a function to read an image, decode it and reshape\n the tensor containing the pixel data:\n\n >>> @tf.function\n ... def load_image(filename):\n ... raw = tf.io.read_file(filename)\n ... image = tf.image.decode_png(raw, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... image.set_shape([28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n Args:\n filename: string. filename to read from.\n name: string. Optional name for the op.\n\n Returns:\n A tensor of dtype \"string\", with the file contents.\n ", "desc": "Reads the contents of file.", "type": "API"}, {"name": "tf.compat.v1.ReaderBase", "docs": "Base class for different Reader types, that produce a record every step.\n\n Conceptually, Readers convert string 'work units' into records (key,\n value pairs). Typically the 'work units' are filenames and the\n records are extracted from the contents of those files. We want a\n single record produced per step, but a work unit can correspond to\n many records.\n\n Therefore we introduce some decoupling using a queue. The queue\n contains the work units and the Reader dequeues from the queue when\n it is asked to produce a record (via Read()) but it has finished the\n last work unit.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "Base class for different Reader types, that produce a record every step.", "type": "API"}, {"name": "tf.compat.v1.real", "docs": "Returns the real part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the real part of each element in `input` considered as a complex number.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.real(x) # [-2.25, 3.25]\n ```\n\n If `input` is already real, it is returned unchanged.\n\n Args:\n input: A `Tensor`. Must have numeric type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the real part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.compat.v1.realdiv", "docs": "Returns x / y element-wise for real types.\n\n If `x` and `y` are reals, this will return the floating-point division.\n\n *NOTE*: `Div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for real types.", "type": "API"}, {"name": "tf.compat.v1.reciprocal", "docs": "Computes the reciprocal of x element-wise.\n\n I.e., \\\\(y = 1 / x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the reciprocal of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.recompute_grad", "docs": "Defines a function as a recompute-checkpoint for the tape auto-diff.\n\n Tape checkpointing is a technique to reduce the memory consumption of the\n auto-diff tape:\n\n - Without tape checkpointing operations and intermediate values are\n recorded to the tape for use in the backward pass.\n\n - With tape checkpointing, only the function call and its inputs are\n recorded. During back-propagation the `recompute_grad` custom gradient\n (`tf.custom_gradient`) recomputes the function under a localized Tape object.\n This recomputation of the function during backpropagation performs redundant\n calculation, but reduces the overall memory usage of the Tape.\n\n >>> y = tf.Variable(1.0)\n\n >>> def my_function(x):\n ... tf.print('running')\n ... z = x*y\n ... return z\n\n >>> my_function_recompute = tf.recompute_grad(my_function)\n\n >>> with tf.GradientTape() as tape:\n ... r = tf.constant(1.0)\n ... for i in range(4):\n ... r = my_function_recompute(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, [y])\n running\n running\n running\n running\n\n Without `recompute_grad`, the tape contains all intermitate steps, and no\n recomputation is performed.\n\n >>> with tf.GradientTape() as tape:\n ... r = tf.constant(1.0)\n ... for i in range(4):\n ... r = my_function(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, [y])\n\n\n If `f` was a `tf.keras` `Model` or `Layer` object, methods and attributes\n such as `f.variables` are not available on the returned function `g`.\n Either keep a reference of `f` , or use `g.__wrapped__` for accessing\n these variables and methods.\n\n\n >>> def print_running_and_return(x):\n ... tf.print(\"running\")\n ... return x\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Lambda(print_running_and_return),\n ... tf.keras.layers.Dense(2)\n ... ])\n\n >>> model_recompute = tf.recompute_grad(model)\n\n >>> with tf.GradientTape(persistent=True) as tape:\n ... r = tf.constant([[1,2]])\n ... for i in range(4):\n ... r = model_recompute(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, model.variables)\n running\n running\n running\n running\n\n Alternatively, use the `__wrapped__` attribute to access the original\n model object.\n\n >>> grad = tape.gradient(r, model_recompute.__wrapped__.variables)\n running\n running\n running\n running\n\n\n Args:\n f: function `f(*x)` that returns a `Tensor` or sequence of `Tensor` outputs.\n\n Returns:\n A function `g` wrapping `f` that defines a custom gradient, which recomputes\n `f` on the backwards pass of a gradient call.\n ", "desc": "Defines a function as a recompute-checkpoint for the tape auto-diff.", "type": "API"}, {"name": "tf.compat.v1.reduce_all", "docs": "Computes `tf.math.logical_and` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.logical_and` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.math.reduce_all(x)\n \n >>> tf.math.reduce_all(x, 0)\n \n >>> tf.math.reduce_all(x, 1)\n \n\nArgs:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.all\n@end_compatibility", "desc": "Computes `tf.math.logical_and` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_any", "docs": "Computes `tf.math.logical_or` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.logical_or` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.reduce_any(x)\n \n >>> tf.reduce_any(x, 0)\n \n >>> tf.reduce_any(x, 1)\n \n\nArgs:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.any\n@end_compatibility", "desc": "Computes `tf.math.logical_or` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_join", "docs": "Joins all strings into a single string, or joins along an axis.\n\n This is the reduction operation for the elementwise `tf.strings.join` op.\n\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']]).numpy()\n b'abc123def456'\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']], axis=-1).numpy()\n array([b'abc123', b'def456'], dtype=object)\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']],\n ... axis=-1,\n ... separator=\" \").numpy()\n array([b'abc 123', b'def 456'], dtype=object)\n\n Args:\n inputs: A `tf.string` tensor.\n axis: Which axis to join along. The default behavior is to join all\n elements, producing a scalar.\n keepdims: If true, retains reduced dimensions with length 1.\n separator: a string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Joins all strings into a single string, or joins along an axis.", "type": "API"}, {"name": "tf.compat.v1.reduce_logsumexp", "docs": "Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` has no entries, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nThis function is more numerically stable than log(sum(exp(input))). It avoids\noverflows caused by taking the exp of large inputs and underflows caused by\ntaking the log of small inputs.\n\nFor example:\n\n```python\nx = tf.constant([[0., 0., 0.], [0., 0., 0.]])\ntf.reduce_logsumexp(x) # log(6)\ntf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)]\ntf.reduce_logsumexp(x, 1) # [log(3), log(3)]\ntf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]]\ntf.reduce_logsumexp(x, [0, 1]) # log(6)\n```\n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_max", "docs": "Computes `tf.math.maximum` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.maximum` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nUsage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_max(x)\n \n\nSee the numpy docs for `np.amax` and `np.nanmax` behavior.\n\nArgs:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes `tf.math.maximum` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_mean", "docs": "Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis` by computing the\n mean of elements across the dimensions in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a tensor with a single\n element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 1.], [2., 2.]])\n >>> tf.reduce_mean(x)\n \n >>> tf.reduce_mean(x, 0)\n \n >>> tf.reduce_mean(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.mean\n\n Please note that `np.mean` has a `dtype` parameter that could be used to\n specify the output type. By default this is `dtype=float64`. On the other\n hand, `tf.reduce_mean` has an aggressive type inference from `input_tensor`,\n for example:\n\n >>> x = tf.constant([1, 0, 1, 0])\n >>> tf.reduce_mean(x)\n \n >>> y = tf.constant([1., 0., 1., 0.])\n >>> tf.reduce_mean(y)\n \n\n @end_compatibility\n ", "desc": "Computes the mean of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.reduce_min", "docs": "Computes the `tf.math.minimum` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.minimum` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nUsage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_min(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_min(x)\n \n\nSee the numpy docs for `np.amin` and `np.nanmin` behavior.\n\nArgs:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.", "desc": "Computes the `tf.math.minimum` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_prod", "docs": "Computes `tf.math.multiply` of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.multiply` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_prod(x)\n \n >>> tf.math.reduce_prod(x, 0)\n \n >>> tf.math.reduce_prod(x, 1)\n \n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor.\n\n@compatibility(numpy)\nEquivalent to np.prod\n@end_compatibility", "desc": "Computes `tf.math.multiply` of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reduce_sum", "docs": "Computes the sum of elements across dimensions of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis is the reduction operation for the elementwise `tf.math.add` op.\n\nReduces `input_tensor` along the dimensions given in `axis`.\nUnless `keepdims` is true, the rank of the tensor is reduced by 1 for each\nof the entries in `axis`, which must be unique. If `keepdims` is true, the\nreduced dimensions are retained with length 1.\n\nIf `axis` is None, all dimensions are reduced, and a\ntensor with a single element is returned.\n\nFor example:\n\n >>> # x has a shape of (2, 3) (two rows and three columns):\n >>> x = tf.constant([[1, 1, 1], [1, 1, 1]])\n >>> x.numpy()\n array([[1, 1, 1],\n [1, 1, 1]], dtype=int32)\n >>> # sum all the elements\n >>> # 1 + 1 + 1 + 1 + 1+ 1 = 6\n >>> tf.reduce_sum(x).numpy()\n 6\n >>> # reduce along the first dimension\n >>> # the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> tf.reduce_sum(x, 0).numpy()\n array([2, 2, 2], dtype=int32)\n >>> # reduce along the second dimension\n >>> # the result is [1, 1] + [1, 1] + [1, 1] = [3, 3]\n >>> tf.reduce_sum(x, 1).numpy()\n array([3, 3], dtype=int32)\n >>> # keep the original dimensions\n >>> tf.reduce_sum(x, 1, keepdims=True).numpy()\n array([[3],\n [3]], dtype=int32)\n >>> # reduce along both dimensions\n >>> # the result is 1 + 1 + 1 + 1 + 1 + 1 = 6\n >>> # or, equivalently, reduce along rows, then reduce the resultant array\n >>> # [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> # 2 + 2 + 2 = 6\n >>> tf.reduce_sum(x, [0, 1]).numpy()\n 6\n\nArgs:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n reduction_indices: The old (deprecated) name for axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced tensor, of the same dtype as the input_tensor.\n\n@compatibility(numpy)\nEquivalent to np.sum apart the fact that numpy upcast uint8 and int32 to\nint64 while tensorflow returns the same dtype as the input.\n@end_compatibility", "desc": "Computes the sum of elements across dimensions of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.regex_replace", "docs": "Replace elements of `input` matching regex `pattern` with `rewrite`.\n\n >>> tf.strings.regex_replace(\"Text with tags.
contains html\",\n ... \"<[^>]+>\", \" \")\n \n\n Args:\n input: string `Tensor`, the source strings to process.\n pattern: string or scalar string `Tensor`, regular expression to use,\n see more details at https://github.com/google/re2/wiki/Syntax\n rewrite: string or scalar string `Tensor`, value to use in match\n replacement, supports backslash-escaped digits (\\1 to \\9) can be to insert\n text matching corresponding parenthesized group.\n replace_global: `bool`, if `True` replace all non-overlapping matches,\n else replace only the first match.\n name: A name for the operation (optional).\n\n Returns:\n string `Tensor` of the same shape as `input` with specified replacements.\n ", "desc": "Replace elements of `input` matching regex `pattern` with `rewrite`.", "type": "API"}, {"name": "tf.compat.v1.register_tensor_conversion_function", "docs": "Registers a function for converting objects of `base_type` to `Tensor`.\n\n The conversion function must have the following signature:\n\n ```python\n def conversion_func(value, dtype=None, name=None, as_ref=False):\n # ...\n ```\n\n It must return a `Tensor` with the given `dtype` if specified. If the\n conversion function creates a new `Tensor`, it should use the given\n `name` if specified. All exceptions will be propagated to the caller.\n\n The conversion function may return `NotImplemented` for some\n inputs. In this case, the conversion process will continue to try\n subsequent conversion functions.\n\n If `as_ref` is true, the function must return a `Tensor` reference,\n such as a `Variable`.\n\n NOTE: The conversion functions will execute in order of priority,\n followed by order of registration. To ensure that a conversion function\n `F` runs before another conversion function `G`, ensure that `F` is\n registered with a smaller priority than `G`.\n\n Args:\n base_type: The base type or tuple of base types for all objects that\n `conversion_func` accepts.\n conversion_func: A function that converts instances of `base_type` to\n `Tensor`.\n priority: Optional integer that indicates the priority for applying this\n conversion function. Conversion functions with smaller priority values run\n earlier than conversion functions with larger priority values. Defaults to\n 100.\n\n Raises:\n TypeError: If the arguments do not have the appropriate type.\n ", "desc": "Registers a function for converting objects of `base_type` to `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.RegisterGradient", "docs": "A decorator for registering the gradient function for an op type.\n\n This decorator is only used when defining a new op type. For an op\n with `m` inputs and `n` outputs, the gradient function is a function\n that takes the original `Operation` and `n` `Tensor` objects\n (representing the gradients with respect to each output of the op),\n and returns `m` `Tensor` objects (representing the partial gradients\n with respect to each input of the op).\n\n For example, assuming that operations of type `\"Sub\"` take two\n inputs `x` and `y`, and return a single output `x - y`, the\n following gradient function would be registered:\n\n ```python\n @tf.RegisterGradient(\"Sub\")\n def _sub_grad(unused_op, grad):\n return grad, tf.negative(grad)\n ```\n\n The decorator argument `op_type` is the string type of an\n operation. This corresponds to the `OpDef.name` field for the proto\n that defines the operation.\n ", "desc": "A decorator for registering the gradient function for an op type.", "type": "API"}, {"name": "tf.compat.v1.repeat", "docs": "Repeat elements of `input`.\n\n See also `tf.concat`, `tf.stack`, `tf.tile`.\n\n Args:\n input: An `N`-dimensional Tensor.\n repeats: An 1-D `int` Tensor. The number of repetitions for each element.\n repeats is broadcasted to fit the shape of the given axis. `len(repeats)`\n must equal `input.shape[axis]` if axis is not None.\n axis: An int. The axis along which to repeat values. By default (axis=None),\n use the flattened input array, and return a flat output array.\n name: A name for the operation.\n\n Returns:\n A Tensor which has the same shape as `input`, except along the given axis.\n If axis is None then the output array is flattened to match the flattened\n input array.\n\n Example usage:\n\n >>> repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)\n \n\n >>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)\n \n\n >>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1)\n \n\n >>> repeat(3, repeats=4)\n \n\n >>> repeat([[1,2], [3,4]], repeats=2)\n \n\n ", "desc": "Repeat elements of `input`.", "type": "API"}, {"name": "tf.compat.v1.report_uninitialized_variables", "docs": "Adds ops to list the names of uninitialized variables.\n\nWhen run, it returns a 1-D tensor containing the names of uninitialized\nvariables if there are any, or an empty array if there are none.\n\nArgs:\n var_list: List of `Variable` objects to check. Defaults to the value of\n `global_variables() + local_variables()`\n name: Optional name of the `Operation`.\n\nReturns:\n A 1-D tensor containing names of the uninitialized variables, or an empty\n 1-D tensor if there are no variables or no uninitialized variables.\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Adds ops to list the names of uninitialized variables.", "type": "API"}, {"name": "tf.compat.v1.required_space_to_batch_paddings", "docs": "Calculate padding required to make block_shape divide input_shape.\n\n This function can be used to calculate a suitable paddings argument for use\n with space_to_batch_nd and batch_to_space_nd.\n\n Args:\n input_shape: int32 Tensor of shape [N].\n block_shape: int32 Tensor of shape [N].\n base_paddings: Optional int32 Tensor of shape [N, 2]. Specifies the minimum\n amount of padding to use. All elements must be >= 0. If not specified,\n defaults to 0.\n name: string. Optional name prefix.\n\n Returns:\n (paddings, crops), where:\n\n `paddings` and `crops` are int32 Tensors of rank 2 and shape [N, 2]\n satisfying:\n\n paddings[i, 0] = base_paddings[i, 0].\n 0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]\n (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0\n\n crops[i, 0] = 0\n crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]\n\n Raises: ValueError if called with incompatible shapes.\n ", "desc": "Calculate padding required to make block_shape divide input_shape.", "type": "API"}, {"name": "tf.compat.v1.reset_default_graph", "docs": "Clears the default graph stack and resets the global default graph.\n\n NOTE: The default graph is a property of the current thread. This\n function applies only to the current thread. Calling this function while\n a `tf.compat.v1.Session` or `tf.compat.v1.InteractiveSession` is active will\n result in undefined\n behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects\n after calling this function will result in undefined behavior.\n\n @compatibility(TF2)\n `reset_default_graph` does not work with either eager execution or\n `tf.function`, and you should not invoke it directly. To migrate code that\n uses Graph-related functions to TF2, rewrite the code without them. See the\n [migration guide](https://www.tensorflow.org/guide/migrate) for more\n description about the behavior and semantic changes between Tensorflow 1 and\n Tensorflow 2.\n @end_compatibility\n\n Raises:\n AssertionError: If this function is called within a nested graph.\n ", "desc": "Clears the default graph stack and resets the global default graph.", "type": "API"}, {"name": "tf.compat.v1.reshape", "docs": "Reshapes a tensor.\n\n Given `tensor`, this operation returns a new `tf.Tensor` that has the same\n values as `tensor` in the same order, except with a new shape given by\n `shape`.\n\n >>> t1 = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> print(tf.shape(t1).numpy())\n [2 3]\n >>> t2 = tf.reshape(t1, [6])\n >>> t2\n \n >>> tf.reshape(t2, [3, 2])\n \n\n The `tf.reshape` does not change the order of or the total number of elements\n in the tensor, and so it can reuse the underlying data buffer. This makes it\n a fast operation independent of how big of a tensor it is operating on.\n\n >>> tf.reshape([1, 2, 3], [2, 2])\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Input to reshape is a tensor with 3 values, but the\n requested shape has 4\n\n To instead reorder the data to rearrange the dimensions of a tensor, see\n `tf.transpose`.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [3, 2]).numpy()\n array([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n >>> tf.transpose(t, perm=[1, 0]).numpy()\n array([[1, 4],\n [2, 5],\n [3, 6]], dtype=int32)\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total size remains constant. In particular,\n a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can\n be -1.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [-1])\n \n >>> tf.reshape(t, [3, -1])\n \n >>> tf.reshape(t, [-1, 2])\n \n\n `tf.reshape(t, [])` reshapes a tensor `t` with one element to a scalar.\n\n >>> tf.reshape([7], []).numpy()\n 7\n\n More examples:\n\n >>> t = [1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> print(tf.shape(t).numpy())\n [9]\n >>> tf.reshape(t, [3, 3])\n \n\n >>> t = [[[1, 1], [2, 2]],\n ... [[3, 3], [4, 4]]]\n >>> print(tf.shape(t).numpy())\n [2 2 2]\n >>> tf.reshape(t, [2, 4])\n \n\n >>> t = [[[1, 1, 1],\n ... [2, 2, 2]],\n ... [[3, 3, 3],\n ... [4, 4, 4]],\n ... [[5, 5, 5],\n ... [6, 6, 6]]]\n >>> print(tf.shape(t).numpy())\n [3 2 3]\n >>> # Pass '[-1]' to flatten 't'.\n >>> tf.reshape(t, [-1])\n \n >>> # -- Using -1 to infer the shape --\n >>> # Here -1 is inferred to be 9:\n >>> tf.reshape(t, [2, -1])\n \n >>> # -1 is inferred to be 2:\n >>> tf.reshape(t, [-1, 9])\n \n >>> # -1 is inferred to be 3:\n >>> tf.reshape(t, [ 2, -1, 3])\n \n\n Args:\n tensor: A `Tensor`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Defines the shape of the output tensor.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reshapes a tensor.", "type": "API"}, {"name": "tf.compat.v1.resource_loader", "docs": "Resource management library.\n", "desc": "Resource management library.", "type": "API"}, {"name": "tf.compat.v1.resource_loader.get_data_files_path", "docs": "Get a direct path to the data files colocated with the script.\n\n Returns:\n The directory where files specified in data attribute of py_test\n and py_binary are stored.\n ", "desc": "Get a direct path to the data files colocated with the script.", "type": "API"}, {"name": "tf.compat.v1.resource_loader.get_path_to_datafile", "docs": "Get the path to the specified file in the data dependencies.\n\n The path is relative to tensorflow/\n\n Args:\n path: a string resource path relative to tensorflow/\n\n Returns:\n The path to the specified file present in the data attribute of py_test\n or py_binary.\n\n Raises:\n IOError: If the path is not found, or the resource can't be opened.\n ", "desc": "Get the path to the specified file in the data dependencies.", "type": "API"}, {"name": "tf.compat.v1.resource_loader.get_root_dir_with_all_resources", "docs": "Get a root directory containing all the data attributes in the build rule.\n\n Returns:\n The path to the specified file present in the data attribute of py_test\n or py_binary. Falls back to returning the same as get_data_files_path if it\n fails to detect a bazel runfiles directory.\n ", "desc": "Get a root directory containing all the data attributes in the build rule.", "type": "API"}, {"name": "tf.compat.v1.resource_loader.load_resource", "docs": "Load the resource at given path, where path is relative to tensorflow/.\n\n Args:\n path: a string resource path relative to tensorflow/.\n\n Returns:\n The contents of that resource.\n\n Raises:\n IOError: If the path is not found, or the resource can't be opened.\n ", "desc": "Load the resource at given path, where path is relative to tensorflow/.", "type": "API"}, {"name": "tf.compat.v1.resource_loader.readahead_file_path", "docs": "Readahead files not implemented; simply returns given path.", "desc": "Readahead files not implemented; simply returns given path.", "type": "API"}, {"name": "tf.compat.v1.resource_variables_enabled", "docs": "Returns `True` if resource variables are enabled.\n\n Resource variables are improved versions of TensorFlow variables with a\n well-defined memory model. Accessing a resource variable reads its value, and\n all ops which access a specific read value of the variable are guaranteed to\n see the same value for that tensor. Writes which happen after a read (by\n having a control or data dependency on the read) are guaranteed not to affect\n the value of the read tensor, and similarly writes which happen before a read\n are guaranteed to affect the value. No guarantees are made about unordered\n read/write pairs.\n\n Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0\n feature.\n ", "desc": "Returns `True` if resource variables are enabled.", "type": "API"}, {"name": "tf.compat.v1.reverse", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `int32` tensor `axis` representing the set of\n dimensions of `tensor` to reverse. This operation reverses each dimension\n `i` for which there exists `j` s.t. `axis[j] == i`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions specified\n in `axis` may be 0 or more entries. If an index is specified more than\n once, a InvalidArgument error is raised.\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [3] or 'dims' is [-1]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is '[1]' (or 'dims' is '[-3]')\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is '[2]' (or 'dims' is '[-2]')\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `int64`, `uint64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. The indices of the dimensions to reverse. Must be in the range\n `[-rank(tensor), rank(tensor))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.reverse_sequence", "docs": "Reverses variable length slices. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(seq_dim)`. They will be removed in a future version.\nInstructions for updating:\nseq_dim is deprecated, use seq_axis instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(batch_dim)`. They will be removed in a future version.\nInstructions for updating:\nbatch_dim is deprecated, use batch_axis instead\n\nThis op first slices `input` along the dimension `batch_axis`, and for\neach slice `i`, reverses the first `seq_lengths[i]` elements along the\ndimension `seq_axis`.\n\nThe elements of `seq_lengths` must obey `seq_lengths[i] <=\ninput.dims[seq_axis]`, and `seq_lengths` must be a vector of length\n`input.dims[batch_axis]`.\n\nThe output slice `i` along dimension `batch_axis` is then given by\ninput slice `i`, with the first `seq_lengths[i]` slices along\ndimension `seq_axis` reversed.\n\nExample usage:\n\n>>> seq_lengths = [7, 2, 3, 5]\n>>> input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0],\n... [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]]\n>>> output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0)\n>>> output\n\n\nArgs:\n input: A `Tensor`. The input to reverse.\n seq_lengths: A `Tensor`. Must be one of the following types: `int32`,\n `int64`. 1-D with length `input.dims(batch_axis)` and `max(seq_lengths) <=\n input.dims(seq_axis)`\n seq_axis: An `int`. The dimension which is partially reversed.\n batch_axis: An optional `int`. Defaults to `0`. The dimension along which\n reversal is performed.\n name: A name for the operation (optional).\n\nReturns:\n A Tensor. Has the same type as input.", "desc": "Reverses variable length slices. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.reverse_v2", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `int32` tensor `axis` representing the set of\n dimensions of `tensor` to reverse. This operation reverses each dimension\n `i` for which there exists `j` s.t. `axis[j] == i`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions specified\n in `axis` may be 0 or more entries. If an index is specified more than\n once, a InvalidArgument error is raised.\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [3] or 'dims' is [-1]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is '[1]' (or 'dims' is '[-3]')\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is '[2]' (or 'dims' is '[-2]')\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `int64`, `uint64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. The indices of the dimensions to reverse. Must be in the range\n `[-rank(tensor), rank(tensor))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.compat.v1.rint", "docs": "Returns element-wise integer closest to x.\n\n If the result is midway between two representable values,\n the even representable is chosen.\n For example:\n\n ```\n rint(-1.5) ==> -2.0\n rint(0.5000001) ==> 1.0\n rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise integer closest to x.", "type": "API"}, {"name": "tf.compat.v1.roll", "docs": "Rolls the elements of a tensor along an axis.\n\n The elements are shifted positively (towards larger indices) by the offset of\n `shift` along the dimension of `axis`. Negative `shift` values will shift\n elements in the opposite direction. Elements that roll passed the last position\n will wrap around to the first and vice versa. Multiple shifts along multiple\n axes may be specified.\n\n For example:\n\n ```\n # 't' is [0, 1, 2, 3, 4]\n roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]\n\n # shifting along multiple dimensions\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]\n\n # shifting along the same axis multiple times\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]\n ```\n\n Args:\n input: A `Tensor`.\n shift: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which\n elements are shifted positively (towards larger indices) along the dimension\n specified by `axis[i]`. Negative shifts will roll the elements in the opposite\n direction.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension that the shift\n `shift[i]` should occur. If the same axis is referenced more than once, the\n total shift for that axis will be the sum of all the shifts that belong to that\n axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Rolls the elements of a tensor along an axis.", "type": "API"}, {"name": "tf.compat.v1.round", "docs": "Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example:\n\n ```python\n x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5])\n tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, or `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n ", "desc": "Rounds the values of a tensor to the nearest integer, element-wise.", "type": "API"}, {"name": "tf.compat.v1.rsqrt", "docs": "Computes reciprocal of square root of x element-wise.\n\n For example:\n\n >>> x = tf.constant([2., 0., -2.])\n >>> tf.math.rsqrt(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n ", "desc": "Computes reciprocal of square root of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.RunMetadata", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.RunMetadata.FunctionGraphs", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.RunOptions", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.RunOptions.Experimental", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.RunOptions.Experimental.RunHandlerPoolOptions", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.saturate_cast", "docs": "Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underflow in the cast, this op\n applies the appropriate clamping before the cast.\n\n Args:\n value: A `Tensor`.\n dtype: The desired output `DType`.\n name: A name for the operation (optional).\n\n Returns:\n `value` safely cast to `dtype`.\n ", "desc": "Performs a safe saturating cast of `value` to `dtype`.", "type": "API"}, {"name": "tf.compat.v1.saved_model", "docs": "Public API for tf.saved_model namespace.\n", "desc": "Public API for tf.saved_model namespace.", "type": "API"}, {"name": "tf.compat.v1.saved_model.Asset", "docs": "Represents a file asset to hermetically include in a SavedModel.\n\n A SavedModel can include arbitrary files, called assets, that are needed\n for its use. For example a vocabulary file used initialize a lookup table.\n\n When a trackable object is exported via `tf.saved_model.save()`, all the\n `Asset`s reachable from it are copied into the SavedModel assets directory.\n Upon loading, the assets and the serialized functions that depend on them\n will refer to the correct filepaths inside the SavedModel directory.\n\n Example:\n\n ```\n filename = tf.saved_model.Asset(\"file.txt\")\n\n @tf.function(input_signature=[])\n def func():\n return tf.io.read_file(filename)\n\n trackable_obj = tf.train.Checkpoint()\n trackable_obj.func = func\n trackable_obj.filename = filename\n tf.saved_model.save(trackable_obj, \"/tmp/saved_model\")\n\n # The created SavedModel is hermetic, it does not depend on\n # the original file and can be moved to another path.\n tf.io.gfile.remove(\"file.txt\")\n tf.io.gfile.rename(\"/tmp/saved_model\", \"/tmp/new_location\")\n\n reloaded_obj = tf.saved_model.load(\"/tmp/new_location\")\n print(reloaded_obj.func())\n ```\n\n Attributes:\n asset_path: A path, or a 0-D `tf.string` tensor with path to the asset.\n ", "desc": "Represents a file asset to hermetically include in a SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.build_signature_def", "docs": "Utility function to build a SignatureDef protocol buffer.\n\n Args:\n inputs: Inputs of the SignatureDef defined as a proto map of string to\n tensor info.\n outputs: Outputs of the SignatureDef defined as a proto map of string to\n tensor info.\n method_name: Method name of the SignatureDef as a string.\n\n Returns:\n A SignatureDef protocol buffer constructed based on the supplied arguments.\n ", "desc": "Utility function to build a SignatureDef protocol buffer.", "type": "API"}, {"name": "tf.compat.v1.saved_model.build_tensor_info", "docs": "Utility function to build TensorInfo proto from a Tensor. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.\n\nArgs:\n tensor: Tensor or SparseTensor whose name, dtype and shape are used to\n build the TensorInfo. For SparseTensors, the names of the three\n constituent Tensors are used.\n\nReturns:\n A TensorInfo protocol buffer constructed based on the supplied argument.\n\nRaises:\n RuntimeError: If eager execution is enabled.\n\n@compatibility(TF2)\nThis API is not compatible with eager execution as `tensor` needs to be a\ngraph tensor, and there is no replacement for it in TensorFlow 2.x. To start\nwriting programs using TensorFlow 2.x, please refer to the [Effective\nTensorFlow 2](https://www.tensorflow.org/guide/effective_tf2) guide.\n@end_compatibility", "desc": "Utility function to build TensorInfo proto from a Tensor. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.Builder", "docs": "Builds the `SavedModel` protocol buffer and saves variables and assets.\n\n The `SavedModelBuilder` class provides the functionality to build a\n `SavedModel` protocol buffer. Specifically, this allows multiple meta\n graphs to be saved as part of a single language-neutral `SavedModel`,\n while sharing variables and assets.\n\n To build a SavedModel, the first meta graph must be saved with variables.\n Subsequent meta graphs will simply be saved with their graph definitions. If\n assets need to be saved and written or copied to disk, they can be provided\n when the meta graph def is added. If multiple meta graph defs are associated\n an asset of the same name, only the first version is retained.\n\n Each meta graph added to the SavedModel must be annotated with tags. The tags\n provide a means to identify the specific meta graph to load and restore, along\n with the shared set of variables and assets.\n\n Typical usage for the `SavedModelBuilder`:\n\n ```python\n ...\n builder = tf.compat.v1.saved_model.Builder(export_dir)\n\n with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph_and_variables(sess,\n [\"foo-tag\"],\n signature_def_map=foo_signatures,\n assets_collection=foo_assets)\n ...\n\n with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph([\"bar-tag\", \"baz-tag\"])\n ...\n\n builder.save()\n ```\n\n Note: This function will only be available through the v1 compatibility\n library as tf.compat.v1.saved_model.builder.SavedModelBuilder or\n tf.compat.v1.saved_model.Builder. Tensorflow 2.0 will introduce a new\n object-based method of creating SavedModels.\n ", "desc": "Builds the `SavedModel` protocol buffer and saves variables and assets.", "type": "API"}, {"name": "tf.compat.v1.saved_model.builder.SavedModelBuilder", "docs": "Builds the `SavedModel` protocol buffer and saves variables and assets.\n\n The `SavedModelBuilder` class provides the functionality to build a\n `SavedModel` protocol buffer. Specifically, this allows multiple meta\n graphs to be saved as part of a single language-neutral `SavedModel`,\n while sharing variables and assets.\n\n To build a SavedModel, the first meta graph must be saved with variables.\n Subsequent meta graphs will simply be saved with their graph definitions. If\n assets need to be saved and written or copied to disk, they can be provided\n when the meta graph def is added. If multiple meta graph defs are associated\n an asset of the same name, only the first version is retained.\n\n Each meta graph added to the SavedModel must be annotated with tags. The tags\n provide a means to identify the specific meta graph to load and restore, along\n with the shared set of variables and assets.\n\n Typical usage for the `SavedModelBuilder`:\n\n ```python\n ...\n builder = tf.compat.v1.saved_model.Builder(export_dir)\n\n with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph_and_variables(sess,\n [\"foo-tag\"],\n signature_def_map=foo_signatures,\n assets_collection=foo_assets)\n ...\n\n with tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph([\"bar-tag\", \"baz-tag\"])\n ...\n\n builder.save()\n ```\n\n Note: This function will only be available through the v1 compatibility\n library as tf.compat.v1.saved_model.builder.SavedModelBuilder or\n tf.compat.v1.saved_model.Builder. Tensorflow 2.0 will introduce a new\n object-based method of creating SavedModels.\n ", "desc": "Builds the `SavedModel` protocol buffer and saves variables and assets.", "type": "API"}, {"name": "tf.compat.v1.saved_model.classification_signature_def", "docs": "Creates classification signature from given examples and predictions.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Classify API (tensorflow_serving/apis/prediction_service.proto), and so\n constrains the input and output types to those allowed by TensorFlow Serving.\n\n Args:\n examples: A string `Tensor`, expected to accept serialized tf.Examples.\n classes: A string `Tensor`. Note that the ClassificationResponse message\n requires that class labels are strings, not integers or anything else.\n scores: a float `Tensor`.\n\n Returns:\n A classification-flavored signature_def.\n\n Raises:\n ValueError: If examples is `None`.\n ", "desc": "Creates classification signature from given examples and predictions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.constants", "docs": "Constants for SavedModel save and restore operations.\n\nThe source of truth for these constants is in\ntensorflow/cc/saved_model/constants.h.\n\n\n", "desc": "Constants for SavedModel save and restore operations.", "type": "API"}, {"name": "tf.compat.v1.saved_model.contains_saved_model", "docs": "Checks whether the provided export directory could contain a SavedModel.\n\n Note that the method does not load any data by itself. If the method returns\n `false`, the export directory definitely does not contain a SavedModel. If the\n method returns `true`, the export directory may contain a SavedModel but\n provides no guarantee that it can be loaded.\n\n Args:\n export_dir: Absolute string path to possible export location. For example,\n '/my/foo/model'.\n\n Returns:\n True if the export directory contains SavedModel files, False otherwise.\n ", "desc": "Checks whether the provided export directory could contain a SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.experimental", "docs": "Public API for tf.saved_model.experimental namespace.\n", "desc": "Public API for tf.saved_model.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.saved_model.experimental.save", "docs": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).\n\n The `obj` must inherit from the [`Trackable` class](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/tracking/base.py#L591).\n\n Example usage:\n\n >>> class Adder(tf.Module):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n The resulting SavedModel is then servable with an input named \"x\", a scalar\n with dtype float32.\n\n _Signatures_\n\n Signatures define the input and output types for a computation. The optional\n save `signatures` argument controls which methods in `obj` will be\n available to programs which consume `SavedModel`s, for example, serving\n APIs. Python functions may be decorated with\n `@tf.function(input_signature=...)` and passed as signatures directly, or\n lazily with a call to `get_concrete_function` on the method decorated with\n `@tf.function`.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',signatures=model.add.get_concrete_function(\n ... tf.TensorSpec([], tf.float32)))\n\n If a `@tf.function` does not have an input signature and\n `get_concrete_function` is not called on that method, the function will not\n be directly callable in the restored SavedModel.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n >>> restored = tf.saved_model.load('/tmp/adder')\n >>> restored.add(1.)\n Traceback (most recent call last):\n ...\n ValueError: Found zero restored functions for caller function.\n\n If the `signatures` argument is omitted, `obj` will be searched for\n `@tf.function`-decorated methods. If exactly one traced `@tf.function` is\n found, that method will be used as the default signature for the SavedModel.\n Else, any `@tf.function` attached to `obj` or its dependencies will be\n exported for use with `tf.saved_model.load`.\n\n When invoking a signature in an exported SavedModel, `Tensor` arguments are\n identified by name. These names will come from the Python function's argument\n names by default. They may be overridden by specifying a `name=...` argument\n in the corresponding `tf.TensorSpec` object. Explicit naming is required if\n multiple `Tensor`s are passed through a single argument to the Python\n function.\n\n The outputs of functions used as `signatures` must either be flat lists, in\n which case outputs will be numbered, or a dictionary mapping string keys to\n `Tensor`, in which case the keys will be used to name outputs.\n\n Signatures are available in objects returned by `tf.saved_model.load` as a\n `.signatures` attribute. This is a reserved attribute: `tf.saved_model.save`\n on an object with a custom `.signatures` attribute will raise an exception.\n\n _Using `tf.saved_model.save` with Keras models_\n\n While Keras has its own [saving and loading API](https://www.tensorflow.org/guide/keras/save_and_serialize),\n this function can be used to export Keras models. For example, exporting with\n a signature specified:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n Exporting from a function without a fixed signature:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',\n ... signatures=model.concat.get_concrete_function(\n ... tf.TensorSpec(shape=[], dtype=tf.string, name=\"string_input\")))\n\n `tf.keras.Model` instances constructed from inputs and outputs already have a\n signature and so do not require a `@tf.function` decorator or a `signatures`\n argument. If neither are specified, the model's forward pass is exported.\n\n >>> x = tf.keras.layers.Input((4,), name=\"x\")\n >>> y = tf.keras.layers.Dense(5, name=\"out\")(x)\n >>> model = tf.keras.Model(x, y)\n >>> tf.saved_model.save(model, '/tmp/saved_model/')\n\n The exported SavedModel takes \"x\" with shape [None, 4] and returns \"out\"\n with shape [None, 5]\n\n _Variables and Checkpoints_\n\n Variables must be tracked by assigning them to an attribute of a tracked\n object or to an attribute of `obj` directly. TensorFlow objects (e.g. layers\n from `tf.keras.layers`, optimizers from `tf.train`) track their variables\n automatically. This is the same tracking scheme that `tf.train.Checkpoint`\n uses, and an exported `Checkpoint` object may be restored as a training\n checkpoint by pointing `tf.train.Checkpoint.restore` to the SavedModel's\n \"variables/\" subdirectory.\n\n `tf.function` does not hard-code device annotations from outside the function\n body, instead of using the calling context's device. This means for example\n that exporting a model that runs on a GPU and serving it on a CPU will\n generally work, with some exceptions:\n\n * `tf.device` annotations inside the body of the function will be hard-coded\n in the exported model; this type of annotation is discouraged.\n * Device-specific operations, e.g. with \"cuDNN\" in the name or with\n device-specific layouts, may cause issues.\n * For `ConcreteFunctions`, active distribution strategies will cause device\n placements to be hard-coded in the function.\n\n SavedModels exported with `tf.saved_model.save` [strip default-valued\n attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes)\n automatically, which removes one source of incompatibilities when the consumer\n of a SavedModel is running an older TensorFlow version than the\n producer. There are however other sources of incompatibilities which are not\n handled automatically, such as when the exported model contains operations\n which the consumer does not have definitions for.\n\n Args:\n obj: A trackable object (e.g. tf.Module or tf.train.Checkpoint) to export.\n export_dir: A directory in which to write the SavedModel.\n signatures: Optional, one of three types:\n * a `tf.function` with an input signature specified, which will use the\n default serving signature key,\n * the result of `f.get_concrete_function` on a `@tf.function`-decorated\n function `f`, in which case `f` will be used to generate a signature for\n the SavedModel under the default serving signature key,\n * a dictionary, which maps signature keys to either `tf.function`\n instances with input signatures or concrete functions. Keys of such a\n dictionary may be arbitrary strings, but will typically be from the\n `tf.saved_model.signature_constants` module.\n options: `tf.saved_model.SaveOptions` object for configuring save options.\n\n Raises:\n ValueError: If `obj` is not trackable.\n\n @compatibility(eager)\n Not well supported when graph building. From TensorFlow 1.x,\n `tf.compat.v1.enable_eager_execution()` should run first. Calling\n tf.saved_model.save in a loop when graph building from TensorFlow 1.x will\n add new save operations to the default graph each iteration.\n\n May not be called from within a function body.\n @end_compatibility\n ", "desc": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).", "type": "API"}, {"name": "tf.compat.v1.saved_model.experimental.VariablePolicy", "docs": "Enum defining options for variable handling when saving.\n\n NONE\n No policy applied: Distributed variables are saved as one variable, with no\n device attached.\n\n SAVE_VARIABLE_DEVICES\n When saving variables, also save their device assignment.\n This is useful if one wants to hardcode devices in saved models, but it also\n makes them non-portable if soft device placement is disabled (more details\n in `tf.config.set_soft_device_placement`). This is currently not\n fully supported by `saved_model.load`, and is mainly intended to be used\n when one will be reading the saved model at a lower API level. In the\n example below, the graph saved by the call to `saved_model.save` will have\n the variable devices correctly specified:\n ```python\n exported = tf.train.Checkpoint()\n with tf.device('/GPU:0'):\n exported.x_gpu = tf.Variable(1.0)\n with tf.device('/CPU:0'):\n exported.x_cpu = tf.Variable(1.0)\n tf.saved_model.save(exported, export_dir,\n options = tf.saved_model.SaveOptions(\n experimental_variable_policy=\n tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))\n ```\n Distributed variables are still saved as one variable under this policy.\n\n EXPAND_DISTRIBUTED_VARIABLES\n Distributed variables will be saved with information about their components,\n allowing for their restoration on load. Also, the saved graph will contain\n references to those variables. This is useful when one wants to use the\n model for training in environments where the original distribution strategy\n is not available.\n ", "desc": "Enum defining options for variable handling when saving.", "type": "API"}, {"name": "tf.compat.v1.saved_model.get_tensor_from_tensor_info", "docs": "Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info.\n\nArgs:\n tensor_info: A TensorInfo proto describing a Tensor or SparseTensor or\n CompositeTensor.\n graph: The tf.Graph in which tensors are looked up. If None, the\n current default graph is used.\n import_scope: If not None, names in `tensor_info` are prefixed with this\n string before lookup.\n\nReturns:\n The Tensor or SparseTensor or CompositeTensor in `graph` described by\n `tensor_info`.\n\nRaises:\n KeyError: If `tensor_info` does not correspond to a tensor in `graph`.\n ValueError: If `tensor_info` is malformed.", "desc": "Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.is_valid_signature", "docs": "Determine whether a SignatureDef can be served by TensorFlow Serving.", "desc": "Determine whether a SignatureDef can be served by TensorFlow Serving.", "type": "API"}, {"name": "tf.compat.v1.saved_model.load", "docs": "Loads the model from a SavedModel as specified by tags. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.\n\nArgs:\n sess: The TensorFlow session to restore the variables.\n tags: Set of string tags to identify the required MetaGraphDef. These should\n correspond to the tags used when saving the variables using the\n SavedModel `save()` API.\n export_dir: Directory in which the SavedModel protocol buffer and variables\n to be loaded are located.\n import_scope: Optional `string` -- if specified, prepend this string\n followed by '/' to all loaded tensor names. This scope is applied to\n tensor instances loaded into the passed session, but it is *not* written\n through to the static `MetaGraphDef` protocol buffer that is returned.\n **saver_kwargs: Optional keyword arguments passed through to Saver.\n\nReturns:\n The `MetaGraphDef` protocol buffer loaded in the provided session. This\n can be used to further extract signature-defs, collection-defs, etc.\n\nRaises:\n RuntimeError: MetaGraphDef associated with the tags cannot be found.\n\n@compatibility(TF2)\n\n`tf.compat.v1.saved_model.load` or `tf.compat.v1.saved_model.loader.load` is\nnot compatible with eager execution. Please use `tf.saved_model.load` instead\nto load your model. You can refer to the [SavedModel guide]\n(https://www.tensorflow.org/guide/saved_model) for more information as well as\n\"Importing SavedModels from TensorFlow 1.x\" in the [`tf.saved_model.load`]\n(https://www.tensorflow.org/api_docs/python/tf/saved_model/load) docstring.\n\n#### How to Map Arguments\n\n| TF1 Arg Name | TF2 Arg Name | Note |\n| :-------------------- | :-------------- | :------------------------- |\n| `sess` | Not supported | - |\n| `tags` | `tags` | - |\n| `export_dir` | `export_dir` | - |\n| `import_scope` | Not supported | Name scopes are not needed.\n: : : By default, variables are :\n: : : associated with the loaded :\n: : : object and function names :\n: : : are deduped. :\n| `saver_kwargs` | Not supported | - |\n\n#### Before & After Usage Example\n\nBefore:\n\n```\nwith tf.compat.v1.Session(graph=tf.Graph()) as sess:\n tf.compat.v1.saved_model.loader.load(sess, [\"foo-tag\"], export_dir)\n```\n\nAfter:\n\n```\nmodel = tf.saved_model.load(export_dir, tags=[\"foo-tag\"])\n```\n@end_compatibility", "desc": "Loads the model from a SavedModel as specified by tags. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.load_v2", "docs": "Load a SavedModel from `export_dir`.\n\n Signatures associated with the SavedModel are available as functions:\n\n ```python\n imported = tf.saved_model.load(path)\n f = imported.signatures[\"serving_default\"]\n print(f(x=tf.constant([[1.]])))\n ```\n\n Objects exported with `tf.saved_model.save` additionally have trackable\n objects and functions assigned to attributes:\n\n ```python\n exported = tf.train.Checkpoint(v=tf.Variable(3.))\n exported.f = tf.function(\n lambda x: exported.v * x,\n input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])\n tf.saved_model.save(exported, path)\n imported = tf.saved_model.load(path)\n assert 3. == imported.v.numpy()\n assert 6. == imported.f(x=tf.constant(2.)).numpy()\n ```\n\n _Loading Keras models_\n\n Keras models are trackable, so they can be saved to SavedModel. The object\n returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have\n `.fit`, `.predict`, etc. methods). A few attributes and functions are still\n available: `.variables`, `.trainable_variables` and `.__call__`.\n\n ```python\n model = tf.keras.Model(...)\n tf.saved_model.save(model, path)\n imported = tf.saved_model.load(path)\n outputs = imported(inputs)\n ```\n\n Use `tf.keras.models.load_model` to restore the Keras model.\n\n _Importing SavedModels from TensorFlow 1.x_\n\n SavedModels from `tf.estimator.Estimator` or 1.x SavedModel APIs have a flat\n graph instead of `tf.function` objects. These SavedModels will be loaded with\n the following attributes:\n\n * `.signatures`: A dictionary mapping signature names to functions.\n * `.prune(feeds, fetches) `: A method which allows you to extract\n functions for new subgraphs. This is equivalent to importing the SavedModel\n and naming feeds and fetches in a Session from TensorFlow 1.x.\n\n ```python\n imported = tf.saved_model.load(path_to_v1_saved_model)\n pruned = imported.prune(\"x:0\", \"out:0\")\n pruned(tf.ones([]))\n ```\n\n See `tf.compat.v1.wrap_function` for details.\n * `.variables`: A list of imported variables.\n * `.graph`: The whole imported graph.\n * `.restore(save_path)`: A function that restores variables from a checkpoint\n saved from `tf.compat.v1.Saver`.\n\n _Consuming SavedModels asynchronously_\n\n When consuming SavedModels asynchronously (the producer is a separate\n process), the SavedModel directory will appear before all files have been\n written, and `tf.saved_model.load` will fail if pointed at an incomplete\n SavedModel. Rather than checking for the directory, check for\n \"saved_model_dir/saved_model.pb\". This file is written atomically as the last\n `tf.saved_model.save` file operation.\n\n Args:\n export_dir: The SavedModel directory to load from.\n tags: A tag or sequence of tags identifying the MetaGraph to load. Optional\n if the SavedModel contains a single MetaGraph, as for those exported from\n `tf.saved_model.save`.\n options: `tf.saved_model.LoadOptions` object that specifies options for\n loading.\n\n Returns:\n A trackable object with a `signatures` attribute mapping from signature\n keys to functions. If the SavedModel was exported by `tf.saved_model.save`,\n it also points to trackable objects, functions, debug info which it has been\n saved.\n\n Raises:\n ValueError: If `tags` don't match a MetaGraph in the SavedModel.\n ", "desc": "Load a SavedModel from `export_dir`.", "type": "API"}, {"name": "tf.compat.v1.saved_model.loader", "docs": "Loader functionality for SavedModel with hermetic, language-neutral exports.\n\nLoad and restore capability for a SavedModel, which may include multiple meta\ngraph defs. Each SavedModel is associated with a single checkpoint. Each meta\ngraph def is saved with one or more tags, which are used to identify the exact\nmeta graph def to load.\n\nThe `load` operation requires the session in which to restore the graph\ndefinition and variables, the tags used to identify the meta graph def to\nload and the location of the SavedModel.\n\nUpon a load, the subset of variables and assets supplied as part of the specific\nmeta graph def, will be restored into the supplied session. The values of the\nvariables though will correspond to the saved values from the first meta graph\nadded to the SavedModel using `add_meta_graph_and_variables(...)` in\n`builder.py`.\n\nTypical usage:\n\n```python\n...\nbuilder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)\n\nwith tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph_and_variables(sess,\n [\"foo-tag\"],\n signature_def_map=foo_signatures,\n assets_collection=foo_assets)\n...\n\nwith tf.compat.v1.Session(graph=tf.Graph()) as sess:\n ...\n builder.add_meta_graph([\"bar-tag\", \"baz-tag\"],\n assets_collection=bar_baz_assets)\n...\n\nbuilder.save()\n\n...\nwith tf.compat.v1.Session(graph=tf.Graph()) as sess:\n tf.compat.v1.saved_model.loader.load(sess, [\"foo-tag\"], export_dir)\n ...\n\n```\n\n", "desc": "Loader functionality for SavedModel with hermetic, language-neutral exports.", "type": "API"}, {"name": "tf.compat.v1.saved_model.loader.load", "docs": "Loads the model from a SavedModel as specified by tags. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.\n\nArgs:\n sess: The TensorFlow session to restore the variables.\n tags: Set of string tags to identify the required MetaGraphDef. These should\n correspond to the tags used when saving the variables using the\n SavedModel `save()` API.\n export_dir: Directory in which the SavedModel protocol buffer and variables\n to be loaded are located.\n import_scope: Optional `string` -- if specified, prepend this string\n followed by '/' to all loaded tensor names. This scope is applied to\n tensor instances loaded into the passed session, but it is *not* written\n through to the static `MetaGraphDef` protocol buffer that is returned.\n **saver_kwargs: Optional keyword arguments passed through to Saver.\n\nReturns:\n The `MetaGraphDef` protocol buffer loaded in the provided session. This\n can be used to further extract signature-defs, collection-defs, etc.\n\nRaises:\n RuntimeError: MetaGraphDef associated with the tags cannot be found.\n\n@compatibility(TF2)\n\n`tf.compat.v1.saved_model.load` or `tf.compat.v1.saved_model.loader.load` is\nnot compatible with eager execution. Please use `tf.saved_model.load` instead\nto load your model. You can refer to the [SavedModel guide]\n(https://www.tensorflow.org/guide/saved_model) for more information as well as\n\"Importing SavedModels from TensorFlow 1.x\" in the [`tf.saved_model.load`]\n(https://www.tensorflow.org/api_docs/python/tf/saved_model/load) docstring.\n\n#### How to Map Arguments\n\n| TF1 Arg Name | TF2 Arg Name | Note |\n| :-------------------- | :-------------- | :------------------------- |\n| `sess` | Not supported | - |\n| `tags` | `tags` | - |\n| `export_dir` | `export_dir` | - |\n| `import_scope` | Not supported | Name scopes are not needed.\n: : : By default, variables are :\n: : : associated with the loaded :\n: : : object and function names :\n: : : are deduped. :\n| `saver_kwargs` | Not supported | - |\n\n#### Before & After Usage Example\n\nBefore:\n\n```\nwith tf.compat.v1.Session(graph=tf.Graph()) as sess:\n tf.compat.v1.saved_model.loader.load(sess, [\"foo-tag\"], export_dir)\n```\n\nAfter:\n\n```\nmodel = tf.saved_model.load(export_dir, tags=[\"foo-tag\"])\n```\n@end_compatibility", "desc": "Loads the model from a SavedModel as specified by tags. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.loader.maybe_saved_model_directory", "docs": "Checks whether the provided export directory could contain a SavedModel.\n\n Note that the method does not load any data by itself. If the method returns\n `false`, the export directory definitely does not contain a SavedModel. If the\n method returns `true`, the export directory may contain a SavedModel but\n provides no guarantee that it can be loaded.\n\n Args:\n export_dir: Absolute string path to possible export location. For example,\n '/my/foo/model'.\n\n Returns:\n True if the export directory contains SavedModel files, False otherwise.\n ", "desc": "Checks whether the provided export directory could contain a SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.main_op", "docs": "SavedModel main op.\n\nBuilds a main op that defines the sequence of ops to be run as part of the\nSavedModel load/restore operations.\n\n", "desc": "SavedModel main op.", "type": "API"}, {"name": "tf.compat.v1.saved_model.main_op.main_op", "docs": "Returns a main op to init variables and tables. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op.main_op.\n\nReturns the main op including the group of ops that initializes all\nvariables, initializes local variables and initialize all tables.\n\nReturns:\n The set of ops to be run as part of the main op upon the load operation.", "desc": "Returns a main op to init variables and tables. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.main_op.main_op_with_restore", "docs": "Returns a main op to init variables, tables and restore the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op_with_restore or tf.compat.v1.saved_model.main_op.main_op_with_restore.\n\nReturns the main op including the group of ops that initializes all\nvariables, initialize local variables, initialize all tables and the restore\nop name.\n\nArgs:\n restore_op_name: Name of the op to use to restore the graph.\n\nReturns:\n The set of ops to be run as part of the main op upon the load operation.", "desc": "Returns a main op to init variables, tables and restore the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.main_op_with_restore", "docs": "Returns a main op to init variables, tables and restore the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.main_op_with_restore or tf.compat.v1.saved_model.main_op.main_op_with_restore.\n\nReturns the main op including the group of ops that initializes all\nvariables, initialize local variables, initialize all tables and the restore\nop name.\n\nArgs:\n restore_op_name: Name of the op to use to restore the graph.\n\nReturns:\n The set of ops to be run as part of the main op upon the load operation.", "desc": "Returns a main op to init variables, tables and restore the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.maybe_saved_model_directory", "docs": "Checks whether the provided export directory could contain a SavedModel.\n\n Note that the method does not load any data by itself. If the method returns\n `false`, the export directory definitely does not contain a SavedModel. If the\n method returns `true`, the export directory may contain a SavedModel but\n provides no guarantee that it can be loaded.\n\n Args:\n export_dir: Absolute string path to possible export location. For example,\n '/my/foo/model'.\n\n Returns:\n True if the export directory contains SavedModel files, False otherwise.\n ", "desc": "Checks whether the provided export directory could contain a SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.predict_signature_def", "docs": "Creates prediction signature from given inputs and outputs.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Predict API (tensorflow_serving/apis/prediction_service.proto). This API\n imposes no constraints on the input and output types.\n\n Args:\n inputs: dict of string to `Tensor`.\n outputs: dict of string to `Tensor`.\n\n Returns:\n A prediction-flavored signature_def.\n\n Raises:\n ValueError: If inputs or outputs is `None`.\n ", "desc": "Creates prediction signature from given inputs and outputs.", "type": "API"}, {"name": "tf.compat.v1.saved_model.regression_signature_def", "docs": "Creates regression signature from given examples and predictions.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Regress API (tensorflow_serving/apis/prediction_service.proto), and so\n constrains the input and output types to those allowed by TensorFlow Serving.\n\n Args:\n examples: A string `Tensor`, expected to accept serialized tf.Examples.\n predictions: A float `Tensor`.\n\n Returns:\n A regression-flavored signature_def.\n\n Raises:\n ValueError: If examples is `None`.\n ", "desc": "Creates regression signature from given examples and predictions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.save", "docs": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).\n\n The `obj` must inherit from the [`Trackable` class](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/tracking/base.py#L591).\n\n Example usage:\n\n >>> class Adder(tf.Module):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n The resulting SavedModel is then servable with an input named \"x\", a scalar\n with dtype float32.\n\n _Signatures_\n\n Signatures define the input and output types for a computation. The optional\n save `signatures` argument controls which methods in `obj` will be\n available to programs which consume `SavedModel`s, for example, serving\n APIs. Python functions may be decorated with\n `@tf.function(input_signature=...)` and passed as signatures directly, or\n lazily with a call to `get_concrete_function` on the method decorated with\n `@tf.function`.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',signatures=model.add.get_concrete_function(\n ... tf.TensorSpec([], tf.float32)))\n\n If a `@tf.function` does not have an input signature and\n `get_concrete_function` is not called on that method, the function will not\n be directly callable in the restored SavedModel.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n >>> restored = tf.saved_model.load('/tmp/adder')\n >>> restored.add(1.)\n Traceback (most recent call last):\n ...\n ValueError: Found zero restored functions for caller function.\n\n If the `signatures` argument is omitted, `obj` will be searched for\n `@tf.function`-decorated methods. If exactly one traced `@tf.function` is\n found, that method will be used as the default signature for the SavedModel.\n Else, any `@tf.function` attached to `obj` or its dependencies will be\n exported for use with `tf.saved_model.load`.\n\n When invoking a signature in an exported SavedModel, `Tensor` arguments are\n identified by name. These names will come from the Python function's argument\n names by default. They may be overridden by specifying a `name=...` argument\n in the corresponding `tf.TensorSpec` object. Explicit naming is required if\n multiple `Tensor`s are passed through a single argument to the Python\n function.\n\n The outputs of functions used as `signatures` must either be flat lists, in\n which case outputs will be numbered, or a dictionary mapping string keys to\n `Tensor`, in which case the keys will be used to name outputs.\n\n Signatures are available in objects returned by `tf.saved_model.load` as a\n `.signatures` attribute. This is a reserved attribute: `tf.saved_model.save`\n on an object with a custom `.signatures` attribute will raise an exception.\n\n _Using `tf.saved_model.save` with Keras models_\n\n While Keras has its own [saving and loading API](https://www.tensorflow.org/guide/keras/save_and_serialize),\n this function can be used to export Keras models. For example, exporting with\n a signature specified:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n Exporting from a function without a fixed signature:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',\n ... signatures=model.concat.get_concrete_function(\n ... tf.TensorSpec(shape=[], dtype=tf.string, name=\"string_input\")))\n\n `tf.keras.Model` instances constructed from inputs and outputs already have a\n signature and so do not require a `@tf.function` decorator or a `signatures`\n argument. If neither are specified, the model's forward pass is exported.\n\n >>> x = tf.keras.layers.Input((4,), name=\"x\")\n >>> y = tf.keras.layers.Dense(5, name=\"out\")(x)\n >>> model = tf.keras.Model(x, y)\n >>> tf.saved_model.save(model, '/tmp/saved_model/')\n\n The exported SavedModel takes \"x\" with shape [None, 4] and returns \"out\"\n with shape [None, 5]\n\n _Variables and Checkpoints_\n\n Variables must be tracked by assigning them to an attribute of a tracked\n object or to an attribute of `obj` directly. TensorFlow objects (e.g. layers\n from `tf.keras.layers`, optimizers from `tf.train`) track their variables\n automatically. This is the same tracking scheme that `tf.train.Checkpoint`\n uses, and an exported `Checkpoint` object may be restored as a training\n checkpoint by pointing `tf.train.Checkpoint.restore` to the SavedModel's\n \"variables/\" subdirectory.\n\n `tf.function` does not hard-code device annotations from outside the function\n body, instead of using the calling context's device. This means for example\n that exporting a model that runs on a GPU and serving it on a CPU will\n generally work, with some exceptions:\n\n * `tf.device` annotations inside the body of the function will be hard-coded\n in the exported model; this type of annotation is discouraged.\n * Device-specific operations, e.g. with \"cuDNN\" in the name or with\n device-specific layouts, may cause issues.\n * For `ConcreteFunctions`, active distribution strategies will cause device\n placements to be hard-coded in the function.\n\n SavedModels exported with `tf.saved_model.save` [strip default-valued\n attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes)\n automatically, which removes one source of incompatibilities when the consumer\n of a SavedModel is running an older TensorFlow version than the\n producer. There are however other sources of incompatibilities which are not\n handled automatically, such as when the exported model contains operations\n which the consumer does not have definitions for.\n\n Args:\n obj: A trackable object (e.g. tf.Module or tf.train.Checkpoint) to export.\n export_dir: A directory in which to write the SavedModel.\n signatures: Optional, one of three types:\n * a `tf.function` with an input signature specified, which will use the\n default serving signature key,\n * the result of `f.get_concrete_function` on a `@tf.function`-decorated\n function `f`, in which case `f` will be used to generate a signature for\n the SavedModel under the default serving signature key,\n * a dictionary, which maps signature keys to either `tf.function`\n instances with input signatures or concrete functions. Keys of such a\n dictionary may be arbitrary strings, but will typically be from the\n `tf.saved_model.signature_constants` module.\n options: `tf.saved_model.SaveOptions` object for configuring save options.\n\n Raises:\n ValueError: If `obj` is not trackable.\n\n @compatibility(eager)\n Not well supported when graph building. From TensorFlow 1.x,\n `tf.compat.v1.enable_eager_execution()` should run first. Calling\n tf.saved_model.save in a loop when graph building from TensorFlow 1.x will\n add new save operations to the default graph each iteration.\n\n May not be called from within a function body.\n @end_compatibility\n ", "desc": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).", "type": "API"}, {"name": "tf.compat.v1.saved_model.SaveOptions", "docs": "Options for saving to SavedModel.\n\n This function may be used in the `options` argument in functions that\n save a SavedModel (`tf.saved_model.save`, `tf.keras.models.save_model`).\n ", "desc": "Options for saving to SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_constants", "docs": "Signature constants for SavedModel save and restore operations.\n\n\n", "desc": "Signature constants for SavedModel save and restore operations.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils", "docs": "SignatureDef utility functions.\n\nUtility functions for building and inspecting SignatureDef protos.\n\n", "desc": "SignatureDef utility functions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.build_signature_def", "docs": "Utility function to build a SignatureDef protocol buffer.\n\n Args:\n inputs: Inputs of the SignatureDef defined as a proto map of string to\n tensor info.\n outputs: Outputs of the SignatureDef defined as a proto map of string to\n tensor info.\n method_name: Method name of the SignatureDef as a string.\n\n Returns:\n A SignatureDef protocol buffer constructed based on the supplied arguments.\n ", "desc": "Utility function to build a SignatureDef protocol buffer.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.classification_signature_def", "docs": "Creates classification signature from given examples and predictions.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Classify API (tensorflow_serving/apis/prediction_service.proto), and so\n constrains the input and output types to those allowed by TensorFlow Serving.\n\n Args:\n examples: A string `Tensor`, expected to accept serialized tf.Examples.\n classes: A string `Tensor`. Note that the ClassificationResponse message\n requires that class labels are strings, not integers or anything else.\n scores: a float `Tensor`.\n\n Returns:\n A classification-flavored signature_def.\n\n Raises:\n ValueError: If examples is `None`.\n ", "desc": "Creates classification signature from given examples and predictions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.is_valid_signature", "docs": "Determine whether a SignatureDef can be served by TensorFlow Serving.", "desc": "Determine whether a SignatureDef can be served by TensorFlow Serving.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater", "docs": "Updates the method name(s) of the SavedModel stored in the given path.\n\n The `MethodNameUpdater` class provides the functionality to update the method\n name field in the signature_defs of the given SavedModel. For example, it\n can be used to replace the `predict` `method_name` to `regress`.\n\n Typical usages of the `MethodNameUpdater`\n ```python\n ...\n updater = tf.compat.v1.saved_model.signature_def_utils.MethodNameUpdater(\n export_dir)\n # Update all signature_defs with key \"foo\" in all meta graph defs.\n updater.replace_method_name(signature_key=\"foo\", method_name=\"regress\")\n # Update a single signature_def with key \"bar\" in the meta graph def with\n # tags [\"serve\"]\n updater.replace_method_name(signature_key=\"bar\", method_name=\"classify\",\n tags=\"serve\")\n updater.save(new_export_dir)\n ```\n\n Note: This function will only be available through the v1 compatibility\n library as tf.compat.v1.saved_model.builder.MethodNameUpdater.\n ", "desc": "Updates the method name(s) of the SavedModel stored in the given path.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.predict_signature_def", "docs": "Creates prediction signature from given inputs and outputs.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Predict API (tensorflow_serving/apis/prediction_service.proto). This API\n imposes no constraints on the input and output types.\n\n Args:\n inputs: dict of string to `Tensor`.\n outputs: dict of string to `Tensor`.\n\n Returns:\n A prediction-flavored signature_def.\n\n Raises:\n ValueError: If inputs or outputs is `None`.\n ", "desc": "Creates prediction signature from given inputs and outputs.", "type": "API"}, {"name": "tf.compat.v1.saved_model.signature_def_utils.regression_signature_def", "docs": "Creates regression signature from given examples and predictions.\n\n This function produces signatures intended for use with the TensorFlow Serving\n Regress API (tensorflow_serving/apis/prediction_service.proto), and so\n constrains the input and output types to those allowed by TensorFlow Serving.\n\n Args:\n examples: A string `Tensor`, expected to accept serialized tf.Examples.\n predictions: A float `Tensor`.\n\n Returns:\n A regression-flavored signature_def.\n\n Raises:\n ValueError: If examples is `None`.\n ", "desc": "Creates regression signature from given examples and predictions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.simple_save", "docs": "Convenience function to build a SavedModel suitable for serving. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.simple_save.\n\nIn many common cases, saving models for serving will be as simple as:\n\n simple_save(session,\n export_dir,\n inputs={\"x\": x, \"y\": y},\n outputs={\"z\": z})\n\nAlthough in many cases it's not necessary to understand all of the many ways\n to configure a SavedModel, this method has a few practical implications:\n - It will be treated as a graph for inference / serving (i.e. uses the tag\n `saved_model.SERVING`)\n - The SavedModel will load in TensorFlow Serving and supports the\n [Predict\n API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto).\n To use the Classify, Regress, or MultiInference APIs, please\n use either\n [tf.Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator)\n or the lower level\n [SavedModel\n APIs](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md).\n - Some TensorFlow ops depend on information on disk or other information\n called \"assets\". These are generally handled automatically by adding the\n assets to the `GraphKeys.ASSET_FILEPATHS` collection. Only assets in that\n collection are exported; if you need more custom behavior, you'll need to\n use the\n [SavedModelBuilder](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/builder.py).\n\nMore information about SavedModel and signatures can be found here:\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md.\n\nArgs:\n session: The TensorFlow session from which to save the meta graph and\n variables.\n export_dir: The path to which the SavedModel will be stored.\n inputs: dict mapping string input names to tensors. These are added\n to the SignatureDef as the inputs.\n outputs: dict mapping string output names to tensors. These are added\n to the SignatureDef as the outputs.\n legacy_init_op: Legacy support for op or group of ops to execute after the\n restore op upon a load.", "desc": "Convenience function to build a SavedModel suitable for serving. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.tag_constants", "docs": "Common tags used for graphs in SavedModel.\n\n\n", "desc": "Common tags used for graphs in SavedModel.", "type": "API"}, {"name": "tf.compat.v1.saved_model.utils", "docs": "SavedModel utility functions.\n\nUtility functions to assist with setup and construction of the SavedModel proto.\n\n", "desc": "SavedModel utility functions.", "type": "API"}, {"name": "tf.compat.v1.saved_model.utils.build_tensor_info", "docs": "Utility function to build TensorInfo proto from a Tensor. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.\n\nArgs:\n tensor: Tensor or SparseTensor whose name, dtype and shape are used to\n build the TensorInfo. For SparseTensors, the names of the three\n constituent Tensors are used.\n\nReturns:\n A TensorInfo protocol buffer constructed based on the supplied argument.\n\nRaises:\n RuntimeError: If eager execution is enabled.\n\n@compatibility(TF2)\nThis API is not compatible with eager execution as `tensor` needs to be a\ngraph tensor, and there is no replacement for it in TensorFlow 2.x. To start\nwriting programs using TensorFlow 2.x, please refer to the [Effective\nTensorFlow 2](https://www.tensorflow.org/guide/effective_tf2) guide.\n@end_compatibility", "desc": "Utility function to build TensorInfo proto from a Tensor. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info", "docs": "Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info.\n\nArgs:\n tensor_info: A TensorInfo proto describing a Tensor or SparseTensor or\n CompositeTensor.\n graph: The tf.Graph in which tensors are looked up. If None, the\n current default graph is used.\n import_scope: If not None, names in `tensor_info` are prefixed with this\n string before lookup.\n\nReturns:\n The Tensor or SparseTensor or CompositeTensor in `graph` described by\n `tensor_info`.\n\nRaises:\n KeyError: If `tensor_info` does not correspond to a tensor in `graph`.\n ValueError: If `tensor_info` is malformed.", "desc": "Returns the Tensor or CompositeTensor described by a TensorInfo proto. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.scalar_mul", "docs": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.\n\n This is a special case of `tf.math.multiply`, where the first value must be a\n `scalar`. Unlike the general form of `tf.math.multiply`, this is operation is\n guaranteed to be efficient for `tf.IndexedSlices`.\n\n >>> x = tf.reshape(tf.range(30, dtype=tf.float32), [10, 3])\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = tf.gather(x, [1, 2]) # IndexedSlices\n ... z = tf.math.scalar_mul(10.0, y)\n\n Args:\n scalar: A 0-D scalar `Tensor`. Must have known shape.\n x: A `Tensor` or `IndexedSlices` to be scaled.\n name: A name for the operation (optional).\n\n Returns:\n `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.\n\n Raises:\n ValueError: if scalar is not a 0-D `scalar`.\n ", "desc": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.", "type": "API"}, {"name": "tf.compat.v1.scan", "docs": "scan on the list of tensors unpacked from `elems` on dimension 0.\n\n See also `tf.map_fn`.\n\n The simplest version of `scan` repeatedly applies the callable `fn` to a\n sequence of elements from first to last. The elements are made of the tensors\n unpacked from `elems` on dimension 0. The callable fn takes two tensors as\n arguments. The first argument is the accumulated value computed from the\n preceding invocation of fn, and the second is the value at the current\n position of `elems`. If `initializer` is None, `elems` must contain at least\n one element, and its first element is used as the initializer.\n\n Suppose that `elems` is unpacked into `values`, a list of tensors. The shape\n of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.\n If reverse=True, it's fn(initializer, values[-1]).shape.\n\n This method also allows multi-arity `elems` and accumulator. If `elems`\n is a (possibly nested) list or tuple of tensors, then each of these tensors\n must have a matching first (unpack) dimension. The second argument of\n `fn` must match the structure of `elems`.\n\n If no `initializer` is provided, the output structure and dtypes of `fn`\n are assumed to be the same as its input; and in this case, the first\n argument of `fn` must match the structure of `elems`.\n\n If an `initializer` is provided, then the output of `fn` must have the same\n structure as `initializer`; and the first argument of `fn` must match\n this structure.\n\n For example, if `elems` is `(t1, [t2, t3])` and `initializer` is\n `[i1, i2]` then an appropriate signature for `fn` in `python2` is:\n `fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list,\n `[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the\n one that works in `python3`, is:\n `fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.\n\n Args:\n fn: The callable to be performed. It accepts two arguments. The first will\n have the same structure as `initializer` if one is provided, otherwise it\n will have the same structure as `elems`. The second will have the same\n (possibly nested) structure as `elems`. Its output must have the same\n structure as `initializer` if one is provided, otherwise it must have the\n same structure as `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n initial value for the accumulator, and the expected output type of `fn`.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) True enables support for back propagation.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n infer_shape: (optional) False disables tests for consistent output shapes.\n reverse: (optional) True scans the tensor last to first (instead of first to\n last).\n name: (optional) Name prefix for the returned tensors.\n\n Returns:\n A tensor or (possibly nested) sequence of tensors. Each tensor packs the\n results of applying `fn` to tensors unpacked from `elems` along the first\n dimension, and the previous accumulator value(s), from first to last (or\n last to first, if `reverse=True`).\n\n Raises:\n TypeError: if `fn` is not callable or the structure of the output of\n `fn` and `initializer` do not match.\n ValueError: if the lengths of the output of `fn` and `initializer`\n do not match.\n\n Examples:\n ```python\n elems = np.array([1, 2, 3, 4, 5, 6])\n sum = scan(lambda a, x: a + x, elems)\n # sum == [1, 3, 6, 10, 15, 21]\n sum = scan(lambda a, x: a + x, elems, reverse=True)\n # sum == [21, 20, 18, 15, 11, 6]\n ```\n\n ```python\n elems = np.array([1, 2, 3, 4, 5, 6])\n initializer = np.array(0)\n sum_one = scan(\n lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)\n # sum_one == [1, 2, 3, 4, 5, 6]\n ```\n\n ```python\n elems = np.array([1, 0, 0, 0, 0, 0])\n initializer = (np.array(0), np.array(1))\n fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)\n # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])\n ```\n ", "desc": "scan on the list of tensors unpacked from `elems` on dimension 0.", "type": "API"}, {"name": "tf.compat.v1.scatter_add", "docs": "Adds sparse updates to the variable referenced by `resource`.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] += updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] += updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the updated value.\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]`.\n\n
\n \n
\n\n Args:\n ref: A `Variable`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to store in `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the assignment will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n Same as `ref`. Returned as a convenience for operations that want\n to use the updated values after the update is done.\n ", "desc": "Adds sparse updates to the variable referenced by `resource`.", "type": "API"}, {"name": "tf.compat.v1.scatter_div", "docs": "Divides a variable reference by sparse updates.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] /= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] /= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions divide.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape =\n []`.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`,\n `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`,\n `uint32`, `uint64`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A\n tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`. A tensor of values\n that `ref` is divided by.\n use_locking: An optional `bool`. Defaults to `False`. If True, the operation\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Divides a variable reference by sparse updates.", "type": "API"}, {"name": "tf.compat.v1.scatter_max", "docs": "Reduces sparse updates into a variable reference using the `max` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = max(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...],\n updates[i, ..., j, ...])\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions combine.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape =\n []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `half`,\n `bfloat16`, `float32`, `float64`, `int32`, `int64`. Should be from a\n `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A\n tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated\n values to reduce into `ref`.\n use_locking: An optional `bool`. Defaults to `False`. If True, the update\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Reduces sparse updates into a variable reference using the `max` operation.", "type": "API"}, {"name": "tf.compat.v1.scatter_min", "docs": "Reduces sparse updates into a variable reference using the `min` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = min(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...],\n updates[i, ..., j, ...])\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions combine.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape =\n []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `half`,\n `bfloat16`, `float32`, `float64`, `int32`, `int64`. Should be from a\n `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A\n tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated\n values to reduce into `ref`.\n use_locking: An optional `bool`. Defaults to `False`. If True, the update\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Reduces sparse updates into a variable reference using the `min` operation.", "type": "API"}, {"name": "tf.compat.v1.scatter_mul", "docs": "Multiplies sparse updates into a variable reference.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] *= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] *= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions multiply.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape =\n []`.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`,\n `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`,\n `uint32`, `uint64`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`. A\n tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`. A tensor of updated\n values to multiply to `ref`.\n use_locking: An optional `bool`. Defaults to `False`. If True, the operation\n will be protected by a lock; otherwise the behavior is undefined, but may\n exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Multiplies sparse updates into a variable reference.", "type": "API"}, {"name": "tf.compat.v1.scatter_nd", "docs": "Scatters `updates` into a tensor of shape `shape` according to `indices`.\n\n Update the input tensor by scattering sparse `updates` according to individual values at the specified `indices`.\n This op returns an `output` tensor with the `shape` you specify. This op is the\n inverse of the `tf.gather_nd` operator which extracts values or slices from a\n given tensor.\n\n This operation is similar to `tf.tensor_scatter_nd_add`, except that the tensor\n is zero-initialized. Calling `tf.scatter_nd(indices, values, shape)`\n is identical to calling\n `tf.tensor_scatter_nd_add(tf.zeros(shape, values.dtype), indices, values)`\n\n If `indices` contains duplicates, the duplicate `values` are accumulated\n (summed).\n\n **WARNING**: The order in which updates are applied is nondeterministic, so the\n output will be nondeterministic if `indices` contains duplicates;\n numbers summed in different order may yield different results because of some\n numerical approximation issues.\n\n `indices` is an integer tensor of shape `shape`. The last dimension\n of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices of elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`.\n\n `updates` is a tensor with shape:\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of the scatter op is to insert individual elements in\n a tensor by index. Consider an example where you want to insert 4 scattered\n elements in a rank-1 tensor with 8 elements.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n shape = tf.constant([8])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [0, 11, 0, 10, 9, 0, 0, 12]\n\n You can also insert entire slices of a higher rank tensor all at once. For\n example, you can insert two slices in the first dimension of a rank-3 tensor\n with two matrices of new values.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n shape = tf.constant([4, 4, 4])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],\n [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Tensor of indices.\n updates: A `Tensor`. Values to scatter into the output tensor.\n shape: A `Tensor`. Must have the same type as `indices`.\n 1-D. The shape of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `updates`.\n ", "desc": "Scatters `updates` into a tensor of shape `shape` according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.scatter_nd_add", "docs": "Applies sparse addition to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to add 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that addition would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n add = tf.compat.v1.scatter_nd_add(ref, indices, updates)\n with tf.compat.v1.Session() as sess:\n print sess.run(add)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 13, 3, 14, 14, 6, 7, 20]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`,\n `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`,\n `uint32`, `uint64`. A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to add to ref.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the assignment will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse addition to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.compat.v1.scatter_nd_sub", "docs": "Applies sparse subtraction to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to subtract 4 scattered elements from a rank-1 tensor\n with 8 elements. In Python, that update would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1] ,[7]])\n updates = tf.constant([9, 10, 11, 12])\n op = tf.compat.v1.scatter_nd_sub(ref, indices, updates)\n with tf.compat.v1.Session() as sess:\n print sess.run(op)\n ```\n\n The resulting update to ref would look like this:\n\n [1, -9, 3, -6, -6, 6, 7, -4]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`,\n `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`,\n `uint32`, `uint64`. A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to add to ref.\n use_locking: An optional `bool`. Defaults to `False`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse subtraction to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.compat.v1.scatter_nd_update", "docs": "Applies sparse `updates` to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].\n ```\n\n For example, say we want to update 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that update would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1] ,[7]])\n updates = tf.constant([9, 10, 11, 12])\n update = tf.compat.v1.scatter_nd_update(ref, indices, updates)\n with tf.compat.v1.Session() as sess:\n print sess.run(update)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 11, 3, 10, 9, 6, 7, 12]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A Variable.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated\n values to add to ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The value of the variable after the update.\n ", "desc": "Applies sparse `updates` to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.compat.v1.scatter_sub", "docs": "Subtracts sparse updates to a variable reference.\n\n ```python\n # Scalar indices\n ref[indices, ...] -= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] -= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their (negated) contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or\n `updates.shape = []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`,\n `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`,\n `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`,\n `uint32`, `uint64`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to subtract from `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Subtracts sparse updates to a variable reference.", "type": "API"}, {"name": "tf.compat.v1.scatter_update", "docs": "Applies sparse updates to a variable reference.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] = updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] = updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n If values in `ref` is to be updated more than once, because there are\n duplicate entries in `indices`, the order at which the updates happen\n for each value is undefined.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]`.\n\n
\n \n
\n\n Args:\n ref: A `Variable`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to store in `ref`.\n use_locking: An optional `bool`. Defaults to `True`.\n If True, the assignment will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n Same as `ref`. Returned as a convenience for operations that want\n to use the updated values after the update is done.\n ", "desc": "Applies sparse updates to a variable reference.", "type": "API"}, {"name": "tf.compat.v1.searchsorted", "docs": "Searches for where a value would go in a sorted sequence.\n\n This is not a method for checking containment (like python `in`).\n\n The typical use case for this operation is \"binning\", \"bucketing\", or\n \"discretizing\". The `values` are assigned to bucket-indices based on the\n **edges** listed in `sorted_sequence`. This operation\n returns the bucket-index for each value.\n\n >>> edges = [-1, 3.3, 9.1, 10.0]\n >>> values = [0.0, 4.1, 12.0]\n >>> tf.searchsorted(edges, values).numpy()\n array([1, 2, 4], dtype=int32)\n\n The `side` argument controls which index is returned if a value lands exactly\n on an edge:\n\n >>> seq = [0, 3, 9, 10, 10]\n >>> values = [0, 4, 10]\n >>> tf.searchsorted(seq, values).numpy()\n array([0, 2, 3], dtype=int32)\n >>> tf.searchsorted(seq, values, side=\"right\").numpy()\n array([1, 2, 5], dtype=int32)\n\n The `axis` is not settable for this operation. It always operates on the\n innermost dimension (`axis=-1`). The operation will accept any number of\n outer dimensions. Here it is applied to the rows of a matrix:\n\n >>> sorted_sequence = [[0., 3., 8., 9., 10.],\n ... [1., 2., 3., 4., 5.]]\n >>> values = [[9.8, 2.1, 4.3],\n ... [0.1, 6.6, 4.5, ]]\n >>> tf.searchsorted(sorted_sequence, values).numpy()\n array([[4, 1, 2],\n [0, 5, 4]], dtype=int32)\n\n Note: This operation assumes that `sorted_sequence` **is sorted** along the\n innermost axis, maybe using `tf.sort(..., axis=-1)`. **If the sequence is not\n sorted no error is raised** and the content of the returned tensor is not well\n defined.\n\n Args:\n sorted_sequence: N-D `Tensor` containing a sorted sequence.\n values: N-D `Tensor` containing the search values.\n side: 'left' or 'right'; 'left' corresponds to lower_bound and 'right' to\n upper_bound.\n out_type: The output type (`int32` or `int64`). Default is `tf.int32`.\n name: Optional name for the operation.\n\n Returns:\n An N-D `Tensor` the size of `values` containing the result of applying\n either lower_bound or upper_bound (depending on side) to each value. The\n result is not a global index to the entire `Tensor`, but the index in the\n last dimension.\n\n Raises:\n ValueError: If the last dimension of `sorted_sequence >= 2^31-1` elements.\n If the total size of `values` exceeds `2^31 - 1` elements.\n If the first `N-1` dimensions of the two tensors don't match.\n ", "desc": "Searches for where a value would go in a sorted sequence.", "type": "API"}, {"name": "tf.compat.v1.segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\max_j(data_j)\\\\) where `max` is over `j` such\n that `segment_ids[j] == i`.\n\n If the max is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\frac{\\sum_j data_j}{N}\\\\) where `mean` is\n over `j` such that `segment_ids[j] == i` and `N` is the total number of\n values summed.\n\n If the mean is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as a smaller following index when computing the numerator\n of the mean.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy()\n array([[2.5, 2.5, 2.5, 2.5],\n [5., 6., 7., 8.]], dtype=float32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\min_j(data_j)\\\\) where `min` is over `j` such\n that `segment_ids[j] == i`.\n\n If the min is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\prod_j data_j\\\\) where the product is over `j` such\n that `segment_ids[j] == i`.\n\n If the product is empty for a given segment ID `i`, `output[i] = 1`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\sum_j data_j\\\\) where sum is over `j` such\n that `segment_ids[j] == i`.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_sum(c, tf.constant([0, 0, 1])).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.self_adjoint_eig", "docs": "Computes the eigen decomposition of a batch of self-adjoint matrices.\n\n Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices\n in `tensor` such that\n `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of\n each inner inner matrix is referenced.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order.\n v: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most\n matrices contain eigenvectors of the corresponding matrices in `tensor`\n ", "desc": "Computes the eigen decomposition of a batch of self-adjoint matrices.", "type": "API"}, {"name": "tf.compat.v1.self_adjoint_eigvals", "docs": "Computes the eigenvalues of one or more self-adjoint matrices.\n\n Note: If your program backpropagates through this function, you should replace\n it with a call to tf.linalg.eigh (possibly ignoring the second output) to\n avoid computing the eigen decomposition twice. This is because the\n eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See\n _SelfAdjointEigV2Grad in linalg_grad.py.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`\n eigenvalues of `tensor[..., :, :]`.\n ", "desc": "Computes the eigenvalues of one or more self-adjoint matrices.", "type": "API"}, {"name": "tf.compat.v1.sequence_mask", "docs": "Returns a mask tensor representing the first N positions of each cell.\n\n If `lengths` has shape `[d_1, d_2, ..., d_n]` the resulting tensor `mask` has\n dtype `dtype` and shape `[d_1, d_2, ..., d_n, maxlen]`, with\n\n ```\n mask[i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\n ```\n\n Examples:\n\n ```python\n tf.sequence_mask([1, 3, 2], 5) # [[True, False, False, False, False],\n # [True, True, True, False, False],\n # [True, True, False, False, False]]\n\n tf.sequence_mask([[1, 3],[2,0]]) # [[[True, False, False],\n # [True, True, True]],\n # [[True, True, False],\n # [False, False, False]]]\n ```\n\n Args:\n lengths: integer tensor, all its values <= maxlen.\n maxlen: scalar integer tensor, size of last dimension of returned tensor.\n Default is the maximum value in `lengths`.\n dtype: output type of the resulting tensor.\n name: name of the op.\n\n Returns:\n A mask tensor of shape `lengths.shape + (maxlen,)`, cast to specified dtype.\n Raises:\n ValueError: if `maxlen` is not a scalar.\n ", "desc": "Returns a mask tensor representing the first N positions of each cell.", "type": "API"}, {"name": "tf.compat.v1.serialize_many_sparse", "docs": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.\n\n The `SparseTensor` must have rank `R` greater than 1, and the first dimension\n is treated as the minibatch dimension. Elements of the `SparseTensor`\n must be sorted in increasing order of this first dimension. The serialized\n `SparseTensor` objects going into each row of the output `Tensor` will have\n rank `R-1`.\n\n The minibatch size `N` is extracted from `sparse_shape[0]`.\n\n Args:\n sp_input: The input rank `R` `SparseTensor`.\n name: A name prefix for the returned tensors (optional).\n out_type: The `dtype` to use for serialization.\n\n Returns:\n A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column\n represents serialized `SparseTensor`'s indices, values, and shape\n (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.serialize_sparse", "docs": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.\n\n Args:\n sp_input: The input `SparseTensor`.\n name: A name prefix for the returned tensors (optional).\n out_type: The `dtype` to use for serialization.\n\n Returns:\n A 3-vector (1-D `Tensor`), with each column representing the serialized\n `SparseTensor`'s indices, values, and shape (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.", "type": "API"}, {"name": "tf.compat.v1.serialize_tensor", "docs": "Transforms a Tensor into a serialized TensorProto proto.\n\n This operation transforms data in a `tf.Tensor` into a `tf.Tensor` of type\n `tf.string` containing the data in a binary string format. This operation can\n transform scalar data and linear arrays, but it is most useful in converting\n multidimensional arrays into a format accepted by binary storage formats such\n as a `TFRecord` or `tf.train.Example`.\n\n See also:\n - `tf.io.parse_tensor`: inverse operation of `tf.io.serialize_tensor` that\n transforms a scalar string containing a serialized Tensor into a Tensor of a\n specified type.\n - `tf.ensure_shape`: `parse_tensor` cannot statically determine the shape of\n the parsed tensor. Use `tf.ensure_shape` to set the static shape when running\n under a `tf.function`\n - `.SerializeToString`, serializes a proto to a binary-string\n\n Example of serializing scalar data:\n\n >>> t = tf.constant(1)\n >>> tf.io.serialize_tensor(t)\n \n\n Example of storing non-scalar data into a `tf.train.Example`:\n\n >>> t1 = [[1, 2]]\n >>> t2 = [[7, 8]]\n >>> nonscalar = tf.concat([t1, t2], 0)\n >>> nonscalar\n \n\n Serialize the data using `tf.io.serialize_tensor`.\n\n >>> serialized_nonscalar = tf.io.serialize_tensor(nonscalar)\n >>> serialized_nonscalar\n \n\n Store the data in a `tf.train.Feature`.\n\n >>> feature_of_bytes = tf.train.Feature(\n ... bytes_list=tf.train.BytesList(value=[serialized_nonscalar.numpy()]))\n >>> feature_of_bytes\n bytes_list {\n value: \"\\010...\\000\"\n }\n\n Put the `tf.train.Feature` message into a `tf.train.Example`.\n\n >>> features_for_example = {\n ... 'feature0': feature_of_bytes\n ... }\n >>> example_proto = tf.train.Example(\n ... features=tf.train.Features(feature=features_for_example))\n >>> example_proto\n features {\n feature {\n key: \"feature0\"\n value {\n bytes_list {\n value: \"\\010...\\000\"\n }\n }\n }\n }\n\n Args:\n tensor: A `tf.Tensor`.\n name: string. Optional name for the op.\n\n Returns:\n A Tensor of dtype string.\n ", "desc": "Transforms a Tensor into a serialized TensorProto proto.", "type": "API"}, {"name": "tf.compat.v1.Session", "docs": "A class for running TensorFlow operations.\n\n A `Session` object encapsulates the environment in which `Operation`\n objects are executed, and `Tensor` objects are evaluated. For\n example:\n\n ```python\n tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x\n # Build a graph.\n a = tf.constant(5.0)\n b = tf.constant(6.0)\n c = a * b\n\n # Launch the graph in a session.\n sess = tf.compat.v1.Session()\n\n # Evaluate the tensor `c`.\n print(sess.run(c)) # prints 30.0\n ```\n\n A session may own resources, such as\n `tf.Variable`, `tf.queue.QueueBase`,\n and `tf.compat.v1.ReaderBase`. It is important to release\n these resources when they are no longer required. To do this, either\n invoke the `tf.Session.close` method on the session, or use\n the session as a context manager. The following two examples are\n equivalent:\n\n ```python\n # Using the `close()` method.\n sess = tf.compat.v1.Session()\n sess.run(...)\n sess.close()\n\n # Using the context manager.\n with tf.compat.v1.Session() as sess:\n sess.run(...)\n ```\n\n The\n [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)\n protocol buffer exposes various configuration options for a\n session. For example, to create a session that uses soft constraints\n for device placement, and log the resulting placement decisions,\n create a session as follows:\n\n ```python\n # Launch the graph in a session that allows soft device placement and\n # logs the placement decisions.\n sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(\n allow_soft_placement=True,\n log_device_placement=True))\n ```\n\n @compatibility(TF2)\n `Session` does not work with either eager execution or `tf.function`, and you\n should not invoke it directly. To migrate code that uses sessions to TF2,\n rewrite the code without it. See the\n [migration\n guide](https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls)\n on replacing `Session.run` calls.\n @end_compatibility\n ", "desc": "A class for running TensorFlow operations.", "type": "API"}, {"name": "tf.compat.v1.SessionLog", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.set_random_seed", "docs": "Sets the graph-level random seed for the default graph.\n\n Operations that rely on a random seed actually derive it from two seeds:\n the graph-level and operation-level seeds. This sets the graph-level seed.\n\n Its interactions with operation-level seeds is as follows:\n\n 1. If neither the graph-level nor the operation seed is set:\n A random seed is used for this op.\n 2. If the graph-level seed is set, but the operation seed is not:\n The system deterministically picks an operation seed in conjunction with\n the graph-level seed so that it gets a unique random sequence. Within the\n same version of tensorflow and user code, this sequence is deterministic.\n However across different versions, this sequence might change. If the\n code depends on particular seeds to work, specify both graph-level\n and operation-level seeds explicitly.\n 3. If the graph-level seed is not set, but the operation seed is set:\n A default graph-level seed and the specified operation seed are used to\n determine the random sequence.\n 4. If both the graph-level and the operation seed are set:\n Both seeds are used in conjunction to determine the random sequence.\n\n To illustrate the user-visible effects, consider these examples:\n\n To generate different sequences across sessions, set neither\n graph-level nor op-level seeds:\n\n ```python\n a = tf.random.uniform([1])\n b = tf.random.normal([1])\n\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A3'\n print(sess2.run(a)) # generates 'A4'\n print(sess2.run(b)) # generates 'B3'\n print(sess2.run(b)) # generates 'B4'\n ```\n\n To generate the same repeatable sequence for an op across sessions, set the\n seed for the op:\n\n ```python\n a = tf.random.uniform([1], seed=1)\n b = tf.random.normal([1])\n\n # Repeatedly running this block with the same graph will generate the same\n # sequence of values for 'a', but different sequences of values for 'b'.\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A1'\n print(sess2.run(a)) # generates 'A2'\n print(sess2.run(b)) # generates 'B3'\n print(sess2.run(b)) # generates 'B4'\n ```\n\n To make the random sequences generated by all ops be repeatable across\n sessions, set a graph-level seed:\n\n ```python\n tf.compat.v1.random.set_random_seed(1234)\n a = tf.random.uniform([1])\n b = tf.random.normal([1])\n\n # Repeatedly running this block with the same graph will generate the same\n # sequences of 'a' and 'b'.\n print(\"Session 1\")\n with tf.compat.v1.Session() as sess1:\n print(sess1.run(a)) # generates 'A1'\n print(sess1.run(a)) # generates 'A2'\n print(sess1.run(b)) # generates 'B1'\n print(sess1.run(b)) # generates 'B2'\n\n print(\"Session 2\")\n with tf.compat.v1.Session() as sess2:\n print(sess2.run(a)) # generates 'A1'\n print(sess2.run(a)) # generates 'A2'\n print(sess2.run(b)) # generates 'B1'\n print(sess2.run(b)) # generates 'B2'\n ```\n\n @compatibility(TF2)\n 'tf.compat.v1.set_random_seed' is compatible with eager mode. However,\n in eager mode this API will set the global seed instead of the\n graph-level seed of the default graph. In TF2 this API is changed to\n [tf.random.set_seed]\n (https://www.tensorflow.org/api_docs/python/tf/random/set_seed).\n @end_compatibility\n\n Args:\n seed: integer.\n ", "desc": "Sets the graph-level random seed for the default graph.", "type": "API"}, {"name": "tf.compat.v1.setdiff1d", "docs": "Computes the difference between two lists of numbers or strings.\n\n Given a list `x` and a list `y`, this operation returns a list `out` that\n represents all values that are in `x` but not in `y`. The returned list `out`\n is sorted in the same order that the numbers appear in `x` (duplicates are\n preserved). This operation also returns a list `idx` that represents the\n position of each `out` element in `x`. In other words:\n\n `out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`\n\n For example, given this input:\n\n ```\n x = [1, 2, 3, 4, 5, 6]\n y = [1, 3, 5]\n ```\n\n This operation would return:\n\n ```\n out ==> [2, 4, 6]\n idx ==> [1, 3, 5]\n ```\n\n Args:\n x: A `Tensor`. 1-D. Values to keep.\n y: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, idx).\n\n out: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Computes the difference between two lists of numbers or strings.", "type": "API"}, {"name": "tf.compat.v1.sets", "docs": "Tensorflow set operations.\n", "desc": "Tensorflow set operations.", "type": "API"}, {"name": "tf.compat.v1.sets.difference", "docs": "Compute set difference of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_difference` is applied to each aligned pair of sets.\n tf.sets.difference(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{2}, {3}], [{}, {}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 2),\n # ((0, 1, 0), 3),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n aminusb: Whether to subtract `b` from `a`, vs vice versa.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n differences.\n\n Raises:\n TypeError: If inputs are invalid types, or if `a` and `b` have\n different types.\n ValueError: If `a` is sparse and `b` is dense.\n errors_impl.InvalidArgumentError: If the shapes of `a` and `b` do not\n match in any dimension other than the last dimension.\n ", "desc": "Compute set difference of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.sets.intersection", "docs": "Compute set intersection of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2,2,2])\n\n # b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `tf.sets.intersection` is applied to each aligned pair of sets.\n tf.sets.intersection(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{1}, {}], [{4}, {5, 6}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((1, 0, 0), 4),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n intersections.\n ", "desc": "Compute set intersection of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.sets.set_difference", "docs": "Compute set difference of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_difference` is applied to each aligned pair of sets.\n tf.sets.difference(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{2}, {3}], [{}, {}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 2),\n # ((0, 1, 0), 3),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n aminusb: Whether to subtract `b` from `a`, vs vice versa.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n differences.\n\n Raises:\n TypeError: If inputs are invalid types, or if `a` and `b` have\n different types.\n ValueError: If `a` is sparse and `b` is dense.\n errors_impl.InvalidArgumentError: If the shapes of `a` and `b` do not\n match in any dimension other than the last dimension.\n ", "desc": "Compute set difference of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.sets.set_intersection", "docs": "Compute set intersection of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2,2,2])\n\n # b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `tf.sets.intersection` is applied to each aligned pair of sets.\n tf.sets.intersection(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{1}, {}], [{4}, {5, 6}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((1, 0, 0), 4),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n intersections.\n ", "desc": "Compute set intersection of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.sets.set_size", "docs": "Compute number of unique elements along last dimension of `a`.\n\n Args:\n a: `SparseTensor`, with indices sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a`.\n\n Returns:\n `int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with\n rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the\n number of unique elements in the corresponding `[0...n-1]` dimension of `a`.\n\n Raises:\n TypeError: If `a` is an invalid types.\n ", "desc": "Compute number of unique elements along last dimension of `a`.", "type": "API"}, {"name": "tf.compat.v1.sets.set_union", "docs": "Compute set union of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # [[{1, 2}, {3}], [{4}, {5, 6}]]\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_union` is applied to each aligned pair of sets.\n tf.sets.union(a, b)\n\n # The result will be a equivalent to either of:\n #\n # np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((0, 0, 1), 2),\n # ((0, 0, 2), 3),\n # ((0, 1, 0), 2),\n # ((0, 1, 1), 3),\n # ((1, 0, 0), 4),\n # ((1, 0, 1), 5),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ((1, 1, 2), 7),\n # ((1, 1, 3), 8),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n unions.\n ", "desc": "Compute set union of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.sets.size", "docs": "Compute number of unique elements along last dimension of `a`.\n\n Args:\n a: `SparseTensor`, with indices sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a`.\n\n Returns:\n `int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with\n rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the\n number of unique elements in the corresponding `[0...n-1]` dimension of `a`.\n\n Raises:\n TypeError: If `a` is an invalid types.\n ", "desc": "Compute number of unique elements along last dimension of `a`.", "type": "API"}, {"name": "tf.compat.v1.sets.union", "docs": "Compute set union of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # [[{1, 2}, {3}], [{4}, {5, 6}]]\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_union` is applied to each aligned pair of sets.\n tf.sets.union(a, b)\n\n # The result will be a equivalent to either of:\n #\n # np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((0, 0, 1), 2),\n # ((0, 0, 2), 3),\n # ((0, 1, 0), 2),\n # ((0, 1, 1), 3),\n # ((1, 0, 0), 4),\n # ((1, 0, 1), 5),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ((1, 1, 2), 7),\n # ((1, 1, 3), 8),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n unions.\n ", "desc": "Compute set union of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.compat.v1.shape", "docs": "Returns the shape of a tensor.\n\n This operation returns a 1-D integer tensor representing the shape of `input`.\n\n For example:\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n tf.shape(t) # [2, 2, 3]\n ```\n\n Args:\n input: A `Tensor` or `SparseTensor`.\n name: A name for the operation (optional).\n out_type: (Optional) The specified output type of the operation (`int32`\n or `int64`). Defaults to `tf.int32`.\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Returns the shape of a tensor.", "type": "API"}, {"name": "tf.compat.v1.shape_n", "docs": "Returns shape of tensors.\n\n Args:\n input: A list of at least 1 `Tensor` object with the same type.\n out_type: The specified output type of the operation (`int32` or `int64`).\n Defaults to `tf.int32`(optional).\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `input` of `Tensor` objects with\n type `out_type`.\n ", "desc": "Returns shape of tensors.", "type": "API"}, {"name": "tf.compat.v1.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.sign", "docs": "Returns an element-wise indication of the sign of a number.\n\n `y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0`.\n\n For complex numbers, `y = sign(x) = x / |x| if x != 0, otherwise y = 0`.\n\n Example usage:\n\n >>> # real number\n >>> tf.math.sign([0., 2., -3.])\n \n\n >>> # complex number\n >>> tf.math.sign([1 + 1j, 0 + 0j])\n \n\n Args:\n x: A Tensor. Must be one of the following types: bfloat16, half, float32,\n float64, int32, int64, complex64, complex128.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor. Has the same type as x.\n\n If x is a SparseTensor, returns SparseTensor(x.indices,\n tf.math.sign(x.values, ...), x.dense_shape).\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sign(x.values, ...), x.dense_shape)`", "desc": "Returns an element-wise indication of the sign of a number.", "type": "API"}, {"name": "tf.compat.v1.signal", "docs": "Signal processing operations.\n\nSee the [tf.signal](https://tensorflow.org/api_guides/python/contrib.signal)\nguide.\n\n@@frame\n@@hamming_window\n@@hann_window\n@@inverse_stft\n@@inverse_stft_window_fn\n@@mfccs_from_log_mel_spectrograms\n@@linear_to_mel_weight_matrix\n@@overlap_and_add\n@@stft\n\n[hamming]: https://en.wikipedia.org/wiki/Window_function#Hamming_window\n[hann]: https://en.wikipedia.org/wiki/Window_function#Hann_window\n[mel]: https://en.wikipedia.org/wiki/Mel_scale\n[mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum\n[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n\n", "desc": "Signal processing operations.", "type": "API"}, {"name": "tf.compat.v1.signal.dct", "docs": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.\n\n Types I, II, III and IV are supported.\n Type I is implemented using a length `2N` padded `tf.signal.rfft`.\n Type II is implemented using a length `2N` padded `tf.signal.rfft`, as\n described here: [Type 2 DCT using 2N FFT padded (Makhoul)]\n (https://dsp.stackexchange.com/a/10606).\n Type III is a fairly straightforward inverse of Type II\n (i.e. using a length `2N` padded `tf.signal.irfft`).\n Type IV is calculated through 2N length DCT2 of padded signal and\n picking the odd indices.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.dct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.dct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The DCT type to perform. Must be 1, 2, 3 or 4.\n n: The length of the transform. If length is less than sequence length,\n only the first n elements of the sequence are considered for the DCT.\n If n is greater than the sequence length, zeros are padded and then\n the DCT is computed as usual.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the DCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2`, `3` or `4`, `axis` is\n not `-1`, `n` is not `None` or greater than 0,\n or `norm` is not `None` or `'ortho'`.\n ValueError: If `type` is `1` and `norm` is `ortho`.\n\n [dct]: https://en.wikipedia.org/wiki/Discrete_cosine_transform\n ", "desc": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.", "type": "API"}, {"name": "tf.compat.v1.signal.fft", "docs": "Fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform over the inner-most\n dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.fft2d", "docs": "2D fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform over the inner-most\n 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.fft3d", "docs": "3D fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform over the inner-most 3\n dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.fftshift", "docs": "Shift the zero-frequency component to the center of the spectrum.\n\n This function swaps half-spaces for all axes listed (defaults to all).\n Note that ``y[0]`` is the Nyquist component only if ``len(x)`` is even.\n\n @compatibility(numpy)\n Equivalent to numpy.fft.fftshift.\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftshift.html\n @end_compatibility\n\n For example:\n\n ```python\n x = tf.signal.fftshift([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.])\n x.numpy() # array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.])\n ```\n\n Args:\n x: `Tensor`, input tensor.\n axes: `int` or shape `tuple`, optional Axes over which to shift. Default is\n None, which shifts all axes.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor`, The shifted tensor.\n ", "desc": "Shift the zero-frequency component to the center of the spectrum.", "type": "API"}, {"name": "tf.compat.v1.signal.frame", "docs": "Expands `signal`'s `axis` dimension into frames of `frame_length`.\n\n Slides a window of size `frame_length` over `signal`'s `axis` dimension\n with a stride of `frame_step`, replacing the `axis` dimension with\n `[frames, frame_length]` frames.\n\n If `pad_end` is True, window positions that are past the end of the `axis`\n dimension are padded with `pad_value` until the window moves fully past the\n end of the dimension. Otherwise, only window positions that fully overlap the\n `axis` dimension are produced.\n\n For example:\n\n >>> # A batch size 3 tensor of 9152 audio samples.\n >>> audio = tf.random.normal([3, 9152])\n >>>\n >>> # Compute overlapping frames of length 512 with a step of 180 (frames overlap\n >>> # by 332 samples). By default, only 49 frames are generated since a frame\n >>> # with start position j*180 for j > 48 would overhang the end.\n >>> frames = tf.signal.frame(audio, 512, 180)\n >>> frames.shape.assert_is_compatible_with([3, 49, 512])\n >>>\n >>> # When pad_end is enabled, the final two frames are kept (padded with zeros).\n >>> frames = tf.signal.frame(audio, 512, 180, pad_end=True)\n >>> frames.shape.assert_is_compatible_with([3, 51, 512])\n\n If the dimension along `axis` is N, and `pad_end=False`, the number of frames\n can be computed by:\n ```python\n num_frames = 1 + (N - frame_size) // frame_step\n ```\n If `pad_end=True`, the number of frames can be computed by:\n ```python\n num_frames = -(-N // frame_step) # ceiling division\n ```\n\n Args:\n signal: A `[..., samples, ...]` `Tensor`. The rank and dimensions\n may be unknown. Rank must be at least 1.\n frame_length: The frame length in samples. An integer or scalar `Tensor`.\n frame_step: The frame hop size in samples. An integer or scalar `Tensor`.\n pad_end: Whether to pad the end of `signal` with `pad_value`.\n pad_value: An optional scalar `Tensor` to use where the input signal\n does not exist when `pad_end` is True.\n axis: A scalar integer `Tensor` indicating the axis to frame. Defaults to\n the last axis. Supports negative values for indexing from the end.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of frames with shape `[..., num_frames, frame_length, ...]`.\n\n Raises:\n ValueError: If `frame_length`, `frame_step`, `pad_value`, or `axis` are not\n scalar.\n ", "desc": "Expands `signal`'s `axis` dimension into frames of `frame_length`.", "type": "API"}, {"name": "tf.compat.v1.signal.hamming_window", "docs": "Generate a [Hamming][hamming] window.\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n periodic: A bool `Tensor` indicating whether to generate a periodic or\n symmetric window. Periodic windows are typically used for spectral\n analysis while symmetric windows are typically used for digital\n filter design.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n Raises:\n ValueError: If `dtype` is not a floating point type.\n\n [hamming]:\n https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows\n ", "desc": "Generate a [Hamming][hamming] window.", "type": "API"}, {"name": "tf.compat.v1.signal.hann_window", "docs": "Generate a [Hann window][hann].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n periodic: A bool `Tensor` indicating whether to generate a periodic or\n symmetric window. Periodic windows are typically used for spectral\n analysis while symmetric windows are typically used for digital\n filter design.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n Raises:\n ValueError: If `dtype` is not a floating point type.\n\n [hann]: https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows\n ", "desc": "Generate a [Hann window][hann].", "type": "API"}, {"name": "tf.compat.v1.signal.idct", "docs": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.\n\n Currently Types I, II, III, IV are supported. Type III is the inverse of\n Type II, and vice versa.\n\n Note that you must re-normalize by 1/(2n) to obtain an inverse if `norm` is\n not `'ortho'`. That is:\n `signal == idct(dct(signal)) * 0.5 / signal.shape[-1]`.\n When `norm='ortho'`, we have:\n `signal == idct(dct(signal, norm='ortho'), norm='ortho')`.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.idct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.idct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The IDCT type to perform. Must be 1, 2, 3 or 4.\n n: For future expansion. The length of the transform. Must be `None`.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the IDCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2` or `3`, `n` is not `None, `axis` is\n not `-1`, or `norm` is not `None` or `'ortho'`.\n\n [idct]:\n https://en.wikipedia.org/wiki/Discrete_cosine_transform#Inverse_transforms\n ", "desc": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.", "type": "API"}, {"name": "tf.compat.v1.signal.ifft", "docs": "Inverse fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform over the\n inner-most dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.ifft2d", "docs": "Inverse 2D fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform over the\n inner-most 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.ifft3d", "docs": "Inverse 3D fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform over the\n inner-most 3 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.ifftshift", "docs": "The inverse of fftshift.\n\n Although identical for even-length x,\n the functions differ by one sample for odd-length x.\n\n @compatibility(numpy)\n Equivalent to numpy.fft.ifftshift.\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.ifftshift.html\n @end_compatibility\n\n For example:\n\n ```python\n x = tf.signal.ifftshift([[ 0., 1., 2.],[ 3., 4., -4.],[-3., -2., -1.]])\n x.numpy() # array([[ 4., -4., 3.],[-2., -1., -3.],[ 1., 2., 0.]])\n ```\n\n Args:\n x: `Tensor`, input tensor.\n axes: `int` or shape `tuple` Axes over which to calculate. Defaults to None,\n which shifts all axes.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor`, The shifted tensor.\n ", "desc": "The inverse of fftshift.", "type": "API"}, {"name": "tf.compat.v1.signal.inverse_mdct", "docs": "Computes the inverse modified DCT of `mdcts`.\n\n To reconstruct an original waveform, the same window function should\n be used with `mdct` and `inverse_mdct`.\n\n Example usage:\n\n >>> @tf.function\n ... def compare_round_trip():\n ... samples = 1000\n ... frame_length = 400\n ... halflen = frame_length // 2\n ... waveform = tf.random.normal(dtype=tf.float32, shape=[samples])\n ... waveform_pad = tf.pad(waveform, [[halflen, 0],])\n ... mdct = tf.signal.mdct(waveform_pad, frame_length, pad_end=True,\n ... window_fn=tf.signal.vorbis_window)\n ... inverse_mdct = tf.signal.inverse_mdct(mdct,\n ... window_fn=tf.signal.vorbis_window)\n ... inverse_mdct = inverse_mdct[halflen: halflen + samples]\n ... return waveform, inverse_mdct\n >>> waveform, inverse_mdct = compare_round_trip()\n >>> np.allclose(waveform.numpy(), inverse_mdct.numpy(), rtol=1e-3, atol=1e-4)\n True\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n mdcts: A `float32`/`float64` `[..., frames, frame_length // 2]`\n `Tensor` of MDCT bins representing a batch of `frame_length // 2`-point\n MDCTs.\n window_fn: A callable that takes a frame_length and a `dtype` keyword\n argument and returns a `[frame_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, a rectangular window with a scale of\n 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct`\n followed by `inverse_mdct`, please use `tf.signal.vorbis_window`,\n `tf.signal.kaiser_bessel_derived_window` or `None`. If using another\n window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1\n and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to\n achieve perfect reconstruction.\n norm: If \"ortho\", orthonormal inverse DCT4 is performed, if it is None,\n a regular dct4 followed by scaling of `1/frame_length` is performed.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `Tensor` of `float32`/`float64` signals representing\n the inverse MDCT for each input MDCT in `mdcts` where `samples` is\n `(frames - 1) * (frame_length // 2) + frame_length`.\n\n Raises:\n ValueError: If `mdcts` is not at least rank 2.\n\n [mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform\n ", "desc": "Computes the inverse modified DCT of `mdcts`.", "type": "API"}, {"name": "tf.compat.v1.signal.inverse_stft", "docs": "Computes the inverse [Short-time Fourier Transform][stft] of `stfts`.\n\n To reconstruct an original waveform, a complementary window function should\n be used with `inverse_stft`. Such a window function can be constructed with\n `tf.signal.inverse_stft_window_fn`.\n Example:\n\n ```python\n frame_length = 400\n frame_step = 160\n waveform = tf.random.normal(dtype=tf.float32, shape=[1000])\n stft = tf.signal.stft(waveform, frame_length, frame_step)\n inverse_stft = tf.signal.inverse_stft(\n stft, frame_length, frame_step,\n window_fn=tf.signal.inverse_stft_window_fn(frame_step))\n ```\n\n If a custom `window_fn` is used with `tf.signal.stft`, it must be passed to\n `tf.signal.inverse_stft_window_fn`:\n\n ```python\n frame_length = 400\n frame_step = 160\n window_fn = tf.signal.hamming_window\n waveform = tf.random.normal(dtype=tf.float32, shape=[1000])\n stft = tf.signal.stft(\n waveform, frame_length, frame_step, window_fn=window_fn)\n inverse_stft = tf.signal.inverse_stft(\n stft, frame_length, frame_step,\n window_fn=tf.signal.inverse_stft_window_fn(\n frame_step, forward_window_fn=window_fn))\n ```\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n stfts: A `complex64`/`complex128` `[..., frames, fft_unique_bins]`\n `Tensor` of STFT bins representing a batch of `fft_length`-point STFTs\n where `fft_unique_bins` is `fft_length // 2 + 1`\n frame_length: An integer scalar `Tensor`. The window length in samples.\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n fft_length: An integer scalar `Tensor`. The size of the FFT that produced\n `stfts`. If not provided, uses the smallest power of 2 enclosing\n `frame_length`.\n window_fn: A callable that takes a window length and a `dtype` keyword\n argument and returns a `[window_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, no windowing is used.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `Tensor` of `float32`/`float64` signals representing\n the inverse STFT for each input STFT in `stfts`.\n\n Raises:\n ValueError: If `stfts` is not at least rank 2, `frame_length` is not scalar,\n `frame_step` is not scalar, or `fft_length` is not scalar.\n\n [stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n ", "desc": "Computes the inverse [Short-time Fourier Transform][stft] of `stfts`.", "type": "API"}, {"name": "tf.compat.v1.signal.inverse_stft_window_fn", "docs": "Generates a window function that can be used in `inverse_stft`.\n\n Constructs a window that is equal to the forward window with a further\n pointwise amplitude correction. `inverse_stft_window_fn` is equivalent to\n `forward_window_fn` in the case where it would produce an exact inverse.\n\n See examples in `inverse_stft` documentation for usage.\n\n Args:\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n forward_window_fn: window_fn used in the forward transform, `stft`.\n name: An optional name for the operation.\n\n Returns:\n A callable that takes a window length and a `dtype` keyword argument and\n returns a `[window_length]` `Tensor` of samples in the provided datatype.\n The returned window is suitable for reconstructing original waveform in\n inverse_stft.\n ", "desc": "Generates a window function that can be used in `inverse_stft`.", "type": "API"}, {"name": "tf.compat.v1.signal.irfft", "docs": "Inverse real-valued fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most dimension of `input`.\n\n The inner-most dimension of `input` is assumed to be the result of `RFFT`: the\n `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If\n `fft_length` is not provided, it is computed from the size of the inner-most\n dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to\n compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller\n than the corresponding dimension of `input`, the dimension is cropped. If it is\n larger, the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.irfft2d", "docs": "Inverse 2D real-valued fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 2 dimensions of `input`.\n\n The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 2 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT2D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.irfft3d", "docs": "Inverse 3D real-valued fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 3 dimensions of `input`.\n\n The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 3 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT3D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.kaiser_bessel_derived_window", "docs": "Generate a [Kaiser Bessel derived window][kbd].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n beta: Beta parameter for Kaiser window.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [kbd]:\n https://en.wikipedia.org/wiki/Kaiser_window#Kaiser%E2%80%93Bessel-derived_(KBD)_window\n ", "desc": "Generate a [Kaiser Bessel derived window][kbd].", "type": "API"}, {"name": "tf.compat.v1.signal.kaiser_window", "docs": "Generate a [Kaiser window][kaiser].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n beta: Beta parameter for Kaiser window, see reference below.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [kaiser]:\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.kaiser.html\n ", "desc": "Generate a [Kaiser window][kaiser].", "type": "API"}, {"name": "tf.compat.v1.signal.linear_to_mel_weight_matrix", "docs": "Returns a matrix to warp linear scale spectrograms to the [mel scale][mel].\n\n Returns a weight matrix that can be used to re-weight a `Tensor` containing\n `num_spectrogram_bins` linearly sampled frequency information from\n `[0, sample_rate / 2]` into `num_mel_bins` frequency information from\n `[lower_edge_hertz, upper_edge_hertz]` on the [mel scale][mel].\n\n This function follows the [Hidden Markov Model Toolkit\n (HTK)](http://htk.eng.cam.ac.uk/) convention, defining the mel scale in\n terms of a frequency in hertz according to the following formula:\n\n $$\\textrm{mel}(f) = 2595 * \\textrm{log}_{10}(1 + \\frac{f}{700})$$\n\n In the returned matrix, all the triangles (filterbanks) have a peak value\n of 1.0.\n\n For example, the returned matrix `A` can be used to right-multiply a\n spectrogram `S` of shape `[frames, num_spectrogram_bins]` of linear\n scale spectrum values (e.g. STFT magnitudes) to generate a \"mel spectrogram\"\n `M` of shape `[frames, num_mel_bins]`.\n\n # `S` has shape [frames, num_spectrogram_bins]\n # `M` has shape [frames, num_mel_bins]\n M = tf.matmul(S, A)\n\n The matrix can be used with `tf.tensordot` to convert an arbitrary rank\n `Tensor` of linear-scale spectral bins into the mel scale.\n\n # S has shape [..., num_spectrogram_bins].\n # M has shape [..., num_mel_bins].\n M = tf.tensordot(S, A, 1)\n\n Args:\n num_mel_bins: Python int. How many bands in the resulting mel spectrum.\n num_spectrogram_bins: An integer `Tensor`. How many bins there are in the\n source spectrogram data, which is understood to be `fft_size // 2 + 1`,\n i.e. the spectrogram only contains the nonredundant FFT bins.\n sample_rate: An integer or float `Tensor`. Samples per second of the input\n signal used to create the spectrogram. Used to figure out the frequencies\n corresponding to each spectrogram bin, which dictates how they are mapped\n into the mel scale.\n lower_edge_hertz: Python float. Lower bound on the frequencies to be\n included in the mel spectrum. This corresponds to the lower edge of the\n lowest triangular band.\n upper_edge_hertz: Python float. The desired top edge of the highest\n frequency band.\n dtype: The `DType` of the result matrix. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[num_spectrogram_bins, num_mel_bins]`.\n\n Raises:\n ValueError: If `num_mel_bins`/`num_spectrogram_bins`/`sample_rate` are not\n positive, `lower_edge_hertz` is negative, frequency edges are incorrectly\n ordered, `upper_edge_hertz` is larger than the Nyquist frequency.\n\n [mel]: https://en.wikipedia.org/wiki/Mel_scale\n ", "desc": "Returns a matrix to warp linear scale spectrograms to the [mel scale][mel].", "type": "API"}, {"name": "tf.compat.v1.signal.mdct", "docs": "Computes the [Modified Discrete Cosine Transform][mdct] of `signals`.\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n signals: A `[..., samples]` `float32`/`float64` `Tensor` of real-valued\n signals.\n frame_length: An integer scalar `Tensor`. The window length in samples\n which must be divisible by 4.\n window_fn: A callable that takes a frame_length and a `dtype` keyword\n argument and returns a `[frame_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, a rectangular window with a scale of\n 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct`\n followed by `inverse_mdct`, please use `tf.signal.vorbis_window`,\n `tf.signal.kaiser_bessel_derived_window` or `None`. If using another\n window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1\n and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to\n achieve perfect reconstruction.\n pad_end: Whether to pad the end of `signals` with zeros when the provided\n frame length and step produces a frame that lies partially past its end.\n norm: If it is None, unnormalized dct4 is used, if it is \"ortho\"\n orthonormal dct4 is used.\n name: An optional name for the operation.\n\n Returns:\n A `[..., frames, frame_length // 2]` `Tensor` of `float32`/`float64`\n MDCT values where `frames` is roughly `samples // (frame_length // 2)`\n when `pad_end=False`.\n\n Raises:\n ValueError: If `signals` is not at least rank 1, `frame_length` is\n not scalar, or `frame_length` is not a multiple of `4`.\n\n [mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform\n ", "desc": "Computes the [Modified Discrete Cosine Transform][mdct] of `signals`.", "type": "API"}, {"name": "tf.compat.v1.signal.mfccs_from_log_mel_spectrograms", "docs": "Computes [MFCCs][mfcc] of `log_mel_spectrograms`.\n\n Implemented with GPU-compatible ops and supports gradients.\n\n [Mel-Frequency Cepstral Coefficient (MFCC)][mfcc] calculation consists of\n taking the DCT-II of a log-magnitude mel-scale spectrogram. [HTK][htk]'s MFCCs\n use a particular scaling of the DCT-II which is almost orthogonal\n normalization. We follow this convention.\n\n All `num_mel_bins` MFCCs are returned and it is up to the caller to select\n a subset of the MFCCs based on their application. For example, it is typical\n to only use the first few for speech recognition, as this results in\n an approximately pitch-invariant representation of the signal.\n\n For example:\n\n ```python\n batch_size, num_samples, sample_rate = 32, 32000, 16000.0\n # A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1].\n pcm = tf.random.normal([batch_size, num_samples], dtype=tf.float32)\n\n # A 1024-point STFT with frames of 64 ms and 75% overlap.\n stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256,\n fft_length=1024)\n spectrograms = tf.abs(stfts)\n\n # Warp the linear scale spectrograms into the mel-scale.\n num_spectrogram_bins = stfts.shape[-1].value\n lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80\n linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(\n num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,\n upper_edge_hertz)\n mel_spectrograms = tf.tensordot(\n spectrograms, linear_to_mel_weight_matrix, 1)\n mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(\n linear_to_mel_weight_matrix.shape[-1:]))\n\n # Compute a stabilized log to get log-magnitude mel-scale spectrograms.\n log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)\n\n # Compute MFCCs from log_mel_spectrograms and take the first 13.\n mfccs = tf.signal.mfccs_from_log_mel_spectrograms(\n log_mel_spectrograms)[..., :13]\n ```\n\n Args:\n log_mel_spectrograms: A `[..., num_mel_bins]` `float32`/`float64` `Tensor`\n of log-magnitude mel-scale spectrograms.\n name: An optional name for the operation.\n Returns:\n A `[..., num_mel_bins]` `float32`/`float64` `Tensor` of the MFCCs of\n `log_mel_spectrograms`.\n\n Raises:\n ValueError: If `num_mel_bins` is not positive.\n\n [mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum\n [htk]: https://en.wikipedia.org/wiki/HTK_(software)\n ", "desc": "Computes [MFCCs][mfcc] of `log_mel_spectrograms`.", "type": "API"}, {"name": "tf.compat.v1.signal.overlap_and_add", "docs": "Reconstructs a signal from a framed representation.\n\n Adds potentially overlapping frames of a signal with shape\n `[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`.\n The resulting tensor has shape `[..., output_size]` where\n\n output_size = (frames - 1) * frame_step + frame_length\n\n Args:\n signal: A [..., frames, frame_length] `Tensor`. All dimensions may be\n unknown, and rank must be at least 2.\n frame_step: An integer or scalar `Tensor` denoting overlap offsets. Must be\n less than or equal to `frame_length`.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` with shape `[..., output_size]` containing the overlap-added\n frames of `signal`'s inner-most two dimensions.\n\n Raises:\n ValueError: If `signal`'s rank is less than 2, or `frame_step` is not a\n scalar integer.\n ", "desc": "Reconstructs a signal from a framed representation.", "type": "API"}, {"name": "tf.compat.v1.signal.rfft", "docs": "Real-valued fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most dimension of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the\n `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term,\n followed by the `fft_length / 2` positive-frequency terms.\n\n Along the axis `RFFT` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "Real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.rfft2d", "docs": "2D real-valued fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 2 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.rfft3d", "docs": "3D real-valued fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 3 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.signal.stft", "docs": "Computes the [Short-time Fourier Transform][stft] of `signals`.\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n signals: A `[..., samples]` `float32`/`float64` `Tensor` of real-valued\n signals.\n frame_length: An integer scalar `Tensor`. The window length in samples.\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n fft_length: An integer scalar `Tensor`. The size of the FFT to apply.\n If not provided, uses the smallest power of 2 enclosing `frame_length`.\n window_fn: A callable that takes a window length and a `dtype` keyword\n argument and returns a `[window_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, no windowing is used.\n pad_end: Whether to pad the end of `signals` with zeros when the provided\n frame length and step produces a frame that lies partially past its end.\n name: An optional name for the operation.\n\n Returns:\n A `[..., frames, fft_unique_bins]` `Tensor` of `complex64`/`complex128`\n STFT values where `fft_unique_bins` is `fft_length // 2 + 1` (the unique\n components of the FFT).\n\n Raises:\n ValueError: If `signals` is not at least rank 1, `frame_length` is\n not scalar, or `frame_step` is not scalar.\n\n [stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n ", "desc": "Computes the [Short-time Fourier Transform][stft] of `signals`.", "type": "API"}, {"name": "tf.compat.v1.signal.vorbis_window", "docs": "Generate a [Vorbis power complementary window][vorbis].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [vorbis]:\n https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform#Window_functions\n ", "desc": "Generate a [Vorbis power complementary window][vorbis].", "type": "API"}, {"name": "tf.compat.v1.sin", "docs": "Computes sine of x element-wise.\n\n Given an input tensor, this function computes sine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10, float(\"inf\")])\n tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.sinh", "docs": "Computes hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic sine of every\n element in the tensor. Input range is `[-inf,inf]` and output range\n is `[-inf,inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.size", "docs": "Returns the size of a tensor.\n\n Returns a 0-D `Tensor` representing the number of elements in `input`\n of type `out_type`. Defaults to tf.int32.\n\n For example:\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n tf.size(t) # 12\n ```\n\n Args:\n input: A `Tensor` or `SparseTensor`.\n name: A name for the operation (optional).\n out_type: (Optional) The specified non-quantized numeric output type of the\n operation. Defaults to `tf.int32`.\n\n Returns:\n A `Tensor` of type `out_type`. Defaults to `tf.int32`.\n\n @compatibility(numpy)\n Equivalent to np.size()\n @end_compatibility\n ", "desc": "Returns the size of a tensor.", "type": "API"}, {"name": "tf.compat.v1.slice", "docs": "Extracts a slice from a tensor.\n\n See also `tf.strided_slice`.\n\n This operation extracts a slice of size `size` from a tensor `input_` starting\n at the location specified by `begin`. The slice `size` is represented as a\n tensor shape, where `size[i]` is the number of elements of the 'i'th dimension\n of `input_` that you want to slice. The starting location (`begin`) for the\n slice is represented as an offset in each dimension of `input_`. In other\n words, `begin[i]` is the offset into the i'th dimension of `input_` that you\n want to slice from.\n\n Note that `tf.Tensor.__getitem__` is typically a more pythonic way to\n perform slices, as it allows you to write `foo[3:7, :-2]` instead of\n `tf.slice(foo, [3, 0], [4, foo.get_shape()[1]-2])`.\n\n `begin` is zero-based; `size` is one-based. If `size[i]` is -1,\n all remaining elements in dimension i are included in the\n slice. In other words, this is equivalent to setting:\n\n `size[i] = input_.dim_size(i) - begin[i]`\n\n This operation requires that:\n\n `0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`\n\n For example:\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]],\n [[3, 3, 3], [4, 4, 4]],\n [[5, 5, 5], [6, 6, 6]]])\n tf.slice(t, [1, 0, 0], [1, 1, 3]) # [[[3, 3, 3]]]\n tf.slice(t, [1, 0, 0], [1, 2, 3]) # [[[3, 3, 3],\n # [4, 4, 4]]]\n tf.slice(t, [1, 0, 0], [2, 1, 3]) # [[[3, 3, 3]],\n # [[5, 5, 5]]]\n ```\n\n Args:\n input_: A `Tensor`.\n begin: An `int32` or `int64` `Tensor`.\n size: An `int32` or `int64` `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` the same type as `input_`.\n ", "desc": "Extracts a slice from a tensor.", "type": "API"}, {"name": "tf.compat.v1.sort", "docs": "Sorts a tensor.\n\n Usage:\n\n >>> a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n >>> tf.sort(a).numpy()\n array([ 1. , 2.8 , 10. , 26.9 , 62.3 , 166.32], dtype=float32)\n\n >>> tf.sort(a, direction='DESCENDING').numpy()\n array([166.32, 62.3 , 26.9 , 10. , 2.8 , 1. ], dtype=float32)\n\n For multidimensional inputs you can control which axis the sort is applied\n along. The default `axis=-1` sorts the innermost axis.\n\n >>> mat = [[3,2,1],\n ... [2,1,3],\n ... [1,3,2]]\n >>> tf.sort(mat, axis=-1).numpy()\n array([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]], dtype=int32)\n >>> tf.sort(mat, axis=0).numpy()\n array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]], dtype=int32)\n\n See also:\n\n * `tf.argsort`: Like sort, but it returns the sort indices.\n * `tf.math.top_k`: A partial sort that returns a fixed number of top values\n and corresponding indices.\n\n\n Args:\n values: 1-D or higher **numeric** `Tensor`.\n axis: The axis along which to sort. The default is -1, which sorts the last\n axis.\n direction: The direction in which to sort the values (`'ASCENDING'` or\n `'DESCENDING'`).\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` with the same dtype and shape as `values`, with the elements\n sorted along the given `axis`.\n\n Raises:\n tf.errors.InvalidArgumentError: If the `values.dtype` is not a `float` or\n `int` type.\n ValueError: If axis is not a constant scalar, or the direction is invalid.\n ", "desc": "Sorts a tensor.", "type": "API"}, {"name": "tf.compat.v1.space_to_batch", "docs": "SpaceToBatch for 4-D tensors of type T.\n\n This is a legacy version of the more general SpaceToBatchND.\n\n Zero-pads and then rearranges (permutes) blocks of spatial data into batch.\n More specifically, this op outputs a copy of the input tensor where values from\n the `height` and `width` dimensions are moved to the `batch` dimension. After\n the zero-padding, both `height` and `width` of the input must be divisible by the\n block size.\n\n The attr `block_size` must be greater than one. It indicates the block size.\n\n * Non-overlapping blocks of size `block_size x block size` in the height and\n width dimensions are rearranged into the batch dimension at each location.\n * The batch of the output tensor is `batch * block_size * block_size`.\n * Both height_pad and width_pad must be divisible by block_size.\n\n The shape of the output will be:\n\n [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,\n depth]\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],\n [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies\n the padding of the input with zeros across the spatial dimensions as follows:\n\n paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]\n\n The effective spatial dimensions of the zero-padded input tensor will be:\n\n height_pad = pad_top + height + pad_bottom\n width_pad = pad_left + width + pad_right\n block_size: An `int` that is `>= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for 4-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.space_to_batch_nd", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.space_to_depth", "docs": "SpaceToDepth for tensors of type T.\n\n Rearranges blocks of spatial data, into depth. More specifically,\n this op outputs a copy of the input tensor where values from the `height`\n and `width` dimensions are moved to the `depth` dimension.\n The attr `block_size` indicates the input block size.\n\n * Non-overlapping blocks of size `block_size x block size` are rearranged\n into depth at each location.\n * The depth of the output tensor is `block_size * block_size * input_depth`.\n * The Y, X coordinates within each block of the input become the high order\n component of the output channel index.\n * The input tensor's height and width must be divisible by block_size.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates\n within the output image, bX, bY means coordinates\n within the input block, iC means input channels).\n The output would be a transpose to the following layout:\n n,oY,oX,bY,bX,iC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 2, 2, 1]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n This operation will output a tensor of shape `[1, 1, 1, 4]`:\n\n ```\n [[[[1, 2, 3, 4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`,\n the corresponding output will have a single element (i.e. width and height are\n both 1) and will have a depth of 4 channels (1 * block_size * block_size).\n The output element shape is `[1, 1, 4]`.\n\n For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n This operation, for block_size of 2, will return the following tensor of shape\n `[1, 1, 1, 12]`\n\n ```\n [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:\n\n ```\n x = [[[[1], [2], [5], [6]],\n [[3], [4], [7], [8]],\n [[9], [10], [13], [14]],\n [[11], [12], [15], [16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 2 2 4]`:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`. The size of the spatial block.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToDepth for tensors of type T.", "type": "API"}, {"name": "tf.compat.v1.sparse", "docs": "Sparse Tensor Representation.\n\nSee also `tf.sparse.SparseTensor`.\n\n", "desc": "Sparse Tensor Representation.", "type": "API"}, {"name": "tf.compat.v1.sparse.add", "docs": "Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(thresh)`. They will be removed in a future version.\nInstructions for updating:\nthresh is deprecated, use threshold instead\n\nIf one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If\nboth arguments are `SparseTensor`s, this returns a `SparseTensor`. The order\nof arguments does not matter. Use vanilla `tf.add()` for adding two dense\n`Tensor`s.\n\nThe shapes of the two operands must match: broadcasting is not supported.\n\nThe indices of any input `SparseTensor` are assumed ordered in standard\nlexicographic order. If this is not the case, before this step run\n`SparseReorder` to restore index ordering.\n\nIf both arguments are sparse, we perform \"clipping\" as follows. By default,\nif two values sum to zero at some index, the output `SparseTensor` would still\ninclude that particular location in its index, storing a zero in the\ncorresponding value slot. To override this, callers can specify `thresh`,\nindicating that if the sum has a magnitude strictly smaller than `thresh`, its\ncorresponding value and index would then not be included. In particular,\n`thresh == 0.0` (default) means everything is kept and actual thresholding\nhappens only for a positive value.\n\nFor example, suppose the logical sum of two sparse operands is (densified):\n\n [ 2]\n [.1 0]\n [ 6 -.2]\n\nThen,\n\n* `thresh == 0` (the default): all 5 index/value pairs will be returned.\n* `thresh == 0.11`: only .1 and 0 will vanish, and the remaining three\n index/value pairs will be returned.\n* `thresh == 0.21`: .1, 0, and -.2 will vanish.\n\nArgs:\n a: The first operand; `SparseTensor` or `Tensor`.\n b: The second operand; `SparseTensor` or `Tensor`. At least one operand\n must be sparse.\n threshold: An optional 0-D `Tensor` (defaults to `0`). The magnitude\n threshold that determines if an output value/index pair takes space. Its\n dtype should match that of the values if they are real; if the latter are\n complex64/complex128, then the dtype should be float32/float64,\n correspondingly.\n thresh: Deprecated alias for `threshold`.\n\nReturns:\n A `SparseTensor` or a `Tensor`, representing the sum.\n\nRaises:\n TypeError: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.", "desc": "Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.bincount", "docs": "Count the number of times an integer value appears in a tensor.\n\n This op takes an N-dimensional `Tensor`, `RaggedTensor`, or `SparseTensor`,\n and returns an N-dimensional int64 SparseTensor where element\n `[i0...i[axis], j]` contains the number of times the value `j` appears in\n slice `[i0...i[axis], :]` of the input tensor. Currently, only N=0 and\n N=-1 are supported.\n\n Args:\n values: A Tensor, RaggedTensor, or SparseTensor whose values should be\n counted. These tensors must have a rank of 2 if `axis=-1`.\n weights: If non-None, must be the same shape as arr. For each value in\n `value`, the bin will be incremented by the corresponding weight instead\n of 1.\n axis: The axis to slice over. Axes at and below `axis` will be flattened\n before bin counting. Currently, only `0`, and `-1` are supported. If None,\n all axes will be flattened (identical to passing `0`).\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `values` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n binary_output: If True, this op will output 1 instead of the number of times\n a token appears (equivalent to one_hot + reduce_any instead of one_hot +\n reduce_add). Defaults to False.\n name: A name for this op.\n\n Returns:\n A SparseTensor with `output.shape = values.shape[:axis] + [N]`, where `N` is\n * `maxlength` (if set);\n * `minlength` (if set, and `minlength > reduce_max(values)`);\n * `0` (if `values` is empty);\n * `reduce_max(values) + 1` otherwise.\n\n\n Examples:\n\n **Bin-counting every item in individual batches**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where the value of (i,j) is the\n number of times value j appears in batch i.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(data, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([1 2 1 2 1 1], shape=(6,), dtype=int64),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n **Bin-counting with defined output shape**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where the value of (i,j) is the\n number of times value j appears in batch i. However, all values of j\n above 'maxlength' are ignored. The dense_shape of the output sparse tensor\n is set to 'minlength'. Note that, while the input is identical to the\n example above, the value '10001' in batch item 2 is dropped, and the\n dense shape is [2, 500] instead of [2,10002] or [2, 102].\n\n >>> minlength = maxlength = 500\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(\n ... data, axis=-1, minlength=minlength, maxlength=maxlength)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]], shape=(5, 2), dtype=int64),\n values=tf.Tensor([1 2 1 2 1], shape=(5,), dtype=int64),\n dense_shape=tf.Tensor([ 2 500], shape=(2,), dtype=int64))\n\n **Binary bin-counting**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where (i,j) is 1 if the value j\n appears in batch i at least once and is 0 otherwise. Note that, even though\n some values (like 20 in batch 1 and 11 in batch 2) appear more than once,\n the 'values' tensor is all 1s.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(data, binary_output=True, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([1 1 1 1 1 1], shape=(6,), dtype=int64),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n **Weighted bin-counting**\n\n This example takes two inputs - a values tensor and a weights tensor. These\n tensors must be identically shaped, and have the same row splits or indices\n in the case of RaggedTensors or SparseTensors. When performing a weighted\n count, the op will output a SparseTensor where the value of (i, j) is the\n sum of the values in the weight tensor's batch i in the locations where\n the values tensor has the value j. In this case, the output dtype is the\n same as the dtype of the weights tensor.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> weights = [[2, 0.25, 15, 0.5], [2, 17, 3, 0.9]]\n >>> output = tf.sparse.bincount(data, weights=weights, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([2. 0.75 15. 5. 17. 0.9], shape=(6,), dtype=float32),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n ", "desc": "Count the number of times an integer value appears in a tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.concat", "docs": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(concat_dim)`. They will be removed in a future version.\nInstructions for updating:\nconcat_dim is deprecated, use axis instead\n\nConcatenation is with respect to the dense versions of each sparse input.\nIt is assumed that each inputs is a `SparseTensor` whose elements are ordered\nalong increasing dimension number.\n\nIf expand_nonconcat_dim is False, all inputs' shapes must match, except for\nthe concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are\nallowed to vary among all inputs.\n\nThe `indices`, `values`, and `shapes` lists must have the same length.\n\nIf expand_nonconcat_dim is False, then the output shape is identical to the\ninputs', except along the concat dimension, where it is the sum of the inputs'\nsizes along that dimension.\n\nIf expand_nonconcat_dim is True, then the output shape along the non-concat\ndimensions will be expand to be the largest among all inputs, and it is the\nsum of the inputs sizes along the concat dimension.\n\nThe output elements will be resorted to preserve the sort order along\nincreasing dimension number.\n\nThis op runs in `O(M log M)` time, where `M` is the total number of non-empty\nvalues across all inputs. This is due to the need for an internal sort in\norder to concatenate efficiently across an arbitrary dimension.\n\nFor example, if `axis = 1` and the inputs are\n\n sp_inputs[0]: shape = [2, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nthen the output will be\n\n shape = [2, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b c ] [ ] [b c ]\n\nAnother example, if 'axis = 1' and the inputs are\n\n sp_inputs[0]: shape = [3, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nif expand_nonconcat_dim = False, this will result in an error. But if\nexpand_nonconcat_dim = True, this will result in:\n\n shape = [3, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b ] [ ] [b ]\n [ c ] [ c ]\n\n\nArgs:\n axis: Dimension to concatenate along. Must be in range [-rank, rank),\n where rank is the number of dimensions in each input `SparseTensor`.\n sp_inputs: List of `SparseTensor` to concatenate.\n name: A name prefix for the returned tensors (optional).\n expand_nonconcat_dim: Whether to allow the expansion in the non-concat\n dimensions. Defaulted to False.\n concat_dim: The old (deprecated) name for axis.\n expand_nonconcat_dims: alias for expand_nonconcat_dim\n\nReturns:\n A `SparseTensor` with the concatenated output.\n\nRaises:\n TypeError: If `sp_inputs` is not a list of `SparseTensor`.", "desc": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.cross", "docs": "Generates sparse cross from a list of sparse and dense tensors.\n\n For example, if the inputs are\n\n * inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n * inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n * inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be:\n\n shape = [2, 2]\n [0, 0]: \"a_X_d_X_f\"\n [1, 0]: \"b_X_e_X_g\"\n [1, 1]: \"c_X_e_X_g\"\n\n Customized separator \"_Y_\":\n\n >>> inp_0 = tf.constant([['a'], ['b']])\n >>> inp_1 = tf.constant([['c'], ['d']])\n >>> output = tf.sparse.cross([inp_0, inp_1], separator='_Y_')\n >>> output.values\n \n\n\n Args:\n inputs: An iterable of `Tensor` or `SparseTensor`.\n name: Optional name for the op.\n separator: A string added between each string being joined. Defaults to\n '_X_'.\n\n Returns:\n A `SparseTensor` of type `string`.\n ", "desc": "Generates sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.compat.v1.sparse.cross_hashed", "docs": "Generates hashed sparse cross from a list of sparse and dense tensors.\n\n For example, if the inputs are\n\n * inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n * inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n * inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be:\n\n shape = [2, 2]\n [0, 0]: FingerprintCat64(\n Fingerprint64(\"f\"), FingerprintCat64(\n Fingerprint64(\"d\"), Fingerprint64(\"a\")))\n [1, 0]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"b\")))\n [1, 1]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"c\")))\n\n Args:\n inputs: An iterable of `Tensor` or `SparseTensor`.\n num_buckets: An `int` that is `>= 0`.\n output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.\n hash_key: Integer hash_key that will be used by the `FingerprintCat64`\n function. If not given, will use a default key.\n name: Optional name for the op.\n\n Returns:\n A `SparseTensor` of type `int64`.\n ", "desc": "Generates hashed sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.compat.v1.sparse.expand_dims", "docs": "Returns a tensor with an length 1 axis inserted at index `axis`.\n\n Given a tensor `input`, this operation inserts a dimension of length 1 at the\n dimension index `axis` of `input`'s shape. The dimension index follows python\n indexing rules: It's zero-based, a negative index it is counted backward\n from the end.\n\n This operation is useful to:\n\n * Add an outer \"batch\" dimension to a single element.\n * Align axes for broadcasting.\n * To add an inner vector length axis to a tensor of scalars.\n\n For example:\n\n If you have a sparse tensor with shape `[height, width, depth]`:\n\n >>> sp = tf.sparse.SparseTensor(indices=[[3,4,1]], values=[7,],\n ... dense_shape=[10,10,3])\n\n You can add an outer `batch` axis by passing `axis=0`:\n\n >>> tf.sparse.expand_dims(sp, axis=0).shape.as_list()\n [1, 10, 10, 3]\n\n The new axis location matches Python `list.insert(axis, 1)`:\n\n >>> tf.sparse.expand_dims(sp, axis=1).shape.as_list()\n [10, 1, 10, 3]\n\n Following standard python indexing rules, a negative `axis` counts from the\n end so `axis=-1` adds an inner most dimension:\n\n >>> tf.sparse.expand_dims(sp, axis=-1).shape.as_list()\n [10, 10, 3, 1]\n\n Note: Unlike `tf.expand_dims` this function includes a default value for the\n `axis`: `-1`. So if `axis is not specified, an inner dimension is added.\n\n >>> sp.shape.as_list()\n [10, 10, 3]\n >>> tf.sparse.expand_dims(sp).shape.as_list()\n [10, 10, 3, 1]\n\n This operation requires that `axis` is a valid index for `input.shape`,\n following python indexing rules:\n\n ```\n -1-tf.rank(input) <= axis <= tf.rank(input)\n ```\n\n This operation is related to:\n\n * `tf.expand_dims`, which provides this functionality for dense tensors.\n * `tf.squeeze`, which removes dimensions of size 1, from dense tensors.\n * `tf.sparse.reshape`, which provides more flexible reshaping capability.\n\n Args:\n sp_input: A `SparseTensor`.\n axis: 0-D (scalar). Specifies the dimension index at which to expand the\n shape of `input`. Must be in the range `[-rank(sp_input) - 1,\n rank(sp_input)]`. Defaults to `-1`.\n name: The name of the output `SparseTensor`.\n\n Returns:\n A `SparseTensor` with the same data as `sp_input`, but its shape has an\n additional dimension of size 1 added.\n ", "desc": "Returns a tensor with an length 1 axis inserted at index `axis`.", "type": "API"}, {"name": "tf.compat.v1.sparse.eye", "docs": "Creates a two-dimensional sparse tensor with ones along the diagonal.\n\n Args:\n num_rows: Non-negative integer or `int32` scalar `tensor` giving the number\n of rows in the resulting matrix.\n num_columns: Optional non-negative integer or `int32` scalar `tensor` giving\n the number of columns in the resulting matrix. Defaults to `num_rows`.\n dtype: The type of element in the resulting `Tensor`.\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `SparseTensor` of shape [num_rows, num_columns] with ones along the\n diagonal.\n ", "desc": "Creates a two-dimensional sparse tensor with ones along the diagonal.", "type": "API"}, {"name": "tf.compat.v1.sparse.fill_empty_rows", "docs": "Fills empty rows in the input 2-D `SparseTensor` with a default value.\n\n This op adds entries with the specified `default_value` at index\n `[row, 0]` for any row in the input that does not already have a value.\n\n For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:\n\n [0, 1]: a\n [0, 3]: b\n [1, 0]: default_value\n [2, 0]: c\n [3, 1]: d\n [4, 0]: default_value\n\n Note that the input may have empty columns at the end, with no effect on\n this op.\n\n The output `SparseTensor` will be in row-major order and will have the\n same shape as the input.\n\n This op also returns an indicator vector such that\n\n empty_row_indicator[i] = True iff row i was an empty row.\n\n Args:\n sp_input: A `SparseTensor` with shape `[N, M]`.\n default_value: The value to fill for empty rows, with the same type as\n `sp_input.`\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n sp_ordered_output: A `SparseTensor` with shape `[N, M]`, and with all empty\n rows filled in with `default_value`.\n empty_row_indicator: A bool vector of length `N` indicating whether each\n input row was empty.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Fills empty rows in the input 2-D `SparseTensor` with a default value.", "type": "API"}, {"name": "tf.compat.v1.sparse.from_dense", "docs": "Converts a dense tensor into a sparse tensor.\n\n Only elements not equal to zero will be present in the result. The resulting\n `SparseTensor` has the same dtype and shape as the input.\n\n >>> sp = tf.sparse.from_dense([0, 0, 3, 0, 1])\n >>> sp.shape.as_list()\n [5]\n >>> sp.values.numpy()\n array([3, 1], dtype=int32)\n >>> sp.indices.numpy()\n array([[2],\n [4]])\n\n Args:\n tensor: A dense `Tensor` to be converted to a `SparseTensor`.\n name: Optional name for the op.\n\n Returns:\n The `SparseTensor`.\n ", "desc": "Converts a dense tensor into a sparse tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.mask", "docs": "Masks elements of `IndexedSlices`.\n\n Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that\n contains a subset of the slices of `a`. Only the slices at indices not\n specified in `mask_indices` are returned.\n\n This is useful when you need to extract a subset of slices in an\n `IndexedSlices` object.\n\n For example:\n\n ```python\n # `a` contains slices at indices [12, 26, 37, 45] from a large tensor\n # with shape [1000, 10]\n a.indices # [12, 26, 37, 45]\n tf.shape(a.values) # [4, 10]\n\n # `b` will be the subset of `a` slices at its second and third indices, so\n # we want to mask its first and last indices (which are at absolute\n # indices 12, 45)\n b = tf.sparse.mask(a, [12, 45])\n\n b.indices # [26, 37]\n tf.shape(b.values) # [2, 10]\n ```\n\n Args:\n a: An `IndexedSlices` instance.\n mask_indices: Indices of elements to mask.\n name: A name for the operation (optional).\n\n Returns:\n The masked `IndexedSlices` instance.\n ", "desc": "Masks elements of `IndexedSlices`.", "type": "API"}, {"name": "tf.compat.v1.sparse.matmul", "docs": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix\n\n (or SparseTensor) \"B\". Please note that one and only one of the inputs MUST\n be a SparseTensor and the other MUST be a dense matrix.\n\n The following input format is recommended (but not required) for optimal\n performance:\n\n * If `adjoint_a == false`: `A` should be sorted in lexicographically\n increasing order. Use `sparse.reorder` if you're not sure.\n * If `adjoint_a == true`: `A` should be sorted in order of increasing\n dimension 1 (i.e., \"column major\" order instead of \"row major\" order).\n\n Args:\n sp_a: SparseTensor (or dense Matrix) A, of rank 2.\n b: dense Matrix (or SparseTensor) B, with the same dtype as sp_a.\n adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex,\n this is transpose(conj(A)). Otherwise it's transpose(A).\n adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex,\n this is transpose(conj(B)). Otherwise it's transpose(B).\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense matrix (pseudo-code in dense np.matrix notation):\n `A = A.H if adjoint_a else A`\n `B = B.H if adjoint_b else B`\n `return A*B`\n\n Notes:\n\n Using `tf.nn.embedding_lookup_sparse` for sparse multiplication:\n\n It's not obvious but you can consider `embedding_lookup_sparse` as another\n sparse and dense multiplication. In some situations, you may prefer to use\n `embedding_lookup_sparse` even though you're not dealing with embeddings.\n\n There are two questions to ask in the decision process: Do you need gradients\n computed as sparse too? Is your sparse data represented as two\n `SparseTensor`s: ids and values? There is more explanation about data format\n below. If you answer any of these questions as yes, consider using\n `tf.nn.embedding_lookup_sparse`.\n\n Following explains differences between the expected SparseTensors:\n For example if dense form of your sparse data has shape `[3, 5]` and values:\n\n [[ a ]\n [b c]\n [ d ]]\n\n\n `SparseTensor` format expected by `sparse_tensor_dense_matmul`:\n `sp_a` (indices, values):\n\n [0, 1]: a\n [1, 0]: b\n [1, 4]: c\n [2, 2]: d\n\n `SparseTensor` format expected by `embedding_lookup_sparse`:\n `sp_ids` `sp_weights`\n\n [0, 0]: 1 [0, 0]: a\n [1, 0]: 0 [1, 0]: b\n [1, 1]: 4 [1, 1]: c\n [2, 0]: 2 [2, 0]: d\n\n\n Deciding when to use `sparse_tensor_dense_matmul` vs.\n `matmul`(a_is_sparse=True):\n\n There are a number of questions to ask in the decision process, including:\n\n * Will the SparseTensor `A` fit in memory if densified?\n * Is the column count of the product large (>> 1)?\n * Is the density of `A` larger than approximately 15%?\n\n If the answer to several of these questions is yes, consider\n converting the `SparseTensor` to a dense one and using `tf.matmul` with\n `a_is_sparse=True`.\n\n This operation tends to perform well when `A` is more sparse, if the column\n size of the product is small (e.g. matrix-vector multiplication), if\n `sp_a.dense_shape` takes on large values.\n\n Below is a rough speed comparison between `sparse_tensor_dense_matmul`,\n labeled 'sparse', and `matmul`(a_is_sparse=True), labeled 'dense'. For\n purposes of the comparison, the time spent converting from a `SparseTensor` to\n a dense `Tensor` is not included, so it is overly conservative with respect to\n the time ratio.\n\n Benchmark system:\n CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB\n GPU: NVidia Tesla k40c\n\n Compiled with:\n `-c opt --config=cuda --copt=-mavx`\n\n ```\n tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks\n A sparse [m, k] with % nonzero values between 1% and 80%\n B dense [k, n]\n\n % nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)\n 0.01 1 True 100 100 0.000221166 0.00010154 0.459112\n 0.01 1 True 100 1000 0.00033858 0.000109275 0.322745\n 0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385\n 0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669\n 0.01 1 False 100 100 0.000208085 0.000107603 0.51711\n 0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762\n 0.01 1 False 1000 100 0.000308222 0.00010345 0.335635\n 0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124\n 0.01 10 True 100 100 0.000218522 0.000105537 0.482958\n 0.01 10 True 100 1000 0.000340882 0.000111641 0.327506\n 0.01 10 True 1000 100 0.000315472 0.000117376 0.372064\n 0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128\n 0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354\n 0.01 10 False 100 1000 0.000330552 0.000112615 0.340687\n 0.01 10 False 1000 100 0.000341277 0.000114097 0.334324\n 0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549\n 0.01 25 True 100 100 0.000207806 0.000105977 0.509981\n 0.01 25 True 100 1000 0.000322879 0.00012921 0.400181\n 0.01 25 True 1000 100 0.00038262 0.00014158 0.370035\n 0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504\n 0.01 25 False 100 100 0.000209401 0.000104696 0.499979\n 0.01 25 False 100 1000 0.000321161 0.000130737 0.407076\n 0.01 25 False 1000 100 0.000377012 0.000136801 0.362856\n 0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413\n 0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833\n 0.2 1 True 100 1000 0.000348674 0.000147475 0.422959\n 0.2 1 True 1000 100 0.000336908 0.00010122 0.300439\n 0.2 1 True 1000 1000 0.001022 0.000203274 0.198898\n 0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746\n 0.2 1 False 100 1000 0.000356127 0.000146824 0.41228\n 0.2 1 False 1000 100 0.000322664 0.000100918 0.312764\n 0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648\n 0.2 10 True 100 100 0.000211692 0.000109903 0.519165\n 0.2 10 True 100 1000 0.000372819 0.000164321 0.440753\n 0.2 10 True 1000 100 0.000338651 0.000144806 0.427596\n 0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064\n 0.2 10 False 100 100 0.000215727 0.000110502 0.512231\n 0.2 10 False 100 1000 0.000375419 0.0001613 0.429653\n 0.2 10 False 1000 100 0.000336999 0.000145628 0.432132\n 0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618\n 0.2 25 True 100 100 0.000218705 0.000129913 0.594009\n 0.2 25 True 100 1000 0.000394794 0.00029428 0.745402\n 0.2 25 True 1000 100 0.000404483 0.0002693 0.665788\n 0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052\n 0.2 25 False 100 100 0.000221494 0.0001306 0.589632\n 0.2 25 False 100 1000 0.000396436 0.000297204 0.74969\n 0.2 25 False 1000 100 0.000409346 0.000270068 0.659754\n 0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046\n 0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836\n 0.5 1 True 100 1000 0.000415328 0.000223073 0.537101\n 0.5 1 True 1000 100 0.000358324 0.00011269 0.314492\n 0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851\n 0.5 1 False 100 100 0.000224196 0.000101423 0.452386\n 0.5 1 False 100 1000 0.000400987 0.000223286 0.556841\n 0.5 1 False 1000 100 0.000368825 0.00011224 0.304318\n 0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563\n 0.5 10 True 100 100 0.000222125 0.000112308 0.505608\n 0.5 10 True 100 1000 0.000461088 0.00032357 0.701753\n 0.5 10 True 1000 100 0.000394624 0.000225497 0.571422\n 0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801\n 0.5 10 False 100 100 0.000232083 0.000114978 0.495418\n 0.5 10 False 100 1000 0.000454574 0.000324632 0.714146\n 0.5 10 False 1000 100 0.000379097 0.000227768 0.600817\n 0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638\n 0.5 25 True 100 100 0.00023429 0.000151703 0.647501\n 0.5 25 True 100 1000 0.000497462 0.000598873 1.20386\n 0.5 25 True 1000 100 0.000460778 0.000557038 1.20891\n 0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845\n 0.5 25 False 100 100 0.000228981 0.000155334 0.678371\n 0.5 25 False 100 1000 0.000496139 0.000620789 1.25124\n 0.5 25 False 1000 100 0.00045473 0.000551528 1.21287\n 0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927\n 0.8 1 True 100 100 0.000222037 0.000105301 0.47425\n 0.8 1 True 100 1000 0.000410804 0.000329327 0.801664\n 0.8 1 True 1000 100 0.000349735 0.000131225 0.375212\n 0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633\n 0.8 1 False 100 100 0.000214079 0.000107486 0.502085\n 0.8 1 False 100 1000 0.000413746 0.000323244 0.781261\n 0.8 1 False 1000 100 0.000348983 0.000131983 0.378193\n 0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282\n 0.8 10 True 100 100 0.000229159 0.00011825 0.516017\n 0.8 10 True 100 1000 0.000498845 0.000532618 1.0677\n 0.8 10 True 1000 100 0.000383126 0.00029935 0.781336\n 0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689\n 0.8 10 False 100 100 0.000230783 0.000124958 0.541452\n 0.8 10 False 100 1000 0.000493393 0.000550654 1.11606\n 0.8 10 False 1000 100 0.000377167 0.000298581 0.791642\n 0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024\n 0.8 25 True 100 100 0.000233496 0.000175241 0.75051\n 0.8 25 True 100 1000 0.00055654 0.00102658 1.84458\n 0.8 25 True 1000 100 0.000463814 0.000783267 1.68875\n 0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132\n 0.8 25 False 100 100 0.000240243 0.000175047 0.728625\n 0.8 25 False 100 1000 0.000578102 0.00104499 1.80763\n 0.8 25 False 1000 100 0.000485113 0.000776849 1.60138\n 0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992\n ```\n\n ", "desc": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix", "type": "API"}, {"name": "tf.compat.v1.sparse.maximum", "docs": "Returns the element-wise max of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.maximum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n The reduction version of this elementwise operation is `tf.sparse.reduce_max`\n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise max of two SparseTensors.", "type": "API"}, {"name": "tf.compat.v1.sparse.merge", "docs": "Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nNo similar op available at this time.\n\nThe most common use case for this function occurs when feature ids and\ntheir corresponding values are stored in `Example` protos on disk.\n`parse_example` will return a batch of ids and a batch of values, and this\nfunction joins them into a single logical `SparseTensor` for use in\nfunctions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.\n\nThe `SparseTensor` returned by this function has the following properties:\n\n - `indices` is equivalent to `sp_ids.indices` with the last\n dimension discarded and replaced with `sp_ids.values`.\n - `values` is simply `sp_values.values`.\n - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then\n `output.shape = [D0, D1, ..., Dn, vocab_size]`.\n\nFor example, consider the following feature vectors:\n\n```python\n vector1 = [-3, 0, 0, 0, 0, 0]\n vector2 = [ 0, 1, 0, 4, 1, 0]\n vector3 = [ 5, 0, 0, 9, 0, 0]\n```\n\nThese might be stored sparsely in the following Example protos by storing\nonly the feature ids (column number if the vectors are treated as a matrix)\nof the non-zero elements and the corresponding values:\n\n```python\n examples = [Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[0])),\n \"values\": Feature(float_list=FloatList(value=[-3]))}),\n Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[1, 4, 3])),\n \"values\": Feature(float_list=FloatList(value=[1, 1, 4]))}),\n Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[0, 3])),\n \"values\": Feature(float_list=FloatList(value=[5, 9]))})]\n```\n\nThe result of calling parse_example on these examples will produce a\ndictionary with entries for \"ids\" and \"values\". Passing those two objects\nto this function along with vocab_size=6, will produce a `SparseTensor` that\nsparsely represents all three instances. Namely, the `indices` property will\ncontain the coordinates of the non-zero entries in the feature matrix (the\nfirst dimension is the row number in the matrix, i.e., the index within the\nbatch, and the second dimension is the column number, i.e., the feature id);\n`values` will contain the actual values. `shape` will be the shape of the\noriginal matrix, i.e., (3, 6). For our example above, the output will be\nequal to:\n\n```python\n SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],\n values=[-3, 1, 4, 1, 5, 9],\n dense_shape=[3, 6])\n```\n\nThis method generalizes to higher-dimensions by simply providing a list for\nboth the sp_ids as well as the vocab_size.\nIn this case the resulting `SparseTensor` has the following properties:\n - `indices` is equivalent to `sp_ids[0].indices` with the last\n dimension discarded and concatenated with\n `sp_ids[0].values, sp_ids[1].values, ...`.\n - `values` is simply `sp_values.values`.\n - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then\n `output.shape = [D0, D1, ..., Dn] + vocab_size`.\n\nArgs:\n sp_ids: A single `SparseTensor` with `values` property of type `int32`\n or `int64` or a Python list of such `SparseTensor`s or a list thereof.\n sp_values: A `SparseTensor` of any type.\n vocab_size: A scalar `int64` Tensor (or Python int) containing the new size\n of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.\n Or a list thereof with `all(0 <= sp_ids[i].values < vocab_size[i])` for\n all `i`.\n name: A name prefix for the returned tensors (optional)\n already_sorted: A boolean to specify whether the per-batch values in\n `sp_values` are already sorted. If so skip sorting, False by default\n (optional).\n\nReturns:\n A `SparseTensor` compactly representing a batch of feature ids and values,\n useful for passing to functions that expect such a `SparseTensor`.\n\nRaises:\n TypeError: If `sp_values` is not a `SparseTensor`. Or if `sp_ids` is neither\n a `SparseTensor` nor a list thereof. Or if `vocab_size` is not a\n `Tensor` or a Python int and `sp_ids` is a `SparseTensor`. Or if\n `vocab_size` is not a or list thereof and `sp_ids` is a list.\n ValueError: If `sp_ids` and `vocab_size` are lists of different lengths.", "desc": "Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.sparse.minimum", "docs": "Returns the element-wise min of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.minimum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise min of two SparseTensors.", "type": "API"}, {"name": "tf.compat.v1.sparse.placeholder", "docs": "Inserts a placeholder for a sparse tensor that will be always fed.\n\n **Important**: This sparse tensor will produce an error if evaluated.\n Its value must be fed using the `feed_dict` optional argument to\n `Session.run()`, `Tensor.eval()`, or `Operation.run()`.\n\n For example:\n\n ```python\n x = tf.compat.v1.sparse.placeholder(tf.float32)\n y = tf.sparse.reduce_sum(x)\n\n with tf.compat.v1.Session() as sess:\n print(sess.run(y)) # ERROR: will fail because x was not fed.\n\n indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)\n values = np.array([1.0, 2.0], dtype=np.float32)\n shape = np.array([7, 9, 2], dtype=np.int64)\n print(sess.run(y, feed_dict={\n x: tf.compat.v1.SparseTensorValue(indices, values, shape)})) # Will\n succeed.\n print(sess.run(y, feed_dict={\n x: (indices, values, shape)})) # Will succeed.\n\n sp = tf.sparse.SparseTensor(indices=indices, values=values,\n dense_shape=shape)\n sp_value = sp.eval(session=sess)\n print(sess.run(y, feed_dict={x: sp_value})) # Will succeed.\n ```\n\n @compatibility{eager} Placeholders are not compatible with eager execution.\n\n Args:\n dtype: The type of `values` elements in the tensor to be fed.\n shape: The shape of the tensor to be fed (optional). If the shape is not\n specified, you can feed a sparse tensor of any shape.\n name: A name for prefixing the operations (optional).\n\n Returns:\n A `SparseTensor` that may be used as a handle for feeding a value, but not\n evaluated directly.\n\n Raises:\n RuntimeError: if eager execution is enabled\n ", "desc": "Inserts a placeholder for a sparse tensor that will be always fed.", "type": "API"}, {"name": "tf.compat.v1.sparse.reduce_max", "docs": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version.\nInstructions for updating:\nreduction_axes is deprecated, use axis instead\n\nThis is the reduction operation for the elementwise `tf.sparse.maximum` op.\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_max()`. In particular, this Op also returns a dense `Tensor`\ninstead of a sparse one.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nsimilar to the indexing rules in Python.\n\nThe values not defined in `sp_input` don't participate in the reduce max,\nas opposed to be implicitly assumed 0 -- hence it can return negative values\nfor sparse `reduction_axes`. But, in case there are no values in\n`reduction_axes`, it will reduce to 0. See second example below.\n\nFor example:\n\n # 'x' represents [[1, ?, 2]\n # [?, 3, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 2, 3], [2, 3])\n >>> tf.sparse.reduce_max(x)\n \n >>> tf.sparse.reduce_max(x, 0)\n \n >>> tf.sparse.reduce_max(x, 1)\n \n >>> tf.sparse.reduce_max(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_max(x, [0, 1])\n \n\n # 'y' represents [[-7, ?]\n # [ 4, 3]\n # [ ?, ?]\n\n >>> y = tf.sparse.SparseTensor([[0, 0,], [1, 0], [1, 1]], [-7, 4, 3],\n ... [3, 2])\n >>> tf.sparse.reduce_max(y, 1)\n \n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of `axis`.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced Tensor.", "desc": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.reduce_max_sparse", "docs": "Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_max()`. In contrast to SparseReduceSum, this Op returns a\nSparseTensor.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nwhich are interpreted according to the indexing rules in Python.\n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced SparseTensor.", "desc": "Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.reduce_sum", "docs": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version.\nInstructions for updating:\nreduction_axes is deprecated, use axis instead\n\nThis is the reduction operation for the elementwise `tf.sparse.add` op.\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`\ninstead of a sparse one.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nsimilar to the indexing rules in Python.\n\nFor example:\n\n # 'x' represents [[1, ?, 1]\n # [?, 1, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 1, 1], [2, 3])\n >>> tf.sparse.reduce_sum(x)\n \n >>> tf.sparse.reduce_sum(x, 0)\n \n >>> tf.sparse.reduce_sum(x, 1) # Can also use -1 as the axis\n \n >>> tf.sparse.reduce_sum(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_sum(x, [0, 1])\n \n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of `axis`.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced Tensor.", "desc": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.reduce_sum_sparse", "docs": "Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a\nSparseTensor.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nwhich are interpreted according to the indexing rules in Python.\n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced SparseTensor.", "desc": "Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.reorder", "docs": "Reorders a `SparseTensor` into the canonical, row-major ordering.\n\n Note that by convention, all sparse ops preserve the canonical ordering\n along increasing dimension number. The only time ordering can be violated\n is during manual manipulation of the indices and values to add entries.\n\n Reordering does not affect the shape of the `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[4, 5]` and\n `indices` / `values`:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same shape and non-empty values, but in\n canonical ordering.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Reorders a `SparseTensor` into the canonical, row-major ordering.", "type": "API"}, {"name": "tf.compat.v1.sparse.reset_shape", "docs": "Resets the shape of a `SparseTensor` with indices and values unchanged.\n\n If `new_shape` is None, returns a copy of `sp_input` with its shape reset\n to the tight bounding box of `sp_input`. This will be a shape consisting of\n all zeros if sp_input has no values.\n\n If `new_shape` is provided, then it must be larger or equal in all dimensions\n compared to the shape of `sp_input`. When this condition is met, the returned\n SparseTensor will have its shape reset to `new_shape` and its indices and\n values unchanged from that of `sp_input.`\n\n For example:\n\n Consider a `sp_input` with shape [2, 3, 5]:\n\n [0, 0, 1]: a\n [0, 1, 0]: b\n [0, 2, 2]: c\n [1, 0, 3]: d\n\n - It is an error to set `new_shape` as [3, 7] since this represents a\n rank-2 tensor while `sp_input` is rank-3. This is either a ValueError\n during graph construction (if both shapes are known) or an OpError during\n run time.\n\n - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or\n equal in every dimension compared to the original shape [2, 3, 5].\n\n - On the other hand, setting new_shape as [2, 3, 4] is also an error: The\n third dimension is smaller than the original shape [2, 3, 5] (and an\n `InvalidArgumentError` will be raised).\n\n - If `new_shape` is None, the returned SparseTensor will have a shape\n [2, 3, 4], which is the tight bounding box of `sp_input`.\n\n Args:\n sp_input: The input `SparseTensor`.\n new_shape: None or a vector representing the new shape for the returned\n `SparseTensor`.\n\n Returns:\n A `SparseTensor` indices and values unchanged from `sp_input`. Its shape is\n `new_shape` if that is set. Otherwise it is the tight bounding box of\n `sp_input`\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If `new_shape` represents a tensor with a different rank from\n that of `sp_input` (if shapes are known when graph is constructed).\n ValueError: If `new_shape` is determined during graph build to have\n dimension sizes that are too small.\n OpError:\n - If `new_shape` has dimension sizes that are too small.\n - If shapes are not known during graph construction time, and during run\n time it is found out that the ranks do not match.\n ", "desc": "Resets the shape of a `SparseTensor` with indices and values unchanged.", "type": "API"}, {"name": "tf.compat.v1.sparse.reshape", "docs": "Reshapes a `SparseTensor` to represent values in a new dense shape.\n\n This operation has the same semantics as `reshape` on the represented dense\n tensor. The indices of non-empty values in `sp_input` are recomputed based\n on the new dense shape, and a new `SparseTensor` is returned containing the\n new indices and new shape. The order of non-empty values in `sp_input` is\n unchanged.\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total dense size remains constant. At\n most one component of `shape` can be -1. The number of dense elements\n implied by `shape` must be the same as the number of dense elements\n originally represented by `sp_input`.\n\n For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:\n\n [0, 0, 0]: a\n [0, 0, 1]: b\n [0, 1, 0]: c\n [1, 0, 0]: d\n [1, 2, 3]: e\n\n and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of\n shape `[9, 4]` and `indices` / `values`:\n\n [0, 0]: a\n [0, 1]: b\n [1, 2]: c\n [4, 2]: d\n [8, 1]: e\n\n Args:\n sp_input: The input `SparseTensor`.\n shape: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the\n represented `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same non-empty values but with indices calculated\n by the new dense shape.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If argument `shape` requests a `SparseTensor` with a different\n number of elements than `sp_input`.\n ValueError: If `shape` has more than one inferred (== -1) dimension.\n ", "desc": "Reshapes a `SparseTensor` to represent values in a new dense shape.", "type": "API"}, {"name": "tf.compat.v1.sparse.retain", "docs": "Retains specified non-empty values within a `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n and `to_retain = [True, False, False, True]`, then the output will\n be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:\n\n [0, 1]: a\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor` with `N` non-empty elements.\n to_retain: A bool vector of length `N` with `M` true values.\n\n Returns:\n A `SparseTensor` with the same shape as the input and `M` non-empty\n elements corresponding to the true positions in `to_retain`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Retains specified non-empty values within a `SparseTensor`.", "type": "API"}, {"name": "tf.compat.v1.sparse.segment_mean", "docs": "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_mean`, but `segment_ids` can have rank less than\n `data`'s first dimension, selecting a subset of dimension 0, specified by\n `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the mean along sparse segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.segment_sqrt_n", "docs": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the segment being reduced.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.compat.v1.sparse.segment_sum", "docs": "Computes the sum along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_sum`, but `segment_ids` can have rank less than `data`'s\n first dimension, selecting a subset of dimension 0, specified by `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n For example:\n\n ```python\n c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\n\n # Select two rows, one segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))\n # => [[0 0 0 0]]\n\n # Select two rows, two segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))\n # => [[ 1 2 3 4]\n # [-1 -2 -3 -4]]\n\n # With missing segment ids.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]),\n num_segments=4)\n # => [[ 1 2 3 4]\n # [ 0 0 0 0]\n # [-1 -2 -3 -4]\n # [ 0 0 0 0]]\n\n # Select all rows, two segments.\n tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))\n # => [[0 0 0 0]\n # [5 6 7 8]]\n\n # Which is equivalent to:\n tf.math.segment_sum(c, tf.constant([0, 0, 1]))\n ```\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.slice", "docs": "Slice a `SparseTensor` based on the `start` and `size`.\n\n For example, if the input is\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\n Graphically the output tensors are:\n\n sparse.slice([0, 0], [2, 4]) = shape = [2, 4]\n [ a ]\n [b c ]\n\n sparse.slice([0, 4], [2, 3]) = shape = [2, 3]\n [ d e ]\n [ ]\n\n Args:\n sp_input: The `SparseTensor` to split.\n start: 1-D. tensor represents the start of the slice.\n size: 1-D. tensor represents the size of the slice.\n name: A name for the operation (optional).\n\n Returns:\n A `SparseTensor` objects resulting from splicing.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Slice a `SparseTensor` based on the `start` and `size`.", "type": "API"}, {"name": "tf.compat.v1.sparse.softmax", "docs": "Applies softmax to a batched N-D `SparseTensor`.\n\n The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`\n (where `N >= 2`), and with indices sorted in the canonical lexicographic\n order.\n\n This op is equivalent to applying the normal `tf.nn.softmax()` to each\n innermost logical submatrix with shape `[B, C]`, but with the catch that *the\n implicitly zero elements do not participate*. Specifically, the algorithm is\n equivalent to:\n\n (1) Applies `tf.nn.softmax()` to a densified view of each innermost\n submatrix with shape `[B, C]`, along the size-C dimension;\n (2) Masks out the original implicitly-zero locations;\n (3) Renormalizes the remaining elements.\n\n Hence, the `SparseTensor` result has exactly the same non-zero indices and\n shape.\n\n Example:\n\n ```python\n # First batch:\n # [? e.]\n # [1. ? ]\n # Second batch:\n # [e ? ]\n # [e e ]\n shape = [2, 2, 2] # 3-D SparseTensor\n values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])\n indices = np.vstack(np.where(values)).astype(np.int64).T\n\n result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))\n # ...returning a 3-D SparseTensor, equivalent to:\n # [? 1.] [1 ?]\n # [1. ? ] and [.5 .5]\n # where ? means implicitly zero.\n ```\n\n Args:\n sp_input: N-D `SparseTensor`, where `N >= 2`.\n name: optional name of the operation.\n Returns:\n output: N-D `SparseTensor` representing the results.\n ", "desc": "Applies softmax to a batched N-D `SparseTensor`.", "type": "API"}, {"name": "tf.compat.v1.sparse.sparse_dense_matmul", "docs": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix\n\n (or SparseTensor) \"B\". Please note that one and only one of the inputs MUST\n be a SparseTensor and the other MUST be a dense matrix.\n\n The following input format is recommended (but not required) for optimal\n performance:\n\n * If `adjoint_a == false`: `A` should be sorted in lexicographically\n increasing order. Use `sparse.reorder` if you're not sure.\n * If `adjoint_a == true`: `A` should be sorted in order of increasing\n dimension 1 (i.e., \"column major\" order instead of \"row major\" order).\n\n Args:\n sp_a: SparseTensor (or dense Matrix) A, of rank 2.\n b: dense Matrix (or SparseTensor) B, with the same dtype as sp_a.\n adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex,\n this is transpose(conj(A)). Otherwise it's transpose(A).\n adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex,\n this is transpose(conj(B)). Otherwise it's transpose(B).\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense matrix (pseudo-code in dense np.matrix notation):\n `A = A.H if adjoint_a else A`\n `B = B.H if adjoint_b else B`\n `return A*B`\n\n Notes:\n\n Using `tf.nn.embedding_lookup_sparse` for sparse multiplication:\n\n It's not obvious but you can consider `embedding_lookup_sparse` as another\n sparse and dense multiplication. In some situations, you may prefer to use\n `embedding_lookup_sparse` even though you're not dealing with embeddings.\n\n There are two questions to ask in the decision process: Do you need gradients\n computed as sparse too? Is your sparse data represented as two\n `SparseTensor`s: ids and values? There is more explanation about data format\n below. If you answer any of these questions as yes, consider using\n `tf.nn.embedding_lookup_sparse`.\n\n Following explains differences between the expected SparseTensors:\n For example if dense form of your sparse data has shape `[3, 5]` and values:\n\n [[ a ]\n [b c]\n [ d ]]\n\n\n `SparseTensor` format expected by `sparse_tensor_dense_matmul`:\n `sp_a` (indices, values):\n\n [0, 1]: a\n [1, 0]: b\n [1, 4]: c\n [2, 2]: d\n\n `SparseTensor` format expected by `embedding_lookup_sparse`:\n `sp_ids` `sp_weights`\n\n [0, 0]: 1 [0, 0]: a\n [1, 0]: 0 [1, 0]: b\n [1, 1]: 4 [1, 1]: c\n [2, 0]: 2 [2, 0]: d\n\n\n Deciding when to use `sparse_tensor_dense_matmul` vs.\n `matmul`(a_is_sparse=True):\n\n There are a number of questions to ask in the decision process, including:\n\n * Will the SparseTensor `A` fit in memory if densified?\n * Is the column count of the product large (>> 1)?\n * Is the density of `A` larger than approximately 15%?\n\n If the answer to several of these questions is yes, consider\n converting the `SparseTensor` to a dense one and using `tf.matmul` with\n `a_is_sparse=True`.\n\n This operation tends to perform well when `A` is more sparse, if the column\n size of the product is small (e.g. matrix-vector multiplication), if\n `sp_a.dense_shape` takes on large values.\n\n Below is a rough speed comparison between `sparse_tensor_dense_matmul`,\n labeled 'sparse', and `matmul`(a_is_sparse=True), labeled 'dense'. For\n purposes of the comparison, the time spent converting from a `SparseTensor` to\n a dense `Tensor` is not included, so it is overly conservative with respect to\n the time ratio.\n\n Benchmark system:\n CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB\n GPU: NVidia Tesla k40c\n\n Compiled with:\n `-c opt --config=cuda --copt=-mavx`\n\n ```\n tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks\n A sparse [m, k] with % nonzero values between 1% and 80%\n B dense [k, n]\n\n % nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)\n 0.01 1 True 100 100 0.000221166 0.00010154 0.459112\n 0.01 1 True 100 1000 0.00033858 0.000109275 0.322745\n 0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385\n 0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669\n 0.01 1 False 100 100 0.000208085 0.000107603 0.51711\n 0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762\n 0.01 1 False 1000 100 0.000308222 0.00010345 0.335635\n 0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124\n 0.01 10 True 100 100 0.000218522 0.000105537 0.482958\n 0.01 10 True 100 1000 0.000340882 0.000111641 0.327506\n 0.01 10 True 1000 100 0.000315472 0.000117376 0.372064\n 0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128\n 0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354\n 0.01 10 False 100 1000 0.000330552 0.000112615 0.340687\n 0.01 10 False 1000 100 0.000341277 0.000114097 0.334324\n 0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549\n 0.01 25 True 100 100 0.000207806 0.000105977 0.509981\n 0.01 25 True 100 1000 0.000322879 0.00012921 0.400181\n 0.01 25 True 1000 100 0.00038262 0.00014158 0.370035\n 0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504\n 0.01 25 False 100 100 0.000209401 0.000104696 0.499979\n 0.01 25 False 100 1000 0.000321161 0.000130737 0.407076\n 0.01 25 False 1000 100 0.000377012 0.000136801 0.362856\n 0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413\n 0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833\n 0.2 1 True 100 1000 0.000348674 0.000147475 0.422959\n 0.2 1 True 1000 100 0.000336908 0.00010122 0.300439\n 0.2 1 True 1000 1000 0.001022 0.000203274 0.198898\n 0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746\n 0.2 1 False 100 1000 0.000356127 0.000146824 0.41228\n 0.2 1 False 1000 100 0.000322664 0.000100918 0.312764\n 0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648\n 0.2 10 True 100 100 0.000211692 0.000109903 0.519165\n 0.2 10 True 100 1000 0.000372819 0.000164321 0.440753\n 0.2 10 True 1000 100 0.000338651 0.000144806 0.427596\n 0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064\n 0.2 10 False 100 100 0.000215727 0.000110502 0.512231\n 0.2 10 False 100 1000 0.000375419 0.0001613 0.429653\n 0.2 10 False 1000 100 0.000336999 0.000145628 0.432132\n 0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618\n 0.2 25 True 100 100 0.000218705 0.000129913 0.594009\n 0.2 25 True 100 1000 0.000394794 0.00029428 0.745402\n 0.2 25 True 1000 100 0.000404483 0.0002693 0.665788\n 0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052\n 0.2 25 False 100 100 0.000221494 0.0001306 0.589632\n 0.2 25 False 100 1000 0.000396436 0.000297204 0.74969\n 0.2 25 False 1000 100 0.000409346 0.000270068 0.659754\n 0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046\n 0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836\n 0.5 1 True 100 1000 0.000415328 0.000223073 0.537101\n 0.5 1 True 1000 100 0.000358324 0.00011269 0.314492\n 0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851\n 0.5 1 False 100 100 0.000224196 0.000101423 0.452386\n 0.5 1 False 100 1000 0.000400987 0.000223286 0.556841\n 0.5 1 False 1000 100 0.000368825 0.00011224 0.304318\n 0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563\n 0.5 10 True 100 100 0.000222125 0.000112308 0.505608\n 0.5 10 True 100 1000 0.000461088 0.00032357 0.701753\n 0.5 10 True 1000 100 0.000394624 0.000225497 0.571422\n 0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801\n 0.5 10 False 100 100 0.000232083 0.000114978 0.495418\n 0.5 10 False 100 1000 0.000454574 0.000324632 0.714146\n 0.5 10 False 1000 100 0.000379097 0.000227768 0.600817\n 0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638\n 0.5 25 True 100 100 0.00023429 0.000151703 0.647501\n 0.5 25 True 100 1000 0.000497462 0.000598873 1.20386\n 0.5 25 True 1000 100 0.000460778 0.000557038 1.20891\n 0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845\n 0.5 25 False 100 100 0.000228981 0.000155334 0.678371\n 0.5 25 False 100 1000 0.000496139 0.000620789 1.25124\n 0.5 25 False 1000 100 0.00045473 0.000551528 1.21287\n 0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927\n 0.8 1 True 100 100 0.000222037 0.000105301 0.47425\n 0.8 1 True 100 1000 0.000410804 0.000329327 0.801664\n 0.8 1 True 1000 100 0.000349735 0.000131225 0.375212\n 0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633\n 0.8 1 False 100 100 0.000214079 0.000107486 0.502085\n 0.8 1 False 100 1000 0.000413746 0.000323244 0.781261\n 0.8 1 False 1000 100 0.000348983 0.000131983 0.378193\n 0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282\n 0.8 10 True 100 100 0.000229159 0.00011825 0.516017\n 0.8 10 True 100 1000 0.000498845 0.000532618 1.0677\n 0.8 10 True 1000 100 0.000383126 0.00029935 0.781336\n 0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689\n 0.8 10 False 100 100 0.000230783 0.000124958 0.541452\n 0.8 10 False 100 1000 0.000493393 0.000550654 1.11606\n 0.8 10 False 1000 100 0.000377167 0.000298581 0.791642\n 0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024\n 0.8 25 True 100 100 0.000233496 0.000175241 0.75051\n 0.8 25 True 100 1000 0.00055654 0.00102658 1.84458\n 0.8 25 True 1000 100 0.000463814 0.000783267 1.68875\n 0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132\n 0.8 25 False 100 100 0.000240243 0.000175047 0.728625\n 0.8 25 False 100 1000 0.000578102 0.00104499 1.80763\n 0.8 25 False 1000 100 0.000485113 0.000776849 1.60138\n 0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992\n ```\n\n ", "desc": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix", "type": "API"}, {"name": "tf.compat.v1.sparse.SparseConditionalAccumulator", "docs": "A conditional accumulator for aggregating sparse gradients.\n\n Sparse gradients are represented by `IndexedSlices`.\n\n Up-to-date gradients (i.e., time step at which gradient was computed is\n equal to the accumulator's time step) are added to the accumulator.\n\n Extraction of the average gradient is blocked until the required number of\n gradients has been accumulated.\n\n Args:\n dtype: Datatype of the accumulated gradients.\n shape: Shape of the accumulated gradients.\n shared_name: Optional. If non-empty, this accumulator will be shared under\n the given name across multiple sessions.\n name: Optional name for the accumulator.\n reduction_type: Reduction type to use when taking the gradient.\n ", "desc": "A conditional accumulator for aggregating sparse gradients.", "type": "API"}, {"name": "tf.compat.v1.sparse.SparseTensor", "docs": "Represents a sparse tensor.\n\n TensorFlow represents a sparse tensor as three separate dense tensors:\n `indices`, `values`, and `dense_shape`. In Python, the three tensors are\n collected into a `SparseTensor` class for ease of use. If you have separate\n `indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`\n object before passing to the ops below.\n\n Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`\n comprises the following components, where `N` and `ndims` are the number\n of values and number of dimensions in the `SparseTensor`, respectively:\n\n * `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the\n indices of the elements in the sparse tensor that contain nonzero values\n (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies\n that the elements with indexes of [1,3] and [2,4] have nonzero values.\n\n * `values`: A 1-D tensor of any type and shape `[N]`, which supplies the\n values for each element in `indices`. For example, given `indices=[[1,3],\n [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of\n the sparse tensor has a value of 18, and element [2,4] of the tensor has a\n value of 3.6.\n\n * `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the\n dense_shape of the sparse tensor. Takes a list indicating the number of\n elements in each dimension. For example, `dense_shape=[3,6]` specifies a\n two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a\n three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a\n one-dimensional tensor with 9 elements.\n\n The corresponding dense tensor satisfies:\n\n ```python\n dense.shape = dense_shape\n dense[tuple(indices[i])] = values[i]\n ```\n\n By convention, `indices` should be sorted in row-major order (or equivalently\n lexicographic order on the tuples `indices[i]`). This is not enforced when\n `SparseTensor` objects are constructed, but most ops assume correct ordering.\n If the ordering of sparse tensor `st` is wrong, a fixed version can be\n obtained by calling `tf.sparse.reorder(st)`.\n\n Example: The sparse tensor\n\n ```python\n SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])\n ```\n\n represents the dense tensor\n\n ```python\n [[1, 0, 0, 0]\n [0, 0, 2, 0]\n [0, 0, 0, 0]]\n ```\n ", "desc": "Represents a sparse tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.split", "docs": "Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(split_dim)`. They will be removed in a future version.\nInstructions for updating:\nsplit_dim is deprecated, use axis instead\n\nIf the `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`\neach slice starting from 0:`shape[axis] % num_split` gets extra one\ndimension. For example, if `axis = 1` and `num_split = 2` and the\ninput is:\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\nGraphically the output tensors are:\n\n output_tensor[0] =\n [ a ]\n [b c ]\n\n output_tensor[1] =\n [ d e ]\n [ ]\n\nArgs:\n keyword_required: Python 2 standin for * (temporary for argument reorder)\n sp_input: The `SparseTensor` to split.\n num_split: A Python integer. The number of ways to split.\n axis: A 0-D `int32` `Tensor`. The dimension along which to split. Must be in\n range [-rank, rank), where rank is the number of dimensions in the input\n `SparseTensor`.\n name: A name for the operation (optional).\n split_dim: Deprecated old name for axis.\n\nReturns:\n `num_split` `SparseTensor` objects resulting from splitting `value`.\n\nRaises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If the deprecated `split_dim` and `axis` are both non None.", "desc": "Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse.to_dense", "docs": "Converts a `SparseTensor` into a dense tensor.\n\n For this sparse tensor with three non-empty values:\n\n >>> sp_input = tf.SparseTensor(\n ... dense_shape=[3, 5],\n ... values=[7, 8, 9],\n ... indices =[[0, 1],\n ... [0, 3],\n ... [2, 0]])\n\n The output will be a dense `[3, 5]` tensor with values:\n\n >>> tf.sparse.to_dense(sp_input).numpy()\n array([[0, 7, 0, 8, 0],\n [0, 0, 0, 0, 0],\n [9, 0, 0, 0, 0]], dtype=int32)\n\n Note: Indices must be without repeats. This is only tested if\n `validate_indices` is `True`.\n\n Args:\n sp_input: The input `SparseTensor`.\n default_value: Scalar value to set for indices not specified in\n `sp_input`. Defaults to zero.\n validate_indices: A boolean value. If `True`, indices are checked to make\n sure they are sorted in lexicographic order and that there are no repeats.\n name: A name prefix for the returned tensors (optional).\n\n Returns:\n A dense tensor with shape `sp_input.dense_shape` and values specified by\n the non-empty values in `sp_input`. Indices not in `sp_input` are assigned\n `default_value`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` into a dense tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.to_indicator", "docs": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.\n\n The last dimension of `sp_input.indices` is discarded and replaced with\n the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`,\n then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where\n\n output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True\n\n and False elsewhere in `output`.\n\n For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values:\n\n [0, 0, 0]: 0\n [0, 1, 0]: 10\n [1, 0, 3]: 103\n [1, 1, 1]: 150\n [1, 1, 2]: 149\n [1, 1, 3]: 150\n [1, 2, 1]: 121\n\n and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool\n tensor with False everywhere except at positions\n\n (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),\n (1, 2, 121).\n\n Note that repeats are allowed in the input SparseTensor.\n This op is useful for converting `SparseTensor`s into dense formats for\n compatibility with ops that expect dense tensors.\n\n The input `SparseTensor` must be in row-major order.\n\n Args:\n sp_input: A `SparseTensor` with `values` property of type `int32` or\n `int64`.\n vocab_size: A scalar int64 Tensor (or Python int) containing the new size\n of the last dimension, `all(0 <= sp_input.values < vocab_size)`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense bool indicator tensor representing the indices with specified value.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse.transpose", "docs": "Transposes a `SparseTensor`\n\n The returned tensor's dimension i will correspond to the input dimension\n `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is\n the rank of the input tensor. Hence by default, this operation performs a\n regular matrix transpose on 2-D input Tensors.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[5, 4]` and\n `indices` / `values`:\n\n [0, 2]: c\n [1, 0]: a\n [1, 3]: d\n [3, 0]: b\n\n Args:\n sp_input: The input `SparseTensor`.\n perm: A permutation of the dimensions of `sp_input`.\n name: A name prefix for the returned tensors (optional)\n Returns:\n A transposed `SparseTensor`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Transposes a `SparseTensor`", "type": "API"}, {"name": "tf.compat.v1.sparse_add", "docs": "Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(thresh)`. They will be removed in a future version.\nInstructions for updating:\nthresh is deprecated, use threshold instead\n\nIf one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If\nboth arguments are `SparseTensor`s, this returns a `SparseTensor`. The order\nof arguments does not matter. Use vanilla `tf.add()` for adding two dense\n`Tensor`s.\n\nThe shapes of the two operands must match: broadcasting is not supported.\n\nThe indices of any input `SparseTensor` are assumed ordered in standard\nlexicographic order. If this is not the case, before this step run\n`SparseReorder` to restore index ordering.\n\nIf both arguments are sparse, we perform \"clipping\" as follows. By default,\nif two values sum to zero at some index, the output `SparseTensor` would still\ninclude that particular location in its index, storing a zero in the\ncorresponding value slot. To override this, callers can specify `thresh`,\nindicating that if the sum has a magnitude strictly smaller than `thresh`, its\ncorresponding value and index would then not be included. In particular,\n`thresh == 0.0` (default) means everything is kept and actual thresholding\nhappens only for a positive value.\n\nFor example, suppose the logical sum of two sparse operands is (densified):\n\n [ 2]\n [.1 0]\n [ 6 -.2]\n\nThen,\n\n* `thresh == 0` (the default): all 5 index/value pairs will be returned.\n* `thresh == 0.11`: only .1 and 0 will vanish, and the remaining three\n index/value pairs will be returned.\n* `thresh == 0.21`: .1, 0, and -.2 will vanish.\n\nArgs:\n a: The first operand; `SparseTensor` or `Tensor`.\n b: The second operand; `SparseTensor` or `Tensor`. At least one operand\n must be sparse.\n threshold: An optional 0-D `Tensor` (defaults to `0`). The magnitude\n threshold that determines if an output value/index pair takes space. Its\n dtype should match that of the values if they are real; if the latter are\n complex64/complex128, then the dtype should be float32/float64,\n correspondingly.\n thresh: Deprecated alias for `threshold`.\n\nReturns:\n A `SparseTensor` or a `Tensor`, representing the sum.\n\nRaises:\n TypeError: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.", "desc": "Adds two tensors, at least one of each is a `SparseTensor`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_concat", "docs": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(concat_dim)`. They will be removed in a future version.\nInstructions for updating:\nconcat_dim is deprecated, use axis instead\n\nConcatenation is with respect to the dense versions of each sparse input.\nIt is assumed that each inputs is a `SparseTensor` whose elements are ordered\nalong increasing dimension number.\n\nIf expand_nonconcat_dim is False, all inputs' shapes must match, except for\nthe concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are\nallowed to vary among all inputs.\n\nThe `indices`, `values`, and `shapes` lists must have the same length.\n\nIf expand_nonconcat_dim is False, then the output shape is identical to the\ninputs', except along the concat dimension, where it is the sum of the inputs'\nsizes along that dimension.\n\nIf expand_nonconcat_dim is True, then the output shape along the non-concat\ndimensions will be expand to be the largest among all inputs, and it is the\nsum of the inputs sizes along the concat dimension.\n\nThe output elements will be resorted to preserve the sort order along\nincreasing dimension number.\n\nThis op runs in `O(M log M)` time, where `M` is the total number of non-empty\nvalues across all inputs. This is due to the need for an internal sort in\norder to concatenate efficiently across an arbitrary dimension.\n\nFor example, if `axis = 1` and the inputs are\n\n sp_inputs[0]: shape = [2, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nthen the output will be\n\n shape = [2, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b c ] [ ] [b c ]\n\nAnother example, if 'axis = 1' and the inputs are\n\n sp_inputs[0]: shape = [3, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nif expand_nonconcat_dim = False, this will result in an error. But if\nexpand_nonconcat_dim = True, this will result in:\n\n shape = [3, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b ] [ ] [b ]\n [ c ] [ c ]\n\n\nArgs:\n axis: Dimension to concatenate along. Must be in range [-rank, rank),\n where rank is the number of dimensions in each input `SparseTensor`.\n sp_inputs: List of `SparseTensor` to concatenate.\n name: A name prefix for the returned tensors (optional).\n expand_nonconcat_dim: Whether to allow the expansion in the non-concat\n dimensions. Defaulted to False.\n concat_dim: The old (deprecated) name for axis.\n expand_nonconcat_dims: alias for expand_nonconcat_dim\n\nReturns:\n A `SparseTensor` with the concatenated output.\n\nRaises:\n TypeError: If `sp_inputs` is not a list of `SparseTensor`.", "desc": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_fill_empty_rows", "docs": "Fills empty rows in the input 2-D `SparseTensor` with a default value.\n\n This op adds entries with the specified `default_value` at index\n `[row, 0]` for any row in the input that does not already have a value.\n\n For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:\n\n [0, 1]: a\n [0, 3]: b\n [1, 0]: default_value\n [2, 0]: c\n [3, 1]: d\n [4, 0]: default_value\n\n Note that the input may have empty columns at the end, with no effect on\n this op.\n\n The output `SparseTensor` will be in row-major order and will have the\n same shape as the input.\n\n This op also returns an indicator vector such that\n\n empty_row_indicator[i] = True iff row i was an empty row.\n\n Args:\n sp_input: A `SparseTensor` with shape `[N, M]`.\n default_value: The value to fill for empty rows, with the same type as\n `sp_input.`\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n sp_ordered_output: A `SparseTensor` with shape `[N, M]`, and with all empty\n rows filled in with `default_value`.\n empty_row_indicator: A bool vector of length `N` indicating whether each\n input row was empty.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Fills empty rows in the input 2-D `SparseTensor` with a default value.", "type": "API"}, {"name": "tf.compat.v1.sparse_mask", "docs": "Masks elements of `IndexedSlices`.\n\n Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that\n contains a subset of the slices of `a`. Only the slices at indices not\n specified in `mask_indices` are returned.\n\n This is useful when you need to extract a subset of slices in an\n `IndexedSlices` object.\n\n For example:\n\n ```python\n # `a` contains slices at indices [12, 26, 37, 45] from a large tensor\n # with shape [1000, 10]\n a.indices # [12, 26, 37, 45]\n tf.shape(a.values) # [4, 10]\n\n # `b` will be the subset of `a` slices at its second and third indices, so\n # we want to mask its first and last indices (which are at absolute\n # indices 12, 45)\n b = tf.sparse.mask(a, [12, 45])\n\n b.indices # [26, 37]\n tf.shape(b.values) # [2, 10]\n ```\n\n Args:\n a: An `IndexedSlices` instance.\n mask_indices: Indices of elements to mask.\n name: A name for the operation (optional).\n\n Returns:\n The masked `IndexedSlices` instance.\n ", "desc": "Masks elements of `IndexedSlices`.", "type": "API"}, {"name": "tf.compat.v1.sparse_matmul", "docs": "Multiply matrix \"a\" by matrix \"b\".\n\n The inputs must be two-dimensional matrices and the inner dimension of \"a\" must\n match the outer dimension of \"b\". Both \"a\" and \"b\" must be `Tensor`s not\n `SparseTensor`s. This op is optimized for the case where at least one of \"a\" or\n \"b\" is sparse, in the sense that they have a large proportion of zero values.\n The breakeven for using this versus a dense matrix multiply on one platform was\n 30% zero values in the sparse matrix.\n\n The gradient computation of this operation will only take advantage of sparsity\n in the input gradient when that gradient comes from a Relu.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `bfloat16`.\n b: A `Tensor`. Must be one of the following types: `float32`, `bfloat16`.\n transpose_a: An optional `bool`. Defaults to `False`.\n transpose_b: An optional `bool`. Defaults to `False`.\n a_is_sparse: An optional `bool`. Defaults to `False`.\n b_is_sparse: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Multiply matrix \"a\" by matrix \"b\".", "type": "API"}, {"name": "tf.compat.v1.sparse_maximum", "docs": "Returns the element-wise max of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.maximum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n The reduction version of this elementwise operation is `tf.sparse.reduce_max`\n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise max of two SparseTensors.", "type": "API"}, {"name": "tf.compat.v1.sparse_merge", "docs": "Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nNo similar op available at this time.\n\nThe most common use case for this function occurs when feature ids and\ntheir corresponding values are stored in `Example` protos on disk.\n`parse_example` will return a batch of ids and a batch of values, and this\nfunction joins them into a single logical `SparseTensor` for use in\nfunctions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.\n\nThe `SparseTensor` returned by this function has the following properties:\n\n - `indices` is equivalent to `sp_ids.indices` with the last\n dimension discarded and replaced with `sp_ids.values`.\n - `values` is simply `sp_values.values`.\n - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then\n `output.shape = [D0, D1, ..., Dn, vocab_size]`.\n\nFor example, consider the following feature vectors:\n\n```python\n vector1 = [-3, 0, 0, 0, 0, 0]\n vector2 = [ 0, 1, 0, 4, 1, 0]\n vector3 = [ 5, 0, 0, 9, 0, 0]\n```\n\nThese might be stored sparsely in the following Example protos by storing\nonly the feature ids (column number if the vectors are treated as a matrix)\nof the non-zero elements and the corresponding values:\n\n```python\n examples = [Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[0])),\n \"values\": Feature(float_list=FloatList(value=[-3]))}),\n Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[1, 4, 3])),\n \"values\": Feature(float_list=FloatList(value=[1, 1, 4]))}),\n Example(features={\n \"ids\": Feature(int64_list=Int64List(value=[0, 3])),\n \"values\": Feature(float_list=FloatList(value=[5, 9]))})]\n```\n\nThe result of calling parse_example on these examples will produce a\ndictionary with entries for \"ids\" and \"values\". Passing those two objects\nto this function along with vocab_size=6, will produce a `SparseTensor` that\nsparsely represents all three instances. Namely, the `indices` property will\ncontain the coordinates of the non-zero entries in the feature matrix (the\nfirst dimension is the row number in the matrix, i.e., the index within the\nbatch, and the second dimension is the column number, i.e., the feature id);\n`values` will contain the actual values. `shape` will be the shape of the\noriginal matrix, i.e., (3, 6). For our example above, the output will be\nequal to:\n\n```python\n SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],\n values=[-3, 1, 4, 1, 5, 9],\n dense_shape=[3, 6])\n```\n\nThis method generalizes to higher-dimensions by simply providing a list for\nboth the sp_ids as well as the vocab_size.\nIn this case the resulting `SparseTensor` has the following properties:\n - `indices` is equivalent to `sp_ids[0].indices` with the last\n dimension discarded and concatenated with\n `sp_ids[0].values, sp_ids[1].values, ...`.\n - `values` is simply `sp_values.values`.\n - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then\n `output.shape = [D0, D1, ..., Dn] + vocab_size`.\n\nArgs:\n sp_ids: A single `SparseTensor` with `values` property of type `int32`\n or `int64` or a Python list of such `SparseTensor`s or a list thereof.\n sp_values: A `SparseTensor` of any type.\n vocab_size: A scalar `int64` Tensor (or Python int) containing the new size\n of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.\n Or a list thereof with `all(0 <= sp_ids[i].values < vocab_size[i])` for\n all `i`.\n name: A name prefix for the returned tensors (optional)\n already_sorted: A boolean to specify whether the per-batch values in\n `sp_values` are already sorted. If so skip sorting, False by default\n (optional).\n\nReturns:\n A `SparseTensor` compactly representing a batch of feature ids and values,\n useful for passing to functions that expect such a `SparseTensor`.\n\nRaises:\n TypeError: If `sp_values` is not a `SparseTensor`. Or if `sp_ids` is neither\n a `SparseTensor` nor a list thereof. Or if `vocab_size` is not a\n `Tensor` or a Python int and `sp_ids` is a `SparseTensor`. Or if\n `vocab_size` is not a or list thereof and `sp_ids` is a list.\n ValueError: If `sp_ids` and `vocab_size` are lists of different lengths.", "desc": "Combines a batch of feature ids and values into a single `SparseTensor`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.sparse_minimum", "docs": "Returns the element-wise min of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.minimum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise min of two SparseTensors.", "type": "API"}, {"name": "tf.compat.v1.sparse_placeholder", "docs": "Inserts a placeholder for a sparse tensor that will be always fed.\n\n **Important**: This sparse tensor will produce an error if evaluated.\n Its value must be fed using the `feed_dict` optional argument to\n `Session.run()`, `Tensor.eval()`, or `Operation.run()`.\n\n For example:\n\n ```python\n x = tf.compat.v1.sparse.placeholder(tf.float32)\n y = tf.sparse.reduce_sum(x)\n\n with tf.compat.v1.Session() as sess:\n print(sess.run(y)) # ERROR: will fail because x was not fed.\n\n indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)\n values = np.array([1.0, 2.0], dtype=np.float32)\n shape = np.array([7, 9, 2], dtype=np.int64)\n print(sess.run(y, feed_dict={\n x: tf.compat.v1.SparseTensorValue(indices, values, shape)})) # Will\n succeed.\n print(sess.run(y, feed_dict={\n x: (indices, values, shape)})) # Will succeed.\n\n sp = tf.sparse.SparseTensor(indices=indices, values=values,\n dense_shape=shape)\n sp_value = sp.eval(session=sess)\n print(sess.run(y, feed_dict={x: sp_value})) # Will succeed.\n ```\n\n @compatibility{eager} Placeholders are not compatible with eager execution.\n\n Args:\n dtype: The type of `values` elements in the tensor to be fed.\n shape: The shape of the tensor to be fed (optional). If the shape is not\n specified, you can feed a sparse tensor of any shape.\n name: A name for prefixing the operations (optional).\n\n Returns:\n A `SparseTensor` that may be used as a handle for feeding a value, but not\n evaluated directly.\n\n Raises:\n RuntimeError: if eager execution is enabled\n ", "desc": "Inserts a placeholder for a sparse tensor that will be always fed.", "type": "API"}, {"name": "tf.compat.v1.sparse_reduce_max", "docs": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version.\nInstructions for updating:\nreduction_axes is deprecated, use axis instead\n\nThis is the reduction operation for the elementwise `tf.sparse.maximum` op.\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_max()`. In particular, this Op also returns a dense `Tensor`\ninstead of a sparse one.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nsimilar to the indexing rules in Python.\n\nThe values not defined in `sp_input` don't participate in the reduce max,\nas opposed to be implicitly assumed 0 -- hence it can return negative values\nfor sparse `reduction_axes`. But, in case there are no values in\n`reduction_axes`, it will reduce to 0. See second example below.\n\nFor example:\n\n # 'x' represents [[1, ?, 2]\n # [?, 3, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 2, 3], [2, 3])\n >>> tf.sparse.reduce_max(x)\n \n >>> tf.sparse.reduce_max(x, 0)\n \n >>> tf.sparse.reduce_max(x, 1)\n \n >>> tf.sparse.reduce_max(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_max(x, [0, 1])\n \n\n # 'y' represents [[-7, ?]\n # [ 4, 3]\n # [ ?, ?]\n\n >>> y = tf.sparse.SparseTensor([[0, 0,], [1, 0], [1, 1]], [-7, 4, 3],\n ... [3, 2])\n >>> tf.sparse.reduce_max(y, 1)\n \n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of `axis`.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced Tensor.", "desc": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_reduce_max_sparse", "docs": "Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_max()`. In contrast to SparseReduceSum, this Op returns a\nSparseTensor.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nwhich are interpreted according to the indexing rules in Python.\n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced SparseTensor.", "desc": "Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_reduce_sum", "docs": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(reduction_axes)`. They will be removed in a future version.\nInstructions for updating:\nreduction_axes is deprecated, use axis instead\n\nThis is the reduction operation for the elementwise `tf.sparse.add` op.\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`\ninstead of a sparse one.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nsimilar to the indexing rules in Python.\n\nFor example:\n\n # 'x' represents [[1, ?, 1]\n # [?, 1, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 1, 1], [2, 3])\n >>> tf.sparse.reduce_sum(x)\n \n >>> tf.sparse.reduce_sum(x, 0)\n \n >>> tf.sparse.reduce_sum(x, 1) # Can also use -1 as the axis\n \n >>> tf.sparse.reduce_sum(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_sum(x, [0, 1])\n \n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of `axis`.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced Tensor.", "desc": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_reduce_sum_sparse", "docs": "Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(keep_dims)`. They will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\n\nThis Op takes a SparseTensor and is the sparse counterpart to\n`tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a\nSparseTensor.\n\nNote: A gradient is not defined for this function, so it can't be used\nin training models that need gradient descent.\n\nReduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n`keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n`reduction_axes`. If `keepdims` is true, the reduced dimensions are retained\nwith length 1.\n\nIf `reduction_axes` has no entries, all dimensions are reduced, and a tensor\nwith a single element is returned. Additionally, the axes can be negative,\nwhich are interpreted according to the indexing rules in Python.\n\nArgs:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n reduction_axes: Deprecated name of axis.\n keep_dims: Deprecated alias for `keepdims`.\n\nReturns:\n The reduced SparseTensor.", "desc": "Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_reorder", "docs": "Reorders a `SparseTensor` into the canonical, row-major ordering.\n\n Note that by convention, all sparse ops preserve the canonical ordering\n along increasing dimension number. The only time ordering can be violated\n is during manual manipulation of the indices and values to add entries.\n\n Reordering does not affect the shape of the `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[4, 5]` and\n `indices` / `values`:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same shape and non-empty values, but in\n canonical ordering.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Reorders a `SparseTensor` into the canonical, row-major ordering.", "type": "API"}, {"name": "tf.compat.v1.sparse_reset_shape", "docs": "Resets the shape of a `SparseTensor` with indices and values unchanged.\n\n If `new_shape` is None, returns a copy of `sp_input` with its shape reset\n to the tight bounding box of `sp_input`. This will be a shape consisting of\n all zeros if sp_input has no values.\n\n If `new_shape` is provided, then it must be larger or equal in all dimensions\n compared to the shape of `sp_input`. When this condition is met, the returned\n SparseTensor will have its shape reset to `new_shape` and its indices and\n values unchanged from that of `sp_input.`\n\n For example:\n\n Consider a `sp_input` with shape [2, 3, 5]:\n\n [0, 0, 1]: a\n [0, 1, 0]: b\n [0, 2, 2]: c\n [1, 0, 3]: d\n\n - It is an error to set `new_shape` as [3, 7] since this represents a\n rank-2 tensor while `sp_input` is rank-3. This is either a ValueError\n during graph construction (if both shapes are known) or an OpError during\n run time.\n\n - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or\n equal in every dimension compared to the original shape [2, 3, 5].\n\n - On the other hand, setting new_shape as [2, 3, 4] is also an error: The\n third dimension is smaller than the original shape [2, 3, 5] (and an\n `InvalidArgumentError` will be raised).\n\n - If `new_shape` is None, the returned SparseTensor will have a shape\n [2, 3, 4], which is the tight bounding box of `sp_input`.\n\n Args:\n sp_input: The input `SparseTensor`.\n new_shape: None or a vector representing the new shape for the returned\n `SparseTensor`.\n\n Returns:\n A `SparseTensor` indices and values unchanged from `sp_input`. Its shape is\n `new_shape` if that is set. Otherwise it is the tight bounding box of\n `sp_input`\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If `new_shape` represents a tensor with a different rank from\n that of `sp_input` (if shapes are known when graph is constructed).\n ValueError: If `new_shape` is determined during graph build to have\n dimension sizes that are too small.\n OpError:\n - If `new_shape` has dimension sizes that are too small.\n - If shapes are not known during graph construction time, and during run\n time it is found out that the ranks do not match.\n ", "desc": "Resets the shape of a `SparseTensor` with indices and values unchanged.", "type": "API"}, {"name": "tf.compat.v1.sparse_reshape", "docs": "Reshapes a `SparseTensor` to represent values in a new dense shape.\n\n This operation has the same semantics as `reshape` on the represented dense\n tensor. The indices of non-empty values in `sp_input` are recomputed based\n on the new dense shape, and a new `SparseTensor` is returned containing the\n new indices and new shape. The order of non-empty values in `sp_input` is\n unchanged.\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total dense size remains constant. At\n most one component of `shape` can be -1. The number of dense elements\n implied by `shape` must be the same as the number of dense elements\n originally represented by `sp_input`.\n\n For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:\n\n [0, 0, 0]: a\n [0, 0, 1]: b\n [0, 1, 0]: c\n [1, 0, 0]: d\n [1, 2, 3]: e\n\n and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of\n shape `[9, 4]` and `indices` / `values`:\n\n [0, 0]: a\n [0, 1]: b\n [1, 2]: c\n [4, 2]: d\n [8, 1]: e\n\n Args:\n sp_input: The input `SparseTensor`.\n shape: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the\n represented `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same non-empty values but with indices calculated\n by the new dense shape.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If argument `shape` requests a `SparseTensor` with a different\n number of elements than `sp_input`.\n ValueError: If `shape` has more than one inferred (== -1) dimension.\n ", "desc": "Reshapes a `SparseTensor` to represent values in a new dense shape.", "type": "API"}, {"name": "tf.compat.v1.sparse_retain", "docs": "Retains specified non-empty values within a `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n and `to_retain = [True, False, False, True]`, then the output will\n be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:\n\n [0, 1]: a\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor` with `N` non-empty elements.\n to_retain: A bool vector of length `N` with `M` true values.\n\n Returns:\n A `SparseTensor` with the same shape as the input and `M` non-empty\n elements corresponding to the true positions in `to_retain`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Retains specified non-empty values within a `SparseTensor`.", "type": "API"}, {"name": "tf.compat.v1.sparse_segment_mean", "docs": "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_mean`, but `segment_ids` can have rank less than\n `data`'s first dimension, selecting a subset of dimension 0, specified by\n `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the mean along sparse segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse_segment_sqrt_n", "docs": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the segment being reduced.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.compat.v1.sparse_segment_sum", "docs": "Computes the sum along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_sum`, but `segment_ids` can have rank less than `data`'s\n first dimension, selecting a subset of dimension 0, specified by `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n For example:\n\n ```python\n c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\n\n # Select two rows, one segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))\n # => [[0 0 0 0]]\n\n # Select two rows, two segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))\n # => [[ 1 2 3 4]\n # [-1 -2 -3 -4]]\n\n # With missing segment ids.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]),\n num_segments=4)\n # => [[ 1 2 3 4]\n # [ 0 0 0 0]\n # [-1 -2 -3 -4]\n # [ 0 0 0 0]]\n\n # Select all rows, two segments.\n tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))\n # => [[0 0 0 0]\n # [5 6 7 8]]\n\n # Which is equivalent to:\n tf.math.segment_sum(c, tf.constant([0, 0, 1]))\n ```\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n name: A name for the operation (optional).\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse_slice", "docs": "Slice a `SparseTensor` based on the `start` and `size`.\n\n For example, if the input is\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\n Graphically the output tensors are:\n\n sparse.slice([0, 0], [2, 4]) = shape = [2, 4]\n [ a ]\n [b c ]\n\n sparse.slice([0, 4], [2, 3]) = shape = [2, 3]\n [ d e ]\n [ ]\n\n Args:\n sp_input: The `SparseTensor` to split.\n start: 1-D. tensor represents the start of the slice.\n size: 1-D. tensor represents the size of the slice.\n name: A name for the operation (optional).\n\n Returns:\n A `SparseTensor` objects resulting from splicing.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Slice a `SparseTensor` based on the `start` and `size`.", "type": "API"}, {"name": "tf.compat.v1.sparse_softmax", "docs": "Applies softmax to a batched N-D `SparseTensor`.\n\n The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`\n (where `N >= 2`), and with indices sorted in the canonical lexicographic\n order.\n\n This op is equivalent to applying the normal `tf.nn.softmax()` to each\n innermost logical submatrix with shape `[B, C]`, but with the catch that *the\n implicitly zero elements do not participate*. Specifically, the algorithm is\n equivalent to:\n\n (1) Applies `tf.nn.softmax()` to a densified view of each innermost\n submatrix with shape `[B, C]`, along the size-C dimension;\n (2) Masks out the original implicitly-zero locations;\n (3) Renormalizes the remaining elements.\n\n Hence, the `SparseTensor` result has exactly the same non-zero indices and\n shape.\n\n Example:\n\n ```python\n # First batch:\n # [? e.]\n # [1. ? ]\n # Second batch:\n # [e ? ]\n # [e e ]\n shape = [2, 2, 2] # 3-D SparseTensor\n values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])\n indices = np.vstack(np.where(values)).astype(np.int64).T\n\n result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))\n # ...returning a 3-D SparseTensor, equivalent to:\n # [? 1.] [1 ?]\n # [1. ? ] and [.5 .5]\n # where ? means implicitly zero.\n ```\n\n Args:\n sp_input: N-D `SparseTensor`, where `N >= 2`.\n name: optional name of the operation.\n Returns:\n output: N-D `SparseTensor` representing the results.\n ", "desc": "Applies softmax to a batched N-D `SparseTensor`.", "type": "API"}, {"name": "tf.compat.v1.sparse_split", "docs": "Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(split_dim)`. They will be removed in a future version.\nInstructions for updating:\nsplit_dim is deprecated, use axis instead\n\nIf the `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`\neach slice starting from 0:`shape[axis] % num_split` gets extra one\ndimension. For example, if `axis = 1` and `num_split = 2` and the\ninput is:\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\nGraphically the output tensors are:\n\n output_tensor[0] =\n [ a ]\n [b c ]\n\n output_tensor[1] =\n [ d e ]\n [ ]\n\nArgs:\n keyword_required: Python 2 standin for * (temporary for argument reorder)\n sp_input: The `SparseTensor` to split.\n num_split: A Python integer. The number of ways to split.\n axis: A 0-D `int32` `Tensor`. The dimension along which to split. Must be in\n range [-rank, rank), where rank is the number of dimensions in the input\n `SparseTensor`.\n name: A name for the operation (optional).\n split_dim: Deprecated old name for axis.\n\nReturns:\n `num_split` `SparseTensor` objects resulting from splitting `value`.\n\nRaises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If the deprecated `split_dim` and `axis` are both non None.", "desc": "Split a `SparseTensor` into `num_split` tensors along `axis`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.sparse_tensor_dense_matmul", "docs": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix\n\n (or SparseTensor) \"B\". Please note that one and only one of the inputs MUST\n be a SparseTensor and the other MUST be a dense matrix.\n\n The following input format is recommended (but not required) for optimal\n performance:\n\n * If `adjoint_a == false`: `A` should be sorted in lexicographically\n increasing order. Use `sparse.reorder` if you're not sure.\n * If `adjoint_a == true`: `A` should be sorted in order of increasing\n dimension 1 (i.e., \"column major\" order instead of \"row major\" order).\n\n Args:\n sp_a: SparseTensor (or dense Matrix) A, of rank 2.\n b: dense Matrix (or SparseTensor) B, with the same dtype as sp_a.\n adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex,\n this is transpose(conj(A)). Otherwise it's transpose(A).\n adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex,\n this is transpose(conj(B)). Otherwise it's transpose(B).\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense matrix (pseudo-code in dense np.matrix notation):\n `A = A.H if adjoint_a else A`\n `B = B.H if adjoint_b else B`\n `return A*B`\n\n Notes:\n\n Using `tf.nn.embedding_lookup_sparse` for sparse multiplication:\n\n It's not obvious but you can consider `embedding_lookup_sparse` as another\n sparse and dense multiplication. In some situations, you may prefer to use\n `embedding_lookup_sparse` even though you're not dealing with embeddings.\n\n There are two questions to ask in the decision process: Do you need gradients\n computed as sparse too? Is your sparse data represented as two\n `SparseTensor`s: ids and values? There is more explanation about data format\n below. If you answer any of these questions as yes, consider using\n `tf.nn.embedding_lookup_sparse`.\n\n Following explains differences between the expected SparseTensors:\n For example if dense form of your sparse data has shape `[3, 5]` and values:\n\n [[ a ]\n [b c]\n [ d ]]\n\n\n `SparseTensor` format expected by `sparse_tensor_dense_matmul`:\n `sp_a` (indices, values):\n\n [0, 1]: a\n [1, 0]: b\n [1, 4]: c\n [2, 2]: d\n\n `SparseTensor` format expected by `embedding_lookup_sparse`:\n `sp_ids` `sp_weights`\n\n [0, 0]: 1 [0, 0]: a\n [1, 0]: 0 [1, 0]: b\n [1, 1]: 4 [1, 1]: c\n [2, 0]: 2 [2, 0]: d\n\n\n Deciding when to use `sparse_tensor_dense_matmul` vs.\n `matmul`(a_is_sparse=True):\n\n There are a number of questions to ask in the decision process, including:\n\n * Will the SparseTensor `A` fit in memory if densified?\n * Is the column count of the product large (>> 1)?\n * Is the density of `A` larger than approximately 15%?\n\n If the answer to several of these questions is yes, consider\n converting the `SparseTensor` to a dense one and using `tf.matmul` with\n `a_is_sparse=True`.\n\n This operation tends to perform well when `A` is more sparse, if the column\n size of the product is small (e.g. matrix-vector multiplication), if\n `sp_a.dense_shape` takes on large values.\n\n Below is a rough speed comparison between `sparse_tensor_dense_matmul`,\n labeled 'sparse', and `matmul`(a_is_sparse=True), labeled 'dense'. For\n purposes of the comparison, the time spent converting from a `SparseTensor` to\n a dense `Tensor` is not included, so it is overly conservative with respect to\n the time ratio.\n\n Benchmark system:\n CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB\n GPU: NVidia Tesla k40c\n\n Compiled with:\n `-c opt --config=cuda --copt=-mavx`\n\n ```\n tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks\n A sparse [m, k] with % nonzero values between 1% and 80%\n B dense [k, n]\n\n % nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)\n 0.01 1 True 100 100 0.000221166 0.00010154 0.459112\n 0.01 1 True 100 1000 0.00033858 0.000109275 0.322745\n 0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385\n 0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669\n 0.01 1 False 100 100 0.000208085 0.000107603 0.51711\n 0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762\n 0.01 1 False 1000 100 0.000308222 0.00010345 0.335635\n 0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124\n 0.01 10 True 100 100 0.000218522 0.000105537 0.482958\n 0.01 10 True 100 1000 0.000340882 0.000111641 0.327506\n 0.01 10 True 1000 100 0.000315472 0.000117376 0.372064\n 0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128\n 0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354\n 0.01 10 False 100 1000 0.000330552 0.000112615 0.340687\n 0.01 10 False 1000 100 0.000341277 0.000114097 0.334324\n 0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549\n 0.01 25 True 100 100 0.000207806 0.000105977 0.509981\n 0.01 25 True 100 1000 0.000322879 0.00012921 0.400181\n 0.01 25 True 1000 100 0.00038262 0.00014158 0.370035\n 0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504\n 0.01 25 False 100 100 0.000209401 0.000104696 0.499979\n 0.01 25 False 100 1000 0.000321161 0.000130737 0.407076\n 0.01 25 False 1000 100 0.000377012 0.000136801 0.362856\n 0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413\n 0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833\n 0.2 1 True 100 1000 0.000348674 0.000147475 0.422959\n 0.2 1 True 1000 100 0.000336908 0.00010122 0.300439\n 0.2 1 True 1000 1000 0.001022 0.000203274 0.198898\n 0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746\n 0.2 1 False 100 1000 0.000356127 0.000146824 0.41228\n 0.2 1 False 1000 100 0.000322664 0.000100918 0.312764\n 0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648\n 0.2 10 True 100 100 0.000211692 0.000109903 0.519165\n 0.2 10 True 100 1000 0.000372819 0.000164321 0.440753\n 0.2 10 True 1000 100 0.000338651 0.000144806 0.427596\n 0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064\n 0.2 10 False 100 100 0.000215727 0.000110502 0.512231\n 0.2 10 False 100 1000 0.000375419 0.0001613 0.429653\n 0.2 10 False 1000 100 0.000336999 0.000145628 0.432132\n 0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618\n 0.2 25 True 100 100 0.000218705 0.000129913 0.594009\n 0.2 25 True 100 1000 0.000394794 0.00029428 0.745402\n 0.2 25 True 1000 100 0.000404483 0.0002693 0.665788\n 0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052\n 0.2 25 False 100 100 0.000221494 0.0001306 0.589632\n 0.2 25 False 100 1000 0.000396436 0.000297204 0.74969\n 0.2 25 False 1000 100 0.000409346 0.000270068 0.659754\n 0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046\n 0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836\n 0.5 1 True 100 1000 0.000415328 0.000223073 0.537101\n 0.5 1 True 1000 100 0.000358324 0.00011269 0.314492\n 0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851\n 0.5 1 False 100 100 0.000224196 0.000101423 0.452386\n 0.5 1 False 100 1000 0.000400987 0.000223286 0.556841\n 0.5 1 False 1000 100 0.000368825 0.00011224 0.304318\n 0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563\n 0.5 10 True 100 100 0.000222125 0.000112308 0.505608\n 0.5 10 True 100 1000 0.000461088 0.00032357 0.701753\n 0.5 10 True 1000 100 0.000394624 0.000225497 0.571422\n 0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801\n 0.5 10 False 100 100 0.000232083 0.000114978 0.495418\n 0.5 10 False 100 1000 0.000454574 0.000324632 0.714146\n 0.5 10 False 1000 100 0.000379097 0.000227768 0.600817\n 0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638\n 0.5 25 True 100 100 0.00023429 0.000151703 0.647501\n 0.5 25 True 100 1000 0.000497462 0.000598873 1.20386\n 0.5 25 True 1000 100 0.000460778 0.000557038 1.20891\n 0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845\n 0.5 25 False 100 100 0.000228981 0.000155334 0.678371\n 0.5 25 False 100 1000 0.000496139 0.000620789 1.25124\n 0.5 25 False 1000 100 0.00045473 0.000551528 1.21287\n 0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927\n 0.8 1 True 100 100 0.000222037 0.000105301 0.47425\n 0.8 1 True 100 1000 0.000410804 0.000329327 0.801664\n 0.8 1 True 1000 100 0.000349735 0.000131225 0.375212\n 0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633\n 0.8 1 False 100 100 0.000214079 0.000107486 0.502085\n 0.8 1 False 100 1000 0.000413746 0.000323244 0.781261\n 0.8 1 False 1000 100 0.000348983 0.000131983 0.378193\n 0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282\n 0.8 10 True 100 100 0.000229159 0.00011825 0.516017\n 0.8 10 True 100 1000 0.000498845 0.000532618 1.0677\n 0.8 10 True 1000 100 0.000383126 0.00029935 0.781336\n 0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689\n 0.8 10 False 100 100 0.000230783 0.000124958 0.541452\n 0.8 10 False 100 1000 0.000493393 0.000550654 1.11606\n 0.8 10 False 1000 100 0.000377167 0.000298581 0.791642\n 0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024\n 0.8 25 True 100 100 0.000233496 0.000175241 0.75051\n 0.8 25 True 100 1000 0.00055654 0.00102658 1.84458\n 0.8 25 True 1000 100 0.000463814 0.000783267 1.68875\n 0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132\n 0.8 25 False 100 100 0.000240243 0.000175047 0.728625\n 0.8 25 False 100 1000 0.000578102 0.00104499 1.80763\n 0.8 25 False 1000 100 0.000485113 0.000776849 1.60138\n 0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992\n ```\n\n ", "desc": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix", "type": "API"}, {"name": "tf.compat.v1.sparse_tensor_to_dense", "docs": "Converts a `SparseTensor` into a dense tensor.\n\n For this sparse tensor with three non-empty values:\n\n >>> sp_input = tf.SparseTensor(\n ... dense_shape=[3, 5],\n ... values=[7, 8, 9],\n ... indices =[[0, 1],\n ... [0, 3],\n ... [2, 0]])\n\n The output will be a dense `[3, 5]` tensor with values:\n\n >>> tf.sparse.to_dense(sp_input).numpy()\n array([[0, 7, 0, 8, 0],\n [0, 0, 0, 0, 0],\n [9, 0, 0, 0, 0]], dtype=int32)\n\n Note: Indices must be without repeats. This is only tested if\n `validate_indices` is `True`.\n\n Args:\n sp_input: The input `SparseTensor`.\n default_value: Scalar value to set for indices not specified in\n `sp_input`. Defaults to zero.\n validate_indices: A boolean value. If `True`, indices are checked to make\n sure they are sorted in lexicographic order and that there are no repeats.\n name: A name prefix for the returned tensors (optional).\n\n Returns:\n A dense tensor with shape `sp_input.dense_shape` and values specified by\n the non-empty values in `sp_input`. Indices not in `sp_input` are assigned\n `default_value`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` into a dense tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse_to_dense", "docs": "Converts a sparse representation into a dense tensor. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nCreate a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.\n\nBuilds an array `dense` with shape `output_shape` such that\n\n```python\n# If sparse_indices is scalar\ndense[i] = (i == sparse_indices ? sparse_values : default_value)\n\n# If sparse_indices is a vector, then for each i\ndense[sparse_indices[i]] = sparse_values[i]\n\n# If sparse_indices is an n by d matrix, then for each i in [0, n)\ndense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]\n```\n\nAll other values in `dense` are set to `default_value`. If `sparse_values`\nis a scalar, all sparse indices are set to this single value.\n\nIndices should be sorted in lexicographic order, and indices must not\ncontain any repeats. If `validate_indices` is True, these properties\nare checked during execution.\n\nArgs:\n sparse_indices: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`.\n `sparse_indices[i]` contains the complete index where `sparse_values[i]`\n will be placed.\n output_shape: A 1-D `Tensor` of the same type as `sparse_indices`. Shape\n of the dense output tensor.\n sparse_values: A 0-D or 1-D `Tensor`. Values corresponding to each row of\n `sparse_indices`, or a scalar value to be used for all sparse indices.\n default_value: A 0-D `Tensor` of the same type as `sparse_values`. Value\n to set for indices not specified in `sparse_indices`. Defaults to zero.\n validate_indices: A boolean value. If True, indices are checked to make\n sure they are sorted in lexicographic order and that there are no repeats.\n name: A name for the operation (optional).\n\nReturns:\n Dense `Tensor` of shape `output_shape`. Has the same type as\n `sparse_values`.", "desc": "Converts a sparse representation into a dense tensor. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.sparse_to_indicator", "docs": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.\n\n The last dimension of `sp_input.indices` is discarded and replaced with\n the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`,\n then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where\n\n output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True\n\n and False elsewhere in `output`.\n\n For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values:\n\n [0, 0, 0]: 0\n [0, 1, 0]: 10\n [1, 0, 3]: 103\n [1, 1, 1]: 150\n [1, 1, 2]: 149\n [1, 1, 3]: 150\n [1, 2, 1]: 121\n\n and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool\n tensor with False everywhere except at positions\n\n (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),\n (1, 2, 121).\n\n Note that repeats are allowed in the input SparseTensor.\n This op is useful for converting `SparseTensor`s into dense formats for\n compatibility with ops that expect dense tensors.\n\n The input `SparseTensor` must be in row-major order.\n\n Args:\n sp_input: A `SparseTensor` with `values` property of type `int32` or\n `int64`.\n vocab_size: A scalar int64 Tensor (or Python int) containing the new size\n of the last dimension, `all(0 <= sp_input.values < vocab_size)`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense bool indicator tensor representing the indices with specified value.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.", "type": "API"}, {"name": "tf.compat.v1.sparse_transpose", "docs": "Transposes a `SparseTensor`\n\n The returned tensor's dimension i will correspond to the input dimension\n `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is\n the rank of the input tensor. Hence by default, this operation performs a\n regular matrix transpose on 2-D input Tensors.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[5, 4]` and\n `indices` / `values`:\n\n [0, 2]: c\n [1, 0]: a\n [1, 3]: d\n [3, 0]: b\n\n Args:\n sp_input: The input `SparseTensor`.\n perm: A permutation of the dimensions of `sp_input`.\n name: A name prefix for the returned tensors (optional)\n Returns:\n A transposed `SparseTensor`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Transposes a `SparseTensor`", "type": "API"}, {"name": "tf.compat.v1.SparseConditionalAccumulator", "docs": "A conditional accumulator for aggregating sparse gradients.\n\n Sparse gradients are represented by `IndexedSlices`.\n\n Up-to-date gradients (i.e., time step at which gradient was computed is\n equal to the accumulator's time step) are added to the accumulator.\n\n Extraction of the average gradient is blocked until the required number of\n gradients has been accumulated.\n\n Args:\n dtype: Datatype of the accumulated gradients.\n shape: Shape of the accumulated gradients.\n shared_name: Optional. If non-empty, this accumulator will be shared under\n the given name across multiple sessions.\n name: Optional name for the accumulator.\n reduction_type: Reduction type to use when taking the gradient.\n ", "desc": "A conditional accumulator for aggregating sparse gradients.", "type": "API"}, {"name": "tf.compat.v1.SparseFeature", "docs": "Configuration for parsing a sparse input feature from an `Example`.\n\n Note, preferably use `VarLenFeature` (possibly in combination with a\n `SequenceExample`) in order to parse out `SparseTensor`s instead of\n `SparseFeature` due to its simplicity.\n\n Closely mimicking the `SparseTensor` that will be obtained by parsing an\n `Example` with a `SparseFeature` config, a `SparseFeature` contains a\n\n * `value_key`: The name of key for a `Feature` in the `Example` whose parsed\n `Tensor` will be the resulting `SparseTensor.values`.\n\n * `index_key`: A list of names - one for each dimension in the resulting\n `SparseTensor` whose `indices[i][dim]` indicating the position of\n the `i`-th value in the `dim` dimension will be equal to the `i`-th value in\n the Feature with key named `index_key[dim]` in the `Example`.\n\n * `size`: A list of ints for the resulting `SparseTensor.dense_shape`.\n\n For example, we can represent the following 2D `SparseTensor`\n\n ```python\n SparseTensor(indices=[[3, 1], [20, 0]],\n values=[0.5, -1.0]\n dense_shape=[100, 3])\n ```\n\n with an `Example` input proto\n\n ```python\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix0\" value { int64_list { value: [ 3, 20 ] } } }\n feature { key: \"ix1\" value { int64_list { value: [ 1, 0 ] } } }\n }\n ```\n\n and `SparseFeature` config with 2 `index_key`s\n\n ```python\n SparseFeature(index_key=[\"ix0\", \"ix1\"],\n value_key=\"val\",\n dtype=tf.float32,\n size=[100, 3])\n ```\n\n Fields:\n index_key: A single string name or a list of string names of index features.\n For each key the underlying feature's type must be `int64` and its length\n must always match that of the `value_key` feature.\n To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1\n a list of length `rank` should be used.\n value_key: Name of value feature. The underlying feature's type must\n be `dtype` and its length must always match that of all the `index_key`s'\n features.\n dtype: Data type of the `value_key` feature.\n size: A Python int or list thereof specifying the dense shape. Should be a\n list if and only if `index_key` is a list. In that case the list must be\n equal to the length of `index_key`. Each for each entry `i` all values in\n the `index_key`[i] feature must be in `[0, size[i])`.\n already_sorted: A Python boolean to specify whether the values in\n `value_key` are already sorted by their index position. If so skip\n sorting. False by default (optional).\n ", "desc": "Configuration for parsing a sparse input feature from an `Example`.", "type": "API"}, {"name": "tf.compat.v1.SparseTensor", "docs": "Represents a sparse tensor.\n\n TensorFlow represents a sparse tensor as three separate dense tensors:\n `indices`, `values`, and `dense_shape`. In Python, the three tensors are\n collected into a `SparseTensor` class for ease of use. If you have separate\n `indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`\n object before passing to the ops below.\n\n Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`\n comprises the following components, where `N` and `ndims` are the number\n of values and number of dimensions in the `SparseTensor`, respectively:\n\n * `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the\n indices of the elements in the sparse tensor that contain nonzero values\n (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies\n that the elements with indexes of [1,3] and [2,4] have nonzero values.\n\n * `values`: A 1-D tensor of any type and shape `[N]`, which supplies the\n values for each element in `indices`. For example, given `indices=[[1,3],\n [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of\n the sparse tensor has a value of 18, and element [2,4] of the tensor has a\n value of 3.6.\n\n * `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the\n dense_shape of the sparse tensor. Takes a list indicating the number of\n elements in each dimension. For example, `dense_shape=[3,6]` specifies a\n two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a\n three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a\n one-dimensional tensor with 9 elements.\n\n The corresponding dense tensor satisfies:\n\n ```python\n dense.shape = dense_shape\n dense[tuple(indices[i])] = values[i]\n ```\n\n By convention, `indices` should be sorted in row-major order (or equivalently\n lexicographic order on the tuples `indices[i]`). This is not enforced when\n `SparseTensor` objects are constructed, but most ops assume correct ordering.\n If the ordering of sparse tensor `st` is wrong, a fixed version can be\n obtained by calling `tf.sparse.reorder(st)`.\n\n Example: The sparse tensor\n\n ```python\n SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])\n ```\n\n represents the dense tensor\n\n ```python\n [[1, 0, 0, 0]\n [0, 0, 2, 0]\n [0, 0, 0, 0]]\n ```\n ", "desc": "Represents a sparse tensor.", "type": "API"}, {"name": "tf.compat.v1.SparseTensorSpec", "docs": "Type specification for a `tf.sparse.SparseTensor`.", "desc": "Type specification for a `tf.sparse.SparseTensor`.", "type": "API"}, {"name": "tf.compat.v1.SparseTensorValue", "docs": "SparseTensorValue(indices, values, dense_shape)", "desc": "SparseTensorValue(indices, values, dense_shape)", "type": "API"}, {"name": "tf.compat.v1.spectral", "docs": "Public API for tf.spectral namespace.\n", "desc": "Public API for tf.spectral namespace.", "type": "API"}, {"name": "tf.compat.v1.spectral.dct", "docs": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.\n\n Types I, II, III and IV are supported.\n Type I is implemented using a length `2N` padded `tf.signal.rfft`.\n Type II is implemented using a length `2N` padded `tf.signal.rfft`, as\n described here: [Type 2 DCT using 2N FFT padded (Makhoul)]\n (https://dsp.stackexchange.com/a/10606).\n Type III is a fairly straightforward inverse of Type II\n (i.e. using a length `2N` padded `tf.signal.irfft`).\n Type IV is calculated through 2N length DCT2 of padded signal and\n picking the odd indices.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.dct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.dct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The DCT type to perform. Must be 1, 2, 3 or 4.\n n: The length of the transform. If length is less than sequence length,\n only the first n elements of the sequence are considered for the DCT.\n If n is greater than the sequence length, zeros are padded and then\n the DCT is computed as usual.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the DCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2`, `3` or `4`, `axis` is\n not `-1`, `n` is not `None` or greater than 0,\n or `norm` is not `None` or `'ortho'`.\n ValueError: If `type` is `1` and `norm` is `ortho`.\n\n [dct]: https://en.wikipedia.org/wiki/Discrete_cosine_transform\n ", "desc": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.", "type": "API"}, {"name": "tf.compat.v1.spectral.fft", "docs": "Fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform over the inner-most\n dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.fft2d", "docs": "2D fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform over the inner-most\n 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.fft3d", "docs": "3D fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform over the inner-most 3\n dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.idct", "docs": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.\n\n Currently Types I, II, III, IV are supported. Type III is the inverse of\n Type II, and vice versa.\n\n Note that you must re-normalize by 1/(2n) to obtain an inverse if `norm` is\n not `'ortho'`. That is:\n `signal == idct(dct(signal)) * 0.5 / signal.shape[-1]`.\n When `norm='ortho'`, we have:\n `signal == idct(dct(signal, norm='ortho'), norm='ortho')`.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.idct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.idct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The IDCT type to perform. Must be 1, 2, 3 or 4.\n n: For future expansion. The length of the transform. Must be `None`.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the IDCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2` or `3`, `n` is not `None, `axis` is\n not `-1`, or `norm` is not `None` or `'ortho'`.\n\n [idct]:\n https://en.wikipedia.org/wiki/Discrete_cosine_transform#Inverse_transforms\n ", "desc": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.", "type": "API"}, {"name": "tf.compat.v1.spectral.ifft", "docs": "Inverse fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform over the\n inner-most dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.ifft2d", "docs": "Inverse 2D fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform over the\n inner-most 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 2D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.ifft3d", "docs": "Inverse 3D fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform over the\n inner-most 3 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 3D fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.irfft", "docs": "Inverse real-valued fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most dimension of `input`.\n\n The inner-most dimension of `input` is assumed to be the result of `RFFT`: the\n `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If\n `fft_length` is not provided, it is computed from the size of the inner-most\n dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to\n compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller\n than the corresponding dimension of `input`, the dimension is cropped. If it is\n larger, the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.irfft2d", "docs": "Inverse 2D real-valued fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 2 dimensions of `input`.\n\n The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 2 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT2D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.irfft3d", "docs": "Inverse 3D real-valued fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 3 dimensions of `input`.\n\n The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 3 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT3D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.rfft", "docs": "Real-valued fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most dimension of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the\n `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term,\n followed by the `fft_length / 2` positive-frequency terms.\n\n Along the axis `RFFT` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "Real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.rfft2d", "docs": "2D real-valued fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 2 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.spectral.rfft3d", "docs": "3D real-valued fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 3 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.compat.v1.split", "docs": "Splits a tensor `value` into a list of sub tensors.\n\n See also `tf.unstack`.\n\n If `num_or_size_splits` is an `int`, then it splits `value` along the\n dimension `axis` into `num_or_size_splits` smaller tensors. This requires that\n `value.shape[axis]` is divisible by `num_or_size_splits`.\n\n If `num_or_size_splits` is a 1-D Tensor (or list), then `value` is split into\n `len(num_or_size_splits)` elements. The shape of the `i`-th\n element has the same size as the `value` except along dimension `axis` where\n the size is `num_or_size_splits[i]`.\n\n For example:\n\n >>> x = tf.Variable(tf.random.uniform([5, 30], -1, 1))\n >>>\n >>> # Split `x` into 3 tensors along dimension 1\n >>> s0, s1, s2 = tf.split(x, num_or_size_splits=3, axis=1)\n >>> tf.shape(s0).numpy()\n array([ 5, 10], dtype=int32)\n >>>\n >>> # Split `x` into 3 tensors with sizes [4, 15, 11] along dimension 1\n >>> split0, split1, split2 = tf.split(x, [4, 15, 11], 1)\n >>> tf.shape(split0).numpy()\n array([5, 4], dtype=int32)\n >>> tf.shape(split1).numpy()\n array([ 5, 15], dtype=int32)\n >>> tf.shape(split2).numpy()\n array([ 5, 11], dtype=int32)\n\n Args:\n value: The `Tensor` to split.\n num_or_size_splits: Either an `int` indicating the number of splits\n along `axis` or a 1-D integer `Tensor` or Python list containing the sizes\n of each output tensor along `axis`. If an `int`, then it must evenly\n divide `value.shape[axis]`; otherwise the sum of sizes along the split\n axis must match that of the `value`.\n axis: An `int` or scalar `int32` `Tensor`. The dimension along which\n to split. Must be in the range `[-rank(value), rank(value))`. Defaults to\n 0.\n num: Optional, an `int`, used to specify the number of outputs when it\n cannot be inferred from the shape of `size_splits`.\n name: A name for the operation (optional).\n\n Returns:\n if `num_or_size_splits` is an `int` returns a list of\n `num_or_size_splits` `Tensor` objects; if `num_or_size_splits` is a 1-D\n list or 1-D `Tensor` returns `num_or_size_splits.get_shape[0]`\n `Tensor` objects resulting from splitting `value`.\n\n Raises:\n ValueError: If `num` is unspecified and cannot be inferred.\n ValueError: If `num_or_size_splits` is a scalar `Tensor`.\n ", "desc": "Splits a tensor `value` into a list of sub tensors.", "type": "API"}, {"name": "tf.compat.v1.sqrt", "docs": "Computes element-wise square root of the input tensor.\n\n Note: This operation does not support integer types.\n\n >>> x = tf.constant([[4.0], [16.0]])\n >>> tf.sqrt(x)\n \n >>> y = tf.constant([[-4.0], [16.0]])\n >>> tf.sqrt(y)\n \n >>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)\n >>> tf.sqrt(z)\n \n\n Note: In order to support complex type, please provide an input tensor\n of `complex64` or `complex128`.\n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of same size, type and sparsity as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape)`", "desc": "Computes element-wise square root of the input tensor.", "type": "API"}, {"name": "tf.compat.v1.square", "docs": "Computes square of x element-wise.\n\n I.e., \\\\(y = x * x = x^2\\\\).\n\n >>> tf.math.square([-2., 0., 3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)`", "desc": "Computes square of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.squared_difference", "docs": "Returns conj(x - y)(x - y) element-wise.\n\n *NOTE*: `math.squared_difference` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns conj(x - y)(x - y) element-wise.", "type": "API"}, {"name": "tf.compat.v1.squeeze", "docs": "Removes dimensions of size 1 from the shape of a tensor. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(squeeze_dims)`. They will be removed in a future version.\nInstructions for updating:\nUse the `axis` argument instead\n\nGiven a tensor `input`, this operation returns a tensor of the same type with\nall dimensions of size 1 removed. If you don't want to remove all size 1\ndimensions, you can remove specific size 1 dimensions by specifying\n`axis`.\n\nFor example:\n\n>>> # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n>>> t = tf.ones([1, 2, 1, 3, 1, 1])\n>>> print(tf.shape(tf.squeeze(t)).numpy())\n[2 3]\n\nOr, to remove specific size 1 dimensions:\n\n>>> # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n>>> t = tf.ones([1, 2, 1, 3, 1, 1])\n>>> print(tf.shape(tf.squeeze(t, [2, 4])).numpy())\n[1 2 3 1]\n\nNote: if `input` is a `tf.RaggedTensor`, then this operation takes `O(N)`\ntime, where `N` is the number of elements in the squeezed dimensions.\n\nArgs:\n input: A `Tensor`. The `input` to squeeze.\n axis: An optional list of `ints`. Defaults to `[]`. If specified, only\n squeezes the dimensions listed. The dimension index starts at 0. It is an\n error to squeeze a dimension that is not 1. Must be in the range\n `[-rank(input), rank(input))`. Must be specified if `input` is a\n `RaggedTensor`.\n name: A name for the operation (optional).\n squeeze_dims: Deprecated keyword argument that is now axis.\n\nReturns:\n A `Tensor`. Has the same type as `input`.\n Contains the same data as `input`, but has one or more dimensions of\n size 1 removed.\n\nRaises:\n ValueError: When both `squeeze_dims` and `axis` are specified.", "desc": "Removes dimensions of size 1 from the shape of a tensor. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.\n\n See also `tf.concat`, `tf.tile`, `tf.repeat`.\n\n Packs the list of tensors in `values` into a tensor with rank one higher than\n each tensor in `values`, by packing them along the `axis` dimension.\n Given a list of length `N` of tensors of shape `(A, B, C)`;\n\n if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.\n if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.\n Etc.\n\n For example:\n\n >>> x = tf.constant([1, 4])\n >>> y = tf.constant([2, 5])\n >>> z = tf.constant([3, 6])\n >>> tf.stack([x, y, z])\n \n >>> tf.stack([x, y, z], axis=1)\n \n\n This is the opposite of unstack. The numpy equivalent is `np.stack`\n\n >>> np.array_equal(np.stack([x, y, z]), tf.stack([x, y, z]))\n True\n\n Args:\n values: A list of `Tensor` objects with the same shape and type.\n axis: An `int`. The axis to stack along. Defaults to the first dimension.\n Negative values wrap around, so the valid range is `[-(R+1), R+1)`.\n name: A name for this operation (optional).\n\n Returns:\n output: A stacked `Tensor` with the same type as `values`.\n\n Raises:\n ValueError: If `axis` is out of the range [-(R+1), R+1).\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.", "type": "API"}, {"name": "tf.compat.v1.stop_gradient", "docs": "Stops gradient computation.\n\n When executed in a graph, this op outputs its input tensor as-is.\n\n When building ops to compute gradients, this op prevents the contribution of\n its inputs to be taken into account. Normally, the gradient generator adds ops\n to a graph to compute the derivatives of a specified 'loss' by recursively\n finding out inputs that contributed to its computation. If you insert this op\n in the graph it inputs are masked from the gradient generator. They are not\n taken into account for computing gradients.\n\n This is useful any time you want to compute a value with TensorFlow but need\n to pretend that the value was a constant. For example, the softmax function\n for a vector x can be written as\n\n ```python\n\n def softmax(x):\n numerator = tf.exp(x)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n This however is susceptible to overflow if the values in x are large. An\n alternative more stable way is to subtract the maximum of x from each of the\n values.\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.reduce_max(x)\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n However, when we backprop through the softmax to x, we dont want to backprop\n through the `tf.reduce_max(x)` (if the max values are not unique then the\n gradient could flow to the wrong input) calculation and treat that as a\n constant. Therefore, we should write this out as\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.stop_gradient(tf.reduce_max(x))\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n Some other examples include:\n\n * The *EM* algorithm where the *M-step* should not involve backpropagation\n through the output of the *E-step*.\n * Contrastive divergence training of Boltzmann machines where, when\n differentiating the energy function, the training must not backpropagate\n through the graph that generated the samples from the model.\n * Adversarial training, where no backprop should happen through the adversarial\n example generation process.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Stops gradient computation.", "type": "API"}, {"name": "tf.compat.v1.strided_slice", "docs": "Extracts a strided slice of a tensor (generalized Python array indexing).\n\n See also `tf.slice`.\n\n **Instead of calling this op directly most users will want to use the\n NumPy-style slicing syntax (e.g. `tensor[..., 3:4:-1, tf.newaxis, 3]`), which\n is supported via `tf.Tensor.__getitem__` and `tf.Variable.__getitem__`.**\n The interface of this op is a low-level encoding of the slicing syntax.\n\n Roughly speaking, this op extracts a slice of size `(end-begin)/stride`\n from the given `input_` tensor. Starting at the location specified by `begin`\n the slice continues by adding `stride` to the index until all dimensions are\n not less than `end`.\n Note that a stride can be negative, which causes a reverse slice.\n\n Given a Python slice `input[spec0, spec1, ..., specn]`,\n this function will be called as follows.\n\n `begin`, `end`, and `strides` will be vectors of length n.\n n in general is not equal to the rank of the `input_` tensor.\n\n In each mask field (`begin_mask`, `end_mask`, `ellipsis_mask`,\n `new_axis_mask`, `shrink_axis_mask`) the ith bit will correspond to\n the ith spec.\n\n If the ith bit of `begin_mask` is set, `begin[i]` is ignored and\n the fullest possible range in that dimension is used instead.\n `end_mask` works analogously, except with the end range.\n\n `foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`.\n `foo[::-1]` reverses a tensor with shape 8.\n\n If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions\n as needed will be inserted between other dimensions. Only one\n non-zero bit is allowed in `ellipsis_mask`.\n\n For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is\n equivalent to `foo[3:5,:,:,4:5]` and\n `foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`.\n\n If the ith bit of `new_axis_mask` is set, then `begin`,\n `end`, and `stride` are ignored and a new length 1 dimension is\n added at this point in the output tensor.\n\n For example,\n `foo[:4, tf.newaxis, :2]` would produce a shape `(4, 1, 2)` tensor.\n\n If the ith bit of `shrink_axis_mask` is set, it implies that the ith\n specification shrinks the dimensionality by 1, taking on the value at index\n `begin[i]`. `end[i]` and `strides[i]` are ignored in this case. For example in\n Python one might do `foo[:, 3, :]` which would result in `shrink_axis_mask`\n equal to 2.\n\n\n NOTE: `begin` and `end` are zero-indexed.\n `strides` entries must be non-zero.\n\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]],\n [[3, 3, 3], [4, 4, 4]],\n [[5, 5, 5], [6, 6, 6]]])\n tf.strided_slice(t, [1, 0, 0], [2, 1, 3], [1, 1, 1]) # [[[3, 3, 3]]]\n tf.strided_slice(t, [1, 0, 0], [2, 2, 3], [1, 1, 1]) # [[[3, 3, 3],\n # [4, 4, 4]]]\n tf.strided_slice(t, [1, -1, 0], [2, -3, 3], [1, -1, 1]) # [[[4, 4, 4],\n # [3, 3, 3]]]\n ```\n\n Args:\n input_: A `Tensor`.\n begin: An `int32` or `int64` `Tensor`.\n end: An `int32` or `int64` `Tensor`.\n strides: An `int32` or `int64` `Tensor`.\n begin_mask: An `int32` mask.\n end_mask: An `int32` mask.\n ellipsis_mask: An `int32` mask.\n new_axis_mask: An `int32` mask.\n shrink_axis_mask: An `int32` mask.\n var: The variable corresponding to `input_` or None\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` the same type as `input`.\n ", "desc": "Extracts a strided slice of a tensor (generalized Python array indexing).", "type": "API"}, {"name": "tf.compat.v1.string_join", "docs": "Perform element-wise concatenation of a list of string tensors.\n\n Given a list of string tensors of same shape, performs element-wise\n concatenation of the strings of the same index in all tensors.\n\n\n >>> tf.strings.join(['abc','def']).numpy()\n b'abcdef'\n >>> tf.strings.join([['abc','123'],\n ... ['def','456'],\n ... ['ghi','789']]).numpy()\n array([b'abcdefghi', b'123456789'], dtype=object)\n >>> tf.strings.join([['abc','123'],\n ... ['def','456']],\n ... separator=\" \").numpy()\n array([b'abc def', b'123 456'], dtype=object)\n\n The reduction version of this elementwise operation is\n `tf.strings.reduce_join`\n\n Args:\n inputs: A list of `tf.Tensor` objects of same size and `tf.string` dtype.\n separator: A string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Perform element-wise concatenation of a list of string tensors.", "type": "API"}, {"name": "tf.compat.v1.string_split", "docs": "Split elements of `source` based on `delimiter`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(delimiter)`. They will be removed in a future version.\nInstructions for updating:\ndelimiter is deprecated, please use sep instead.\n\nLet N be the size of `source` (typically N will be the batch size). Split each\nelement of `source` based on `delimiter` and return a `SparseTensor`\nor `RaggedTensor` containing the split tokens. Empty tokens are ignored.\n\nIf `sep` is an empty string, each element of the `source` is split\ninto individual strings, each containing one byte. (This includes splitting\nmultibyte sequences of UTF-8.) If delimiter contains multiple bytes, it is\ntreated as a set of delimiters with each considered a potential split point.\n\nExamples:\n\n>>> print(tf.compat.v1.string_split(['hello world', 'a b c']))\nSparseTensor(indices=tf.Tensor( [[0 0] [0 1] [1 0] [1 1] [1 2]], ...),\n values=tf.Tensor([b'hello' b'world' b'a' b'b' b'c'], ...),\n dense_shape=tf.Tensor([2 3], shape=(2,), dtype=int64))\n\n>>> print(tf.compat.v1.string_split(['hello world', 'a b c'],\n... result_type=\"RaggedTensor\"))\n\n\nArgs:\n source: `1-D` string `Tensor`, the strings to split.\n sep: `0-D` string `Tensor`, the delimiter character, the string should\n be length 0 or 1. Default is ' '.\n skip_empty: A `bool`. If `True`, skip the empty strings from the result.\n delimiter: deprecated alias for `sep`.\n result_type: The tensor type for the result: one of `\"RaggedTensor\"` or\n `\"SparseTensor\"`.\n name: A name for the operation (optional).\n\nRaises:\n ValueError: If delimiter is not a string.\n\nReturns:\n A `SparseTensor` or `RaggedTensor` of rank `2`, the strings split according\n to the delimiter. The first column of the indices corresponds to the row\n in `source` and the second column corresponds to the index of the split\n component in this row.", "desc": "Split elements of `source` based on `delimiter`. (deprecated arguments)", "type": "API"}, {"name": "tf.compat.v1.string_strip", "docs": "Strip leading and trailing whitespaces from the Tensor.\n\n Examples:\n\n >>> tf.strings.strip([\"\\nTensorFlow\", \" The python library \"]).numpy()\n array([b'TensorFlow', b'The python library'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`. A string `Tensor` of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Strip leading and trailing whitespaces from the Tensor.", "type": "API"}, {"name": "tf.compat.v1.string_to_hash_bucket", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process.\n\n Note that the hash function may change from time to time.\n This functionality will be deprecated and it's recommended to use\n `tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`.\n\n Args:\n string_tensor: A `Tensor` of type `string`.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.string_to_hash_bucket_fast", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process and will never change. However, it is not suitable for cryptography.\n This function may be used when CPU time is scarce and inputs are trusted or\n unimportant. There is a risk of adversaries constructing inputs that all hash\n to the same bucket. To prevent this problem, use a strong hash function with\n `tf.string_to_hash_bucket_strong`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_fast([\"Hello\", \"TensorFlow\", \"2.x\"], 3).numpy()\n array([0, 2, 2])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.string_to_hash_bucket_strong", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process. The hash function is a keyed hash function, where attribute `key`\n defines the key of the hash function. `key` is an array of 2 elements.\n\n A strong hash is important when inputs may be malicious, e.g. URLs with\n additional components. Adversaries could try to make their inputs hash to the\n same bucket for a denial-of-service attack or to skew the results. A strong\n hash can be used to make it difficult to find inputs with a skewed hash value\n distribution over buckets. This requires that the hash function is\n seeded by a high-entropy (random) \"key\" unknown to the adversary.\n\n The additional robustness comes at a cost of roughly 4x higher compute\n time than `tf.string_to_hash_bucket_fast`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_strong([\"Hello\", \"TF\"], 3, [1, 2]).numpy()\n array([2, 0])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n key: A list of `ints`.\n The key used to seed the hash function, passed as a list of two uint64\n elements.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.string_to_number", "docs": "Converts each string in the input Tensor to the specified numeric type.\n\n (Note that int32 overflow results in an error while float overflow\n results in a rounded value.)\n\n Example:\n\n >>> strings = [\"5.0\", \"3.0\", \"7.0\"]\n >>> tf.strings.to_number(strings)\n \n\n Args:\n string_tensor: A `Tensor` of type `string`.\n out_type: An optional `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.float32`.\n The numeric type to interpret each string in `string_tensor` as.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Converts each string in the input Tensor to the specified numeric type.", "type": "API"}, {"name": "tf.compat.v1.strings", "docs": "Operations for working with string Tensors.\n", "desc": "Operations for working with string Tensors.", "type": "API"}, {"name": "tf.compat.v1.strings.as_string", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.compat.v1.strings.bytes_split", "docs": "Split string elements of `input` into bytes.\n\n Examples:\n\n >>> tf.strings.bytes_split('hello').numpy()\n array([b'h', b'e', b'l', b'l', b'o'], dtype=object)\n >>> tf.strings.bytes_split(['hello', '123'])\n \n\n Note that this op splits strings into bytes, not unicode characters. To\n split strings into unicode characters, use `tf.strings.unicode_split`.\n\n See also: `tf.io.decode_raw`, `tf.strings.split`, `tf.strings.unicode_split`.\n\n Args:\n input: A string `Tensor` or `RaggedTensor`: the strings to split. Must\n have a statically known rank (`N`).\n name: A name for the operation (optional).\n\n Returns:\n A `RaggedTensor` of rank `N+1`: the bytes that make up the source strings.\n ", "desc": "Split string elements of `input` into bytes.", "type": "API"}, {"name": "tf.compat.v1.strings.format", "docs": "Formats a string template using a list of tensors.\n\n Formats a string template using a list of tensors, abbreviating tensors by\n only printing the first and last `summarize` elements of each dimension\n (recursively). If formatting only one tensor into a template, the tensor does\n not have to be wrapped in a list.\n\n Example:\n Formatting a single-tensor template:\n\n >>> tensor = tf.range(5)\n >>> tf.strings.format(\"tensor: {}, suffix\", tensor)\n \n\n Formatting a multi-tensor template:\n\n >>> tensor_a = tf.range(2)\n >>> tensor_b = tf.range(1, 4, 2)\n >>> tf.strings.format(\"a: {}, b: {}, suffix\", (tensor_a, tensor_b))\n \n\n\n Args:\n template: A string template to format tensor values into.\n inputs: A list of `Tensor` objects, or a single Tensor.\n The list of tensors to format into the template string. If a solitary\n tensor is passed in, the input tensor will automatically be wrapped as a\n list.\n placeholder: An optional `string`. Defaults to `{}`.\n At each placeholder occurring in the template, a subsequent tensor\n will be inserted.\n summarize: An optional `int`. Defaults to `3`.\n When formatting the tensors, show the first and last `summarize`\n entries of each tensor dimension (recursively). If set to -1, all\n elements of the tensor will be shown.\n name: A name for the operation (optional).\n\n Returns:\n A scalar `Tensor` of type `string`.\n\n Raises:\n ValueError: if the number of placeholders does not match the number of\n inputs.\n ", "desc": "Formats a string template using a list of tensors.", "type": "API"}, {"name": "tf.compat.v1.strings.join", "docs": "Perform element-wise concatenation of a list of string tensors.\n\n Given a list of string tensors of same shape, performs element-wise\n concatenation of the strings of the same index in all tensors.\n\n\n >>> tf.strings.join(['abc','def']).numpy()\n b'abcdef'\n >>> tf.strings.join([['abc','123'],\n ... ['def','456'],\n ... ['ghi','789']]).numpy()\n array([b'abcdefghi', b'123456789'], dtype=object)\n >>> tf.strings.join([['abc','123'],\n ... ['def','456']],\n ... separator=\" \").numpy()\n array([b'abc def', b'123 456'], dtype=object)\n\n The reduction version of this elementwise operation is\n `tf.strings.reduce_join`\n\n Args:\n inputs: A list of `tf.Tensor` objects of same size and `tf.string` dtype.\n separator: A string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Perform element-wise concatenation of a list of string tensors.", "type": "API"}, {"name": "tf.compat.v1.strings.length", "docs": "Computes the length of each string given in the input tensor.\n\n >>> strings = tf.constant(['Hello','TensorFlow', '\ud83d\ude42'])\n >>> tf.strings.length(strings).numpy() # default counts bytes\n array([ 5, 10, 4], dtype=int32)\n >>> tf.strings.length(strings, unit=\"UTF8_CHAR\").numpy()\n array([ 5, 10, 1], dtype=int32)\n\n Args:\n input: A `Tensor` of type `string`. The strings for which to compute the\n length for each element.\n name: A name for the operation (optional).\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to\n `\"BYTE\"`. The unit that is counted to compute string length. One of:\n `\"BYTE\"` (for the number of bytes in each string) or `\"UTF8_CHAR\"` (for\n the number of UTF-8 encoded Unicode code points in each string). Results\n are undefined if `unit=UTF8_CHAR` and the `input` strings do not contain\n structurally valid UTF-8.\n\n Returns:\n A `Tensor` of type `int32`, containing the length of the input string in\n the same element of the input tensor.\n ", "desc": "Computes the length of each string given in the input tensor.", "type": "API"}, {"name": "tf.compat.v1.strings.lower", "docs": "Converts all uppercase characters into their respective lowercase replacements.\n\n Example:\n\n >>> tf.strings.lower(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be lower-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all uppercase characters into their respective lowercase replacements.", "type": "API"}, {"name": "tf.compat.v1.strings.ngrams", "docs": "Create a tensor of n-grams based on `data`.\n\n Creates a tensor of n-grams based on `data`. The n-grams are created by\n joining windows of `width` adjacent strings from the inner axis of `data`\n using `separator`.\n\n The input data can be padded on both the start and end of the sequence, if\n desired, using the `pad_values` argument. If set, `pad_values` should contain\n either a tuple of strings or a single string; the 0th element of the tuple\n will be used to pad the left side of the sequence and the 1st element of the\n tuple will be used to pad the right side of the sequence. The `padding_width`\n arg controls how many padding values are added to each side; it defaults to\n `ngram_width-1`.\n\n If this op is configured to not have padding, or if it is configured to add\n padding with `padding_width` set to less than ngram_width-1, it is possible\n that a sequence, or a sequence plus padding, is smaller than the ngram\n width. In that case, no ngrams will be generated for that sequence. This can\n be prevented by setting `preserve_short_sequences`, which will cause the op\n to always generate at least one ngram per non-empty sequence.\n\n Examples:\n\n >>> tf.strings.ngrams([\"A\", \"B\", \"C\", \"D\"], 2).numpy()\n array([b'A B', b'B C', b'C D'], dtype=object)\n >>> tf.strings.ngrams([\"TF\", \"and\", \"keras\"], 1).numpy()\n array([b'TF', b'and', b'keras'], dtype=object)\n\n Args:\n data: A Tensor or RaggedTensor containing the source data for the ngrams.\n ngram_width: The width(s) of the ngrams to create. If this is a list or\n tuple, the op will return ngrams of all specified arities in list order.\n Values must be non-Tensor integers greater than 0.\n separator: The separator string used between ngram elements. Must be a\n string constant, not a Tensor.\n pad_values: A tuple of (left_pad_value, right_pad_value), a single string,\n or None. If None, no padding will be added; if a single string, then that\n string will be used for both left and right padding. Values must be Python\n strings.\n padding_width: If set, `padding_width` pad values will be added to both\n sides of each sequence. Defaults to `ngram_width`-1. Must be greater than\n 0. (Note that 1-grams are never padded, regardless of this value.)\n preserve_short_sequences: If true, then ensure that at least one ngram is\n generated for each input sequence. In particular, if an input sequence is\n shorter than `min(ngram_width) + 2*pad_width`, then generate a single\n ngram containing the entire sequence. If false, then no ngrams are\n generated for these short input sequences.\n name: The op name.\n\n Returns:\n A RaggedTensor of ngrams. If `data.shape=[D1...DN, S]`, then\n `output.shape=[D1...DN, NUM_NGRAMS]`, where\n `NUM_NGRAMS=S-ngram_width+1+2*padding_width`.\n\n Raises:\n TypeError: if `pad_values` is set to an invalid type.\n ValueError: if `pad_values`, `padding_width`, or `ngram_width` is set to an\n invalid value.\n ", "desc": "Create a tensor of n-grams based on `data`.", "type": "API"}, {"name": "tf.compat.v1.strings.reduce_join", "docs": "Joins all strings into a single string, or joins along an axis.\n\n This is the reduction operation for the elementwise `tf.strings.join` op.\n\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']]).numpy()\n b'abc123def456'\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']], axis=-1).numpy()\n array([b'abc123', b'def456'], dtype=object)\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']],\n ... axis=-1,\n ... separator=\" \").numpy()\n array([b'abc 123', b'def 456'], dtype=object)\n\n Args:\n inputs: A `tf.string` tensor.\n axis: Which axis to join along. The default behavior is to join all\n elements, producing a scalar.\n keepdims: If true, retains reduced dimensions with length 1.\n separator: a string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Joins all strings into a single string, or joins along an axis.", "type": "API"}, {"name": "tf.compat.v1.strings.regex_full_match", "docs": "Check if the input matches the regex pattern.\n\n The input is a string tensor of any shape. The pattern is a scalar\n string tensor which is applied to every element of the input tensor.\n The boolean values (True or False) of the output tensor indicate\n if the input matches the regex pattern provided.\n\n The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Examples:\n\n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*lib$\")\n \n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*TF$\")\n \n\n Args:\n input: A `Tensor` of type `string`.\n A string tensor of the text to be processed.\n pattern: A `Tensor` of type `string`.\n A scalar string tensor containing the regular expression to match the input.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Check if the input matches the regex pattern.", "type": "API"}, {"name": "tf.compat.v1.strings.regex_replace", "docs": "Replace elements of `input` matching regex `pattern` with `rewrite`.\n\n >>> tf.strings.regex_replace(\"Text with tags.
contains html\",\n ... \"<[^>]+>\", \" \")\n \n\n Args:\n input: string `Tensor`, the source strings to process.\n pattern: string or scalar string `Tensor`, regular expression to use,\n see more details at https://github.com/google/re2/wiki/Syntax\n rewrite: string or scalar string `Tensor`, value to use in match\n replacement, supports backslash-escaped digits (\\1 to \\9) can be to insert\n text matching corresponding parenthesized group.\n replace_global: `bool`, if `True` replace all non-overlapping matches,\n else replace only the first match.\n name: A name for the operation (optional).\n\n Returns:\n string `Tensor` of the same shape as `input` with specified replacements.\n ", "desc": "Replace elements of `input` matching regex `pattern` with `rewrite`.", "type": "API"}, {"name": "tf.compat.v1.strings.split", "docs": "Split elements of `input` based on `sep`.\n\n Let N be the size of `input` (typically N will be the batch size). Split each\n element of `input` based on `sep` and return a `SparseTensor` or\n `RaggedTensor` containing the split tokens. Empty tokens are ignored.\n\n Examples:\n\n >>> print(tf.compat.v1.strings.split(['hello world', 'a b c']))\n SparseTensor(indices=tf.Tensor( [[0 0] [0 1] [1 0] [1 1] [1 2]], ...),\n values=tf.Tensor([b'hello' b'world' b'a' b'b' b'c'], ...),\n dense_shape=tf.Tensor([2 3], shape=(2,), dtype=int64))\n\n >>> print(tf.compat.v1.strings.split(['hello world', 'a b c'],\n ... result_type=\"RaggedTensor\"))\n \n\n If `sep` is given, consecutive delimiters are not grouped together and are\n deemed to delimit empty strings. For example, `input` of `\"1<>2<><>3\"` and\n `sep` of `\"<>\"` returns `[\"1\", \"2\", \"\", \"3\"]`. If `sep` is None or an empty\n string, consecutive whitespace are regarded as a single separator, and the\n result will contain no empty strings at the start or end if the string has\n leading or trailing whitespace.\n\n Note that the above mentioned behavior matches python's str.split.\n\n Args:\n input: A string `Tensor` of rank `N`, the strings to split. If\n `rank(input)` is not known statically, then it is assumed to be `1`.\n sep: `0-D` string `Tensor`, the delimiter character.\n maxsplit: An `int`. If `maxsplit > 0`, limit of the split of the result.\n result_type: The tensor type for the result: one of `\"RaggedTensor\"` or\n `\"SparseTensor\"`.\n source: alias for \"input\" argument.\n name: A name for the operation (optional).\n\n Raises:\n ValueError: If sep is not a string.\n\n Returns:\n A `SparseTensor` or `RaggedTensor` of rank `N+1`, the strings split\n according to the delimiter.\n ", "desc": "Split elements of `input` based on `sep`.", "type": "API"}, {"name": "tf.compat.v1.strings.strip", "docs": "Strip leading and trailing whitespaces from the Tensor.\n\n Examples:\n\n >>> tf.strings.strip([\"\\nTensorFlow\", \" The python library \"]).numpy()\n array([b'TensorFlow', b'The python library'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`. A string `Tensor` of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Strip leading and trailing whitespaces from the Tensor.", "type": "API"}, {"name": "tf.compat.v1.strings.substr", "docs": "Return substrings from `Tensor` of strings.\n\n For each string in the input `Tensor`, creates a substring starting at index\n `pos` with a total length of `len`.\n\n If `len` defines a substring that would extend beyond the length of the input\n string, or if `len` is negative, then as many characters as possible are used.\n\n A negative `pos` indicates distance within the string backwards from the end.\n\n If `pos` specifies an index which is out of range for any of the input strings,\n then an `InvalidArgumentError` is thrown.\n\n `pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on\n Op creation.\n\n *NOTE*: `Substr` supports broadcasting up to two dimensions. More about\n broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n ---\n\n Examples\n\n Using scalar `pos` and `len`:\n\n ```python\n input = [b'Hello', b'World']\n position = 1\n length = 3\n\n output = [b'ell', b'orl']\n ```\n\n Using `pos` and `len` with same shape as `input`:\n\n ```python\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen']]\n position = [[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]\n length = [[2, 3, 4],\n [4, 3, 2],\n [5, 5, 5]]\n\n output = [[b'en', b'eve', b'lve'],\n [b'hirt', b'urt', b'te'],\n [b'ixtee', b'vente', b'hteen']]\n ```\n\n Broadcasting `pos` and `len` onto `input`:\n\n ```\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen'],\n [b'nineteen', b'twenty', b'twentyone']]\n position = [1, 2, 3]\n length = [1, 2, 3]\n\n output = [[b'e', b'ev', b'lve'],\n [b'h', b'ur', b'tee'],\n [b'i', b've', b'hte'],\n [b'i', b'en', b'nty']]\n ```\n\n Broadcasting `input` onto `pos` and `len`:\n\n ```\n input = b'thirteen'\n position = [1, 5, 7]\n length = [3, 2, 1]\n\n output = [b'hir', b'ee', b'n']\n ```\n\n Raises:\n\n * `ValueError`: If the first argument cannot be converted to a\n Tensor of `dtype string`.\n * `InvalidArgumentError`: If indices are out of range.\n * `ValueError`: If `pos` and `len` are not the same shape.\n\n Args:\n input: A `Tensor` of type `string`. Tensor of strings\n pos: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Scalar defining the position of first character in each substring\n len: A `Tensor`. Must have the same type as `pos`.\n Scalar defining the number of characters to include in each substring\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is used to create the substring. One of: `\"BYTE\"` (for\n defining position and length by bytes) or `\"UTF8_CHAR\"` (for the UTF-8\n encoded Unicode code points). The default is `\"BYTE\"`. Results are undefined if\n `unit=UTF8_CHAR` and the `input` strings do not contain structurally valid\n UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Return substrings from `Tensor` of strings.", "type": "API"}, {"name": "tf.compat.v1.strings.to_hash_bucket", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process.\n\n Note that the hash function may change from time to time.\n This functionality will be deprecated and it's recommended to use\n `tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`.\n\n Args:\n string_tensor: A `Tensor` of type `string`.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.strings.to_hash_bucket_fast", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process and will never change. However, it is not suitable for cryptography.\n This function may be used when CPU time is scarce and inputs are trusted or\n unimportant. There is a risk of adversaries constructing inputs that all hash\n to the same bucket. To prevent this problem, use a strong hash function with\n `tf.string_to_hash_bucket_strong`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_fast([\"Hello\", \"TensorFlow\", \"2.x\"], 3).numpy()\n array([0, 2, 2])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.strings.to_hash_bucket_strong", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process. The hash function is a keyed hash function, where attribute `key`\n defines the key of the hash function. `key` is an array of 2 elements.\n\n A strong hash is important when inputs may be malicious, e.g. URLs with\n additional components. Adversaries could try to make their inputs hash to the\n same bucket for a denial-of-service attack or to skew the results. A strong\n hash can be used to make it difficult to find inputs with a skewed hash value\n distribution over buckets. This requires that the hash function is\n seeded by a high-entropy (random) \"key\" unknown to the adversary.\n\n The additional robustness comes at a cost of roughly 4x higher compute\n time than `tf.string_to_hash_bucket_fast`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_strong([\"Hello\", \"TF\"], 3, [1, 2]).numpy()\n array([2, 0])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n key: A list of `ints`.\n The key used to seed the hash function, passed as a list of two uint64\n elements.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.compat.v1.strings.to_number", "docs": "Converts each string in the input Tensor to the specified numeric type.\n\n (Note that int32 overflow results in an error while float overflow\n results in a rounded value.)\n\n Example:\n\n >>> strings = [\"5.0\", \"3.0\", \"7.0\"]\n >>> tf.strings.to_number(strings)\n \n\n Args:\n string_tensor: A `Tensor` of type `string`.\n out_type: An optional `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.float32`.\n The numeric type to interpret each string in `string_tensor` as.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Converts each string in the input Tensor to the specified numeric type.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_decode", "docs": "Decodes each string in `input` into a sequence of Unicode code points.\n\n `result[i1...iN, j]` is the Unicode codepoint for the `j`th character in\n `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`; and in place of C0 control\n characters in `input` when `replace_control_characters=True`.\n replace_control_characters: Whether to replace the C0 control characters\n `(U+0000 - U+001F)` with the `replacement_char`.\n name: A name for the operation (optional).\n\n Returns:\n A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`.\n The returned tensor is a `tf.Tensor` if `input` is a scalar, or a\n `tf.RaggedTensor` otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> tf.strings.unicode_decode(input, 'UTF-8').to_list()\n [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]]\n ", "desc": "Decodes each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_decode_with_offsets", "docs": "Decodes each string into a sequence of code points with start offsets.\n\n This op is similar to `tf.strings.decode(...)`, but it also returns the\n start offset for each character in its respective string. This information\n can be used to align the characters with the original byte sequence.\n\n Returns a tuple `(codepoints, start_offsets)` where:\n\n * `codepoints[i1...iN, j]` is the Unicode codepoint for the `j`th character\n in `input[i1...iN]`, when decoded using `input_encoding`.\n * `start_offsets[i1...iN, j]` is the start byte offset for the `j`th\n character in `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`; and in place of C0 control\n characters in `input` when `replace_control_characters=True`.\n replace_control_characters: Whether to replace the C0 control characters\n `(U+0000 - U+001F)` with the `replacement_char`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`.\n\n * `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`.\n * `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`.\n\n The returned tensors are `tf.Tensor`s if `input` is a scalar, or\n `tf.RaggedTensor`s otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> result = tf.strings.unicode_decode_with_offsets(input, 'UTF-8')\n >>> result[0].to_list() # codepoints\n [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]]\n >>> result[1].to_list() # offsets\n [[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]]\n\n ", "desc": "Decodes each string into a sequence of code points with start offsets.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_encode", "docs": "Encodes each sequence of Unicode code points in `input` into a string.\n\n `result[i1...iN]` is the string formed by concatenating the Unicode\n codepoints `input[1...iN, :]`, encoded using `output_encoding`.\n\n Args:\n input: An `N+1` dimensional potentially ragged integer tensor with shape\n `[D1...DN, num_chars]`.\n output_encoding: Unicode encoding that should be used to encode each\n codepoint sequence. Can be `\"UTF-8\"`, `\"UTF-16-BE\"`, or `\"UTF-32-BE\"`.\n errors: Specifies the response when an invalid codepoint is encountered\n (optional). One of:\n * `'replace'`: Replace invalid codepoint with the\n `replacement_char`. (default)\n * `'ignore'`: Skip invalid codepoints.\n * `'strict'`: Raise an exception for any invalid codepoint.\n replacement_char: The replacement character codepoint to be used in place of\n any invalid input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character\n which is 0xFFFD (U+65533).\n name: A name for the operation (optional).\n\n Returns:\n A `N` dimensional `string` tensor with shape `[D1...DN]`.\n\n #### Example:\n\n >>> input = tf.ragged.constant(\n ... [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]])\n >>> print(unicode_encode(input, 'UTF-8'))\n tf.Tensor([b'G\\xc3\\xb6\\xc3\\xb6dnight' b'\\xf0\\x9f\\x98\\x8a'],\n shape=(2,), dtype=string)\n ", "desc": "Encodes each sequence of Unicode code points in `input` into a string.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_script", "docs": "Determine the script codes of a given tensor of Unicode integer code points.\n\n This operation converts Unicode code points to script codes corresponding to\n each code point. Script codes correspond to International Components for\n Unicode (ICU) UScriptCode values.\n\n See\n [ICU project docs](http://icu-project.org/apiref/icu4c/uscript_8h.html)\n for more details on script codes.\n\n For an example, see the unicode strings guide on [unicode scripts]\n (https://www.tensorflow.org/tutorials/load_data/unicode#representing_unicode).\n\n Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will\n match input shape.\n\n Examples:\n\n >>> tf.strings.unicode_script([1, 31, 38])\n \n\n Args:\n input: A `Tensor` of type `int32`. A Tensor of int32 Unicode code points.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Determine the script codes of a given tensor of Unicode integer code points.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_split", "docs": "Splits each string in `input` into a sequence of Unicode code points.\n\n `result[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its\n `j`th character, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`.\n name: A name for the operation (optional).\n\n Returns:\n A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`.\n The returned tensor is a `tf.Tensor` if `input` is a scalar, or a\n `tf.RaggedTensor` otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> tf.strings.unicode_split(input, 'UTF-8').to_list()\n [[b'G', b'\\xc3\\xb6', b'\\xc3\\xb6', b'd', b'n', b'i', b'g', b'h', b't'],\n [b'\\xf0\\x9f\\x98\\x8a']]\n ", "desc": "Splits each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_split_with_offsets", "docs": "Splits each string into a sequence of code points with start offsets.\n\n This op is similar to `tf.strings.decode(...)`, but it also returns the\n start offset for each character in its respective string. This information\n can be used to align the characters with the original byte sequence.\n\n Returns a tuple `(chars, start_offsets)` where:\n\n * `chars[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its\n `j`th character, when decoded using `input_encoding`.\n * `start_offsets[i1...iN, j]` is the start byte offset for the `j`th\n character in `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`.\n\n * `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`.\n * `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`.\n\n The returned tensors are `tf.Tensor`s if `input` is a scalar, or\n `tf.RaggedTensor`s otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> result = tf.strings.unicode_split_with_offsets(input, 'UTF-8')\n >>> result[0].to_list() # character substrings\n [[b'G', b'\\xc3\\xb6', b'\\xc3\\xb6', b'd', b'n', b'i', b'g', b'h', b't'],\n [b'\\xf0\\x9f\\x98\\x8a']]\n >>> result[1].to_list() # offsets\n [[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]]\n\n ", "desc": "Splits each string into a sequence of code points with start offsets.", "type": "API"}, {"name": "tf.compat.v1.strings.unicode_transcode", "docs": "Transcode the input text from a source encoding to a destination encoding.\n\n The input is a string tensor of any shape. The output is a string tensor of\n the same shape containing the transcoded strings. Output strings are always\n valid unicode. If the input contains invalid encoding positions, the\n `errors` attribute sets the policy for how to deal with them. If the default\n error-handling policy is used, invalid formatting will be substituted in the\n output by the `replacement_char`. If the errors policy is to `ignore`, any\n invalid encoding positions in the input are skipped and not included in the\n output. If it set to `strict` then any invalid formatting will result in an\n InvalidArgument error.\n\n This operation can be used with `output_encoding = input_encoding` to enforce\n correct formatting for inputs even if they are already in the desired encoding.\n\n If the input is prefixed by a Byte Order Mark needed to determine encoding\n (e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that\n BOM will be consumed and not emitted into the output. If the input encoding\n is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is\n interpreted as a non-breaking-space and is preserved in the output (including\n always for UTF-8).\n\n The end result is that if the input is marked as an explicit endianness the\n transcoding is faithful to all codepoints in the source. If it is not marked\n with an explicit endianness, the BOM is not considered part of the string itself\n but as metadata, and so is not preserved in the output.\n\n Examples:\n\n >>> tf.strings.unicode_transcode([\"Hello\", \"TensorFlow\", \"2.x\"], \"UTF-8\", \"UTF-16-BE\")\n \n >>> tf.strings.unicode_transcode([\"A\", \"B\", \"C\"], \"US ASCII\", \"UTF-8\").numpy()\n array([b'A', b'B', b'C'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`.\n The text to be processed. Can have any shape.\n input_encoding: A `string`.\n Text encoding of the input strings. This is any of the encodings supported\n by ICU ucnv algorithmic converters. Examples: `\"UTF-16\", \"US ASCII\", \"UTF-8\"`.\n output_encoding: A `string` from: `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`.\n The unicode encoding to use in the output. Must be one of\n `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`. Multi-byte encodings will be big-endian.\n errors: An optional `string` from: `\"strict\", \"replace\", \"ignore\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD or U+65533.)\n\n Note that for UTF-8, passing a replacement character expressible in 1 byte, such\n as ' ', will preserve string alignment to the source since invalid bytes will be\n replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte\n replacement character will preserve byte alignment to the source.\n replace_control_characters: An optional `bool`. Defaults to `False`.\n Whether to replace the C0 control characters (00-1F) with the\n `replacement_char`. Default is false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Transcode the input text from a source encoding to a destination encoding.", "type": "API"}, {"name": "tf.compat.v1.strings.unsorted_segment_join", "docs": "Joins the elements of `inputs` based on `segment_ids`.\n\n Computes the string join along segments of a tensor.\n Given `segment_ids` with rank `N` and `data` with rank `N+M`:\n\n `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM])`\n\n where the join is over all [j1...jN] such that segment_ids[j1...jN] = i.\n Strings are joined in row-major order.\n\n For example:\n\n ```python\n inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']]\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[1, 0, 1],\n num_segments=2,\n separator=':'))\n # output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']]\n\n\n inputs = ['this', 'is', 'a', 'test']\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[0, 0, 0, 0],\n num_segments=1,\n separator=':'))\n # output_array ==> ['this:is:a:test']\n ```\n\n Args:\n inputs: A `Tensor` of type `string`. The input to be joined.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of data.shape. Negative segment ids are not\n supported.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A scalar.\n separator: An optional `string`. Defaults to `\"\"`.\n The separator to use when joining.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Joins the elements of `inputs` based on `segment_ids`.", "type": "API"}, {"name": "tf.compat.v1.strings.upper", "docs": "Converts all lowercase characters into their respective uppercase replacements.\n\n Example:\n\n >>> tf.strings.upper(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be upper-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all lowercase characters into their respective uppercase replacements.", "type": "API"}, {"name": "tf.compat.v1.substr", "docs": "Return substrings from `Tensor` of strings.\n\n For each string in the input `Tensor`, creates a substring starting at index\n `pos` with a total length of `len`.\n\n If `len` defines a substring that would extend beyond the length of the input\n string, or if `len` is negative, then as many characters as possible are used.\n\n A negative `pos` indicates distance within the string backwards from the end.\n\n If `pos` specifies an index which is out of range for any of the input strings,\n then an `InvalidArgumentError` is thrown.\n\n `pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on\n Op creation.\n\n *NOTE*: `Substr` supports broadcasting up to two dimensions. More about\n broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n ---\n\n Examples\n\n Using scalar `pos` and `len`:\n\n ```python\n input = [b'Hello', b'World']\n position = 1\n length = 3\n\n output = [b'ell', b'orl']\n ```\n\n Using `pos` and `len` with same shape as `input`:\n\n ```python\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen']]\n position = [[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]\n length = [[2, 3, 4],\n [4, 3, 2],\n [5, 5, 5]]\n\n output = [[b'en', b'eve', b'lve'],\n [b'hirt', b'urt', b'te'],\n [b'ixtee', b'vente', b'hteen']]\n ```\n\n Broadcasting `pos` and `len` onto `input`:\n\n ```\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen'],\n [b'nineteen', b'twenty', b'twentyone']]\n position = [1, 2, 3]\n length = [1, 2, 3]\n\n output = [[b'e', b'ev', b'lve'],\n [b'h', b'ur', b'tee'],\n [b'i', b've', b'hte'],\n [b'i', b'en', b'nty']]\n ```\n\n Broadcasting `input` onto `pos` and `len`:\n\n ```\n input = b'thirteen'\n position = [1, 5, 7]\n length = [3, 2, 1]\n\n output = [b'hir', b'ee', b'n']\n ```\n\n Raises:\n\n * `ValueError`: If the first argument cannot be converted to a\n Tensor of `dtype string`.\n * `InvalidArgumentError`: If indices are out of range.\n * `ValueError`: If `pos` and `len` are not the same shape.\n\n Args:\n input: A `Tensor` of type `string`. Tensor of strings\n pos: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Scalar defining the position of first character in each substring\n len: A `Tensor`. Must have the same type as `pos`.\n Scalar defining the number of characters to include in each substring\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is used to create the substring. One of: `\"BYTE\"` (for\n defining position and length by bytes) or `\"UTF8_CHAR\"` (for the UTF-8\n encoded Unicode code points). The default is `\"BYTE\"`. Results are undefined if\n `unit=UTF8_CHAR` and the `input` strings do not contain structurally valid\n UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Return substrings from `Tensor` of strings.", "type": "API"}, {"name": "tf.compat.v1.subtract", "docs": "Returns x - y element-wise.\n\n *NOTE*: `tf.subtract` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Both input and output have a range `(-inf, inf)`.\n\n Example usages below.\n\n Subtract operation between an array and a scalar:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.subtract(x, y)\n \n >>> tf.subtract(y, x)\n \n\n Note that binary `-` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x - y\n \n\n Subtract operation between an array and a tensor of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([5, 4, 3, 2, 1])\n >>> tf.subtract(y, x)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**8 + 1, 2**8 + 2]\n >>> tf.subtract(x, y)\n \n\n When subtracting two input values of different shapes, `tf.subtract` follows the\n [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules)\n . The two input array shapes are compared element-wise. Starting with the\n trailing dimensions, the two dimensions either have to be equal or one of them\n needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(2, 1, 3)\n >>> tf.subtract(x, y)\n \n\n Example with inputs of different dimensions:\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(1, 6)\n >>> tf.subtract(x, y)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x - y element-wise.", "type": "API"}, {"name": "tf.compat.v1.Summary", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.all_v2_summary_ops", "docs": "Returns all V2-style summary ops defined in the current default graph.\n\n This includes ops from TF 2.0 tf.summary and TF 1.x tf.contrib.summary (except\n for `tf.contrib.summary.graph` and `tf.contrib.summary.import_event`), but\n does *not* include TF 1.x tf.summary ops.\n\n Returns:\n List of summary ops, or None if called under eager execution.\n ", "desc": "Returns all V2-style summary ops defined in the current default graph.", "type": "API"}, {"name": "tf.compat.v1.Summary.Audio", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.Event", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.FileWriter", "docs": "Writes `Summary` protocol buffers to event files.\n\n The `FileWriter` class provides a mechanism to create an event file in a\n given directory and add summaries and events to it. The class updates the\n file contents asynchronously. This allows a training program to call methods\n to add data to the file directly from the training loop, without slowing down\n training.\n\n When constructed with a `tf.compat.v1.Session` parameter, a `FileWriter`\n instead forms a compatibility layer over new graph-based summaries\n to facilitate the use of new summary writing with\n pre-existing code that expects a `FileWriter` instance.\n\n This class is not thread-safe.\n\n @compatibility(TF2)\n This API is not compatible with eager execution or `tf.function`. To migrate\n to TF2, please use `tf.summary.create_file_writer` instead for summary\n management. To specify the summary step, you can manage the context with\n `tf.summary.SummaryWriter`, which is returned by\n `tf.summary.create_file_writer()`. Or, you can also use the `step` argument\n of summary functions such as `tf.summary.histogram`.\n See the usage example shown below.\n\n For a comprehensive `tf.summary` migration guide, please follow\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :---------------- | :---------------- | :-------------------------------- |\n | `logdir` | `logdir` | - |\n | `graph` | Not supported | - |\n | `max_queue` | `max_queue` | - |\n | `flush_secs` | `flush_millis` | The unit of time is changed |\n : : : from seconds to milliseconds. :\n | `graph_def` | Not supported | - |\n | `filename_suffix` | `filename_suffix` | - |\n | `name` | `name` | - |\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n ```python\n dist = tf.compat.v1.placeholder(tf.float32, [100])\n tf.compat.v1.summary.histogram(name=\"distribution\", values=dist)\n writer = tf.compat.v1.summary.FileWriter(\"/tmp/tf1_summary_example\")\n summaries = tf.compat.v1.summary.merge_all()\n\n sess = tf.compat.v1.Session()\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})\n writer.add_summary(summ, global_step=step)\n ```\n\n TF2:\n\n ```python\n writer = tf.summary.create_file_writer(\"/tmp/tf2_summary_example\")\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n with writer.as_default(step=step):\n tf.summary.histogram(name='distribution', data=mean_moving_normal)\n ```\n\n @end_compatibility\n ", "desc": "Writes `Summary` protocol buffers to event files.", "type": "API"}, {"name": "tf.compat.v1.summary.FileWriterCache", "docs": "Cache for file writers.\n\n This class caches file writers, one per directory.\n ", "desc": "Cache for file writers.", "type": "API"}, {"name": "tf.compat.v1.summary.get_summary_description", "docs": "Given a TensorSummary node_def, retrieve its SummaryDescription.\n\n When a Summary op is instantiated, a SummaryDescription of associated\n metadata is stored in its NodeDef. This method retrieves the description.\n\n Args:\n node_def: the node_def_pb2.NodeDef of a TensorSummary op\n\n Returns:\n a summary_pb2.SummaryDescription\n\n Raises:\n ValueError: if the node is not a summary op.\n\n @compatibility(eager)\n Not compatible with eager execution. To write TensorBoard\n summaries under eager execution, use `tf.contrib.summary` instead.\n @end_compatibility\n ", "desc": "Given a TensorSummary node_def, retrieve its SummaryDescription.", "type": "API"}, {"name": "tf.compat.v1.summary.histogram", "docs": "Outputs a `Summary` protocol buffer with a histogram.\n\n Adding a histogram summary makes it possible to visualize your data's\n distribution in TensorBoard. You can see a detailed explanation of the\n TensorBoard histogram dashboard\n [here](https://www.tensorflow.org/get_started/tensorboard_histograms).\n\n The generated\n [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)\n has one summary value containing a histogram for `values`.\n\n This op reports an `InvalidArgument` error if any value is not finite.\n\n Args:\n name: A name for the generated node. Will also serve as a series name in\n TensorBoard.\n values: A real numeric `Tensor`. Any shape. Values to use to\n build the histogram.\n collections: Optional list of graph collections keys. The new summary op is\n added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.\n family: Optional; if provided, used as the prefix of the summary tag name,\n which controls the tab name used for display on Tensorboard.\n\n Returns:\n A scalar `Tensor` of type `string`. The serialized `Summary` protocol\n buffer.\n\n @compatibility(TF2)\n For compatibility purposes, when invoked in TF2 where the outermost context is\n eager mode, this API will check if there is a suitable TF2 summary writer\n context available, and if so will forward this call to that writer instead. A\n \"suitable\" writer context means that the writer is set as the default writer,\n and there is an associated non-empty value for `step` (see\n `tf.summary.SummaryWriter.as_default`, `tf.summary.experimental.set_step` or\n alternatively `tf.compat.v1.train.create_global_step`). For the forwarded\n call, the arguments here will be passed to the TF2 implementation of\n `tf.summary.histogram`, and the return value will be an empty bytestring\n tensor, to avoid duplicate summary writing. This forwarding is best-effort and\n not all arguments will be preserved.\n\n To migrate to TF2, please use `tf.summary.histogram` instead. Please check\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete\n steps for migration.\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------ | :-------------- | :------------------------------------- |\n | `name` | `name` | - |\n | `values` | `data` | - |\n | - | `step` | Explicit int64-castable monotonic step |\n : : : value. If omitted, this defaults to :\n : : : `tf.summary.experimental.get_step()` :\n | - | `buckets` | Optional positive `int` specifying |\n : : : the histogram bucket number. :\n | `collections` | Not Supported | - |\n | `family` | Removed | Please use `tf.name_scope` instead |\n : : : to manage summary name prefix. :\n | - | `description` | Optional long-form `str` description |\n : : : for the summary. Markdown is supported.:\n : : : Defaults to empty. :\n\n @end_compatibility\n ", "desc": "Outputs a `Summary` protocol buffer with a histogram.", "type": "API"}, {"name": "tf.compat.v1.Summary.Image", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.initialize", "docs": "Initializes summary writing for graph execution mode.\n\n This operation is a no-op when executing eagerly.\n\n This helper method provides a higher-level alternative to using\n `tf.contrib.summary.summary_writer_initializer_op` and\n `tf.contrib.summary.graph`.\n\n Most users will also want to call `tf.compat.v1.train.create_global_step`\n which can happen before or after this function is called.\n\n Args:\n graph: A `tf.Graph` or `tf.compat.v1.GraphDef` to output to the writer.\n This function will not write the default graph by default. When\n writing to an event log file, the associated step will be zero.\n session: So this method can call `tf.Session.run`. This defaults\n to `tf.compat.v1.get_default_session`.\n\n Raises:\n RuntimeError: If the current thread has no default\n `tf.contrib.summary.SummaryWriter`.\n ValueError: If session wasn't passed and no default session.\n ", "desc": "Initializes summary writing for graph execution mode.", "type": "API"}, {"name": "tf.compat.v1.summary.merge", "docs": "Merges summaries.\n\n This op creates a\n [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)\n protocol buffer that contains the union of all the values in the input\n summaries.\n\n When the Op is run, it reports an `InvalidArgument` error if multiple values\n in the summaries to merge use the same tag.\n\n Args:\n inputs: A list of `string` `Tensor` objects containing serialized `Summary`\n protocol buffers.\n collections: Optional list of graph collections keys. The new summary op is\n added to these collections. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A scalar `Tensor` of type `string`. The serialized `Summary` protocol\n buffer resulting from the merging.\n\n Raises:\n RuntimeError: If called with eager mode enabled.\n\n @compatibility(TF2)\n This API is not compatible with eager execution or `tf.function`. To migrate\n to TF2, this API can be omitted entirely, because in TF2 individual summary\n ops, like `tf.summary.scalar()`, write directly to the default summary writer\n if one is active. Thus, it's not necessary to merge summaries or to manually\n add the resulting merged summary output to the writer. See the usage example\n shown below.\n\n For a comprehensive `tf.summary` migration guide, please follow\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n ```python\n dist = tf.compat.v1.placeholder(tf.float32, [100])\n tf.compat.v1.summary.histogram(name=\"distribution\", values=dist)\n writer = tf.compat.v1.summary.FileWriter(\"/tmp/tf1_summary_example\")\n summaries = tf.compat.v1.summary.merge_all()\n\n sess = tf.compat.v1.Session()\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})\n writer.add_summary(summ, global_step=step)\n ```\n\n TF2:\n\n ```python\n writer = tf.summary.create_file_writer(\"/tmp/tf2_summary_example\")\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n with writer.as_default(step=step):\n tf.summary.histogram(name='distribution', data=mean_moving_normal)\n ```\n\n @end_compatibility\n ", "desc": "Merges summaries.", "type": "API"}, {"name": "tf.compat.v1.summary.merge_all", "docs": "Merges all summaries collected in the default graph.\n\n Args:\n key: `GraphKey` used to collect the summaries. Defaults to\n `GraphKeys.SUMMARIES`.\n scope: Optional scope used to filter the summary ops, using `re.match`.\n name: A name for the operation (optional).\n\n Returns:\n If no summaries were collected, returns None. Otherwise returns a scalar\n `Tensor` of type `string` containing the serialized `Summary` protocol\n buffer resulting from the merging.\n\n Raises:\n RuntimeError: If called with eager execution enabled.\n\n @compatibility(TF2)\n This API is not compatible with eager execution or `tf.function`. To migrate\n to TF2, this API can be omitted entirely, because in TF2 individual summary\n ops, like `tf.summary.scalar()`, write directly to the default summary writer\n if one is active. Thus, it's not necessary to merge summaries or to manually\n add the resulting merged summary output to the writer. See the usage example\n shown below.\n\n For a comprehensive `tf.summary` migration guide, please follow\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x).\n\n #### TF1 & TF2 Usage Example\n\n TF1:\n\n ```python\n dist = tf.compat.v1.placeholder(tf.float32, [100])\n tf.compat.v1.summary.histogram(name=\"distribution\", values=dist)\n writer = tf.compat.v1.summary.FileWriter(\"/tmp/tf1_summary_example\")\n summaries = tf.compat.v1.summary.merge_all()\n\n sess = tf.compat.v1.Session()\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n summ = sess.run(summaries, feed_dict={dist: mean_moving_normal})\n writer.add_summary(summ, global_step=step)\n ```\n\n TF2:\n\n ```python\n writer = tf.summary.create_file_writer(\"/tmp/tf2_summary_example\")\n for step in range(100):\n mean_moving_normal = np.random.normal(loc=step, scale=1, size=[100])\n with writer.as_default(step=step):\n tf.summary.histogram(name='distribution', data=mean_moving_normal)\n ```\n\n @end_compatibility\n ", "desc": "Merges all summaries collected in the default graph.", "type": "API"}, {"name": "tf.compat.v1.summary.scalar", "docs": "Outputs a `Summary` protocol buffer containing a single scalar value.\n\n The generated Summary has a Tensor.proto containing the input Tensor.\n\n Args:\n name: A name for the generated node. Will also serve as the series name in\n TensorBoard.\n tensor: A real numeric Tensor containing a single value.\n collections: Optional list of graph collections keys. The new summary op is\n added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.\n family: Optional; if provided, used as the prefix of the summary tag name,\n which controls the tab name used for display on Tensorboard.\n\n Returns:\n A scalar `Tensor` of type `string`. Which contains a `Summary` protobuf.\n\n Raises:\n ValueError: If tensor has the wrong shape or type.\n\n @compatibility(TF2)\n For compatibility purposes, when invoked in TF2 where the outermost context is\n eager mode, this API will check if there is a suitable TF2 summary writer\n context available, and if so will forward this call to that writer instead. A\n \"suitable\" writer context means that the writer is set as the default writer,\n and there is an associated non-empty value for `step` (see\n `tf.summary.SummaryWriter.as_default`, `tf.summary.experimental.set_step` or\n alternatively `tf.compat.v1.train.create_global_step`). For the forwarded\n call, the arguments here will be passed to the TF2 implementation of\n `tf.summary.scalar`, and the return value will be an empty bytestring tensor,\n to avoid duplicate summary writing. This forwarding is best-effort and not all\n arguments will be preserved.\n\n To migrate to TF2, please use `tf.summary.scalar` instead. Please check\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete\n steps for migration. `tf.summary.scalar` can also log training metrics in\n Keras, you can check [Logging training metrics in\n Keras](https://www.tensorflow.org/tensorboard/scalars_and_keras) for detials.\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------ | :-------------- | :------------------------------------- |\n | `name` | `name` | - |\n | `tensor` | `data` | - |\n | - | `step` | Explicit int64-castable monotonic step |\n : : : value. If omitted, this defaults to :\n : : : `tf.summary.experimental.get_step()`. :\n | `collections` | Not Supported | - |\n | `family` | Removed | Please use `tf.name_scope` instead to |\n : : : manage summary name prefix. :\n | - | `description` | Optional long-form `str` description |\n : : : for the summary. Markdown is supported.:\n : : : Defaults to empty. :\n\n @end_compatibility\n ", "desc": "Outputs a `Summary` protocol buffer containing a single scalar value.", "type": "API"}, {"name": "tf.compat.v1.summary.SessionLog", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.Summary", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.Summary.Audio", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.Summary.Image", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.Summary.Value", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.SummaryDescription", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.TaggedRunMetadata", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.summary.tensor_summary", "docs": "Outputs a `Summary` protocol buffer with a serialized tensor.proto.\n\n Args:\n name: A name for the generated node. If display_name is not set, it will\n also serve as the tag name in TensorBoard. (In that case, the tag\n name will inherit tf name scopes.)\n tensor: A tensor of any type and shape to serialize.\n summary_description: A long description of the summary sequence. Markdown\n is supported.\n collections: Optional list of graph collections keys. The new summary op is\n added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.\n summary_metadata: Optional SummaryMetadata proto (which describes which\n plugins may use the summary value).\n family: Optional; if provided, used as the prefix of the summary tag,\n which controls the name used for display on TensorBoard when\n display_name is not set.\n display_name: A string used to name this data in TensorBoard. If this is\n not set, then the node name will be used instead.\n\n Returns:\n A scalar `Tensor` of type `string`. The serialized `Summary` protocol\n buffer.\n ", "desc": "Outputs a `Summary` protocol buffer with a serialized tensor.proto.", "type": "API"}, {"name": "tf.compat.v1.summary.text", "docs": "Summarizes textual data.\n\n Text data summarized via this plugin will be visible in the Text Dashboard\n in TensorBoard. The standard TensorBoard Text Dashboard will render markdown\n in the strings, and will automatically organize 1d and 2d tensors into tables.\n If a tensor with more than 2 dimensions is provided, a 2d subarray will be\n displayed along with a warning message. (Note that this behavior is not\n intrinsic to the text summary api, but rather to the default TensorBoard text\n plugin.)\n\n Args:\n name: A name for the generated node. Will also serve as a series name in\n TensorBoard.\n tensor: a string-type Tensor to summarize.\n collections: Optional list of ops.GraphKeys. The collections to add the\n summary to. Defaults to [_ops.GraphKeys.SUMMARIES]\n\n Returns:\n A TensorSummary op that is configured so that TensorBoard will recognize\n that it contains textual data. The TensorSummary is a scalar `Tensor` of\n type `string` which contains `Summary` protobufs.\n\n Raises:\n ValueError: If tensor has the wrong type.\n\n @compatibility(TF2)\n For compatibility purposes, when invoked in TF2 where the outermost context is\n eager mode, this API will check if there is a suitable TF2 summary writer\n context available, and if so will forward this call to that writer instead. A\n \"suitable\" writer context means that the writer is set as the default writer,\n and there is an associated non-empty value for `step` (see\n `tf.summary.SummaryWriter.as_default`, `tf.summary.experimental.set_step` or\n alternatively `tf.compat.v1.train.create_global_step`). For the forwarded\n call, the arguments here will be passed to the TF2 implementation of\n `tf.summary.text`, and the return value will be an empty bytestring tensor, to\n avoid duplicate summary writing. This forwarding is best-effort and not all\n arguments will be preserved.\n\n To migrate to TF2, please use `tf.summary.text` instead. Please check\n [Migrating tf.summary usage to\n TF 2.0](https://www.tensorflow.org/tensorboard/migrate#in_tf_1x) for concrete\n steps for migration.\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------ | :-------------- | :------------------------------------- |\n | `name` | `name` | - |\n | `tensor` | `data` | - |\n | - | `step` | Explicit int64-castable monotonic step |\n : : : value. If omitted, this defaults to :\n : : : `tf.summary.experimental.get_step()`. :\n | `collections` | Not Supported | - |\n | - | `description` | Optional long-form `str` description |\n : : : for the summary. Markdown is supported.:\n : : : Defaults to empty. :\n\n @end_compatibility\n ", "desc": "Summarizes textual data.", "type": "API"}, {"name": "tf.compat.v1.Summary.Value", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.SummaryMetadata", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.SummaryMetadata.PluginData", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.svd", "docs": "Computes the singular value decompositions of one or more matrices.\n\n Computes the SVD of each inner matrix in `tensor` such that\n `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) *\n transpose(conj(v[..., :, :]))`\n\n ```python\n # a is a tensor.\n # s is a tensor of singular values.\n # u is a tensor of left singular vectors.\n # v is a tensor of right singular vectors.\n s, u, v = svd(a)\n s = svd(a, compute_uv=False)\n ```\n\n Args:\n tensor: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and\n `N`.\n full_matrices: If true, compute full-sized `u` and `v`. If false\n (the default), compute only the leading `P` singular vectors.\n Ignored if `compute_uv` is `False`.\n compute_uv: If `True` then left and right singular vectors will be\n computed and returned in `u` and `v`, respectively. Otherwise, only the\n singular values will be computed, which can be significantly faster.\n name: string, optional name of the operation.\n\n Returns:\n s: Singular values. Shape is `[..., P]`. The values are sorted in reverse\n order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the\n second largest, etc.\n u: Left singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., M, P]`; if `full_matrices` is `True` then shape is\n `[..., M, M]`. Not returned if `compute_uv` is `False`.\n v: Right singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., N, P]`. If `full_matrices` is `True` then shape is\n `[..., N, N]`. Not returned if `compute_uv` is `False`.\n\n @compatibility(numpy)\n Mostly equivalent to numpy.linalg.svd, except that\n * The order of output arguments here is `s`, `u`, `v` when `compute_uv` is\n `True`, as opposed to `u`, `s`, `v` for numpy.linalg.svd.\n * full_matrices is `False` by default as opposed to `True` for\n numpy.linalg.svd.\n * tf.linalg.svd uses the standard definition of the SVD\n \\\\(A = U \\Sigma V^H\\\\), such that the left singular vectors of `a` are\n the columns of `u`, while the right singular vectors of `a` are the\n columns of `v`. On the other hand, numpy.linalg.svd returns the adjoint\n \\\\(V^H\\\\) as the third output argument.\n ```python\n import tensorflow as tf\n import numpy as np\n s, u, v = tf.linalg.svd(a)\n tf_a_approx = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True))\n u, s, v_adj = np.linalg.svd(a, full_matrices=False)\n np_a_approx = np.dot(u, np.dot(np.diag(s), v_adj))\n # tf_a_approx and np_a_approx should be numerically close.\n ```\n @end_compatibility\n ", "desc": "Computes the singular value decompositions of one or more matrices.", "type": "API"}, {"name": "tf.compat.v1.switch_case", "docs": "Create a switch/case operation, i.e. an integer-indexed conditional.\n\n See also `tf.case`.\n\n This op can be substantially more efficient than `tf.case` when exactly one\n branch will be selected. `tf.switch_case` is more like a C++ switch/case\n statement than `tf.case`, which is more like an if/elif/elif/else chain.\n\n The `branch_fns` parameter is either a dict from `int` to callables, or list\n of (`int`, callable) pairs, or simply a list of callables (in which case the\n index is implicitly the key). The `branch_index` `Tensor` is used to select an\n element in `branch_fns` with matching `int` key, falling back to `default`\n if none match, or `max(keys)` if no `default` is provided. The keys must form\n a contiguous set from `0` to `len(branch_fns) - 1`.\n\n `tf.switch_case` supports nested structures as implemented in `tf.nest`. All\n callables must return the same (possibly nested) value structure of lists,\n tuples, and/or named tuples.\n\n **Example:**\n\n Pseudocode:\n\n ```c++\n switch (branch_index) { // c-style switch\n case 0: return 17;\n case 1: return 31;\n default: return -1;\n }\n ```\n or\n ```python\n branches = {0: lambda: 17, 1: lambda: 31}\n branches.get(branch_index, lambda: -1)()\n ```\n\n Expressions:\n\n ```python\n def f1(): return tf.constant(17)\n def f2(): return tf.constant(31)\n def f3(): return tf.constant(-1)\n r = tf.switch_case(branch_index, branch_fns={0: f1, 1: f2}, default=f3)\n # Equivalent: tf.switch_case(branch_index, branch_fns={0: f1, 1: f2, 2: f3})\n ```\n\n Args:\n branch_index: An int Tensor specifying which of `branch_fns` should be\n executed.\n branch_fns: A `dict` mapping `int`s to callables, or a `list` of\n (`int`, callable) pairs, or simply a list of callables (in which case the\n index serves as the key). Each callable must return a matching structure\n of tensors.\n default: Optional callable that returns a structure of tensors.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the callable identified by `branch_index`, or those\n returned by `default` if no key matches and `default` was provided, or those\n returned by the max-keyed `branch_fn` if no `default` is provided.\n\n Raises:\n TypeError: If `branch_fns` is not a list/dictionary.\n TypeError: If `branch_fns` is a list but does not contain 2-tuples or\n callables.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable.\n ", "desc": "Create a switch/case operation, i.e. an integer-indexed conditional.", "type": "API"}, {"name": "tf.compat.v1.sysconfig", "docs": "System configuration library.\n", "desc": "System configuration library.", "type": "API"}, {"name": "tf.compat.v1.sysconfig.get_build_info", "docs": "Get a dictionary describing TensorFlow's build environment.\n\n Values are generated when TensorFlow is compiled, and are static for each\n TensorFlow package. The return value is a dictionary with string keys such as:\n\n - cuda_version\n - cudnn_version\n - is_cuda_build\n - is_rocm_build\n - msvcp_dll_names\n - nvcuda_dll_name\n - cudart_dll_name\n - cudnn_dll_name\n\n Note that the actual keys and values returned by this function is subject to\n change across different versions of TensorFlow or across platforms.\n\n Returns:\n A Dictionary describing TensorFlow's build environment.\n ", "desc": "Get a dictionary describing TensorFlow's build environment.", "type": "API"}, {"name": "tf.compat.v1.sysconfig.get_compile_flags", "docs": "Get the compilation flags for custom operators.\n\n Returns:\n The compilation flags.\n ", "desc": "Get the compilation flags for custom operators.", "type": "API"}, {"name": "tf.compat.v1.sysconfig.get_include", "docs": "Get the directory containing the TensorFlow C++ header files.\n\n Returns:\n The directory as string.\n ", "desc": "Get the directory containing the TensorFlow C++ header files.", "type": "API"}, {"name": "tf.compat.v1.sysconfig.get_lib", "docs": "Get the directory containing the TensorFlow framework library.\n\n Returns:\n The directory as string.\n ", "desc": "Get the directory containing the TensorFlow framework library.", "type": "API"}, {"name": "tf.compat.v1.sysconfig.get_link_flags", "docs": "Get the link flags for custom operators.\n\n Returns:\n The link flags.\n ", "desc": "Get the link flags for custom operators.", "type": "API"}, {"name": "tf.compat.v1.tables_initializer", "docs": "Returns an Op that initializes all tables of the default graph.\n\n Args:\n name: Optional name for the initialization op.\n\n Returns:\n An Op that initializes all tables. Note that if there are\n not tables the returned Op is a NoOp.\n\n @compatibility(TF2)\n `tf.compat.v1.tables_initializer` is no longer needed with eager execution and\n `tf.function`. In TF2, when creating an initializable table like a\n `tf.lookup.StaticHashTable`, the table will automatically be initialized on\n creation.\n\n #### Before & After Usage Example\n\n Before:\n\n >>> with tf.compat.v1.Session():\n ... init = tf.compat.v1.lookup.KeyValueTensorInitializer(['a', 'b'], [1, 2])\n ... table = tf.compat.v1.lookup.StaticHashTable(init, default_value=-1)\n ... tf.compat.v1.tables_initializer().run()\n ... result = table.lookup(tf.constant(['a', 'c'])).eval()\n >>> result\n array([ 1, -1], dtype=int32)\n\n After:\n\n >>> init = tf.lookup.KeyValueTensorInitializer(['a', 'b'], [1, 2])\n >>> table = tf.lookup.StaticHashTable(init, default_value=-1)\n >>> table.lookup(tf.constant(['a', 'c'])).numpy()\n array([ 1, -1], dtype=int32)\n\n @end_compatibility\n ", "desc": "Returns an Op that initializes all tables of the default graph.", "type": "API"}, {"name": "tf.compat.v1.tan", "docs": "Computes tan of x element-wise.\n\n Given an input tensor, this function computes tangent of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `(-inf, inf)`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes tan of x element-wise.", "type": "API"}, {"name": "tf.compat.v1.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.compat.v1.Tensor", "docs": "A `tf.Tensor` represents a multidimensional array of elements.\n\n All elements are of a single known data type.\n\n When writing a TensorFlow program, the main object that is\n manipulated and passed around is the `tf.Tensor`.\n\n A `tf.Tensor` has the following properties:\n\n * a single data type (float32, int32, or string, for example)\n * a shape\n\n TensorFlow supports eager execution and graph execution. In eager\n execution, operations are evaluated immediately. In graph\n execution, a computational graph is constructed for later\n evaluation.\n\n TensorFlow defaults to eager execution. In the example below, the\n matrix multiplication results are calculated immediately.\n\n >>> # Compute some values using a Tensor\n >>> c = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n >>> d = tf.constant([[1.0, 1.0], [0.0, 1.0]])\n >>> e = tf.matmul(c, d)\n >>> print(e)\n tf.Tensor(\n [[1. 3.]\n [3. 7.]], shape=(2, 2), dtype=float32)\n\n Note that during eager execution, you may discover your `Tensors` are actually\n of type `EagerTensor`. This is an internal detail, but it does give you\n access to a useful function, `numpy`:\n\n >>> type(e)\n \n >>> print(e.numpy())\n [[1. 3.]\n [3. 7.]]\n\n In TensorFlow, `tf.function`s are a common way to define graph execution.\n\n A Tensor's shape (that is, the rank of the Tensor and the size of\n each dimension) may not always be fully known. In `tf.function`\n definitions, the shape may only be partially known.\n\n Most operations produce tensors of fully-known shapes if the shapes of their\n inputs are also fully known, but in some cases it's only possible to find the\n shape of a tensor at execution time.\n\n A number of specialized tensors are available: see `tf.Variable`,\n `tf.constant`, `tf.placeholder`, `tf.sparse.SparseTensor`, and\n `tf.RaggedTensor`.\n\n Caution: when constructing a tensor from a numpy array or pandas dataframe\n the underlying buffer may be re-used:\n\n ```python\n a = np.array([1, 2, 3])\n b = tf.constant(a)\n a[0] = 4\n print(b) # tf.Tensor([4 2 3], shape=(3,), dtype=int64)\n ```\n\n Note: this is an implementation detail that is subject to change and users\n should not rely on this behaviour.\n\n For more on Tensors, see the [guide](https://tensorflow.org/guide/tensor).\n\n ", "desc": "A `tf.Tensor` represents a multidimensional array of elements.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_add", "docs": "Adds sparse `updates` to an existing tensor according to `indices`.\n\n This operation creates a new tensor by adding sparse `updates` to the passed\n in `tensor`.\n This operation is very similar to `tf.compat.v1.scatter_nd_add`, except that the\n updates are added onto an existing tensor (as opposed to a variable). If the\n memory for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `tensor.shape`. The last dimension of `indices` can be at most the rank of\n `tensor.shape`:\n\n ```\n indices.shape[-1] <= tensor.shape.rank\n ```\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = tensor.shape.rank`) or slices\n (if `indices.shape[-1] < tensor.shape.rank`) along dimension\n `indices.shape[-1]` of `tensor.shape`. `updates` is a tensor with shape\n\n ```\n indices.shape[:-1] + tensor.shape[indices.shape[-1]:]\n ```\n\n The simplest form of `tensor_scatter_nd_add` is to add individual elements to a\n tensor by index. For example, say we want to add 4 elements in a rank-1\n tensor with 8 elements.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[4], [3], [1], [7]])\n >>> updates = tf.constant([9, 10, 11, 12])\n >>> tensor = tf.ones([8], dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[0], [2]])\n >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]],\n ... [[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]]])\n >>> tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n Note: on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Adds sparse `updates` to an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_nd_add", "docs": "Adds sparse `updates` to an existing tensor according to `indices`.\n\n This operation creates a new tensor by adding sparse `updates` to the passed\n in `tensor`.\n This operation is very similar to `tf.compat.v1.scatter_nd_add`, except that the\n updates are added onto an existing tensor (as opposed to a variable). If the\n memory for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `tensor.shape`. The last dimension of `indices` can be at most the rank of\n `tensor.shape`:\n\n ```\n indices.shape[-1] <= tensor.shape.rank\n ```\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = tensor.shape.rank`) or slices\n (if `indices.shape[-1] < tensor.shape.rank`) along dimension\n `indices.shape[-1]` of `tensor.shape`. `updates` is a tensor with shape\n\n ```\n indices.shape[:-1] + tensor.shape[indices.shape[-1]:]\n ```\n\n The simplest form of `tensor_scatter_nd_add` is to add individual elements to a\n tensor by index. For example, say we want to add 4 elements in a rank-1\n tensor with 8 elements.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[4], [3], [1], [7]])\n >>> updates = tf.constant([9, 10, 11, 12])\n >>> tensor = tf.ones([8], dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[0], [2]])\n >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]],\n ... [[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]]])\n >>> tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n Note: on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Adds sparse `updates` to an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_nd_max", "docs": "Apply a sparse update to a tensor taking the element-wise maximum.\n\n Returns a new tensor copied from `tensor` whose values are element-wise maximum between\n tensor and updates according to the indices.\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] \n >>> indices = [[1], [4], [5]]\n >>> updates = [1, -1, 1]\n >>> tf.tensor_scatter_nd_max(tensor, indices, updates).numpy()\n array([0, 1, 0, 0, 0, 1, 0, 0], dtype=int32)\n\n Refer to `tf.tensor_scatter_nd_update` for more details.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Apply a sparse update to a tensor taking the element-wise maximum.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_nd_min", "docs": "TODO: add doc.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_nd_sub", "docs": "Subtracts sparse `updates` from an existing tensor according to `indices`.\n\n This operation creates a new tensor by subtracting sparse `updates` from the\n passed in `tensor`.\n This operation is very similar to `tf.scatter_nd_sub`, except that the updates\n are subtracted from an existing tensor (as opposed to a variable). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `shape`. The last dimension of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`. `updates` is a tensor with shape\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of tensor_scatter_sub is to subtract individual elements\n from a tensor by index. For example, say we want to insert 4 scattered elements\n in a rank-1 tensor with 8 elements.\n\n In Python, this scatter subtract operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n tensor = tf.ones([8], dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [1, -10, 1, -9, -8, 1, 1, -11]\n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],\n [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Subtracts sparse `updates` from an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_nd_update", "docs": "Scatter `updates` into an existing tensor according to `indices`.\n\n This operation creates a new tensor by applying sparse `updates` to the\n input `tensor`. This is similar to an index assignment.\n\n ```\n # Not implemented: tensors cannot be updated inplace.\n tensor[indices] = updates\n ```\n\n If an out of bound index is found on CPU, an error is returned.\n\n > **WARNING**: There are some GPU specific semantics for this operation.\n >\n > - If an out of bound index is found, the index is ignored.\n > - The order in which updates are applied is nondeterministic, so the output\n > will be nondeterministic if `indices` contains duplicates.\n\n This operation is very similar to `tf.scatter_nd`, except that the updates are\n scattered onto an existing tensor (as opposed to a zero-tensor). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n In general:\n\n * `indices` is an integer tensor - the indices to update in `tensor`.\n * `indices` has **at least two** axes, the last axis is the depth of the\n index vectors.\n * For each index vector in `indices` there is a corresponding entry in\n `updates`.\n * If the length of the index vectors matches the rank of the `tensor`, then\n the index vectors each point to scalars in `tensor` and each update is a\n scalar.\n * If the length of the index vectors is less than the rank of `tensor`, then\n the index vectors each point to slices of `tensor` and shape of the updates\n must match that slice.\n\n Overall this leads to the following shape constraints:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Typical usage is often much simpler than this general form, and it\n can be better understood starting with simple examples:\n\n ### Scalar updates\n\n The simplest usage inserts scalar elements into a tensor by index.\n In this case, the `index_depth` must equal the rank of the\n input `tensor`, slice each column of `indices` is an index into an axis of the\n input `tensor`.\n\n In this simplest case the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n assert updates.shape == [num_updates]\n assert index_depth == tf.rank(tensor)`\n ```\n\n For example, to insert 4 scattered elements in a rank-1 tensor with\n 8 elements.\n\n
\n \n
\n\n This scatter operation would look like this:\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] # tf.rank(tensor) == 1\n >>> indices = [[1], [3], [4], [7]] # num_updates == 4, index_depth == 1\n >>> updates = [9, 10, 11, 12] # num_updates == 4\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor([ 0 9 0 10 11 0 0 12], shape=(8,), dtype=int32)\n\n The length (first axis) of `updates` must equal the length of the `indices`:\n `num_updates`. This is the number of updates being inserted. Each scalar\n update is inserted into `tensor` at the indexed location.\n\n For a higher rank input `tensor` scalar updates can be inserted by using an\n `index_depth` that matches `tf.rank(tensor)`:\n\n >>> tensor = [[1, 1], [1, 1], [1, 1]] # tf.rank(tensor) == 2\n >>> indices = [[0, 1], [2, 0]] # num_updates == 2, index_depth == 2\n >>> updates = [5, 10] # num_updates == 2\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor(\n [[ 1 5]\n [ 1 1]\n [10 1]], shape=(3, 2), dtype=int32)\n\n ### Slice updates\n\n When the input `tensor` has more than one axis scatter can be used to update\n entire slices.\n\n In this case it's helpful to think of the input `tensor` as being a two level\n array-of-arrays. The shape of this two level array is split into the\n `outer_shape` and the `inner_shape`.\n\n `indices` indexes into the outer level of the input tensor (`outer_shape`).\n and replaces the sub-array at that location with the corresponding item from\n the `updates` list. The shape of each update is `inner_shape`.\n\n When updating a list of slices the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n inner_shape = tensor.shape[:index_depth]\n outer_shape = tensor.shape[index_depth:]\n assert updates.shape == [num_updates, inner_shape]\n ```\n\n For example, to update rows of a `(6, 3)` `tensor`:\n\n >>> tensor = tf.zeros([6, 3], dtype=tf.int32)\n\n Use an index depth of one.\n\n >>> indices = tf.constant([[2], [4]]) # num_updates == 2, index_depth == 1\n >>> num_updates, index_depth = indices.shape.as_list()\n\n The `outer_shape` is `6`, the inner shape is `3`:\n\n >>> outer_shape = tensor.shape[:index_depth]\n >>> inner_shape = tensor.shape[index_depth:]\n\n 2 rows are being indexed so 2 `updates` must be supplied.\n Each update must be shaped to match the `inner_shape`.\n\n >>> # num_updates == 2, inner_shape==3\n >>> updates = tf.constant([[1, 2, 3],\n ... [4, 5, 6]])\n\n Altogether this gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[0, 0, 0],\n [0, 0, 0],\n [1, 2, 3],\n [0, 0, 0],\n [4, 5, 6],\n [0, 0, 0]], dtype=int32)\n\n #### More slice update examples\n\n A tensor representing a batch of uniformly sized video clips naturally has 5\n axes: `[batch_size, time, width, height, channels]`.\n\n For example:\n\n >>> batch_size, time, width, height, channels = 13,11,7,5,3\n >>> video_batch = tf.zeros([batch_size, time, width, height, channels])\n\n To replace a selection of video clips:\n * Use an `index_depth` of 1 (indexing the `outer_shape`: `[batch_size]`)\n * Provide updates each with a shape matching the `inner_shape`:\n `[time, width, height, channels]`.\n\n To replace the first two clips with ones:\n\n >>> indices = [[0],[1]]\n >>> new_clips = tf.ones([2, time, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_clips)\n\n To replace a selection of frames in the videos:\n\n * `indices` must have an `index_depth` of 2 for the `outer_shape`:\n `[batch_size, time]`.\n * `updates` must be shaped like a list of images. Each update must have a\n shape, matching the `inner_shape`: `[width, height, channels]`.\n\n To replace the first frame of the first three video clips:\n\n >>> indices = [[0, 0], [1, 0], [2, 0]] # num_updates=3, index_depth=2\n >>> new_images = tf.ones([\n ... # num_updates=3, inner_shape=(width, height, channels)\n ... 3, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_images)\n\n ### Folded indices\n\n In simple cases it's convenient to think of `indices` and `updates` as\n lists, but this is not a strict requirement. Instead of a flat `num_updates`,\n the `indices` and `updates` can be folded into a `batch_shape`. This\n `batch_shape` is all axes of the `indices`, except for the innermost\n `index_depth` axis.\n\n ```\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n ```\n\n Note: The one exception is that the `batch_shape` cannot be `[]`. You can't\n update a single index by passing indices with shape `[index_depth]`.\n\n `updates` must have a matching `batch_shape` (the axes before `inner_shape`).\n\n ```\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Note: The result is equivalent to flattening the `batch_shape` axes of\n `indices` and `updates`. This generalization just avoids the need\n for reshapes when it is more natural to construct \"folded\" indices and\n updates.\n\n With this generalization the full shape constraints are:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n For example, to draw an `X` on a `(5,5)` matrix start with these indices:\n\n >>> tensor = tf.zeros([5,5])\n >>> indices = tf.constant([\n ... [[0,0],\n ... [1,1],\n ... [2,2],\n ... [3,3],\n ... [4,4]],\n ... [[0,4],\n ... [1,3],\n ... [2,2],\n ... [3,1],\n ... [4,0]],\n ... ])\n >>> indices.shape.as_list() # batch_shape == [2, 5], index_depth == 2\n [2, 5, 2]\n\n Here the `indices` do not have a shape of `[num_updates, index_depth]`, but a\n shape of `batch_shape+[index_depth]`.\n\n Since the `index_depth` is equal to the rank of `tensor`:\n\n * `outer_shape` is `(5,5)`\n * `inner_shape` is `()` - each update is scalar\n * `updates.shape` is `batch_shape + inner_shape == (5,2) + ()`\n\n >>> updates = [\n ... [1,1,1,1,1],\n ... [1,1,1,1,1],\n ... ]\n\n Putting this together gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[1., 0., 0., 0., 1.],\n [0., 1., 0., 1., 0.],\n [0., 0., 1., 0., 0.],\n [0., 1., 0., 1., 0.],\n [1., 0., 0., 0., 1.]], dtype=float32)\n\n Args:\n tensor: Tensor to copy/update.\n indices: Indices to update.\n updates: Updates to apply at the indices.\n name: Optional name for the operation.\n\n Returns:\n A new tensor with the given shape and updates applied according to the\n indices.\n ", "desc": "Scatter `updates` into an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_sub", "docs": "Subtracts sparse `updates` from an existing tensor according to `indices`.\n\n This operation creates a new tensor by subtracting sparse `updates` from the\n passed in `tensor`.\n This operation is very similar to `tf.scatter_nd_sub`, except that the updates\n are subtracted from an existing tensor (as opposed to a variable). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `shape`. The last dimension of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`. `updates` is a tensor with shape\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of tensor_scatter_sub is to subtract individual elements\n from a tensor by index. For example, say we want to insert 4 scattered elements\n in a rank-1 tensor with 8 elements.\n\n In Python, this scatter subtract operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n tensor = tf.ones([8], dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [1, -10, 1, -9, -8, 1, 1, -11]\n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],\n [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Subtracts sparse `updates` from an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.tensor_scatter_update", "docs": "Scatter `updates` into an existing tensor according to `indices`.\n\n This operation creates a new tensor by applying sparse `updates` to the\n input `tensor`. This is similar to an index assignment.\n\n ```\n # Not implemented: tensors cannot be updated inplace.\n tensor[indices] = updates\n ```\n\n If an out of bound index is found on CPU, an error is returned.\n\n > **WARNING**: There are some GPU specific semantics for this operation.\n >\n > - If an out of bound index is found, the index is ignored.\n > - The order in which updates are applied is nondeterministic, so the output\n > will be nondeterministic if `indices` contains duplicates.\n\n This operation is very similar to `tf.scatter_nd`, except that the updates are\n scattered onto an existing tensor (as opposed to a zero-tensor). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n In general:\n\n * `indices` is an integer tensor - the indices to update in `tensor`.\n * `indices` has **at least two** axes, the last axis is the depth of the\n index vectors.\n * For each index vector in `indices` there is a corresponding entry in\n `updates`.\n * If the length of the index vectors matches the rank of the `tensor`, then\n the index vectors each point to scalars in `tensor` and each update is a\n scalar.\n * If the length of the index vectors is less than the rank of `tensor`, then\n the index vectors each point to slices of `tensor` and shape of the updates\n must match that slice.\n\n Overall this leads to the following shape constraints:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Typical usage is often much simpler than this general form, and it\n can be better understood starting with simple examples:\n\n ### Scalar updates\n\n The simplest usage inserts scalar elements into a tensor by index.\n In this case, the `index_depth` must equal the rank of the\n input `tensor`, slice each column of `indices` is an index into an axis of the\n input `tensor`.\n\n In this simplest case the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n assert updates.shape == [num_updates]\n assert index_depth == tf.rank(tensor)`\n ```\n\n For example, to insert 4 scattered elements in a rank-1 tensor with\n 8 elements.\n\n
\n \n
\n\n This scatter operation would look like this:\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] # tf.rank(tensor) == 1\n >>> indices = [[1], [3], [4], [7]] # num_updates == 4, index_depth == 1\n >>> updates = [9, 10, 11, 12] # num_updates == 4\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor([ 0 9 0 10 11 0 0 12], shape=(8,), dtype=int32)\n\n The length (first axis) of `updates` must equal the length of the `indices`:\n `num_updates`. This is the number of updates being inserted. Each scalar\n update is inserted into `tensor` at the indexed location.\n\n For a higher rank input `tensor` scalar updates can be inserted by using an\n `index_depth` that matches `tf.rank(tensor)`:\n\n >>> tensor = [[1, 1], [1, 1], [1, 1]] # tf.rank(tensor) == 2\n >>> indices = [[0, 1], [2, 0]] # num_updates == 2, index_depth == 2\n >>> updates = [5, 10] # num_updates == 2\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor(\n [[ 1 5]\n [ 1 1]\n [10 1]], shape=(3, 2), dtype=int32)\n\n ### Slice updates\n\n When the input `tensor` has more than one axis scatter can be used to update\n entire slices.\n\n In this case it's helpful to think of the input `tensor` as being a two level\n array-of-arrays. The shape of this two level array is split into the\n `outer_shape` and the `inner_shape`.\n\n `indices` indexes into the outer level of the input tensor (`outer_shape`).\n and replaces the sub-array at that location with the corresponding item from\n the `updates` list. The shape of each update is `inner_shape`.\n\n When updating a list of slices the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n inner_shape = tensor.shape[:index_depth]\n outer_shape = tensor.shape[index_depth:]\n assert updates.shape == [num_updates, inner_shape]\n ```\n\n For example, to update rows of a `(6, 3)` `tensor`:\n\n >>> tensor = tf.zeros([6, 3], dtype=tf.int32)\n\n Use an index depth of one.\n\n >>> indices = tf.constant([[2], [4]]) # num_updates == 2, index_depth == 1\n >>> num_updates, index_depth = indices.shape.as_list()\n\n The `outer_shape` is `6`, the inner shape is `3`:\n\n >>> outer_shape = tensor.shape[:index_depth]\n >>> inner_shape = tensor.shape[index_depth:]\n\n 2 rows are being indexed so 2 `updates` must be supplied.\n Each update must be shaped to match the `inner_shape`.\n\n >>> # num_updates == 2, inner_shape==3\n >>> updates = tf.constant([[1, 2, 3],\n ... [4, 5, 6]])\n\n Altogether this gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[0, 0, 0],\n [0, 0, 0],\n [1, 2, 3],\n [0, 0, 0],\n [4, 5, 6],\n [0, 0, 0]], dtype=int32)\n\n #### More slice update examples\n\n A tensor representing a batch of uniformly sized video clips naturally has 5\n axes: `[batch_size, time, width, height, channels]`.\n\n For example:\n\n >>> batch_size, time, width, height, channels = 13,11,7,5,3\n >>> video_batch = tf.zeros([batch_size, time, width, height, channels])\n\n To replace a selection of video clips:\n * Use an `index_depth` of 1 (indexing the `outer_shape`: `[batch_size]`)\n * Provide updates each with a shape matching the `inner_shape`:\n `[time, width, height, channels]`.\n\n To replace the first two clips with ones:\n\n >>> indices = [[0],[1]]\n >>> new_clips = tf.ones([2, time, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_clips)\n\n To replace a selection of frames in the videos:\n\n * `indices` must have an `index_depth` of 2 for the `outer_shape`:\n `[batch_size, time]`.\n * `updates` must be shaped like a list of images. Each update must have a\n shape, matching the `inner_shape`: `[width, height, channels]`.\n\n To replace the first frame of the first three video clips:\n\n >>> indices = [[0, 0], [1, 0], [2, 0]] # num_updates=3, index_depth=2\n >>> new_images = tf.ones([\n ... # num_updates=3, inner_shape=(width, height, channels)\n ... 3, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_images)\n\n ### Folded indices\n\n In simple cases it's convenient to think of `indices` and `updates` as\n lists, but this is not a strict requirement. Instead of a flat `num_updates`,\n the `indices` and `updates` can be folded into a `batch_shape`. This\n `batch_shape` is all axes of the `indices`, except for the innermost\n `index_depth` axis.\n\n ```\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n ```\n\n Note: The one exception is that the `batch_shape` cannot be `[]`. You can't\n update a single index by passing indices with shape `[index_depth]`.\n\n `updates` must have a matching `batch_shape` (the axes before `inner_shape`).\n\n ```\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Note: The result is equivalent to flattening the `batch_shape` axes of\n `indices` and `updates`. This generalization just avoids the need\n for reshapes when it is more natural to construct \"folded\" indices and\n updates.\n\n With this generalization the full shape constraints are:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n For example, to draw an `X` on a `(5,5)` matrix start with these indices:\n\n >>> tensor = tf.zeros([5,5])\n >>> indices = tf.constant([\n ... [[0,0],\n ... [1,1],\n ... [2,2],\n ... [3,3],\n ... [4,4]],\n ... [[0,4],\n ... [1,3],\n ... [2,2],\n ... [3,1],\n ... [4,0]],\n ... ])\n >>> indices.shape.as_list() # batch_shape == [2, 5], index_depth == 2\n [2, 5, 2]\n\n Here the `indices` do not have a shape of `[num_updates, index_depth]`, but a\n shape of `batch_shape+[index_depth]`.\n\n Since the `index_depth` is equal to the rank of `tensor`:\n\n * `outer_shape` is `(5,5)`\n * `inner_shape` is `()` - each update is scalar\n * `updates.shape` is `batch_shape + inner_shape == (5,2) + ()`\n\n >>> updates = [\n ... [1,1,1,1,1],\n ... [1,1,1,1,1],\n ... ]\n\n Putting this together gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[1., 0., 0., 0., 1.],\n [0., 1., 0., 1., 0.],\n [0., 0., 1., 0., 0.],\n [0., 1., 0., 1., 0.],\n [1., 0., 0., 0., 1.]], dtype=float32)\n\n Args:\n tensor: Tensor to copy/update.\n indices: Indices to update.\n updates: Updates to apply at the indices.\n name: Optional name for the operation.\n\n Returns:\n A new tensor with the given shape and updates applied according to the\n indices.\n ", "desc": "Scatter `updates` into an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.compat.v1.TensorArray", "docs": "Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.\n\n This class is meant to be used with dynamic iteration primitives such as\n `while_loop` and `map_fn`. It supports gradient back-propagation via special\n \"flow\" control flow dependencies.\n\n Example 1: Plain reading and writing.\n\n >>> ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)\n >>> ta = ta.write(0, 10)\n >>> ta = ta.write(1, 20)\n >>> ta = ta.write(2, 30)\n >>>\n >>> ta.read(0)\n \n >>> ta.read(1)\n \n >>> ta.read(2)\n \n >>> ta.stack()\n \n\n Example 2: Fibonacci sequence algorithm that writes in a loop then returns.\n\n >>> @tf.function\n ... def fibonacci(n):\n ... ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)\n ... ta = ta.unstack([0., 1.])\n ...\n ... for i in range(2, n):\n ... ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2))\n ...\n ... return ta.stack()\n >>>\n >>> fibonacci(7)\n \n\n Example 3: A simple loop interacting with a `tf.Variable`.\n\n >>> v = tf.Variable(1)\n >>> @tf.function\n ... def f(x):\n ... ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)\n ... for i in tf.range(x):\n ... v.assign_add(i)\n ... ta = ta.write(i, v)\n ... return ta.stack()\n >>> f(5)\n \n ", "desc": "Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.", "type": "API"}, {"name": "tf.compat.v1.TensorArraySpec", "docs": "Type specification for a `tf.TensorArray`.", "desc": "Type specification for a `tf.TensorArray`.", "type": "API"}, {"name": "tf.compat.v1.tensordot", "docs": "Tensor contraction of a and b along specified axes and outer product.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `axes`.\n\n This operation corresponds to `numpy.tensordot(a, b, axes)`.\n\n Example 1: When `a` and `b` are matrices (order 2), the case `axes=1`\n is equivalent to matrix multiplication.\n\n Example 2: When `a` and `b` are matrices (order 2), the case\n `axes = [[1], [0]]` is equivalent to matrix multiplication.\n\n Example 3: When `a` and `b` are matrices (order 2), the case `axes=0` gives\n the outer product, a tensor of order 4.\n\n Example 4: Suppose that \\\\(a_{ijk}\\\\) and \\\\(b_{lmn}\\\\) represent two\n tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor\n \\\\(c_{jklm}\\\\) whose entry\n corresponding to the indices \\\\((j,k,l,m)\\\\) is given by:\n\n \\\\( c_{jklm} = \\sum_i a_{ijk} b_{lmi} \\\\).\n\n In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.\n\n Args:\n a: `Tensor` of type `float32` or `float64`.\n b: `Tensor` with the same type as `a`.\n axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].\n If axes is a scalar, sum over the last N axes of a and the first N axes of\n b in order. If axes is a list or `Tensor` the first and second row contain\n the set of unique integers specifying axes along which the contraction is\n computed, for `a` and `b`, respectively. The number of axes for `a` and\n `b` must be equal. If `axes=0`, computes the outer product between `a` and\n `b`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `a`.\n\n Raises:\n ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.\n IndexError: If the values in axes exceed the rank of the corresponding\n tensor.\n ", "desc": "Tensor contraction of a and b along specified axes and outer product.", "type": "API"}, {"name": "tf.compat.v1.TensorInfo", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.TensorInfo.CompositeTensor", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.TensorInfo.CooSparse", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.TensorShape", "docs": "Represents the shape of a `Tensor`.\n\n A `TensorShape` represents a possibly-partial shape specification for a\n `Tensor`. It may be one of the following:\n\n * *Fully-known shape:* has a known number of dimensions and a known size\n for each dimension. e.g. `TensorShape([16, 256])`\n * *Partially-known shape:* has a known number of dimensions, and an unknown\n size for one or more dimension. e.g. `TensorShape([None, 256])`\n * *Unknown shape:* has an unknown number of dimensions, and an unknown\n size in all dimensions. e.g. `TensorShape(None)`\n\n If a tensor is produced by an operation of type `\"Foo\"`, its shape\n may be inferred if there is a registered shape function for\n `\"Foo\"`. See [Shape\n functions](https://www.tensorflow.org/guide/create_op#shape_functions_in_c)\n for details of shape functions and how to register them. Alternatively,\n you may set the shape explicitly using `tf.Tensor.set_shape`.\n ", "desc": "Represents the shape of a `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.TensorSpec", "docs": "Describes a tf.Tensor.\n\n Metadata for describing the `tf.Tensor` objects accepted or returned\n by some TensorFlow APIs.\n ", "desc": "Describes a tf.Tensor.", "type": "API"}, {"name": "tf.compat.v1.test", "docs": "Testing.\n", "desc": "Testing.", "type": "API"}, {"name": "tf.compat.v1.test.assert_equal_graph_def", "docs": "Asserts that two `GraphDef`s are (mostly) the same.\n\n Compares two `GraphDef` protos for equality, ignoring versions and ordering of\n nodes, attrs, and control inputs. Node names are used to match up nodes\n between the graphs, so the naming of nodes must be consistent.\n\n Args:\n actual: The `GraphDef` we have.\n expected: The `GraphDef` we expected.\n checkpoint_v2: boolean determining whether to ignore randomized attribute\n values that appear in V2 checkpoints.\n hash_table_shared_name: boolean determining whether to ignore randomized\n shared_names that appear in HashTableV2 op defs.\n\n Raises:\n AssertionError: If the `GraphDef`s do not match.\n TypeError: If either argument is not a `GraphDef`.\n ", "desc": "Asserts that two `GraphDef`s are (mostly) the same.", "type": "API"}, {"name": "tf.compat.v1.test.Benchmark", "docs": "Abstract class that provides helpers for TensorFlow benchmarks.", "desc": "Abstract class that provides helpers for TensorFlow benchmarks.", "type": "API"}, {"name": "tf.compat.v1.test.benchmark_config", "docs": "Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.\n\n Returns:\n A TensorFlow ConfigProto object.\n ", "desc": "Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.", "type": "API"}, {"name": "tf.compat.v1.test.compute_gradient", "docs": "Computes and returns the theoretical and numerical Jacobian. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so code change is needed.\n\nIf `x` or `y` is complex, the Jacobian will still be real but the\ncorresponding Jacobian dimension(s) will be twice as large. This is required\neven if both input and output is complex since TensorFlow graphs are not\nnecessarily holomorphic, and may have gradients not expressible as complex\nnumbers. For example, if `x` is complex with shape `[m]` and `y` is complex\nwith shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with\n\n J[:m, :n] = d(Re y)/d(Re x)\n J[:m, n:] = d(Im y)/d(Re x)\n J[m:, :n] = d(Re y)/d(Im x)\n J[m:, n:] = d(Im y)/d(Im x)\n\nArgs:\n x: a tensor or list of tensors\n x_shape: the dimensions of x as a tuple or an array of ints. If x is a list,\n then this is the list of shapes.\n y: a tensor\n y_shape: the dimensions of y as a tuple or an array of ints.\n x_init_value: (optional) a numpy array of the same shape as \"x\"\n representing the initial value of x. If x is a list, this should be a list\n of numpy arrays. If this is none, the function will pick a random tensor\n as the initial value.\n delta: (optional) the amount of perturbation.\n init_targets: list of targets to run to initialize model params.\n extra_feed_dict: dict that allows fixing specified tensor values\n during the Jacobian calculation.\n\nReturns:\n Two 2-d numpy arrays representing the theoretical and numerical\n Jacobian for dy/dx. Each has \"x_size\" rows and \"y_size\" columns\n where \"x_size\" is the number of elements in x and \"y_size\" is the\n number of elements in y. If x is a list, returns a list of two numpy arrays.", "desc": "Computes and returns the theoretical and numerical Jacobian. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.test.compute_gradient_error", "docs": "Computes the gradient error. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so code change is needed.\n\nComputes the maximum error for dy/dx between the computed Jacobian and the\nnumerically estimated Jacobian.\n\nThis function will modify the tensors passed in as it adds more operations\nand hence changing the consumers of the operations of the input tensors.\n\nThis function adds operations to the current session. To compute the error\nusing a particular device, such as a GPU, use the standard methods for\nsetting a device (e.g. using with sess.graph.device() or setting a device\nfunction in the session constructor).\n\nArgs:\n x: a tensor or list of tensors\n x_shape: the dimensions of x as a tuple or an array of ints. If x is a list,\n then this is the list of shapes.\n y: a tensor\n y_shape: the dimensions of y as a tuple or an array of ints.\n x_init_value: (optional) a numpy array of the same shape as \"x\"\n representing the initial value of x. If x is a list, this should be a list\n of numpy arrays. If this is none, the function will pick a random tensor\n as the initial value.\n delta: (optional) the amount of perturbation.\n init_targets: list of targets to run to initialize model params.\n extra_feed_dict: dict that allows fixing specified tensor values\n during the Jacobian calculation.\n\nReturns:\n The maximum error in between the two Jacobians.", "desc": "Computes the gradient error. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.test.create_local_cluster", "docs": "Create and start local servers and return the associated `Server` objects.\n\n \"PS\" stands for \"parameter server\": a task responsible for storing and\n updating the model's parameters. Other tasks send updates to these parameters\n as they work on optimizing the parameters. This particular division of labor\n between tasks is not required, but is common for distributed training.\n\n Read more at https://www.tensorflow.org/guide/extend/architecture\n\n ![components](https://www.tensorflow.org/images/diag1.svg \"components\")\n\n\n Figure illustrates the interaction of these components.\n \"/job:worker/task:0\" and \"/job:ps/task:0\" are both tasks with worker services.\n\n\n Example:\n ```python\n workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2)\n\n worker_sessions = [tf.compat.v1.Session(w.target) for w in workers]\n\n with tf.device(\"/job:ps/task:0\"):\n ...\n with tf.device(\"/job:ps/task:1\"):\n ...\n with tf.device(\"/job:worker/task:0\"):\n ...\n with tf.device(\"/job:worker/task:1\"):\n ...\n\n worker_sessions[0].run(...)\n ```\n\n Args:\n num_workers: Number of worker servers to start.\n num_ps: Number of PS servers to start.\n protocol: Communication protocol. Allowed values are documented in the\n documentation of `tf.distribute.Server`.\n worker_config: (optional) `tf.ConfigProto` to initialize workers. Can be\n used to instantiate multiple devices etc.\n ps_config: (optional) `tf.ConfigProto` to initialize PS servers.\n\n Returns:\n A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list\n of `num_workers` objects of type `tf.distribute.Server` (all running\n locally);\n and `ps_servers` is a list of `num_ps` objects of similar type.\n\n Raises:\n ImportError: if portpicker module was not found at load time\n ", "desc": "Create and start local servers and return the associated `Server` objects.", "type": "API"}, {"name": "tf.compat.v1.test.disable_with_predicate", "docs": "Disables the test if pred is true.", "desc": "Disables the test if pred is true.", "type": "API"}, {"name": "tf.compat.v1.test.get_temp_dir", "docs": "Returns a temporary directory for use during tests.\n\n There is no need to delete the directory after the test.\n\n @compatibility(TF2)\n This function is removed in TF2. Please use `TestCase.get_temp_dir` instead\n in a test case.\n Outside of a unit test, obtain a temporary directory through Python's\n `tempfile` module.\n @end_compatibility\n\n Returns:\n The temporary directory.\n ", "desc": "Returns a temporary directory for use during tests.", "type": "API"}, {"name": "tf.compat.v1.test.gpu_device_name", "docs": "Returns the name of a GPU device if available or a empty string.\n\n This method should only be used in tests written with `tf.test.TestCase`.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_gpu_support():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(tf.test.gpu_device_name()):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n ", "desc": "Returns the name of a GPU device if available or a empty string.", "type": "API"}, {"name": "tf.compat.v1.test.is_built_with_cuda", "docs": "Returns whether TensorFlow was built with CUDA (GPU) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with CUDA (GPU).\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_cuda():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is built with CUDA.\n ", "desc": "Returns whether TensorFlow was built with CUDA (GPU) support.", "type": "API"}, {"name": "tf.compat.v1.test.is_built_with_gpu_support", "docs": "Returns whether TensorFlow was built with GPU (CUDA or ROCm) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with GPU.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_gpu_support():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is built with CUDA GPU support.\n ", "desc": "Returns whether TensorFlow was built with GPU (CUDA or ROCm) support.", "type": "API"}, {"name": "tf.compat.v1.test.is_built_with_rocm", "docs": "Returns whether TensorFlow was built with ROCm (GPU) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with ROCm (GPU).\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_rocm():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is NOT built with ROCm.\n ", "desc": "Returns whether TensorFlow was built with ROCm (GPU) support.", "type": "API"}, {"name": "tf.compat.v1.test.is_built_with_xla", "docs": "Returns whether TensorFlow was built with XLA support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with XLA.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_xla(self):\n ... if not tf.test.is_built_with_xla():\n ... self.skipTest(\"test is only applicable on XLA\")\n\n ... @tf.function(jit_compile=True)\n ... def add(x, y):\n ... return tf.math.add(x, y)\n ...\n ... self.assertEqual(add(tf.ones(()), tf.ones(())), 2.0)\n\n TensorFlow official binary is built with XLA.\n ", "desc": "Returns whether TensorFlow was built with XLA support.", "type": "API"}, {"name": "tf.compat.v1.test.is_gpu_available", "docs": "Returns whether TensorFlow can access a GPU. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.config.list_physical_devices('GPU')` instead.\n\nWarning: if a non-GPU version of the package is installed, the function would\nalso return False. Use `tf.test.is_built_with_cuda` to validate if TensorFlow\nwas build with CUDA support.\n\nFor example,\n>>> gpu_available = tf.test.is_gpu_available()\n>>> is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)\n>>> is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3,0))\n\nArgs:\n cuda_only: limit the search to CUDA GPUs.\n min_cuda_compute_capability: a (major,minor) pair that indicates the minimum\n CUDA compute capability required, or None if no requirement.\n\nNote that the keyword arg name \"cuda_only\" is misleading (since routine will\nreturn true when a GPU device is available irrespective of whether TF was\nbuilt with CUDA support or ROCm support. However no changes here because\n\n++ Changing the name \"cuda_only\" to something more generic would break\n backward compatibility\n\n++ Adding an equivalent \"rocm_only\" would require the implementation check\n the build type. This in turn would require doing the same for CUDA and thus\n potentially break backward compatibility\n\n++ Adding a new \"cuda_or_rocm_only\" would not break backward compatibility,\n but would require most (if not all) callers to update the call to use\n \"cuda_or_rocm_only\" instead of \"cuda_only\"\n\nReturns:\n True if a GPU device of the requested kind is available.", "desc": "Returns whether TensorFlow can access a GPU. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.test.main", "docs": "Runs all unit tests.", "desc": "Runs all unit tests.", "type": "API"}, {"name": "tf.compat.v1.test.StubOutForTesting", "docs": "Support class for stubbing methods out for unit testing.\n\n Sample Usage:\n\n You want os.path.exists() to always return true during testing.\n\n stubs = StubOutForTesting()\n stubs.Set(os.path, 'exists', lambda x: 1)\n ...\n stubs.CleanUp()\n\n The above changes os.path.exists into a lambda that returns 1. Once\n the ... part of the code finishes, the CleanUp() looks up the old\n value of os.path.exists and restores it.\n ", "desc": "Support class for stubbing methods out for unit testing.", "type": "API"}, {"name": "tf.compat.v1.test.test_src_dir_path", "docs": "Creates an absolute test srcdir path given a relative path.\n\n Args:\n relative_path: a path relative to tensorflow root.\n e.g. \"core/platform\".\n\n Returns:\n An absolute path to the linked in runfiles.\n ", "desc": "Creates an absolute test srcdir path given a relative path.", "type": "API"}, {"name": "tf.compat.v1.test.TestCase", "docs": "Base class for tests that need to test TensorFlow.", "desc": "Base class for tests that need to test TensorFlow.", "type": "API"}, {"name": "tf.compat.v1.test.TestCase.failureException", "docs": "Assertion failed.", "desc": "Assertion failed.", "type": "API"}, {"name": "tf.compat.v1.TextLineReader", "docs": "A Reader that outputs the lines of a file delimited by newlines.\n\n Newlines are stripped from the output.\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs the lines of a file delimited by newlines.", "type": "API"}, {"name": "tf.compat.v1.TFRecordReader", "docs": "A Reader that outputs the records from a TFRecords file.\n\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs the records from a TFRecords file.", "type": "API"}, {"name": "tf.compat.v1.tile", "docs": "Constructs a tensor by tiling a given tensor.\n\n This operation creates a new tensor by replicating `input` `multiples` times.\n The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,\n and the values of `input` are replicated `multiples[i]` times along the 'i'th\n dimension. For example, tiling `[a b c d]` by `[2]` produces\n `[a b c d a b c d]`.\n\n >>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32)\n >>> b = tf.constant([1,2], tf.int32)\n >>> tf.tile(a, b)\n \n >>> c = tf.constant([2,1], tf.int32)\n >>> tf.tile(a, c)\n \n >>> d = tf.constant([2,2], tf.int32)\n >>> tf.tile(a, d)\n \n\n Args:\n input: A `Tensor`. 1-D or higher.\n multiples: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. Length must be the same as the number of dimensions in `input`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Constructs a tensor by tiling a given tensor.", "type": "API"}, {"name": "tf.compat.v1.timestamp", "docs": "Provides the time since epoch in seconds.\n\n Returns the timestamp as a `float64` for seconds since the Unix epoch.\n\n Note: the timestamp is computed when the op is executed, not when it is added\n to the graph.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float64`.\n ", "desc": "Provides the time since epoch in seconds.", "type": "API"}, {"name": "tf.compat.v1.to_bfloat16", "docs": "Casts a tensor to type `bfloat16`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `bfloat16`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `bfloat16`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.bfloat16)`. There are no further issues with eager execution\nor tf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_bfloat16(tf.constant(3.14, dtype=tf.float32))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(3.14, dtype=tf.float32), tf.bfloat16)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `bfloat16`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_complex128", "docs": "Casts a tensor to type `complex128`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `complex128`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `complex128`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.complex128)`. There are no further issues with eager\nexecution or tf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_complex128(tf.constant(1. + 2.j, dtype=tf.complex64))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(1. + 2.j, dtype=tf.complex64), tf.complex128)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `complex128`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_complex64", "docs": "Casts a tensor to type `complex64`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `complex64`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `complex64`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.complex64)`. There are no further issues with eager execution\nor tf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_complex64(tf.constant(1. + 2.j, dtype=tf.complex128))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(1. + 2.j, dtype=tf.complex128), tf.complex64)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `complex64`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_double", "docs": "Casts a tensor to type `float64`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `float64`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `float64`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.double)`. There are no further issues with eager execution or\ntf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_double(tf.constant(3.14, dtype=tf.float32))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(3.14, dtype=tf.float32), tf.double)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `float64`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_float", "docs": "Casts a tensor to type `float32`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `float32`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `float32`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.float32)`. There are no further issues with eager execution\nor tf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_float(tf.constant(3.14, dtype=tf.double))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(3.14, dtype=tf.double), tf.float32)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `float32`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_int32", "docs": "Casts a tensor to type `int32`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `int32`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `int32`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.int32)`. There are no further issues with eager execution or\ntf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_int32(tf.constant(1, dtype=tf.int64))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(1, dtype=tf.int64), tf.int32)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `int32`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.to_int64", "docs": "Casts a tensor to type `int64`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n\nArgs:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with\n type `int64`.\n\nRaises:\n TypeError: If `x` cannot be cast to the `int64`.\n\n@compatibility(TF2)\n\nThis name was deprecated and removed in TF2, but has an exact replacement\n`tf.cast(..., tf.int64)`. There are no further issues with eager execution or\ntf.function.\n\nBefore:\n\n>>> tf.compat.v1.to_int64(tf.constant(1, dtype=tf.int32))\n\n\nAfter:\n\n>>> tf.cast(tf.constant(1, dtype=tf.int32), tf.int64)\n\n\n@end_compatibility", "desc": "Casts a tensor to type `int64`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.tpu", "docs": "Ops related to Tensor Processing Units.\n", "desc": "Ops related to Tensor Processing Units.", "type": "API"}, {"name": "tf.compat.v1.tpu.batch_parallel", "docs": "Shards `computation` along the batch dimension for parallel execution.\n\n Convenience wrapper around shard().\n\n `inputs` must be a list of Tensors or None (equivalent to an empty list).\n Each input is split into `num_shards` pieces along the 0-th dimension, and\n computation is applied to each shard in parallel.\n\n Tensors are broadcast to all shards if they are lexically captured by\n `computation`. e.g.,\n\n x = tf.constant(7)\n def computation():\n return x + 3\n ... = shard(computation, ...)\n\n The outputs from all shards are concatenated back together along their 0-th\n dimension.\n\n Inputs and outputs of the computation must be at least rank-1 Tensors.\n\n Args:\n computation: A Python function that builds a computation to apply to each\n shard of the input.\n inputs: A list of input tensors or None (equivalent to an empty list). The\n 0-th dimension of each Tensor must have size divisible by `num_shards`.\n num_shards: The number of shards.\n infeed_queue: If not `None`, the `InfeedQueue` from which to append a tuple\n of arguments as inputs to `computation`.\n device_assignment: If not `None`, a `DeviceAssignment` describing the\n mapping between logical cores in the computation with physical cores in\n the TPU topology. Uses a default device assignment if `None`. The\n `DeviceAssignment` may be omitted if each shard of the computation uses\n only one core, and there is either only one shard, or the number of shards\n is equal to the number of cores in the TPU system.\n name: (Deprecated) Does nothing.\n xla_options: An instance of `tpu.XLAOptions` which indicates the options\n passed to XLA compiler. Use `None` for default options.\n Returns:\n A list of output tensors.\n Raises:\n ValueError: If `num_shards <= 0`\n ", "desc": "Shards `computation` along the batch dimension for parallel execution.", "type": "API"}, {"name": "tf.compat.v1.tpu.bfloat16_scope", "docs": "Scope class for bfloat16 variables so that the model uses custom getter.\n\n This enables variables to be read as bfloat16 type when using get_variable.\n\n Arguments:\n name: Name to use for scope.\n\n Yields:\n a variable scope.\n ", "desc": "Scope class for bfloat16 variables so that the model uses custom getter.", "type": "API"}, {"name": "tf.compat.v1.tpu.core", "docs": "Returns the device name for a core in a replicated TPU computation.\n\n Args:\n num: the virtual core number within each replica to which operators should\n be assigned.\n Returns:\n A device name, suitable for passing to `tf.device()`.\n ", "desc": "Returns the device name for a core in a replicated TPU computation.", "type": "API"}, {"name": "tf.compat.v1.tpu.cross_replica_sum", "docs": "Sum the input tensor across replicas according to group_assignment.\n\n Args:\n x: The local tensor to the sum.\n group_assignment: Optional 2d int32 lists with shape [num_groups,\n num_replicas_per_group]. `group_assignment[i]` represents the replica ids\n in the ith subgroup.\n name: Optional op name.\n\n Returns:\n A `Tensor` which is summed across replicas.\n ", "desc": "Sum the input tensor across replicas according to group_assignment.", "type": "API"}, {"name": "tf.compat.v1.tpu.CrossShardOptimizer", "docs": "An optimizer that averages gradients across TPU shards.", "desc": "An optimizer that averages gradients across TPU shards.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental", "docs": "Public API for tf.tpu.experimental namespace.\n", "desc": "Public API for tf.tpu.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.AdagradParameters", "docs": "Optimization parameters for Adagrad with TPU embeddings.\n\n Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the\n `optimization_parameters` argument to set the optimizer and its parameters.\n See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec`\n for more details.\n\n ```\n estimator = tf.estimator.tpu.TPUEstimator(\n ...\n embedding_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n ...\n optimization_parameters=tf.tpu.experimental.AdagradParameters(0.1),\n ...))\n ```\n\n ", "desc": "Optimization parameters for Adagrad with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.AdamParameters", "docs": "Optimization parameters for Adam with TPU embeddings.\n\n Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the\n `optimization_parameters` argument to set the optimizer and its parameters.\n See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec`\n for more details.\n\n ```\n estimator = tf.estimator.tpu.TPUEstimator(\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n ...\n optimization_parameters=tf.tpu.experimental.AdamParameters(0.1),\n ...))\n ```\n\n ", "desc": "Optimization parameters for Adam with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.DeviceAssignment", "docs": "Mapping from logical cores in a computation to the physical TPU topology.\n\n Prefer to use the `DeviceAssignment.build()` helper to construct a\n `DeviceAssignment`; it is easier if less flexible than constructing a\n `DeviceAssignment` directly.\n ", "desc": "Mapping from logical cores in a computation to the physical TPU topology.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding", "docs": "Public API for tf.tpu.experimental.embedding namespace.\n", "desc": "Public API for tf.tpu.experimental.embedding namespace.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.Adagrad", "docs": "Optimization parameters for Adagrad with TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for Adagrad with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.Adam", "docs": "Optimization parameters for Adam with TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n NOTE: By default this optimizer is lazy, i.e. it will not apply the gradient\n update of zero to rows that were not looked up. You can change this behavior\n by setting `lazy_adam` to `False`.\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.Adam(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for Adam with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.FeatureConfig", "docs": "Configuration data for one embedding feature.\n\n This class holds the configuration data for a single embedding feature. The\n main use is to assign features to `tf.tpu.experimental.embedding.TableConfig`s\n via the table parameter:\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n The above configuration has 2 tables, and three features. The first two\n features will be looked up in the first table and the third feature will be\n looked up in the second table.\n\n You can also specify the output shape for each feature. The output shape\n should be the expected activation shape excluding the table dimension. For\n dense and sparse tensor, the output shape should be the same as the input\n shape excluding the last dimension. For ragged tensor, the output shape can\n mismatch the input shape.\n\n NOTE: The `max_sequence_length` will be only used when the input tensor has\n rank 2 and the `output_shape` is not set in the feature config.\n\n When feeding features into `embedding.enqueue` they can be `tf.Tensor`s,\n `tf.SparseTensor`s or `tf.RaggedTensor`s. When the argument\n `max_sequence_length` is 0, the default, you should expect a output of\n `embedding.dequeue` for this feature of shape `(batch_size, dim)`. If\n `max_sequence_length` is greater than 0, the feature is embedded as a sequence\n and padded up to the given length. The shape of the output for this feature\n will be `(batch_size, max_sequence_length, dim)`.\n ", "desc": "Configuration data for one embedding feature.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.FTRL", "docs": "Optimization parameters for FTRL with TPU embeddings.\n\n See Algorithm 1 of this\n [paper](https://research.google.com/pubs/archive/41159.pdf).\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.FTRL(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.FTRL(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.FTRL(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for FTRL with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.serving_embedding_lookup", "docs": "Apply standard lookup ops with `tf.tpu.experimental.embedding` configs.\n\n This function is a utility which allows using the\n `tf.tpu.experimental.embedding` config objects with standard lookup functions.\n This can be used when exporting a model which uses\n `tf.tpu.experimental.embedding.TPUEmbedding` for serving on CPU. In particular\n `tf.tpu.experimental.embedding.TPUEmbedding` only supports lookups on TPUs and\n should not be part of your serving graph.\n\n Note that TPU specific options (such as `max_sequence_length`) in the\n configuration objects will be ignored.\n\n In the following example we take a trained model (see the documentation for\n `tf.tpu.experimental.embedding.TPUEmbedding` for the context) and create a\n saved model with a serving function that will perform the embedding lookup and\n pass the results to your model:\n\n ```python\n model = model_fn(...)\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=1024,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.restore(...)\n\n @tf.function(input_signature=[{'feature_one': tf.TensorSpec(...),\n 'feature_two': tf.TensorSpec(...),\n 'feature_three': tf.TensorSpec(...)}])\n def serve_tensors(embedding_features):\n embedded_features = tf.tpu.experimental.embedding.serving_embedding_lookup(\n embedding_features, None, embedding.embedding_tables,\n feature_config)\n return model(embedded_features)\n\n model.embedding_api = embedding\n tf.saved_model.save(model,\n export_dir=...,\n signatures={'serving_default': serve_tensors})\n\n ```\n\n NOTE: It's important to assign the embedding API object to a member of your\n model as `tf.saved_model.save` only supports saving variables as one\n `Trackable` object. Since the model's weights are in `model` and the\n embedding table are managed by `embedding`, we assign `embedding` to an\n attribute of `model` so that tf.saved_model.save can find the embedding\n variables.\n\n NOTE: The same `serve_tensors` function and `tf.saved_model.save` call will\n work directly from training.\n\n Args:\n inputs: a nested structure of Tensors, SparseTensors or RaggedTensors.\n weights: a nested structure of Tensors, SparseTensors or RaggedTensors or\n None for no weights. If not None, structure must match that of inputs, but\n entries are allowed to be None.\n tables: a dict of mapping TableConfig objects to Variables.\n feature_config: a nested structure of FeatureConfig objects with the same\n structure as inputs.\n\n Returns:\n A nested structure of Tensors with the same structure as inputs.\n ", "desc": "Apply standard lookup ops with `tf.tpu.experimental.embedding` configs.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.SGD", "docs": "Optimization parameters for stochastic gradient descent for TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.SGD(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for stochastic gradient descent for TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.TableConfig", "docs": "Configuration data for one embedding table.\n\n This class holds the configuration data for a single embedding table. It is\n used as the `table` parameter of a\n `tf.tpu.experimental.embedding.FeatureConfig`. Multiple\n `tf.tpu.experimental.embedding.FeatureConfig` objects can use the same\n `tf.tpu.experimental.embedding.TableConfig` object. In this case a shared\n table will be created for those feature lookups.\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n The above configuration has 2 tables, and three features. The first two\n features will be looked up in the first table and the third feature will be\n looked up in the second table.\n\n ", "desc": "Configuration data for one embedding table.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding.TPUEmbedding", "docs": "The TPUEmbedding mid level API.\n\n NOTE: When instantiated under a TPUStrategy, this class can only be created\n once per call to `tf.tpu.experimental.initialize_tpu_system`. If you wish to\n re-initialize the embedding engine you must re-initialize the tpu as well.\n Doing this will clear any variables from TPU, so ensure you have checkpointed\n before you do this. If a further instances of the class are needed,\n set the `initialize_tpu_embedding` argument to `False`.\n\n This class can be used to support training large embeddings on TPU. When\n creating an instance of this class, you must specify the complete set of\n tables and features you expect to lookup in those tables. See the\n documentation of `tf.tpu.experimental.embedding.TableConfig` and\n `tf.tpu.experimental.embedding.FeatureConfig` for more details on the complete\n set of options. We will cover the basic usage here.\n\n NOTE: multiple `FeatureConfig` objects can use the same `TableConfig` object,\n allowing different features to share the same table:\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n ```\n\n There are two modes under which the `TPUEmbedding` class can used. This\n depends on if the class was created under a `TPUStrategy` scope or not.\n\n Under `TPUStrategy`, we allow access to the method `enqueue`, `dequeue` and\n `apply_gradients`. We will show examples below of how to use these to train\n and evaluate your model. Under CPU, we only access to the `embedding_tables`\n property which allow access to the embedding tables so that you can use them\n to run model evaluation/prediction on CPU.\n\n First lets look at the `TPUStrategy` mode. Initial setup looks like:\n\n ```python\n strategy = tf.distribute.TPUStrategy(...)\n with strategy.scope():\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n When creating a distributed dataset that is to be passed to the enqueue\n operation a special input option must be specified:\n\n ```python\n distributed_dataset = (\n strategy.distribute_datasets_from_function(\n dataset_fn=...,\n options=tf.distribute.InputOptions(\n experimental_fetch_to_device=False))\n dataset_iterator = iter(distributed_dataset)\n ```\n\n Different feature inputs can have different shapes. For dense and sparse\n tensor, rank 2 and above is supported. For ragged tensor, although only rank 2\n is supported, you can specify the output shape to be rank 2 and above. The\n output shape specified in the FeatureConfig has the first priority. The input\n shape passed in build method has second priority and the input shapes\n auto detected from input feature has the lowest priority. The latter two will\n be converted to output shapes by omitting the last dimension. If the lower\n priority one has output shapes which don't match the former one. A ValueError\n will be raised. Only when the former one has undefined output shapes, the\n latter one can override.\n\n NOTE: All batches passed to the layer can have different input shapes. But\n these input shapes need to match with the output shapes set by either\n `FeatureConfig` or build method except for ragged tensor. Only 2D\n ragged tensor with output shape set to higher dimensions is allowed as\n long as the total number of elements matches. All subsequent calls must have\n the same input shapes. In the event that the input shapes cannot be\n automatically determined by the enqueue method, you must call\n the build method with the input shapes or provide output shapes in the\n `FeatureConfig` to initialize the layer.\n\n To use this API on TPU you should use a custom training loop. Below is an\n example of a training and evaluation step:\n\n ```python\n @tf.function\n def training_step(dataset_iterator, num_steps):\n def tpu_step(tpu_features):\n with tf.GradientTape() as tape:\n activations = embedding.dequeue()\n tape.watch(activations)\n model_output = model(activations)\n loss = ... # some function of labels and model_output\n\n embedding_gradients = tape.gradient(loss, activations)\n embedding.apply_gradients(embedding_gradients)\n # Insert your model gradient and optimizer application here\n\n for _ in tf.range(num_steps):\n embedding_features, tpu_features = next(dataset_iterator)\n embedding.enqueue(embedding_features, training=True)\n strategy.run(tpu_step, args=(tpu_features, ))\n\n @tf.function\n def evalution_step(dataset_iterator, num_steps):\n def tpu_step(tpu_features):\n activations = embedding.dequeue()\n model_output = model(activations)\n # Insert your evaluation code here.\n\n for _ in tf.range(num_steps):\n embedding_features, tpu_features = next(dataset_iterator)\n embedding.enqueue(embedding_features, training=False)\n strategy.run(tpu_step, args=(tpu_features, ))\n ```\n\n NOTE: The calls to `enqueue` have `training` set to `True` when\n `embedding.apply_gradients` is used and set to `False` when\n `embedding.apply_gradients` is not present in the function. If you don't\n follow this pattern you may cause an error to be raised or the tpu may\n deadlock.\n\n In the above examples, we assume that the user has a dataset which returns\n a tuple where the first element of the tuple matches the structure of what\n was passed as the `feature_config` argument to the object initializer. Also we\n utilize `tf.range` to get a `tf.while_loop` in order to increase performance.\n\n When checkpointing your model, you should include your\n `tf.tpu.experimental.embedding.TPUEmbedding` object in the checkpoint. It is a\n trackable object and saving it will save the embedding tables and their\n optimizer slot variables:\n\n ```python\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.save(...)\n ```\n\n On CPU, only the `embedding_table` property is usable. This will allow you to\n restore a checkpoint to the object and have access to the table variables:\n\n ```python\n model = model_fn(...)\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.restore(...)\n\n tables = embedding.embedding_tables\n ```\n\n You can now use table in functions like `tf.nn.embedding_lookup` to perform\n your embedding lookup and pass to your model.\n\n ", "desc": "The TPUEmbedding mid level API.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.embedding_column", "docs": "TPU version of `tf.compat.v1.feature_column.embedding_column`.\n\n Note that the interface for `tf.tpu.experimental.embedding_column` is\n different from that of `tf.compat.v1.feature_column.embedding_column`: The\n following arguments are NOT supported: `ckpt_to_load_from`,\n `tensor_name_in_ckpt`, `max_norm` and `trainable`.\n\n Use this function in place of `tf.compat.v1.feature_column.embedding_column`\n when you want to use the TPU to accelerate your embedding lookups via TPU\n embeddings.\n\n ```\n column = tf.feature_column.categorical_column_with_identity(...)\n tpu_column = tf.tpu.experimental.embedding_column(column, 10)\n ...\n def model_fn(features):\n dense_feature = tf.keras.layers.DenseFeature(tpu_column)\n embedded_feature = dense_feature(features)\n ...\n\n estimator = tf.estimator.tpu.TPUEstimator(\n model_fn=model_fn,\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n column=[tpu_column],\n ...))\n ```\n\n Args:\n categorical_column: A categorical column returned from\n `categorical_column_with_identity`, `weighted_categorical_column`,\n `categorical_column_with_vocabulary_file`,\n `categorical_column_with_vocabulary_list`,\n `sequence_categorical_column_with_identity`,\n `sequence_categorical_column_with_vocabulary_file`,\n `sequence_categorical_column_with_vocabulary_list`\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries\n in a single row for a non-sequence column. For more information, see\n `tf.feature_column.embedding_column`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `tf.compat.v1.truncated_normal_initializer` with mean `0.0` and\n standard deviation `1/sqrt(dimension)`.\n max_sequence_length: An non-negative integer specifying the max sequence\n length. Any sequence shorter then this will be padded with 0 embeddings\n and any sequence longer will be truncated. This must be positive for\n sequence features and 0 for non-sequence features.\n learning_rate_fn: A function that takes global step and returns learning\n rate for the embedding table. If you intend to use the same learning rate\n for multiple embedding tables, please ensure that you pass the exact same\n python function to all calls of embedding_column, otherwise performence\n may suffer.\n embedding_lookup_device: The device on which to run the embedding lookup.\n Valid options are \"cpu\", \"tpu_tensor_core\", and \"tpu_embedding_core\".\n If specifying \"tpu_tensor_core\", a tensor_core_shape must be supplied.\n If not specified, the default behavior is embedding lookup on\n \"tpu_embedding_core\" for training and \"cpu\" for inference.\n Valid options for training : [\"tpu_embedding_core\", \"tpu_tensor_core\"]\n Valid options for serving : [\"cpu\", \"tpu_tensor_core\"]\n For training, tpu_embedding_core is good for large embedding vocab (>1M),\n otherwise, tpu_tensor_core is often sufficient.\n For serving, doing embedding lookup on tpu_tensor_core during serving is\n a way to reduce host cpu usage in cases where that is a bottleneck.\n tensor_core_shape: If supplied, a list of integers which specifies\n the intended dense shape to run embedding lookup for this feature on\n TensorCore. The batch dimension can be left None or -1 to indicate\n a dynamic shape. Only rank 2 shapes currently supported.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n A `_TPUEmbeddingColumnV2`.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if `initializer` is specified but not callable.\n ", "desc": "TPU version of `tf.compat.v1.feature_column.embedding_column`.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.FtrlParameters", "docs": "Optimization parameters for Ftrl with TPU embeddings.\n\n Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the\n `optimization_parameters` argument to set the optimizer and its parameters.\n See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec`\n for more details.\n\n ```\n estimator = tf.estimator.tpu.TPUEstimator(\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n ...\n optimization_parameters=tf.tpu.experimental.FtrlParameters(0.1),\n ...))\n ```\n\n ", "desc": "Optimization parameters for Ftrl with TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.initialize_tpu_system", "docs": "Initialize the TPU devices.\n\n Args:\n cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver,\n which provides information about the TPU cluster.\n Returns:\n The tf.tpu.Topology object for the topology of the TPU cluster. If called\n inside tf.function, it returns the serialized topology object instead.\n\n Raises:\n RuntimeError: If running inside a tf.function.\n NotFoundError: If no TPU devices found in eager mode.\n ", "desc": "Initialize the TPU devices.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.shared_embedding_columns", "docs": "TPU version of `tf.compat.v1.feature_column.shared_embedding_columns`.\n\n Note that the interface for `tf.tpu.experimental.shared_embedding_columns` is\n different from that of `tf.compat.v1.feature_column.shared_embedding_columns`:\n The following arguments are NOT supported: `ckpt_to_load_from`,\n `tensor_name_in_ckpt`, `max_norm` and `trainable`.\n\n Use this function in place of\n tf.compat.v1.feature_column.shared_embedding_columns` when you want to use the\n TPU to accelerate your embedding lookups via TPU embeddings.\n\n ```\n column_a = tf.feature_column.categorical_column_with_identity(...)\n column_b = tf.feature_column.categorical_column_with_identity(...)\n tpu_columns = tf.tpu.experimental.shared_embedding_columns(\n [column_a, column_b], 10)\n ...\n def model_fn(features):\n dense_feature = tf.keras.layers.DenseFeature(tpu_columns)\n embedded_feature = dense_feature(features)\n ...\n\n estimator = tf.estimator.tpu.TPUEstimator(\n model_fn=model_fn,\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n column=tpu_columns,\n ...))\n ```\n\n Args:\n categorical_columns: A list of categorical columns returned from\n `categorical_column_with_identity`, `weighted_categorical_column`,\n `categorical_column_with_vocabulary_file`,\n `categorical_column_with_vocabulary_list`,\n `sequence_categorical_column_with_identity`,\n `sequence_categorical_column_with_vocabulary_file`,\n `sequence_categorical_column_with_vocabulary_list`\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries in\n a single row for a non-sequence column. For more information, see\n `tf.feature_column.embedding_column`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `tf.truncated_normal_initializer` with mean `0.0` and standard deviation\n `1/sqrt(dimension)`.\n shared_embedding_collection_name: Optional name of the collection where\n shared embedding weights are added. If not given, a reasonable name will\n be chosen based on the names of `categorical_columns`. This is also used\n in `variable_scope` when creating shared embedding weights.\n max_sequence_lengths: An list of non-negative integers, either None or empty\n or the same length as the argument categorical_columns. Entries\n corresponding to non-sequence columns must be 0 and entries corresponding\n to sequence columns specify the max sequence length for the column. Any\n sequence shorter then this will be padded with 0 embeddings and any\n sequence longer will be truncated.\n learning_rate_fn: A function that takes global step and returns learning\n rate for the embedding table. If you intend to use the same learning rate\n for multiple embedding tables, please ensure that you pass the exact same\n python function to all calls of shared_embedding_columns, otherwise\n performence may suffer.\n embedding_lookup_device: The device on which to run the embedding lookup.\n Valid options are \"cpu\", \"tpu_tensor_core\", and \"tpu_embedding_core\". If\n specifying \"tpu_tensor_core\", a tensor_core_shape must be supplied.\n Defaults to \"cpu\". If not specified, the default behavior is embedding\n lookup on \"tpu_embedding_core\" for training and \"cpu\" for inference.\n Valid options for training : [\"tpu_embedding_core\", \"tpu_tensor_core\"]\n Valid options for serving : [\"cpu\", \"tpu_tensor_core\"]\n For training, tpu_embedding_core is good for large embedding vocab (>1M),\n otherwise, tpu_tensor_core is often sufficient.\n For serving, doing embedding lookup on tpu_tensor_core during serving is\n a way to reduce host cpu usage in cases where that is a bottleneck.\n tensor_core_shape: If supplied, a list of integers which specifies the\n intended dense shape to run embedding lookup for this feature on\n TensorCore. The batch dimension can be left None or -1 to indicate a\n dynamic shape. Only rank 2 shapes currently supported.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n A list of `_TPUSharedEmbeddingColumnV2`.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if `initializer` is specified but not callable.\n ValueError: if `max_sequence_lengths` is specified and not the same length\n as `categorical_columns`.\n ValueError: if `max_sequence_lengths` is positive for a non sequence column\n or 0 for a sequence column.\n ", "desc": "TPU version of `tf.compat.v1.feature_column.shared_embedding_columns`.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.shutdown_tpu_system", "docs": "Shuts down the TPU devices.\n\n This will clear all caches, even those that are maintained through sequential\n calls to tf.tpu.experimental.initialize_tpu_system, such as the compilation\n cache.\n\n Args:\n cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver,\n which provides information about the TPU cluster.\n\n Raises:\n RuntimeError: If no TPU devices found for eager execution or if run in a\n tf.function.\n ", "desc": "Shuts down the TPU devices.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.StochasticGradientDescentParameters", "docs": "Optimization parameters for stochastic gradient descent for TPU embeddings.\n\n Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the\n `optimization_parameters` argument to set the optimizer and its parameters.\n See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec`\n for more details.\n\n ```\n estimator = tf.estimator.tpu.TPUEstimator(\n ...\n embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(\n ...\n optimization_parameters=(\n tf.tpu.experimental.StochasticGradientDescentParameters(0.1))))\n ```\n\n ", "desc": "Optimization parameters for stochastic gradient descent for TPU embeddings.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.Topology", "docs": "Describes a set of TPU devices.\n\n Represents both the shape of the physical mesh, and the mapping between\n TensorFlow TPU devices to physical mesh coordinates.\n ", "desc": "Describes a set of TPU devices.", "type": "API"}, {"name": "tf.compat.v1.tpu.experimental.TPUSystemMetadata", "docs": "Describes some metadata about the TPU system.\n\n Attributes:\n num_cores: interger. Total number of TPU cores in the TPU system.\n num_hosts: interger. Total number of hosts (TPU workers) in the TPU system.\n num_of_cores_per_host: interger. Number of TPU cores per host (TPU worker).\n topology: an instance of `tf.tpu.experimental.Topology`, which describes the\n physical topology of TPU system.\n devices: a tuple of strings, which describes all the TPU devices in the\n system.\n ", "desc": "Describes some metadata about the TPU system.", "type": "API"}, {"name": "tf.compat.v1.tpu.initialize_system", "docs": "Initializes a distributed TPU system for use with TensorFlow.\n\n Args:\n embedding_config: If not None, a `TPUEmbeddingConfiguration` proto\n describing the desired configuration of the hardware embedding lookup\n tables. If embedding_config is None, no hardware embeddings can be used.\n job: The job (the XXX in TensorFlow device specification /job:XXX) that\n contains the TPU devices that will be initialized. If job=None it is\n assumed there is only one job in the TensorFlow flock, and an error will\n be returned if this assumption does not hold.\n compilation_failure_closes_chips: Set the configuration whether\n we want to close TPU chips when there is a compilation failure.\n tpu_cancellation_closes_chips: Set the configuration whether\n we want to close TPU chips when a TPU execution is cancelled. If the value\n is None, the behavior will be determined by the command line flag\n `tpu_cancellation_closes_chips` for the TPU worker. WARNING: this argument\n only applies to TFRT TPU runtime.\n Returns:\n A serialized `TopologyProto` that describes the TPU system. Note:\n the topology must be evaluated using `Session.run` before it can be used.\n ", "desc": "Initializes a distributed TPU system for use with TensorFlow.", "type": "API"}, {"name": "tf.compat.v1.tpu.outside_compilation", "docs": "Builds part of a computation outside any current TPU replicate scope.\n\n `tf.tpu.outside_compilation()` is used to run ops in `computation` on CPU\n instead of running on TPU. For example, users can run ops that are not\n supported on TPU's (e.g. tf.summary.write()) by explicitly placing those\n ops on CPU's. Below usage of outside compilation will place ops in\n `computation_with_string_ops` on CPU.\n\n Example usage:\n\n ```python\n def computation_with_string_ops(x):\n # strings types are not supported on TPU's and below ops must\n # run on CPU instead.\n output = tf.strings.format('1{}', x)\n return tf.strings.to_number(output)\n\n def tpu_computation():\n # Expected output is 11.\n output = tf.tpu.outside_compilation(computation_with_string_ops, 1)\n ```\n\n Outside compilation should be called inside TPUReplicateContext. That is,\n `tf.tpu.outside_compilation()` should be called inside a function that is\n passed to `tpu.split_compile_and_replicate()` -- this is implied when\n outside compilation is invoked inside a function passed to TPUStrategy\n `run()`. If invoked outside of TPUReplicateContext,\n then this simply returns the result of `computation`, and therefore,\n would be a no-op. Note that outside compilation is different from\n `tf.distribute.experimental.TPUStrategy.merge_call()` as logic in\n outside compilation is replicated and executed separately for each\n replica. On the other hand, `merge_call()` requires a `merge_fn`\n to aggregate the inputs from different replicas and is executed only\n once.\n\n For variables placed in TPU device, which includes variables created inside\n TPUStrategy scope, outside compilation logic must not include variable\n read/write. For variables placed on host, which is the case when variables\n created via TPUEstimator, variable read/write is only allowed if the variable\n is not accessed by any other ops in the TPU computation. Variable read/write\n from outside compilation cluster is not visible from TPU computation and\n vice versa. Therefore, if outside compilation logic contains such host\n variables read/write ops and if the variables are accessed by TPU\n computation as well, then this may lead to deadlock.\n\n Internally, `tf.tpu.outside_compilation()` adds outside compilation\n attributes to all ops in `computation`. During later graph pass, these\n ops with outside compilation attribute is extracted out and replicated\n into a host-side graph. Inputs to this extract host-side graph is sent\n from TPU computation graph to host graph via a pair of XlaSendToHost and\n XlaRecvFromHost ops. Note that using `tf.tpu.outside_compilation()`\n may result in tensor transfer between TPU and CPU, leading to non-trivial\n performance impact.\n\n Args:\n computation: A Python function that builds the computation to\n place on the host.\n *args: the positional arguments for the computation.\n **kwargs: the keyword arguments for the computation.\n\n Returns:\n The Tensors returned by computation.\n ", "desc": "Builds part of a computation outside any current TPU replicate scope.", "type": "API"}, {"name": "tf.compat.v1.tpu.PaddingSpec", "docs": "Represents the type of padding policies for tpu.replicate.", "desc": "Represents the type of padding policies for tpu.replicate.", "type": "API"}, {"name": "tf.compat.v1.tpu.replicate", "docs": "Builds a graph operator that runs a replicated TPU computation.\n\n Example for the basic usage that `inputs` has static shape:\n\n ```python\n\n def computation(x):\n x = x + 1\n return tf.math.reduce_mean(x)\n\n x = tf.convert_to_tensor([1., 2., 3.])\n y = tf.convert_to_tensor([4., 5., 6.])\n tf.compat.v1.tpu.replicate(computation, inputs=[[x], [y]])\n ```\n\n If the `inputs` has dynamic shapes and you would like to automatically\n bucketize the inputs to avoid XLA recompilation. See the advanced example\n below:\n\n ```python\n\n def computation(x):\n x = x + 1\n return tf.math.reduce_mean(x)\n\n # Assume input tensors in two replicas `x` and `y` both have dynamic shape\n # ([None, 2]).\n tf.compat.v1.tpu.replicate(\n computation,\n inputs=[x, y],\n maximum_shapes=[tf.TensorShape([None, None])],\n padding_spec=tf.compat.v1.tpu.PaddingSpec.POWER_OF_TWO)\n ```\n\n Args:\n computation: A Python function that builds the computation to replicate.\n inputs: A list of lists of input tensors or `None` (equivalent to\n `[[]]`), indexed by `[replica_num][input_num]`. All replicas must\n have the same number of inputs. Each input can be a nested structure\n containing values that are convertible to tensors. Note that passing an\n N-dimension list of compatible values will result in a N-dimension list of\n scalar tensors rather than a single Rank-N tensors. If you need different\n behavior, convert part of inputs to tensors with `tf.convert_to_tensor`.\n infeed_queue: If not `None`, the `InfeedQueue` from which to append a tuple\n of arguments as inputs to computation.\n device_assignment: If not `None`, a `DeviceAssignment` describing the\n mapping between logical cores in the computation with physical cores in\n the TPU topology. Uses a default device assignment if `None`. The\n `DeviceAssignment` may be omitted if each replica of the computation uses\n only one core, and there is either only one replica, or the number of\n replicas is equal to the number of cores in the TPU system.\n name: (Deprecated) Does nothing.\n maximum_shapes: A nested structure of tf.TensorShape representing the shape\n to which the respective component of each input element in each replica\n should be padded. Any unknown dimensions (e.g.\n tf.compat.v1.Dimension(None) in a tf.TensorShape or -1 in a tensor-like\n object) will be padded to the maximum size of that dimension over all\n replicas. The structure of `maximum_shapes` needs to be the same as\n `inputs[0]`.\n padding_spec: An enum specified by `tpu.PaddingSpec`. This describes the\n padding policy when the `inputs` to `tpu.replicate` is dynamic.\n One usage is to enable automatic bucketizing on the inputs by setting the\n value to `tpu.PaddingSpec.POWER_OF_TWO`, which can help to reduce the\n recompilation in the XLA side.\n xla_options: An instance of `tpu.XLAOptions` which indicates the options\n passed to XLA compiler. Use `None` for default options.\n Returns:\n A list of outputs, indexed by `[replica_num]` each output can be a nested\n structure same as what computation() returns with a few exceptions.\n\n Exceptions include:\n 1) None output: a NoOp would be returned which control-depends on\n computation.\n 2) Single value output: A tuple containing the value would be returned.\n 3) Operation-only outputs: a NoOp would be returned which\n control-depends on computation.\n TODO(b/121383831): Investigate into removing these special cases.\n\n Raises:\n ValueError: If all replicas do not have equal numbers of input tensors.\n ValueError: If the number of inputs per replica does not match\n the number of formal parameters to `computation`.\n ValueError: If the static `inputs` dimensions don't match with the values\n given in `maximum_shapes`.\n ValueError: If the structure of inputs per replica does not match\n the structure of `maximum_shapes`.\n ", "desc": "Builds a graph operator that runs a replicated TPU computation.", "type": "API"}, {"name": "tf.compat.v1.tpu.rewrite", "docs": "Rewrites `computation` for execution on a TPU system.\n\n Args:\n computation: A Python function that builds a computation to apply to the\n input. If the function takes n inputs, 'inputs' should be a list of n\n tensors.\n\n `computation` may return a list of operations and tensors. Tensors must\n come before operations in the returned list. The return value of\n `rewrite` is a list of tensors corresponding to the tensors from the\n output of `computation`.\n\n All `Operation`s constructed during `computation` will be executed when\n evaluating any of the returned output tensors, not just the ones returned.\n inputs: A list of input tensors or `None` (equivalent to an empty list).\n Each input can be a nested structure containing values that are\n convertible to tensors. Note that passing an N-dimension list of\n compatible values will result in a N-dimension list of scalar tensors\n rather than a single Rank-N tensors. If you need different behavior,\n convert part of inputs to tensors with `tf.convert_to_tensor`.\n infeed_queue: If not `None`, the `InfeedQueue` from which to append a tuple\n of arguments as inputs to `computation`.\n device_assignment: if not `None`, a `DeviceAssignment` describing the\n mapping between logical cores in the computation with physical cores in\n the TPU topology. May be omitted for a single-core computation, in which\n case the core attached to task 0, TPU device 0 is used.\n name: (Deprecated) Does nothing.\n xla_options: An instance of `tpu.XLAOptions` which indicates the options\n passed to XLA compiler. Use `None` for default options.\n Returns:\n Same data structure as if computation(*inputs) is called directly with some\n exceptions for correctness. Exceptions include:\n 1) None output: a NoOp would be returned which control-depends on\n computation.\n 2) Single value output: A tuple containing the value would be returned.\n 3) Operation-only outputs: a NoOp would be returned which\n control-depends on computation.\n TODO(b/121383831): Investigate into removing these special cases.\n ", "desc": "Rewrites `computation` for execution on a TPU system.", "type": "API"}, {"name": "tf.compat.v1.tpu.shard", "docs": "Shards `computation` for parallel execution.\n\n `inputs` must be a list of Tensors or None (equivalent to an empty list), each\n of which has a corresponding split axis (from `input_shard_axes`). Each input\n is split into `num_shards` pieces along the corresponding axis, and\n computation is applied to each shard in parallel.\n\n Tensors are broadcast to all shards if they are lexically captured by\n `computation`. e.g.,\n\n x = tf.constant(7)\n def computation():\n return x + 3\n ... = shard(computation, ...)\n\n TODO(phawkins): consider adding support for broadcasting Tensors passed\n as inputs.\n\n If `outputs_from_all_shards` is true, the outputs from all shards of\n `computation` are concatenated back together along their `output_shard_axes`.\n Otherwise, each output is taken from an arbitrary shard.\n\n Inputs and outputs of the computation must be at least rank-1 Tensors.\n\n Args:\n computation: A Python function that builds a computation to apply to each\n shard of the input.\n inputs: A list of input tensors or None (equivalent to an empty list). Each\n input tensor has a corresponding shard axes, given by `input_shard_axes`,\n which must have size divisible by `num_shards`.\n num_shards: The number of shards.\n input_shard_axes: A list of dimensions along which to shard `inputs`, or\n `None`. `None` means \"shard all inputs along dimension 0\". If not `None`,\n there must be one dimension per input.\n outputs_from_all_shards: Boolean or list of boolean. For each output, if\n `True`, outputs from all shards are concatenated along the corresponding\n `output_shard_axes` entry. Otherwise, each output is taken\n from an arbitrary shard. If the argument is a boolean, the argument's\n value is used for each output.\n output_shard_axes: A list of dimensions along which to concatenate the\n outputs of `computation`, or `None`. `None` means \"concatenate all outputs\n along dimension 0\". If not `None`, there must be one dimension per output.\n Ignored if `outputs_from_all_shards` is False.\n infeed_queue: If not `None`, the `InfeedQueue` to use to augment the inputs\n of `computation`.\n device_assignment: If not `None`, a `DeviceAssignment` describing the\n mapping between logical cores in the computation with physical cores in\n the TPU topology. Uses a default device assignment if `None`. The\n `DeviceAssignment` may be omitted if each shard of the computation uses\n only one core, and there is either only one shard, or the number of shards\n is equal to the number of cores in the TPU system.\n name: (Deprecated) Does nothing.\n xla_options: An instance of `tpu.XLAOptions` which indicates the options\n passed to XLA compiler. Use `None` for default options.\n Returns:\n A list of output tensors.\n Raises:\n ValueError: If num_shards <= 0\n ValueError: If len(input_shard_axes) != len(inputs)\n ValueError: If len(output_shard_axes) != len(outputs from `computation`)\n ", "desc": "Shards `computation` for parallel execution.", "type": "API"}, {"name": "tf.compat.v1.tpu.shutdown_system", "docs": "Shuts down a running a distributed TPU system.\n\n Args:\n job: The job (the XXX in TensorFlow device specification /job:XXX) that\n contains the TPU devices that will be shutdown. If job=None it is\n assumed there is only one job in the TensorFlow flock, and an error will\n be returned if this assumption does not hold.\n ", "desc": "Shuts down a running a distributed TPU system.", "type": "API"}, {"name": "tf.compat.v1.tpu.XLAOptions", "docs": "XLA compilation options.\n\n Attributes:\n use_spmd_for_xla_partitioning: Boolean. Whether to use XLA's SPMD\n partitioner instead of MPMD partitioner when compiler partitioning is\n requested.\n enable_xla_dynamic_padder: Boolean. Whether to enable XLA dynamic padder\n infrastructure to handle dynamic shapes inputs inside XLA. True by\n default. Disabling this may cause correctness issues with dynamic shapes\n inputs, as XLA will just assume the inputs are with padded shapes. However\n users can optionally set it to False to improve device time if masking is\n already handled in the user side.\n ", "desc": "XLA compilation options.", "type": "API"}, {"name": "tf.compat.v1.trace", "docs": "Compute the trace of a tensor `x`.\n\n `trace(x)` returns the sum along the main diagonal of each inner-most matrix\n in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output\n is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where\n\n `output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`\n\n For example:\n\n ```python\n x = tf.constant([[1, 2], [3, 4]])\n tf.linalg.trace(x) # 5\n\n x = tf.constant([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n tf.linalg.trace(x) # 15\n\n x = tf.constant([[[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]],\n [[-1, -2, -3],\n [-4, -5, -6],\n [-7, -8, -9]]])\n tf.linalg.trace(x) # [15, -15]\n ```\n\n Args:\n x: tensor.\n name: A name for the operation (optional).\n\n Returns:\n The trace of input tensor.\n ", "desc": "Compute the trace of a tensor `x`.", "type": "API"}, {"name": "tf.compat.v1.train", "docs": "Support for training models.\n\nSee the [Training](https://tensorflow.org/api_guides/python/train) guide.\n\n", "desc": "Support for training models.", "type": "API"}, {"name": "tf.compat.v1.train.AdadeltaOptimizer", "docs": "Optimizer that implements the Adadelta algorithm.\n\n References:\n ADADELTA - An Adaptive Learning Rate Method:\n [Zeiler, 2012](http://arxiv.org/abs/1212.5701)\n ([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))\n\n @compatibility(TF2)\n tf.compat.v1.train.AdadeltaOptimizer is compatible with eager mode and\n `tf.function`.\n When eager execution is enabled, `learning_rate`, `rho`,\n and `epsilon` can each be a callable that\n takes no arguments and returns the actual value to use. This can be useful\n for changing these values across different invocations of optimizer\n functions.\n\n To switch to native TF2 style, use [`tf.keras.optimizers.Adadelta`]\n (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adadelta)\n instead. Please notice that due to the implementation differences,\n `tf.keras.optimizers.Adadelta` and\n `tf.compat.v1.train.AdadeltaOptimizer` may have slight differences in\n floating point numerics even though the formula used for the variable\n updates still matches.\n\n #### Structural mapping to native TF2\n\n Before:\n\n ```python\n optimizer = tf.compat.v1.train.AdadeltaOptimizer(\n learning_rate=learning_rate,\n rho=rho,\n epsilon=epsilon)\n ```\n\n After:\n\n ```python\n optimizer = tf.keras.optimizers.Adadelta(\n learning_rate=learning_rate,\n rho=rho,\n epsilon=epsilon)\n ```\n\n #### How to map arguments\n | TF1 Arg Name | TF2 Arg Name | Note |\n | ------------------ | ------------- | ------------------------------- |\n | `learning_rate` | `learning_rate`| Be careful of setting |\n : : : learning_rate tensor value computed from the global step. :\n : : : In TF1 this was usually meant to imply a dynamic learning rate and :\n : : : would recompute in each step. In TF2 (eager + function) it will :\n : : : treat it as a scalar value that only gets computed once instead of :\n : : : a symbolic placeholder to be computed each time. :\n | `rho` | `rho` | - |\n | `epsilon` | `epsilon` | Default value is 1e-08 in TF1, |\n : : : but 1e-07 in TF2. :\n | `use_locking` | - | Not applicable in TF2. |\n\n #### Before & after usage example\n Before:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.compat.v1.train.AdadeltaOptimizer(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n After:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.keras.optimizers.Adadelta(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n @end_compatibility\n ", "desc": "Optimizer that implements the Adadelta algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.AdagradDAOptimizer", "docs": "Adagrad Dual Averaging algorithm for sparse linear models.\n\n This optimizer takes care of regularization of unseen features in a mini batch\n by updating them when they are seen with a closed form update rule that is\n equivalent to having updated them on every mini-batch.\n\n AdagradDA is typically used when there is a need for large sparsity in the\n trained model. This optimizer only guarantees sparsity for linear models. Be\n careful when using AdagradDA for deep networks as it will require careful\n initialization of the gradient accumulators for it to train.\n\n References:\n Adaptive Subgradient Methods for Online Learning and Stochastic Optimization\n :[Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html)\n ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf))\n ", "desc": "Adagrad Dual Averaging algorithm for sparse linear models.", "type": "API"}, {"name": "tf.compat.v1.train.AdagradOptimizer", "docs": "Optimizer that implements the Adagrad algorithm.\n\n References:\n Adaptive Subgradient Methods for Online Learning and Stochastic Optimization\n :[Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html)\n ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf))\n\n @compatibility(TF2)\n tf.compat.v1.train.AdagradOptimizer is compatible with eager mode and\n `tf.function`.\n When eager execution is enabled, `learning_rate`,\n `initial_accumulator_value`, and `epsilon` can each be a callable that\n takes no arguments and returns the actual value to use. This can be useful\n for changing these values across different invocations of optimizer\n functions.\n\n To switch to native TF2 style, use [`tf.keras.optimizers.Adagrad`]\n (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adagrad)\n instead. Please notice that due to the implementation differences,\n `tf.keras.optimizers.Adagrad` and\n `tf.compat.v1.train.AdagradOptimizer` may have slight differences in\n floating point numerics even though the formula used for the variable\n updates still matches.\n\n #### Structural mapping to native TF2\n\n Before:\n\n ```python\n optimizer = tf.compat.v1.train.AdagradOptimizer(\n learning_rate=learning_rate,\n initial_accumulator_value=initial_accumulator_value)\n ```\n\n After:\n\n ```python\n optimizer = tf.keras.optimizers.Adagrad(\n learning_rate=learning_rate,\n initial_accumulator_value=initial_accumulator_value,\n epsilon=1e-07)\n ```\n\n #### How to map arguments\n | TF1 Arg Name | TF2 Arg Name | Note |\n | ------------------ | ------------- | ------------------------------- |\n | `learning_rate` | `learning_rate` | Be careful of setting |\n : : : learning_rate tensor value computed from the global step. :\n : : : In TF1 this was usually meant to imply a dynamic learning rate and :\n : : : would recompute in each step. In TF2 (eager + function) it will :\n : : : treat it as a scalar value that only gets computed once instead of :\n : : : a symbolic placeholder to be computed each time. :\n | `initial_accumulator_value` | `initial_accumulator_value` | The |\n : : : argument can be value of zero in TF2, which is not accepted in TF1.|\n | - | `epsilon` | `epsilon` is become configurable in TF2. The |\n : : : defualt value is changed from 1e-8 to 1e-7 :\n | `use_locking` | - | Not applicable in TF2. |\n\n #### Before & after usage example\n Before:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n After:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n @end_compatibility\n ", "desc": "Optimizer that implements the Adagrad algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.AdamOptimizer", "docs": "Optimizer that implements the Adam algorithm.\n\n References:\n Adam - A Method for Stochastic Optimization:\n [Kingma et al., 2015](https://arxiv.org/abs/1412.6980)\n ([pdf](https://arxiv.org/pdf/1412.6980.pdf))\n\n @compatibility(TF2)\n tf.compat.v1.train.AdamOptimizer is compatible with eager mode and\n `tf.function`.\n When eager execution is enabled, `learning_rate`, `beta1`, `beta2`, and\n `epsilon` can each be a callable that takes no arguments and returns the\n actual value to use. This can be useful for changing these values across\n different invocations of optimizer functions.\n\n To switch to native TF2 style, use [`tf.keras.optimizers.Adam`]\n (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)\n instead. Please notice that due to the implementation differences,\n `tf.keras.optimizers.Adam` and\n `tf.compat.v1.train.AdamOptimizer` may have slight differences in\n floating point numerics even though the formula used for the variable\n updates still matches.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)\n ```\n\n After:\n\n ```python\n optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\n ```\n\n #### How to Map Arguments\n |TF1 Arg Name |TF2 Arg Name |Note |\n |----------------------|-------------|----------------------|\n |learning_rate |learning_rate|Be careful of setting learning_rate as a\n : : : tensor value computed from the global\n : : : step. In TF1 this was usually meant to\n : : : imply a dynamic learning rate and would\n : : : recompute in each step. In TF2 (eager +\n : : : function) it will treat it as a scalar\n : : : value that only gets computed once\n : : : instead of a symbolic placeholder to be\n : : : computed each time. :\n |beta1 |beta_1 | |\n |beta2 |beta_2 | |\n |epsilon |epsilon | Default value is 1e-08 in TF1, but\n : : : 1e-07 in TF2. :\n |use_locking |N/A |Not applicable in TF2. |\n\n #### Before & After Usage Example\n Before:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n After:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n @end_compatibility\n ", "desc": "Optimizer that implements the Adam algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.add_queue_runner", "docs": "Adds a `QueueRunner` to a collection in the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\n\nWhen building a complex model that uses many queues it is often difficult to\ngather all the queue runners that need to be run. This convenience function\nallows you to add a queue runner to a well known collection in the graph.\n\nThe companion method `start_queue_runners()` can be used to start threads for\nall the collected queue runners.\n\n@compatibility(TF2)\nQueueRunners are not compatible with eager execution. Instead, please\nuse [tf.data](https://www.tensorflow.org/guide/data) to get data into your\nmodel.\n@end_compatibility\n\nArgs:\n qr: A `QueueRunner`.\n collection: A `GraphKey` specifying the graph collection to add\n the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.", "desc": "Adds a `QueueRunner` to a collection in the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.assert_global_step", "docs": "Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.\n\n Args:\n global_step_tensor: `Tensor` to test.\n ", "desc": "Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.", "type": "API"}, {"name": "tf.compat.v1.train.basic_train_loop", "docs": "Basic loop to train a model.\n\n Calls `train_step_fn` in a loop to train a model. The function is called as:\n\n ```python\n train_step_fn(session, *args, **kwargs)\n ```\n\n It is passed a `tf.compat.v1.Session` in addition to `args` and `kwargs`. The\n function\n typically runs one training step in the session.\n\n Args:\n supervisor: `tf.compat.v1.train.Supervisor` to run the training services.\n train_step_fn: Callable to execute one training step. Called repeatedly as\n `train_step_fn(session, *args **kwargs)`.\n args: Optional positional arguments passed to `train_step_fn`.\n kwargs: Optional keyword arguments passed to `train_step_fn`.\n master: Master to use to create the training session. Defaults to `\"\"`\n which causes the session to be created in the local process.\n ", "desc": "Basic loop to train a model.", "type": "API"}, {"name": "tf.compat.v1.train.batch", "docs": "Creates batches of tensors in `tensors`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n\nThe argument `tensors` can be a list or a dictionary of tensors.\nThe value returned by the function will be of the same type\nas `tensors`.\n\nThis function is implemented using a queue. A `QueueRunner` for the\nqueue is added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\nIf `enqueue_many` is `False`, `tensors` is assumed to represent a single\nexample. An input tensor with shape `[x, y, z]` will be output as a tensor\nwith shape `[batch_size, x, y, z]`.\n\nIf `enqueue_many` is `True`, `tensors` is assumed to represent a batch of\nexamples, where the first dimension is indexed by example, and all members of\n`tensors` should have the same size in the first dimension. If an input\ntensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,\ny, z]`. The `capacity` argument controls the how long the prefetching is\nallowed to grow the queues.\n\nThe returned operation is a dequeue operation and will throw\n`tf.errors.OutOfRangeError` if the input queue is exhausted. If this\noperation is feeding another input queue, its queue runner will catch\nthis exception, however, if this operation is used in your main thread\nyou are responsible for catching this yourself.\n\n*N.B.:* If `dynamic_pad` is `False`, you must ensure that either\n(i) the `shapes` argument is passed, or (ii) all of the tensors in\n`tensors` must have fully-defined shapes. `ValueError` will be\nraised if neither of these conditions holds.\n\nIf `dynamic_pad` is `True`, it is sufficient that the *rank* of the\ntensors is known, but individual dimensions may have shape `None`.\nIn this case, for each enqueue the dimensions with value `None`\nmay have a variable length; upon dequeue, the output tensors will be padded\non the right to the maximum shape of the tensors in the current minibatch.\nFor numbers, this padding takes value 0. For strings, this padding is\nthe empty string. See `PaddingFIFOQueue` for more info.\n\nIf `allow_smaller_final_batch` is `True`, a smaller batch value than\n`batch_size` is returned when the queue is closed and there are not enough\nelements to fill the batch, otherwise the pending elements are discarded.\nIn addition, all output tensors' static shapes, as accessed via the\n`shape` property will have a first `Dimension` value of `None`, and\noperations that depend on fixed batch_size would fail.\n\nArgs:\n tensors: The list or dictionary of tensors to enqueue.\n batch_size: The new batch size pulled from the queue.\n num_threads: The number of threads enqueuing `tensors`. The batching will\n be nondeterministic if `num_threads > 1`.\n capacity: An integer. The maximum number of elements in the queue.\n enqueue_many: Whether each tensor in `tensors` is a single example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensors`.\n dynamic_pad: Boolean. Allow variable dimensions in input shapes.\n The given dimensions are padded upon dequeue so that tensors within a\n batch have the same shapes.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same types as `tensors` (except if\n the input is a list of one element, then it returns a tensor, not a list).\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Creates batches of tensors in `tensors`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.batch_join", "docs": "Runs a list of tensors to fill a queue to create batches of examples. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n\nThe `tensors_list` argument is a list of tuples of tensors, or a list of\ndictionaries of tensors. Each element in the list is treated similarly\nto the `tensors` argument of `tf.compat.v1.train.batch()`.\n\nWARNING: This function is nondeterministic, since it starts a separate thread\nfor each tensor.\n\nEnqueues a different list of tensors in different threads.\nImplemented using a queue -- a `QueueRunner` for the queue\nis added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\n`len(tensors_list)` threads will be started,\nwith thread `i` enqueuing the tensors from\n`tensors_list[i]`. `tensors_list[i1][j]` must match\n`tensors_list[i2][j]` in type and shape, except in the first\ndimension if `enqueue_many` is true.\n\nIf `enqueue_many` is `False`, each `tensors_list[i]` is assumed\nto represent a single example. An input tensor `x` will be output as a\ntensor with shape `[batch_size] + x.shape`.\n\nIf `enqueue_many` is `True`, `tensors_list[i]` is assumed to\nrepresent a batch of examples, where the first dimension is indexed\nby example, and all members of `tensors_list[i]` should have the\nsame size in the first dimension. The slices of any input tensor\n`x` are treated as examples, and the output tensors will have shape\n`[batch_size] + x.shape[1:]`.\n\nThe `capacity` argument controls the how long the prefetching is allowed to\ngrow the queues.\n\nThe returned operation is a dequeue operation and will throw\n`tf.errors.OutOfRangeError` if the input queue is exhausted. If this\noperation is feeding another input queue, its queue runner will catch\nthis exception, however, if this operation is used in your main thread\nyou are responsible for catching this yourself.\n\n*N.B.:* If `dynamic_pad` is `False`, you must ensure that either\n(i) the `shapes` argument is passed, or (ii) all of the tensors in\n`tensors_list` must have fully-defined shapes. `ValueError` will be\nraised if neither of these conditions holds.\n\nIf `dynamic_pad` is `True`, it is sufficient that the *rank* of the\ntensors is known, but individual dimensions may have value `None`.\nIn this case, for each enqueue the dimensions with value `None`\nmay have a variable length; upon dequeue, the output tensors will be padded\non the right to the maximum shape of the tensors in the current minibatch.\nFor numbers, this padding takes value 0. For strings, this padding is\nthe empty string. See `PaddingFIFOQueue` for more info.\n\nIf `allow_smaller_final_batch` is `True`, a smaller batch value than\n`batch_size` is returned when the queue is closed and there are not enough\nelements to fill the batch, otherwise the pending elements are discarded.\nIn addition, all output tensors' static shapes, as accessed via the\n`shape` property will have a first `Dimension` value of `None`, and\noperations that depend on fixed batch_size would fail.\n\nArgs:\n tensors_list: A list of tuples or dictionaries of tensors to enqueue.\n batch_size: An integer. The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n enqueue_many: Whether each tensor in `tensor_list_list` is a single\n example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensor_list_list[i]`.\n dynamic_pad: Boolean. Allow variable dimensions in input shapes.\n The given dimensions are padded upon dequeue so that tensors within a\n batch have the same shapes.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional) If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same number and types as\n `tensors_list[i]`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensor_list_list`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Runs a list of tensors to fill a queue to create batches of examples. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.BytesList", "docs": "Used in `tf.train.Example` protos. Holds a list of byte-strings.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[bytes]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {bytes_list {value: ['abc', '12345' ]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].bytes_list.value\n[\"abc\", \"12345\"]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_feature': }\n\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of byte-strings.", "type": "API"}, {"name": "tf.compat.v1.train.Checkpoint", "docs": "Groups trackable objects, saving and restoring them.\n\n `Checkpoint`'s constructor accepts keyword arguments whose values are types\n that contain trackable state, such as `tf.compat.v1.train.Optimizer`\n implementations, `tf.Variable`, `tf.keras.Layer` implementations, or\n `tf.keras.Model` implementations. It saves these values with a checkpoint, and\n maintains a `save_counter` for numbering checkpoints.\n\n Example usage when graph building:\n\n ```python\n import tensorflow as tf\n import os\n\n checkpoint_directory = \"/tmp/training_checkpoints\"\n checkpoint_prefix = os.path.join(checkpoint_directory, \"ckpt\")\n\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))\n train_op = optimizer.minimize( ... )\n status.assert_consumed() # Optional sanity checks.\n with tf.compat.v1.Session() as session:\n # Use the Session to restore variables, or initialize them if\n # tf.train.latest_checkpoint returned None.\n status.initialize_or_restore(session)\n for _ in range(num_training_steps):\n session.run(train_op)\n checkpoint.save(file_prefix=checkpoint_prefix)\n ```\n\n Example usage with eager execution enabled:\n\n ```python\n import tensorflow as tf\n import os\n\n tf.compat.v1.enable_eager_execution()\n\n checkpoint_directory = \"/tmp/training_checkpoints\"\n checkpoint_prefix = os.path.join(checkpoint_directory, \"ckpt\")\n\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))\n for _ in range(num_training_steps):\n optimizer.minimize( ... ) # Variables will be restored on creation.\n status.assert_consumed() # Optional sanity checks.\n checkpoint.save(file_prefix=checkpoint_prefix)\n ```\n\n `Checkpoint.save` and `Checkpoint.restore` write and read object-based\n checkpoints, in contrast to `tf.compat.v1.train.Saver` which writes and reads\n `variable.name` based checkpoints. Object-based checkpointing saves a graph of\n dependencies between Python objects (`Layer`s, `Optimizer`s, `Variable`s,\n etc.) with named edges, and this graph is used to match variables when\n restoring a checkpoint. It can be more robust to changes in the Python\n program, and helps to support restore-on-create for variables when executing\n eagerly. Prefer `tf.train.Checkpoint` over `tf.compat.v1.train.Saver` for new\n code.\n\n `Checkpoint` objects have dependencies on the objects passed as keyword\n arguments to their constructors, and each dependency is given a name that is\n identical to the name of the keyword argument for which it was created.\n TensorFlow classes like `Layer`s and `Optimizer`s will automatically add\n dependencies on their variables (e.g. \"kernel\" and \"bias\" for\n `tf.keras.layers.Dense`). Inheriting from `tf.keras.Model` makes managing\n dependencies easy in user-defined classes, since `Model` hooks into attribute\n assignment. For example:\n\n ```python\n class Regress(tf.keras.Model):\n\n def __init__(self):\n super(Regress, self).__init__()\n self.input_transform = tf.keras.layers.Dense(10)\n # ...\n\n def call(self, inputs):\n x = self.input_transform(inputs)\n # ...\n ```\n\n This `Model` has a dependency named \"input_transform\" on its `Dense` layer,\n which in turn depends on its variables. As a result, saving an instance of\n `Regress` using `tf.train.Checkpoint` will also save all the variables created\n by the `Dense` layer.\n\n When variables are assigned to multiple workers, each worker writes its own\n section of the checkpoint. These sections are then merged/re-indexed to behave\n as a single checkpoint. This avoids copying all variables to one worker, but\n does require that all workers see a common filesystem.\n\n While `tf.keras.Model.save_weights` and `tf.train.Checkpoint.save` save in the\n same format, note that the root of the resulting checkpoint is the object the\n save method is attached to. This means saving a `tf.keras.Model` using\n `save_weights` and loading into a `tf.train.Checkpoint` with a `Model`\n attached (or vice versa) will not match the `Model`'s variables. See the\n [guide to training\n checkpoints](https://www.tensorflow.org/guide/checkpoint) for\n details. Prefer `tf.train.Checkpoint` over `tf.keras.Model.save_weights` for\n training checkpoints.\n\n Attributes:\n save_counter: Incremented when `save()` is called. Used to number\n checkpoints.\n ", "desc": "Groups trackable objects, saving and restoring them.", "type": "API"}, {"name": "tf.compat.v1.train.checkpoint_exists", "docs": "Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse standard file APIs to check for files with this prefix.\n\nThis is the recommended way to check if a checkpoint exists, since it takes\ninto account the naming difference between V1 and V2 formats.\n\nArgs:\n checkpoint_prefix: the prefix of a V1 or V2 checkpoint, with V2 taking\n priority. Typically the result of `Saver.save()` or that of\n `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or\n V1/V2.\n\nReturns:\n A bool, true if a checkpoint referred to by `checkpoint_prefix` exists.", "desc": "Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.CheckpointManager", "docs": "Manages multiple checkpoints by keeping some and deleting unneeded ones.\n\n Example usage:\n\n ```python\n import tensorflow as tf\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n manager = tf.train.CheckpointManager(\n checkpoint, directory=\"/tmp/model\", max_to_keep=5)\n status = checkpoint.restore(manager.latest_checkpoint)\n while True:\n # train\n manager.save()\n ```\n\n `CheckpointManager` preserves its own state across instantiations (see the\n `__init__` documentation for details). Only one should be active in a\n particular directory at a time.\n ", "desc": "Manages multiple checkpoints by keeping some and deleting unneeded ones.", "type": "API"}, {"name": "tf.compat.v1.train.CheckpointOptions", "docs": "Options for constructing a Checkpoint.\n\n Used as the `options` argument to either `tf.train.Checkpoint.save()` or\n `tf.train.Checkpoint.restore()` methods to adjust how variables are\n saved/restored.\n\n Example: Run IO ops on \"localhost\" while saving a checkpoint:\n\n ```\n step = tf.Variable(0, name=\"step\")\n checkpoint = tf.train.Checkpoint(step=step)\n options = tf.train.CheckpointOptions(experimental_io_device=\"/job:localhost\")\n checkpoint.save(\"/tmp/ckpt\", options=options)\n ```\n ", "desc": "Options for constructing a Checkpoint.", "type": "API"}, {"name": "tf.compat.v1.train.checkpoints_iterator", "docs": "Continuously yield new checkpoint files as they appear.\n\n The iterator only checks for new checkpoints when control flow has been\n reverted to it. This means it can miss checkpoints if your code takes longer\n to run between iterations than `min_interval_secs` or the interval at which\n new checkpoints are written.\n\n The `timeout` argument is the maximum number of seconds to block waiting for\n a new checkpoint. It is used in combination with the `timeout_fn` as\n follows:\n\n * If the timeout expires and no `timeout_fn` was specified, the iterator\n stops yielding.\n * If a `timeout_fn` was specified, that function is called and if it returns\n a true boolean value the iterator stops yielding.\n * If the function returns a false boolean value then the iterator resumes the\n wait for new checkpoints. At this point the timeout logic applies again.\n\n This behavior gives control to callers on what to do if checkpoints do not\n come fast enough or stop being generated. For example, if callers have a way\n to detect that the training has stopped and know that no new checkpoints\n will be generated, they can provide a `timeout_fn` that returns `True` when\n the training has stopped. If they know that the training is still going on\n they return `False` instead.\n\n Args:\n checkpoint_dir: The directory in which checkpoints are saved.\n min_interval_secs: The minimum number of seconds between yielding\n checkpoints.\n timeout: The maximum number of seconds to wait between checkpoints. If left\n as `None`, then the process will wait indefinitely.\n timeout_fn: Optional function to call after a timeout. If the function\n returns True, then it means that no new checkpoints will be generated and\n the iterator will exit. The function is called with no arguments.\n\n Yields:\n String paths to latest checkpoint files as they arrive.\n ", "desc": "Continuously yield new checkpoint files as they appear.", "type": "API"}, {"name": "tf.compat.v1.train.CheckpointSaverHook", "docs": "Saves checkpoints every N steps or seconds.", "desc": "Saves checkpoints every N steps or seconds.", "type": "API"}, {"name": "tf.compat.v1.train.CheckpointSaverListener", "docs": "Interface for listeners that take action before or after checkpoint save.\n\n `CheckpointSaverListener` triggers only in steps when `CheckpointSaverHook` is\n triggered, and provides callbacks at the following points:\n - before using the session\n - before each call to `Saver.save()`\n - after each call to `Saver.save()`\n - at the end of session\n\n To use a listener, implement a class and pass the listener to a\n `CheckpointSaverHook`, as in this example:\n\n ```python\n class ExampleCheckpointSaverListener(CheckpointSaverListener):\n def begin(self):\n # You can add ops to the graph here.\n print('Starting the session.')\n self.your_tensor = ...\n\n def before_save(self, session, global_step_value):\n print('About to write a checkpoint')\n\n def after_save(self, session, global_step_value):\n print('Done writing checkpoint.')\n if decided_to_stop_training():\n return True\n\n def end(self, session, global_step_value):\n print('Done with the session.')\n\n ...\n listener = ExampleCheckpointSaverListener()\n saver_hook = tf.estimator.CheckpointSaverHook(\n checkpoint_dir, listeners=[listener])\n with\n tf.compat.v1.train.MonitoredTrainingSession(chief_only_hooks=[saver_hook]):\n ...\n ```\n\n A `CheckpointSaverListener` may simply take some action after every\n checkpoint save. It is also possible for the listener to use its own schedule\n to act less frequently, e.g. based on global_step_value. In this case,\n implementors should implement the `end()` method to handle actions related to\n the last checkpoint save. But the listener should not act twice if\n `after_save()` already handled this last checkpoint save.\n\n A `CheckpointSaverListener` can request training to be stopped, by returning\n True in `after_save`. Please note that, in replicated distributed training\n setting, only `chief` should use this behavior. Otherwise each worker will do\n their own evaluation, which may be wasteful of resources.\n ", "desc": "Interface for listeners that take action before or after checkpoint save.", "type": "API"}, {"name": "tf.compat.v1.train.ChiefSessionCreator", "docs": "Creates a tf.compat.v1.Session for a chief.", "desc": "Creates a tf.compat.v1.Session for a chief.", "type": "API"}, {"name": "tf.compat.v1.train.ClusterDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.ClusterSpec", "docs": "Represents a cluster as a set of \"tasks\", organized into \"jobs\".\n\n A `tf.train.ClusterSpec` represents the set of processes that\n participate in a distributed TensorFlow computation. Every\n `tf.distribute.Server` is constructed in a particular cluster.\n\n To create a cluster with two jobs and five tasks, you specify the\n mapping from job names to lists of network addresses (typically\n hostname-port pairs).\n\n ```python\n cluster = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\",\n \"worker2.example.com:2222\"],\n \"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n ```\n\n Each job may also be specified as a sparse mapping from task indices\n to network addresses. This enables a server to be configured without\n needing to know the identity of (for example) all other worker\n tasks:\n\n ```python\n cluster = tf.train.ClusterSpec({\"worker\": {1: \"worker1.example.com:2222\"},\n \"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n ```\n ", "desc": "Represents a cluster as a set of \"tasks\", organized into \"jobs\".", "type": "API"}, {"name": "tf.compat.v1.train.Coordinator", "docs": "A coordinator for threads.\n\n This class implements a simple mechanism to coordinate the termination of a\n set of threads.\n\n #### Usage:\n\n ```python\n # Create a coordinator.\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate.\n coord.join(threads)\n ```\n\n Any of the threads can call `coord.request_stop()` to ask for all the threads\n to stop. To cooperate with the requests, each thread must check for\n `coord.should_stop()` on a regular basis. `coord.should_stop()` returns\n `True` as soon as `coord.request_stop()` has been called.\n\n A typical thread running with a coordinator will do something like:\n\n ```python\n while not coord.should_stop():\n ...do some work...\n ```\n\n #### Exception handling:\n\n A thread can report an exception to the coordinator as part of the\n `request_stop()` call. The exception will be re-raised from the\n `coord.join()` call.\n\n Thread code:\n\n ```python\n try:\n while not coord.should_stop():\n ...do some work...\n except Exception as e:\n coord.request_stop(e)\n ```\n\n Main code:\n\n ```python\n try:\n ...\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate.\n coord.join(threads)\n except Exception as e:\n ...exception that was passed to coord.request_stop()\n ```\n\n To simplify the thread implementation, the Coordinator provides a\n context handler `stop_on_exception()` that automatically requests a stop if\n an exception is raised. Using the context handler the thread code above\n can be written as:\n\n ```python\n with coord.stop_on_exception():\n while not coord.should_stop():\n ...do some work...\n ```\n\n #### Grace period for stopping:\n\n After a thread has called `coord.request_stop()` the other threads have a\n fixed time to stop, this is called the 'stop grace period' and defaults to 2\n minutes. If any of the threads is still alive after the grace period expires\n `coord.join()` raises a RuntimeError reporting the laggards.\n\n ```python\n try:\n ...\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate, give them 10s grace period\n coord.join(threads, stop_grace_period_secs=10)\n except RuntimeError:\n ...one of the threads took more than 10s to stop after request_stop()\n ...was called.\n except Exception:\n ...exception that was passed to coord.request_stop()\n ```\n ", "desc": "A coordinator for threads.", "type": "API"}, {"name": "tf.compat.v1.train.cosine_decay", "docs": "Applies cosine decay to the learning rate.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies a cosine decay function\n to a provided initial learning rate. It requires a `global_step` value to\n compute the decayed learning rate. You can just pass a TensorFlow variable\n that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n ```python\n global_step = min(global_step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * global_step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n decayed_learning_rate = learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed = cosine_decay(learning_rate, global_step, decay_steps)\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` Tensor or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation.\n decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Number\n of steps to decay over.\n alpha: A scalar `float32` or `float64` Tensor or a Python number. Minimum\n learning rate value as a fraction of learning_rate.\n name: String. Optional name of the operation. Defaults to 'CosineDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n Raises:\n ValueError: if `global_step` is not supplied.\n\n References:\n Stochastic Gradient Descent with Warm Restarts:\n [Loshchilov et al., 2017]\n (https://openreview.net/forum?id=Skq89Scxx¬eId=Skq89Scxx)\n ([pdf](https://openreview.net/pdf?id=Skq89Scxx))\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies cosine decay to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.cosine_decay_restarts", "docs": "Applies cosine decay with restarts to the learning rate.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies a cosine decay function with\n restarts to a provided initial learning rate. It requires a `global_step`\n value to compute the decayed learning rate. You can just pass a TensorFlow\n variable that you increment at each training step.\n\n The function returns the decayed learning rate while taking into account\n possible warm restarts. The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more steps\n and with `m_mul` times smaller initial learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed = cosine_decay_restarts(learning_rate, global_step,\n first_decay_steps)\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` Tensor or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation.\n first_decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number.\n Number of steps to decay over.\n t_mul: A scalar `float32` or `float64` `Tensor` or a Python number. Used to\n derive the number of iterations in the i-th period\n m_mul: A scalar `float32` or `float64` `Tensor` or a Python number.\n Used to derive the initial learning rate of the i-th period:\n alpha: A scalar `float32` or `float64` Tensor or a Python number. Minimum\n learning rate value as a fraction of the learning_rate.\n name: String. Optional name of the operation. Defaults to 'SGDRDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n Raises:\n ValueError: if `global_step` is not supplied.\n\n References:\n Stochastic Gradient Descent with Warm Restarts:\n [Loshchilov et al., 2017]\n (https://openreview.net/forum?id=Skq89Scxx¬eId=Skq89Scxx)\n ([pdf](https://openreview.net/pdf?id=Skq89Scxx))\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies cosine decay with restarts to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.create_global_step", "docs": "Create global step tensor in graph.\n\n Args:\n graph: The graph in which to create the global step tensor. If missing, use\n default graph.\n\n Returns:\n Global step tensor.\n\n Raises:\n ValueError: if global step tensor is already defined.\n\n @compatibility(TF2)\n With the deprecation of global graphs, TF no longer tracks variables in\n collections. In other words, there are no global variables in TF2. Thus, the\n global step functions have been removed (`get_or_create_global_step`,\n `create_global_step`, `get_global_step`) . You have two options for migrating:\n\n 1. Create a Keras optimizer, which generates an `iterations` variable. This\n variable is automatically incremented when calling `apply_gradients`.\n 2. Manually create and increment a `tf.Variable`.\n\n Below is an example of migrating away from using a global step to using a\n Keras optimizer:\n\n Define a dummy model and loss:\n\n >>> def compute_loss(x):\n ... v = tf.Variable(3.0)\n ... y = x * v\n ... loss = x * 5 - x * v\n ... return loss, [v]\n\n Before migrating:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... x = tf.compat.v1.placeholder(tf.float32, [])\n ... loss, var_list = compute_loss(x)\n ... global_step = tf.compat.v1.train.create_global_step()\n ... global_init = tf.compat.v1.global_variables_initializer()\n ... optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)\n ... train_op = optimizer.minimize(loss, global_step, var_list)\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> sess.run(global_init)\n >>> print(\"before training:\", sess.run(global_step))\n before training: 0\n >>> sess.run(train_op, feed_dict={x: 3})\n >>> print(\"after training:\", sess.run(global_step))\n after training: 1\n\n Migrating to a Keras optimizer:\n\n >>> optimizer = tf.keras.optimizers.SGD(.01)\n >>> print(\"before training:\", optimizer.iterations.numpy())\n before training: 0\n >>> with tf.GradientTape() as tape:\n ... loss, var_list = compute_loss(3)\n ... grads = tape.gradient(loss, var_list)\n ... optimizer.apply_gradients(zip(grads, var_list))\n >>> print(\"after training:\", optimizer.iterations.numpy())\n after training: 1\n\n @end_compatibility\n ", "desc": "Create global step tensor in graph.", "type": "API"}, {"name": "tf.compat.v1.train.do_quantize_training_on_graphdef", "docs": "A general quantization scheme is being developed in `tf.contrib.quantize`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nGraphDef quantized training rewriter is deprecated in the long term.\n\nConsider using that instead, though since it is in the tf.contrib namespace,\nit is not subject to backward compatibility guarantees.\n\nArgs:\n input_graph: A `GraphDef`.\n num_bits: The number of bits for quantize training.\n\nReturns:\n The graph with quantize training done.", "desc": "A general quantization scheme is being developed in `tf.contrib.quantize`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.Example", "docs": "An `Example` is a standard proto storing data for training and inference.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nIt contains a key-value store `Example.features` where each key (string) maps\nto a `tf.train.Feature` message which contains a fixed-type list. This flexible\nand compact format allows the storage of large amounts of typed data, but\nrequires that the data shape and use be determined by the configuration files\nand parsers that are used to read and write this format (refer to\n`tf.io.parse_example` for details).\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {int64_list {value: [1, 2, 3, 4]}}}\n... }''',\n... tf.train.Example())\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)})\n{'my_feature': }\n\nWhile the list of keys, and the contents of each key _could_ be different for\nevery `Example`, TensorFlow expects a fixed list of keys, each with a fixed\n`tf.dtype`. A conformant `Example` dataset obeys the following conventions:\n\n - If a Feature `K` exists in one example with data type `T`, it must be of\n type `T` in all other examples when present. It may be omitted.\n - The number of instances of Feature `K` list data may vary across examples,\n depending on the requirements of the model.\n - If a Feature `K` doesn't exist in an example, a `K`-specific default will be\n used, if configured.\n - If a Feature `K` exists in an example but contains no items, the intent\n is considered to be an empty tensor and no default will be used.\n\n", "desc": "An `Example` is a standard proto storing data for training and inference.", "type": "API"}, {"name": "tf.compat.v1.train.experimental", "docs": "Public API for tf.train.experimental namespace.\n", "desc": "Public API for tf.train.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.disable_mixed_precision_graph_rewrite", "docs": "Disables the mixed precision graph rewrite.\n\n After this is called, the mixed precision graph rewrite will no longer run for\n new Sessions, and so float32 operations will no longer be converted to float16\n in such Sessions. However, any existing Sessions will continue to have the\n graph rewrite enabled if they were created after\n `enable_mixed_precision_graph_rewrite` was called but before\n `disable_mixed_precision_graph_rewrite` was called.\n\n This does not undo the effects of loss scaling. Any optimizers wrapped with a\n LossScaleOptimizer will continue to do loss scaling, although this loss\n scaling will no longer be useful if the optimizer is used in new Sessions, as\n the graph rewrite no longer converts the graph to use float16.\n\n This function is useful for unit testing. A unit tests can test using the\n mixed precision graph rewrite, then disable it so future unit tests continue\n using float32. If this is done, unit tests should not share a single session,\n as `enable_mixed_precision_graph_rewrite` and\n `disable_mixed_precision_graph_rewrite` have no effect on existing sessions.\n ", "desc": "Disables the mixed precision graph rewrite.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.DynamicLossScale", "docs": "Loss scale that dynamically adjusts itself.\n\n Dynamic loss scaling works by adjusting the loss scale as training progresses.\n The goal is to keep the loss scale as high as possible without overflowing the\n gradients. As long as the gradients do not overflow, raising the loss scale\n never hurts.\n\n The algorithm starts by setting the loss scale to an initial value. Every N\n steps that the gradients are finite, the loss scale is increased by some\n factor. However, if a NaN or Inf gradient is found, the gradients for that\n step are not applied, and the loss scale is decreased by the factor. This\n process tends to keep the loss scale as high as possible without gradients\n overflowing.\n ", "desc": "Loss scale that dynamically adjusts itself.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.enable_mixed_precision_graph_rewrite", "docs": "Enable mixed precision via a graph rewrite.\n\n Mixed precision is the use of both float32 and float16 data types when\n training a model to improve performance. This is achieved via a graph rewrite\n operation and a loss-scale optimizer.\n\n Performing arithmetic operations in float16 takes advantage of specialized\n processing units, such as NVIDIA Tensor Cores, for much higher arithmetic\n throughput. However, due to the smaller representable range, performing the\n entire training with float16 can result in gradient underflow, that is, small\n gradient values becoming zeroes. Instead, performing only select arithmetic\n operations in float16 results in higher throughput and decreased training\n time when using compatible hardware accelerators while also reducing memory\n usage, typically without sacrificing model accuracy.\n\n Note: While the mixed precision rewrite changes the datatype of various\n layers throughout the model, the same accuracy reached in float32 is\n expected. If a `NaN` gradient occurs with dynamic loss scaling, the model\n update for that batch is skipped. In this case, the global step count is not\n incremented, and the `LossScaleOptimizer` attempts to decrease the loss\n scaling value to avoid `NaN` values in subsequent iterations. This approach\n has been shown to achieve the same accuracy as float32 and, in most cases,\n better training throughput.\n\n Example:\n\n ```python\n model = tf.keras.models.Sequential([\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(64, activation='softmax'),\n ])\n\n opt = tf.keras.optimizers.SGD()\n opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)\n model.compile(loss=\"mse\", optimizer=opt)\n\n x_train = np.random.random((1024, 64))\n y_train = np.random.random((1024, 64))\n model.fit(x_train, y_train)\n ```\n\n Calling `enable_mixed_precision_graph_rewrite(opt)` enables the graph rewrite\n operation before computing gradients. The function additionally returns an\n `Optimizer` (`opt`) wrapped with a `LossScaleOptimizer`. This prevents\n underflow in the float16 tensors during the backward pass. An optimizer of\n type `tf.train.Optimizer` or `tf.keras.optimizers.Optimizer` must be passed\n to this function, which will then be wrapped to use loss scaling.\n\n The graph rewrite operation changes the `dtype` of certain operations in the\n graph from float32 to float16. There are several categories of operations\n that are either included or excluded by this rewrite operation. The following\n categories of Ops are defined inside corresponding functions under the class\n `AutoMixedPrecisionLists` in\n \n auto_mixed_precision_lists.h:\n\n * `ClearList`: Ops that do not have numerically significant adverse effects.\n E.g. `ArgMax` and `Floor`.\n * `AllowList`: Ops that are considered numerically safe for execution in\n float16, and thus are always converted. E.g. `Conv2D`.\n * `DenyList`: Ops that are numerically unsafe to execute in float16 and\n can negatively affect downstream nodes. E.g. `Softmax`.\n * `GrayList`: Ops that are considered numerically safe for execution in\n float16 unless downstream from a DenyList Op. E.g. `Add` and `AvgPool`.\n\n When this function is used, gradients should only be computed and applied\n with the returned optimizer, either by calling `opt.minimize()` or\n `opt.compute_gradients()` followed by `opt.apply_gradients()`.\n Gradients should not be computed with `tf.gradients` or `tf.GradientTape`.\n This is because the returned optimizer will apply loss scaling, and\n `tf.gradients` or `tf.GradientTape` will not. If you do directly use\n `tf.gradients` or `tf.GradientTape`, your model may not converge due to\n float16 underflow problems.\n\n When eager execution is enabled, the mixed precision graph rewrite is only\n enabled within `tf.function`s, as outside `tf.function`s, there is no graph.\n\n For NVIDIA GPUs with Tensor cores, as a general performance guide, dimensions\n (such as batch size, input size, output size, and channel counts)\n should be powers of two if under 256, or otherwise divisible by 8 if above\n 256. For more information, check out the\n [NVIDIA Deep Learning Performance Guide](\n https://docs.nvidia.com/deeplearning/sdk/dl-performance-guide/index.html).\n\n Currently, mixed precision is only enabled on NVIDIA Tensor Core GPUs with\n Compute Capability 7.0 and above (Volta, Turing, or newer architectures). The\n parts of the graph on CPUs and TPUs are untouched by the graph rewrite.\n\n Raises:\n `ValueError`, if the `tf.keras.mixed_precision` API is also used by calling\n `tf.keras.mixed_precision.set_global_policy`. Only one mixed precision\n API can be used.\n\n Args:\n opt: An instance of a `tf.keras.optimizers.Optimizer` or a\n `tf.train.Optimizer`.\n loss_scale: Either an int/float, the string `\"dynamic\"`, or an instance of\n a `tf.mixed_precision.experimental.LossScale`. The loss scale to use. It\n is recommended to keep this as its default value of `\"dynamic\"`, which\n will adjust the scaling automatically to prevent `Inf` or `NaN` values.\n\n Returns:\n A version of `opt` that will use loss scaling to prevent underflow.\n ", "desc": "Enable mixed precision via a graph rewrite.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.FixedLossScale", "docs": "Loss scale with a fixed value.\n\n The loss scale is not updated for the lifetime of instances of this class.\n A given instance of this class always returns the same number when called.\n ", "desc": "Loss scale with a fixed value.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.LossScale", "docs": "Base class for all TF1 loss scales.\n\n This is an abstract base class, so you cannot instantiate it directly.\n Instead, use one of its concrete subclasses:\n * `tf.compat.v1.mixed_precision.DynamicLossScale`\n * `tf.compat.v1.mixed_precision.FixedLossScale`\n\n Loss scaling is a process that multiplies the loss by a multiplier called the\n loss scale, and divides each gradient by the same multiplier. The pseudocode\n for this process is:\n\n ```\n loss = ...\n loss *= loss_scale\n grads = gradients(loss, vars)\n grads /= loss_scale\n ```\n\n Mathematically, loss scaling has no effect, but can help avoid numerical\n underflow in intermediate gradients when float16 tensors are used for mixed\n precision training. By multiplying the loss, each intermediate gradient will\n have the same multiplier applied.\n\n Instances of this class represent a loss scale. Calling instances of this\n class returns the loss scale as a scalar float32 tensor, while method\n `update()` updates the loss scale depending on the values of the gradients.\n Optimizers use instances of this class to scale loss and gradients.\n\n In most functions that accept a LossScale, you can also pass an int (such as\n 8) to create a `FixedLossScale` or the string `\"dynamic\"` to create a dynamic\n loss scale.\n ", "desc": "Base class for all TF1 loss scales.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.MixedPrecisionLossScaleOptimizer", "docs": "An optimizer that applies loss scaling.\n\n Loss scaling is a process that multiplies the loss by a multiplier called the\n loss scale, and divides each gradient by the same multiplier. The pseudocode\n for this process is:\n\n ```\n loss = ...\n loss *= loss_scale\n grads = gradients(loss, vars)\n grads /= loss_scale\n ```\n\n Mathematically, loss scaling has no effect, but can help avoid numerical\n underflow in intermediate gradients when float16 tensors are used for mixed\n precision training. By multiplying the loss, each intermediate gradient will\n have the same multiplier applied.\n\n The loss scale can either be a fixed constant, chosen by the user, or be\n dynamically determined. Dynamically determining the loss scale is convenient\n as a loss scale does not have to be explicitly chosen. However it reduces\n performance.\n\n This optimizer wraps another optimizer and applies loss scaling to it via a\n `LossScale`. Loss scaling is applied whenever gradients are\n computed, such as through `minimize()`.\n ", "desc": "An optimizer that applies loss scaling.", "type": "API"}, {"name": "tf.compat.v1.train.experimental.PythonState", "docs": "A mixin for putting Python state in an object-based checkpoint.\n\n This is an abstract class which allows extensions to TensorFlow's object-based\n checkpointing (see `tf.train.Checkpoint`). For example a wrapper for NumPy\n arrays:\n\n ```python\n import io\n import numpy\n\n class NumpyWrapper(tf.train.experimental.PythonState):\n\n def __init__(self, array):\n self.array = array\n\n def serialize(self):\n string_file = io.BytesIO()\n try:\n numpy.save(string_file, self.array, allow_pickle=False)\n serialized = string_file.getvalue()\n finally:\n string_file.close()\n return serialized\n\n def deserialize(self, string_value):\n string_file = io.BytesIO(string_value)\n try:\n self.array = numpy.load(string_file, allow_pickle=False)\n finally:\n string_file.close()\n ```\n\n Instances of `NumpyWrapper` are checkpointable objects, and will be saved and\n restored from checkpoints along with TensorFlow state like variables.\n\n ```python\n root = tf.train.Checkpoint(numpy=NumpyWrapper(numpy.array([1.])))\n save_path = root.save(prefix)\n root.numpy.array *= 2.\n assert [2.] == root.numpy.array\n root.restore(save_path)\n assert [1.] == root.numpy.array\n ```\n ", "desc": "A mixin for putting Python state in an object-based checkpoint.", "type": "API"}, {"name": "tf.compat.v1.train.exponential_decay", "docs": "Applies exponential decay to the learning rate.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies an exponential decay function\n to a provided initial learning rate. It requires a `global_step` value to\n compute the decayed learning rate. You can just pass a TensorFlow variable\n that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n\n ```python\n decayed_learning_rate = learning_rate *\n decay_rate ^ (global_step / decay_steps)\n ```\n\n If the argument `staircase` is `True`, then `global_step / decay_steps` is an\n integer division and the decayed learning rate follows a staircase function.\n\n Example: decay every 100000 steps with a base of 0.96:\n\n ```python\n ...\n global_step = tf.Variable(0, trainable=False)\n starter_learning_rate = 0.1\n learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate,\n global_step,\n 100000, 0.96, staircase=True)\n # Passing global_step to minimize() will increment it at each step.\n learning_step = (\n tf.compat.v1.train.GradientDescentOptimizer(learning_rate)\n .minimize(...my loss..., global_step=global_step)\n )\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` `Tensor` or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation. Must not be negative.\n decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Must\n be positive. See the decay computation above.\n decay_rate: A scalar `float32` or `float64` `Tensor` or a Python number.\n The decay rate.\n staircase: Boolean. If `True` decay the learning rate at discrete intervals\n name: String. Optional name of the operation. Defaults to\n 'ExponentialDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n\n Raises:\n ValueError: if `global_step` is not supplied.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies exponential decay to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.ExponentialMovingAverage", "docs": "Maintains moving averages of variables by employing an exponential decay.\n\n When training a model, it is often beneficial to maintain moving averages of\n the trained parameters. Evaluations that use averaged parameters sometimes\n produce significantly better results than the final trained values.\n\n The `apply()` method adds shadow copies of trained variables the first time\n it is called, and maintains a moving average of the trained variables in\n their shadow copies at every additional invocation.\n It should generally be called immediately after creating the model weights,\n and then after each training step.\n\n The `average()` method gives access to the shadow variables.\n It allows you to use the moving averages in place of the last trained values\n for evaluations, by loading the moving averages into your model via\n `var.assign(ema.average(var))`.\n Additionally, although `ExponentialMovingAverage`\n objects are not directly trackable by checkpoints,\n `average()` returns the moving average variables for your model weights,\n which you can then checkpoint. (There is an example\n of this near the bottom of this docstring).\n So, `average()` is useful when\n building an evaluation model, or when restoring a model from a checkpoint\n file.\n\n The moving averages are computed using exponential decay. You specify the\n decay value (as a scalar float value, `Tensor`, or `Variable`) when creating\n the `ExponentialMovingAverage` object. The shadow variables are initialized\n with the same initial values as the trained variables. When you run `apply`\n to update the moving averages, each shadow variable is updated with the\n formula:\n\n `shadow_variable -= (1 - decay) * (shadow_variable - variable)`\n\n This is mathematically equivalent to the classic formula below, but the use\n of an `assign_sub` op (the `\"-=\"` in the formula) allows concurrent lockless\n updates to the variables:\n\n `shadow_variable = decay * shadow_variable + (1 - decay) * variable`\n\n Reasonable values for `decay` are close to 1.0, typically in the\n multiple-nines range: 0.999, 0.9999, etc.\n\n To have fine-grained control over the value of the decay parameter during\n training, pass a scalar `tf.Variable` as the `decay` value to the constructor,\n and update the variable as needed.\n\n Example usage when creating a training model:\n\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n\n # The first `apply` creates the shadow variables that hold the moving averages\n ema.apply([var0, var1])\n\n # grab the moving averages for checkpointing purposes or to be able to\n # load the moving averages into the model weights\n averages = [ema.average(var0), ema.average(var1)]\n\n ...\n def train_step(...):\n ...\n # Apply the optimizer.\n opt.minimize(my_loss, [var0, var1])\n\n # Update the moving averages\n # of var0 and var1 with additional calls to `apply`\n ema.apply([var0, var1])\n\n ...train the model by running train_step multiple times...\n ```\n\n There are several ways to use the moving averages for evaluations:\n\n 1. Assign the values of the shadow variables to your model variables with\n `Variable.assign(...)` before evaluating your\n model. You can use the `average()`\n method to get the shadow variable for a given variable. To continue\n training after using this approach, make sure to record the unaveraged\n weights and restore them before continuing to train. You can see the\n tensorflow-addons' MovingAverage optimizer's `swap_weights` method for\n one example of how to swap variables efficiently in distributed settings:\n https://github.com/tensorflow/addons/blob/v0.13.0/tensorflow_addons/optimizers/moving_average.py#L151\n 2. Make sure to checkpoint out your moving average variables in your\n `tf.train.Checkpoint`. At evaluation time, create your shadow variables and\n use `tf.train.Checkpoint` to restore the moving averages into the shadow\n variables. Then, load the moving averages into the actual model weights via\n `var.assign(moving_avg)`.\n 3. Checkpoint out your moving average variables in your `tf.train.Checkpoint`.\n For evaluation, restore your model weights directly from the moving\n averages instead of from the non-averaged weights.\n Caution: If you choose this approach, include only the object-graph paths\n to the averaged path in your checkpoint restore.\n If you point both the unaveraged and averaged paths in a checkpoint\n restore to the same variables, it is hard to reason about whether your\n model will restore the averaged or non-averaged variables.\n\n Example of saving out then restoring the shadow variable values:\n\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object, create the shadow variables,\n # and grab the moving averages for checkpointing purposes.\n # (The ExponentialMovingAverage object itself is not checkpointable)\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n ema.apply([var0, var1])\n avg_var0 = ema.average(var0)\n avg_var1 = ema.average(var1)\n\n # Create a Checkpoint that will manage the model weights and the averages,\n checkpoint = tf.train.Checkpoint(model_weights=[var0, var1],\n averaged_weights=[avg_var0, avg_var1])\n ... # Do training\n\n # Save out the checkpoint including the model weights and the moving averages\n checkpoint.save(...)\n ```\n\n Restore option: restore all averaged & non-averaged weights, then load\n moving averages into the model via `var.assign()`\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object, create the shadow variables,\n # and grab the moving averages for checkpoint restore purposes.\n # (The ExponentialMovingAverage object itself is not checkpointable)\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n ema.apply([var0, var1])\n avg_var0 = ema.average(var0)\n avg_var1 = ema.average(var1)\n\n # Create a Checkpoint that will manage the model weights and the averages,\n checkpoint = tf.train.Checkpoint(model_weights=[var0, var1],\n averaged_weights=[avg_var0, avg_var1])\n checkpoint.restore(...)\n var0.assign(avg_var0)\n var1.assign(avg_var1)\n # var0 and var1 now hold the moving average values\n ```\n\n Restore option: Directly restore the moving averages into the model weights.\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create a Checkpoint that will manage two objects with trackable state,\n checkpoint = tf.train.Checkpoint(averaged_weights=[var0, var1])\n checkpoint.restore(...)\n # var0 and var1 now hold the moving average values\n ```\n ", "desc": "Maintains moving averages of variables by employing an exponential decay.", "type": "API"}, {"name": "tf.compat.v1.train.export_meta_graph", "docs": "Returns `MetaGraphDef` proto.\n\n Optionally writes it to filename.\n\n This function exports the graph, saver, and collection objects into\n `MetaGraphDef` protocol buffer with the intention of it being imported\n at a later time or location to restart training, run inference, or be\n a subgraph.\n\n Args:\n filename: Optional filename including the path for writing the generated\n `MetaGraphDef` protocol buffer.\n meta_info_def: `MetaInfoDef` protocol buffer.\n graph_def: `GraphDef` protocol buffer.\n saver_def: `SaverDef` protocol buffer.\n collection_list: List of string keys to collect.\n as_text: If `True`, writes the `MetaGraphDef` as an ASCII proto.\n graph: The `Graph` to export. If `None`, use the default graph.\n export_scope: Optional `string`. Name scope under which to extract the\n subgraph. The scope name will be striped from the node definitions for\n easy import later into new name scopes. If `None`, the whole graph is\n exported. graph_def and export_scope cannot both be specified.\n clear_devices: Whether or not to clear the device field for an `Operation`\n or `Tensor` during export.\n clear_extraneous_savers: Remove any Saver-related information from the graph\n (both Save/Restore ops and SaverDefs) that are not associated with the\n provided SaverDef.\n strip_default_attrs: Boolean. If `True`, default-valued attributes will be\n removed from the NodeDefs. For a detailed guide, see\n [Stripping Default-Valued Attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes).\n save_debug_info: If `True`, save the GraphDebugInfo to a separate file,\n which in the same directory of filename and with `_debug` added before the\n file extend.\n **kwargs: Optional keyed arguments.\n\n Returns:\n A `MetaGraphDef` proto.\n\n Raises:\n ValueError: When the `GraphDef` is larger than 2GB.\n RuntimeError: If called with eager execution enabled.\n\n @compatibility(eager)\n Exporting/importing meta graphs is not supported unless both `graph_def` and\n `graph` are provided. No graph exists when eager execution is enabled.\n @end_compatibility\n ", "desc": "Returns `MetaGraphDef` proto.", "type": "API"}, {"name": "tf.compat.v1.train.Feature", "docs": "Used in `tf.train.Example` protos. Contains a list of values.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `Union`.\n\nThe contained list can be one of three types:\n\n - `tf.train.BytesList`\n - `tf.train.FloatList`\n - `tf.train.Int64List`\n\n>>> int_feature = tf.train.Feature(\n... int64_list=tf.train.Int64List(value=[1, 2, 3, 4]))\n>>> float_feature = tf.train.Feature(\n... float_list=tf.train.FloatList(value=[1., 2., 3., 4.]))\n>>> bytes_feature = tf.train.Feature(\n... bytes_list=tf.train.BytesList(value=[b\"abc\", b\"1234\"]))\n>>>\n>>> example = tf.train.Example(\n... features=tf.train.Features(feature={\n... 'my_ints': int_feature,\n... 'my_floats': float_feature,\n... 'my_bytes': bytes_feature,\n... }))\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {\n... 'my_ints': tf.io.RaggedFeature(dtype=tf.int64),\n... 'my_floats': tf.io.RaggedFeature(dtype=tf.float32),\n... 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_bytes': ,\n 'my_floats': ,\n 'my_ints': }\n\n", "desc": "Used in `tf.train.Example` protos. Contains a list of values.", "type": "API"}, {"name": "tf.compat.v1.train.FeatureList", "docs": "Mainly used as part of a `tf.train.SequenceExample`.\n\nContains a list of `tf.train.Feature`s.\n\nThe `tf.train.SequenceExample` proto can be thought of as a\nproto implementation of the following python type:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nThis proto implements the `List[Feature]` portion.\n\n", "desc": "Mainly used as part of a `tf.train.SequenceExample`.", "type": "API"}, {"name": "tf.compat.v1.train.FeatureLists", "docs": "Mainly used as part of a `tf.train.SequenceExample`.\n\nContains a list of `tf.train.Feature`s.\n\nThe `tf.train.SequenceExample` proto can be thought of as a\nproto implementation of the following python type:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nThis proto implements the `Dict[str, FeatureList]` portion.\n", "desc": "Mainly used as part of a `tf.train.SequenceExample`.", "type": "API"}, {"name": "tf.compat.v1.train.FeatureLists.FeatureListEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.Features", "docs": "Used in `tf.train.Example` protos. Contains the mapping from keys to `Feature`.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `Dict`.\n\n>>> int_feature = tf.train.Feature(\n... int64_list=tf.train.Int64List(value=[1, 2, 3, 4]))\n>>> float_feature = tf.train.Feature(\n... float_list=tf.train.FloatList(value=[1., 2., 3., 4.]))\n>>> bytes_feature = tf.train.Feature(\n... bytes_list=tf.train.BytesList(value=[b\"abc\", b\"1234\"]))\n>>>\n>>> example = tf.train.Example(\n... features=tf.train.Features(feature={\n... 'my_ints': int_feature,\n... 'my_floats': float_feature,\n... 'my_bytes': bytes_feature,\n... }))\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {\n... 'my_ints': tf.io.RaggedFeature(dtype=tf.int64),\n... 'my_floats': tf.io.RaggedFeature(dtype=tf.float32),\n... 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_bytes': ,\n 'my_floats': ,\n 'my_ints': }\n\n", "desc": "Used in `tf.train.Example` protos. Contains the mapping from keys to `Feature`.", "type": "API"}, {"name": "tf.compat.v1.train.Features.FeatureEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.FeedFnHook", "docs": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "desc": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "type": "API"}, {"name": "tf.compat.v1.train.FinalOpsHook", "docs": "A hook which evaluates `Tensors` at the end of a session.", "desc": "A hook which evaluates `Tensors` at the end of a session.", "type": "API"}, {"name": "tf.compat.v1.train.FloatList", "docs": "Used in `tf.train.Example` protos. Holds a list of floats.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[float]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {float_list {value: [1., 2., 3., 4. ]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].float_list.value\n[1.0, 2.0, 3.0, 4.0]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.float32)})\n{'my_feature': }\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of floats.", "type": "API"}, {"name": "tf.compat.v1.train.FtrlOptimizer", "docs": "Optimizer that implements the FTRL algorithm.\n\n This version has support for both online L2 (McMahan et al., 2013) and\n shrinkage-type L2, which is the addition of an L2 penalty\n to the loss function.\n\n References:\n Ad-click prediction:\n [McMahan et al., 2013](https://dl.acm.org/citation.cfm?id=2488200)\n ([pdf](https://dl.acm.org/ft_gateway.cfm?id=2488200&ftid=1388399&dwn=1&CFID=32233078&CFTOKEN=d60fe57a294c056a-CB75C374-F915-E7A6-1573FBBC7BF7D526))\n ", "desc": "Optimizer that implements the FTRL algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.generate_checkpoint_state_proto", "docs": "Generates a checkpoint state proto.\n\n Args:\n save_dir: Directory where the model was saved.\n model_checkpoint_path: The checkpoint file.\n all_model_checkpoint_paths: List of strings. Paths to all not-yet-deleted\n checkpoints, sorted from oldest to newest. If this is a non-empty list,\n the last element must be equal to model_checkpoint_path. These paths\n are also saved in the CheckpointState proto.\n all_model_checkpoint_timestamps: A list of floats, indicating the number of\n seconds since the Epoch when each checkpoint was generated.\n last_preserved_timestamp: A float, indicating the number of seconds since\n the Epoch when the last preserved checkpoint was written, e.g. due to a\n `keep_checkpoint_every_n_hours` parameter (see\n `tf.train.CheckpointManager` for an implementation).\n Returns:\n CheckpointState proto with model_checkpoint_path and\n all_model_checkpoint_paths updated to either absolute paths or\n relative paths to the current save_dir.\n\n Raises:\n ValueError: If `all_model_checkpoint_timestamps` was provided but its length\n does not match `all_model_checkpoint_paths`.\n ", "desc": "Generates a checkpoint state proto.", "type": "API"}, {"name": "tf.compat.v1.train.get_checkpoint_mtimes", "docs": "Returns the mtimes (modification timestamps) of the checkpoints. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse standard file utilities to get mtimes.\n\nGlobs for the checkpoints pointed to by `checkpoint_prefixes`. If the files\nexist, collect their mtime. Both V2 and V1 checkpoints are considered, in\nthat priority.\n\nThis is the recommended way to get the mtimes, since it takes into account\nthe naming difference between V1 and V2 formats.\n\nNote: If not all checkpoints exist, the length of the returned mtimes list\nwill be smaller than the length of `checkpoint_prefixes` list, so mapping\ncheckpoints to corresponding mtimes will not be possible.\n\nArgs:\n checkpoint_prefixes: a list of checkpoint paths, typically the results of\n `Saver.save()` or those of `tf.train.latest_checkpoint()`, regardless of\n sharded/non-sharded or V1/V2.\nReturns:\n A list of mtimes (in microseconds) of the found checkpoints.", "desc": "Returns the mtimes (modification timestamps) of the checkpoints. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.get_checkpoint_state", "docs": "Returns CheckpointState proto from the \"checkpoint\" file.\n\n If the \"checkpoint\" file contains a valid CheckpointState\n proto, returns it.\n\n Args:\n checkpoint_dir: The directory of checkpoints.\n latest_filename: Optional name of the checkpoint file. Default to\n 'checkpoint'.\n\n Returns:\n A CheckpointState if the state was available, None\n otherwise.\n\n Raises:\n ValueError: if the checkpoint read doesn't have model_checkpoint_path set.\n ", "desc": "Returns CheckpointState proto from the \"checkpoint\" file.", "type": "API"}, {"name": "tf.compat.v1.train.get_global_step", "docs": "Get the global step tensor.\n\n The global step tensor must be an integer variable. We first try to find it\n in the collection `GLOBAL_STEP`, or by name `global_step:0`.\n\n Args:\n graph: The graph to find the global step in. If missing, use default graph.\n\n Returns:\n The global step variable, or `None` if none was found.\n\n Raises:\n TypeError: If the global step tensor has a non-integer type, or if it is not\n a `Variable`.\n\n @compatibility(TF2)\n With the deprecation of global graphs, TF no longer tracks variables in\n collections. In other words, there are no global variables in TF2. Thus, the\n global step functions have been removed (`get_or_create_global_step`,\n `create_global_step`, `get_global_step`) . You have two options for migrating:\n\n 1. Create a Keras optimizer, which generates an `iterations` variable. This\n variable is automatically incremented when calling `apply_gradients`.\n 2. Manually create and increment a `tf.Variable`.\n\n Below is an example of migrating away from using a global step to using a\n Keras optimizer:\n\n Define a dummy model and loss:\n\n >>> def compute_loss(x):\n ... v = tf.Variable(3.0)\n ... y = x * v\n ... loss = x * 5 - x * v\n ... return loss, [v]\n\n Before migrating:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... x = tf.compat.v1.placeholder(tf.float32, [])\n ... loss, var_list = compute_loss(x)\n ... global_step = tf.compat.v1.train.get_or_create_global_step()\n ... global_init = tf.compat.v1.global_variables_initializer()\n ... optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)\n ... train_op = optimizer.minimize(loss, global_step, var_list)\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> sess.run(global_init)\n >>> print(\"before training:\", sess.run(global_step))\n before training: 0\n >>> sess.run(train_op, feed_dict={x: 3})\n >>> print(\"after training:\", sess.run(global_step))\n after training: 1\n\n Using `get_global_step`:\n\n >>> with g.as_default():\n ... print(sess.run(tf.compat.v1.train.get_global_step()))\n 1\n\n Migrating to a Keras optimizer:\n\n >>> optimizer = tf.keras.optimizers.SGD(.01)\n >>> print(\"before training:\", optimizer.iterations.numpy())\n before training: 0\n >>> with tf.GradientTape() as tape:\n ... loss, var_list = compute_loss(3)\n ... grads = tape.gradient(loss, var_list)\n ... optimizer.apply_gradients(zip(grads, var_list))\n >>> print(\"after training:\", optimizer.iterations.numpy())\n after training: 1\n\n @end_compatibility\n ", "desc": "Get the global step tensor.", "type": "API"}, {"name": "tf.compat.v1.train.get_or_create_global_step", "docs": "Returns and create (if necessary) the global step tensor.\n\n Args:\n graph: The graph in which to create the global step tensor. If missing, use\n default graph.\n\n Returns:\n The global step tensor.\n\n @compatibility(TF2)\n With the deprecation of global graphs, TF no longer tracks variables in\n collections. In other words, there are no global variables in TF2. Thus, the\n global step functions have been removed (`get_or_create_global_step`,\n `create_global_step`, `get_global_step`) . You have two options for migrating:\n\n 1. Create a Keras optimizer, which generates an `iterations` variable. This\n variable is automatically incremented when calling `apply_gradients`.\n 2. Manually create and increment a `tf.Variable`.\n\n Below is an example of migrating away from using a global step to using a\n Keras optimizer:\n\n Define a dummy model and loss:\n\n >>> def compute_loss(x):\n ... v = tf.Variable(3.0)\n ... y = x * v\n ... loss = x * 5 - x * v\n ... return loss, [v]\n\n Before migrating:\n\n >>> g = tf.Graph()\n >>> with g.as_default():\n ... x = tf.compat.v1.placeholder(tf.float32, [])\n ... loss, var_list = compute_loss(x)\n ... global_step = tf.compat.v1.train.get_or_create_global_step()\n ... global_init = tf.compat.v1.global_variables_initializer()\n ... optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)\n ... train_op = optimizer.minimize(loss, global_step, var_list)\n >>> sess = tf.compat.v1.Session(graph=g)\n >>> sess.run(global_init)\n >>> print(\"before training:\", sess.run(global_step))\n before training: 0\n >>> sess.run(train_op, feed_dict={x: 3})\n >>> print(\"after training:\", sess.run(global_step))\n after training: 1\n\n Migrating to a Keras optimizer:\n\n >>> optimizer = tf.keras.optimizers.SGD(.01)\n >>> print(\"before training:\", optimizer.iterations.numpy())\n before training: 0\n >>> with tf.GradientTape() as tape:\n ... loss, var_list = compute_loss(3)\n ... grads = tape.gradient(loss, var_list)\n ... optimizer.apply_gradients(zip(grads, var_list))\n >>> print(\"after training:\", optimizer.iterations.numpy())\n after training: 1\n\n @end_compatibility\n ", "desc": "Returns and create (if necessary) the global step tensor.", "type": "API"}, {"name": "tf.compat.v1.train.global_step", "docs": "Small helper to get the global step.\n\n ```python\n # Create a variable to hold the global_step.\n global_step_tensor = tf.Variable(10, trainable=False, name='global_step')\n # Create a session.\n sess = tf.compat.v1.Session()\n # Initialize the variable\n sess.run(global_step_tensor.initializer)\n # Get the variable value.\n print('global_step: %s' % tf.compat.v1.train.global_step(sess,\n global_step_tensor))\n\n global_step: 10\n ```\n\n Args:\n sess: A TensorFlow `Session` object.\n global_step_tensor: `Tensor` or the `name` of the operation that contains\n the global step.\n\n Returns:\n The global step value.\n ", "desc": "Small helper to get the global step.", "type": "API"}, {"name": "tf.compat.v1.train.GlobalStepWaiterHook", "docs": "Delays execution until global step reaches `wait_until_step`.\n\n This hook delays execution until global step reaches to `wait_until_step`. It\n is used to gradually start workers in distributed settings. One example usage\n would be setting `wait_until_step=int(K*log(task_id+1))` assuming that\n task_id=0 is the chief.\n ", "desc": "Delays execution until global step reaches `wait_until_step`.", "type": "API"}, {"name": "tf.compat.v1.train.GradientDescentOptimizer", "docs": "Optimizer that implements the gradient descent algorithm.\n ", "desc": "Optimizer that implements the gradient descent algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.import_meta_graph", "docs": "Recreates a Graph saved in a `MetaGraphDef` proto.\n\n This function takes a `MetaGraphDef` protocol buffer as input. If\n the argument is a file containing a `MetaGraphDef` protocol buffer ,\n it constructs a protocol buffer from the file content. The function\n then adds all the nodes from the `graph_def` field to the\n current graph, recreates all the collections, and returns a saver\n constructed from the `saver_def` field.\n\n In combination with `export_meta_graph()`, this function can be used to\n\n * Serialize a graph along with other Python objects such as `QueueRunner`,\n `Variable` into a `MetaGraphDef`.\n\n * Restart training from a saved graph and checkpoints.\n\n * Run inference from a saved graph and checkpoints.\n\n ```Python\n ...\n # Create a saver.\n saver = tf.compat.v1.train.Saver(...variables...)\n # Remember the training_op we want to run by adding it to a collection.\n tf.compat.v1.add_to_collection('train_op', train_op)\n sess = tf.compat.v1.Session()\n for step in range(1000000):\n sess.run(train_op)\n if step % 1000 == 0:\n # Saves checkpoint, which by default also exports a meta_graph\n # named 'my-model-global_step.meta'.\n saver.save(sess, 'my-model', global_step=step)\n ```\n\n Later we can continue training from this saved `meta_graph` without building\n the model from scratch.\n\n ```Python\n with tf.Session() as sess:\n new_saver =\n tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')\n new_saver.restore(sess, 'my-save-dir/my-model-10000')\n # tf.get_collection() returns a list. In this example we only want\n # the first one.\n train_op = tf.get_collection('train_op')[0]\n for step in range(1000000):\n sess.run(train_op)\n ```\n\n NOTE: Restarting training from saved `meta_graph` only works if the\n device assignments have not changed.\n\n Example:\n Variables, placeholders, and independent operations can also be stored, as\n shown in the following example.\n\n ```Python\n # Saving contents and operations.\n v1 = tf.placeholder(tf.float32, name=\"v1\")\n v2 = tf.placeholder(tf.float32, name=\"v2\")\n v3 = tf.math.multiply(v1, v2)\n vx = tf.Variable(10.0, name=\"vx\")\n v4 = tf.add(v3, vx, name=\"v4\")\n saver = tf.train.Saver([vx])\n sess = tf.Session()\n sess.run(tf.global_variables_initializer())\n sess.run(vx.assign(tf.add(vx, vx)))\n result = sess.run(v4, feed_dict={v1:12.0, v2:3.3})\n print(result)\n saver.save(sess, \"./model_ex1\")\n ```\n\n Later this model can be restored and contents loaded.\n\n ```Python\n # Restoring variables and running operations.\n saver = tf.train.import_meta_graph(\"./model_ex1.meta\")\n sess = tf.Session()\n saver.restore(sess, \"./model_ex1\")\n result = sess.run(\"v4:0\", feed_dict={\"v1:0\": 12.0, \"v2:0\": 3.3})\n print(result)\n ```\n\n Args:\n meta_graph_or_file: `MetaGraphDef` protocol buffer or filename (including\n the path) containing a `MetaGraphDef`.\n clear_devices: Whether or not to clear the device field for an `Operation`\n or `Tensor` during import.\n import_scope: Optional `string`. Name scope to add. Only used when\n initializing from protocol buffer.\n **kwargs: Optional keyed arguments.\n\n Returns:\n A saver constructed from `saver_def` in `MetaGraphDef` or None.\n\n A None value is returned if no variables exist in the `MetaGraphDef`\n (i.e., there are no variables to restore).\n\n Raises:\n RuntimeError: If called with eager execution enabled.\n\n @compatibility(eager)\n Exporting/importing meta graphs is not supported. No graph exists when eager\n execution is enabled.\n @end_compatibility\n ", "desc": "Recreates a Graph saved in a `MetaGraphDef` proto.", "type": "API"}, {"name": "tf.compat.v1.train.init_from_checkpoint", "docs": "Replaces `tf.Variable` initializers so they load from a checkpoint file.\n\n @compatibility(TF2)\n `tf.compat.v1.train.init_from_checkpoint` is not recommended for restoring\n variable values in TF2.\n\n To restore checkpoints in TF2, please use\n `tf.keras.Model.load_weights` or `tf.train.Checkpoint.restore`. These APIs use\n use an [object-based method of checkpointing]\n (https://www.tensorflow.org/guide/checkpoint#loading_mechanics), while\n `tf.compat.v1.init_from_checkpoint` relies on a more-fragile variable-name\n based method of checkpointing. There is no object-based equivalent of\n `init_from_checkpoint` in TF2.\n\n Please re-write your checkpoints immediately using the object-based APIs,\n see [migration guide]\n (https://www.tensorflow.org/guide/migrate#checkpoint_compatibility) for more\n details.\n\n You can load a name-based checkpoint written by `tf.compat.v1.train.Saver`\n using `tf.train.Checkpoint.restore` or `tf.keras.Model.load_weights`. However,\n you may have to change the names of the variables in your model to match the\n variable names in the name-based checkpoint, which can be viewed with\n `tf.train.list_variables(path)`.\n\n Another option is to create an `assignment_map` that maps the name of the\n variables in the name-based checkpoint to the variables in your model, eg:\n ```\n {\n 'sequential/dense/bias': model.variables[0],\n 'sequential/dense/kernel': model.variables[1]\n }\n ```\n and use `tf.compat.v1.train.init_from_checkpoint(path, assignment_map)` to\n restore the name-based checkpoint.\n\n After restoring, re-encode your checkpoint using `tf.train.Checkpoint.save`\n or `tf.keras.Model.save_weights`.\n\n @end_compatibility\n\n Values are not loaded immediately, but when the initializer is run\n (typically by running a `tf.compat.v1.global_variables_initializer` op).\n\n Note: This overrides default initialization ops of specified variables and\n redefines dtype.\n\n Assignment map supports following syntax:\n\n * `'checkpoint_scope_name/': 'scope_name/'` - will load all variables in\n current `scope_name` from `checkpoint_scope_name` with matching tensor\n names.\n * `'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name'` -\n will initialize `scope_name/variable_name` variable\n from `checkpoint_scope_name/some_other_variable`.\n * `'scope_variable_name': variable` - will initialize given `tf.Variable`\n object with tensor 'scope_variable_name' from the checkpoint.\n * `'scope_variable_name': list(variable)` - will initialize list of\n partitioned variables with tensor 'scope_variable_name' from the checkpoint.\n * `'/': 'scope_name/'` - will load all variables in current `scope_name` from\n checkpoint's root (e.g. no scope).\n\n Supports loading into partitioned variables, which are represented as\n `'/part_'`.\n\n Assignment map can be a dict, or a list of pairs. The latter is\n necessary to initialize multiple variables in the current graph from\n the same variable in the checkpoint.\n\n Example:\n\n ```python\n\n # Say, '/tmp/model.ckpt' has the following tensors:\n # -- name='old_scope_1/var1', shape=[20, 2]\n # -- name='old_scope_1/var2', shape=[50, 4]\n # -- name='old_scope_2/var3', shape=[100, 100]\n\n # Create new model's variables\n with tf.compat.v1.variable_scope('new_scope_1'):\n var1 = tf.compat.v1.get_variable('var1', shape=[20, 2],\n initializer=tf.compat.v1.zeros_initializer())\n with tf.compat.v1.variable_scope('new_scope_2'):\n var2 = tf.compat.v1.get_variable('var2', shape=[50, 4],\n initializer=tf.compat.v1.zeros_initializer())\n # Partition into 5 variables along the first axis.\n var3 = tf.compat.v1.get_variable(name='var3', shape=[100, 100],\n initializer=tf.compat.v1.zeros_initializer(),\n partitioner=lambda shape, dtype: [5, 1])\n\n # Initialize all variables in `new_scope_1` from `old_scope_1`.\n init_from_checkpoint('/tmp/model.ckpt', {'old_scope_1/': 'new_scope_1/'})\n\n # Use names to specify which variables to initialize from checkpoint.\n init_from_checkpoint('/tmp/model.ckpt',\n {'old_scope_1/var1': 'new_scope_1/var1',\n 'old_scope_1/var2': 'new_scope_2/var2'})\n\n # Or use tf.Variable objects to identify what to initialize.\n init_from_checkpoint('/tmp/model.ckpt',\n {'old_scope_1/var1': var1,\n 'old_scope_1/var2': var2})\n\n # Initialize partitioned variables using variable's name\n init_from_checkpoint('/tmp/model.ckpt',\n {'old_scope_2/var3': 'new_scope_2/var3'})\n\n # Or specify the list of tf.Variable objects.\n init_from_checkpoint('/tmp/model.ckpt',\n {'old_scope_2/var3': var3._get_variable_list()})\n\n ```\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint.\n assignment_map: Dict, or a list of key-value pairs, where keys are names\n of the variables in the checkpoint and values are current variables or\n names of current variables (in default graph).\n\n Raises:\n ValueError: If missing variables in current graph, or if missing\n checkpoints or tensors in checkpoints.\n\n ", "desc": "Replaces `tf.Variable` initializers so they load from a checkpoint file.", "type": "API"}, {"name": "tf.compat.v1.train.input_producer", "docs": "Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n\nNote: if `num_epochs` is not `None`, this function creates local counter\n`epochs`. Use `local_variables_initializer()` to initialize local variables.\n\nArgs:\n input_tensor: A tensor with the rows to produce. Must be at least\n one-dimensional. Must either have a fully-defined shape, or\n `element_shape` must be defined.\n element_shape: (Optional.) A `TensorShape` representing the shape of a\n row of `input_tensor`, if it cannot be inferred.\n num_epochs: (Optional.) An integer. If specified `input_producer` produces\n each row of `input_tensor` `num_epochs` times before generating an\n `OutOfRange` error. If not specified, `input_producer` can cycle through\n the rows of `input_tensor` an unlimited number of times.\n shuffle: (Optional.) A boolean. If true, the rows are randomly shuffled\n within each epoch.\n seed: (Optional.) An integer. The seed to use if `shuffle` is true.\n capacity: (Optional.) The capacity of the queue to be used for buffering\n the input.\n shared_name: (Optional.) If set, this queue will be shared under the given\n name across multiple sessions.\n summary_name: (Optional.) If set, a scalar summary for the current queue\n size will be generated, using this name as part of the tag.\n name: (Optional.) A name for queue.\n cancel_op: (Optional.) Cancel op for the queue\n\nReturns:\n A queue with the output rows. A `QueueRunner` for the queue is\n added to the current `QUEUE_RUNNER` collection of the current\n graph.\n\nRaises:\n ValueError: If the shape of the input cannot be inferred from the arguments.\n RuntimeError: If called with eager execution enabled.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Output the rows of `input_tensor` to a queue for an input pipeline. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.Int64List", "docs": "Used in `tf.train.Example` protos. Holds a list of Int64s.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[int64]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {int64_list {value: [1, 2, 3, 4]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].int64_list.value\n[1, 2, 3, 4]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)})\n{'my_feature': }\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of Int64s.", "type": "API"}, {"name": "tf.compat.v1.train.inverse_time_decay", "docs": "Applies inverse time decay to the initial learning rate.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies an inverse decay function\n to a provided initial learning rate. It requires an `global_step` value to\n compute the decayed learning rate. You can just pass a TensorFlow variable\n that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n\n ```python\n decayed_learning_rate = learning_rate / (1 + decay_rate * global_step /\n decay_step)\n ```\n\n or, if `staircase` is `True`, as:\n\n ```python\n decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step /\n decay_step))\n ```\n\n Example: decay 1/t with a rate of 0.5:\n\n ```python\n ...\n global_step = tf.Variable(0, trainable=False)\n learning_rate = 0.1\n decay_steps = 1.0\n decay_rate = 0.5\n learning_rate = tf.compat.v1.train.inverse_time_decay(learning_rate,\n global_step,\n decay_steps, decay_rate)\n\n # Passing global_step to minimize() will increment it at each step.\n learning_step = (\n tf.compat.v1.train.GradientDescentOptimizer(learning_rate)\n .minimize(...my loss..., global_step=global_step)\n )\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` `Tensor` or a Python number.\n The initial learning rate.\n global_step: A Python number. Global step to use for the decay computation.\n Must not be negative.\n decay_steps: How often to apply decay.\n decay_rate: A Python number. The decay rate.\n staircase: Whether to apply decay in a discrete staircase, as opposed to\n continuous, fashion.\n name: String. Optional name of the operation. Defaults to\n 'InverseTimeDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n\n Raises:\n ValueError: if `global_step` is not supplied.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies inverse time decay to the initial learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.JobDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.JobDef.TasksEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.latest_checkpoint", "docs": "Finds the filename of latest saved checkpoint file.\n\n Gets the checkpoint state given the provided checkpoint_dir and looks for a\n corresponding TensorFlow 2 (preferred) or TensorFlow 1.x checkpoint path.\n The latest_filename argument is only applicable if you are saving checkpoint\n using `v1.train.Saver.save`\n\n\n See the [Training Checkpoints\n Guide](https://www.tensorflow.org/guide/checkpoint) for more details and\n examples.`\n\n Args:\n checkpoint_dir: Directory where the variables were saved.\n latest_filename: Optional name for the protocol buffer file that\n contains the list of most recent checkpoint filenames.\n See the corresponding argument to `v1.train.Saver.save`.\n\n Returns:\n The full path to the latest checkpoint or `None` if no checkpoint was found.\n ", "desc": "Finds the filename of latest saved checkpoint file.", "type": "API"}, {"name": "tf.compat.v1.train.limit_epochs", "docs": "Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.\n\nNote: creates local counter `epochs`. Use `local_variables_initializer()` to\ninitialize local variables.\n\nArgs:\n tensor: Any `Tensor`.\n num_epochs: A positive integer (optional). If specified, limits the number\n of steps the output tensor may be evaluated.\n name: A name for the operations (optional).\n\nReturns:\n tensor or `OutOfRange`.\n\nRaises:\n ValueError: if `num_epochs` is invalid.", "desc": "Returns tensor `num_epochs` times and then raises an `OutOfRange` error. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.linear_cosine_decay", "docs": "Applies linear cosine decay to the learning rate.\n\n Note that linear cosine decay is more aggressive than cosine decay and\n larger initial learning rates can typically be used.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies a linear cosine decay function\n to a provided initial learning rate. It requires a `global_step` value to\n compute the decayed learning rate. You can just pass a TensorFlow variable\n that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n ```python\n global_step = min(global_step, decay_steps)\n linear_decay = (decay_steps - global_step) / decay_steps)\n cosine_decay = 0.5 * (\n 1 + cos(pi * 2 * num_periods * global_step / decay_steps))\n decayed = (alpha + linear_decay) * cosine_decay + beta\n decayed_learning_rate = learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed = linear_cosine_decay(learning_rate, global_step, decay_steps)\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` Tensor or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation.\n decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Number\n of steps to decay over.\n num_periods: Number of periods in the cosine part of the decay. See\n computation above.\n alpha: See computation above.\n beta: See computation above.\n name: String. Optional name of the operation. Defaults to\n 'LinearCosineDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n Raises:\n ValueError: if `global_step` is not supplied.\n\n References:\n Neural Optimizer Search with Reinforcement Learning:\n [Bello et al., 2017](http://proceedings.mlr.press/v70/bello17a.html)\n ([pdf](http://proceedings.mlr.press/v70/bello17a/bello17a.pdf))\n Stochastic Gradient Descent with Warm Restarts:\n [Loshchilov et al., 2017]\n (https://openreview.net/forum?id=Skq89Scxx¬eId=Skq89Scxx)\n ([pdf](https://openreview.net/pdf?id=Skq89Scxx))\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies linear cosine decay to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.list_variables", "docs": "Lists the checkpoint keys and shapes of variables in a checkpoint.\n\n Checkpoint keys are paths in a checkpoint graph.\n\n Example usage:\n\n ```python\n import tensorflow as tf\n import os\n ckpt_directory = \"/tmp/training_checkpoints/ckpt\"\n ckpt = tf.train.Checkpoint(optimizer=optimizer, model=model)\n manager = tf.train.CheckpointManager(ckpt, ckpt_directory, max_to_keep=3)\n train_and_checkpoint(model, manager)\n tf.train.list_variables(manager.latest_checkpoint)\n ```\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint.\n\n Returns:\n List of tuples `(key, shape)`.\n ", "desc": "Lists the checkpoint keys and shapes of variables in a checkpoint.", "type": "API"}, {"name": "tf.compat.v1.train.load_checkpoint", "docs": "Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`.\n\n If `ckpt_dir_or_file` resolves to a directory with multiple checkpoints,\n reader for the latest checkpoint is returned.\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint\n file.\n\n Returns:\n `CheckpointReader` object.\n\n Raises:\n ValueError: If `ckpt_dir_or_file` resolves to a directory with no\n checkpoints.\n ", "desc": "Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`.", "type": "API"}, {"name": "tf.compat.v1.train.load_variable", "docs": "Returns the tensor value of the given variable in the checkpoint.\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint.\n name: Name of the variable to return.\n\n Returns:\n A numpy `ndarray` with a copy of the value of this variable.\n ", "desc": "Returns the tensor value of the given variable in the checkpoint.", "type": "API"}, {"name": "tf.compat.v1.train.LoggingTensorHook", "docs": "Prints the given tensors every N local steps, every N seconds, or at end.\n\n The tensors will be printed to the log, with `INFO` severity. If you are not\n seeing the logs, you might want to add the following line after your imports:\n\n ```python\n tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)\n ```\n\n Note that if `at_end` is True, `tensors` should not include any tensor\n whose evaluation produces a side effect such as consuming additional inputs.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n\n ", "desc": "Prints the given tensors every N local steps, every N seconds, or at end.", "type": "API"}, {"name": "tf.compat.v1.train.LooperThread", "docs": "A thread that runs code repeatedly, optionally on a timer.\n\n This thread class is intended to be used with a `Coordinator`. It repeatedly\n runs code specified either as `target` and `args` or by the `run_loop()`\n method.\n\n Before each run the thread checks if the coordinator has requested stop. In\n that case the looper thread terminates immediately.\n\n If the code being run raises an exception, that exception is reported to the\n coordinator and the thread terminates. The coordinator will then request all\n the other threads it coordinates to stop.\n\n You typically pass looper threads to the supervisor `Join()` method.\n ", "desc": "A thread that runs code repeatedly, optionally on a timer.", "type": "API"}, {"name": "tf.compat.v1.train.match_filenames_once", "docs": "Save the list of files matching pattern, so it is only computed once.\n\n NOTE: The order of the files returned is deterministic.\n\n Args:\n pattern: A file pattern (glob), or 1D tensor of file patterns.\n name: A name for the operations (optional).\n\n Returns:\n A variable that is initialized to the list of files matching the pattern(s).\n ", "desc": "Save the list of files matching pattern, so it is only computed once.", "type": "API"}, {"name": "tf.compat.v1.train.maybe_batch", "docs": "Conditionally creates batches of tensors based on `keep_input`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n\nSee docstring in `batch` for more details.\n\nArgs:\n tensors: The list or dictionary of tensors to enqueue.\n keep_input: A `bool` Tensor. This tensor controls whether the input is\n added to the queue or not. If it is a scalar and evaluates `True`, then\n `tensors` are all added to the queue. If it is a vector and `enqueue_many`\n is `True`, then each example is added to the queue only if the\n corresponding value in `keep_input` is `True`. This tensor essentially\n acts as a filtering mechanism.\n batch_size: The new batch size pulled from the queue.\n num_threads: The number of threads enqueuing `tensors`. The batching will\n be nondeterministic if `num_threads > 1`.\n capacity: An integer. The maximum number of elements in the queue.\n enqueue_many: Whether each tensor in `tensors` is a single example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensors`.\n dynamic_pad: Boolean. Allow variable dimensions in input shapes.\n The given dimensions are padded upon dequeue so that tensors within a\n batch have the same shapes.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same types as `tensors`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors`.", "desc": "Conditionally creates batches of tensors based on `keep_input`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.maybe_batch_join", "docs": "Runs a list of tensors to conditionally fill a queue to create batches. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n\nSee docstring in `batch_join` for more details.\n\nArgs:\n tensors_list: A list of tuples or dictionaries of tensors to enqueue.\n keep_input: A `bool` Tensor. This tensor controls whether the input is\n added to the queue or not. If it is a scalar and evaluates `True`, then\n `tensors` are all added to the queue. If it is a vector and `enqueue_many`\n is `True`, then each example is added to the queue only if the\n corresponding value in `keep_input` is `True`. This tensor essentially\n acts as a filtering mechanism.\n batch_size: An integer. The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n enqueue_many: Whether each tensor in `tensor_list_list` is a single\n example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensor_list_list[i]`.\n dynamic_pad: Boolean. Allow variable dimensions in input shapes.\n The given dimensions are padded upon dequeue so that tensors within a\n batch have the same shapes.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional) If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same number and types as\n `tensors_list[i]`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensor_list_list`.", "desc": "Runs a list of tensors to conditionally fill a queue to create batches. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.maybe_shuffle_batch", "docs": "Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size)`.\n\nSee docstring in `shuffle_batch` for more details.\n\nArgs:\n tensors: The list or dictionary of tensors to enqueue.\n batch_size: The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n min_after_dequeue: Minimum number elements in the queue after a\n dequeue, used to ensure a level of mixing of elements.\n keep_input: A `bool` Tensor. This tensor controls whether the input is\n added to the queue or not. If it is a scalar and evaluates `True`, then\n `tensors` are all added to the queue. If it is a vector and `enqueue_many`\n is `True`, then each example is added to the queue only if the\n corresponding value in `keep_input` is `True`. This tensor essentially\n acts as a filtering mechanism.\n num_threads: The number of threads enqueuing `tensor_list`.\n seed: Seed for the random shuffling within the queue.\n enqueue_many: Whether each tensor in `tensor_list` is a single example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensor_list`.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional) If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the types as `tensors`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.maybe_shuffle_batch_join", "docs": "Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size)`.\n\nSee docstring in `shuffle_batch_join` for more details.\n\nArgs:\n tensors_list: A list of tuples or dictionaries of tensors to enqueue.\n batch_size: An integer. The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n min_after_dequeue: Minimum number elements in the queue after a\n dequeue, used to ensure a level of mixing of elements.\n keep_input: A `bool` Tensor. This tensor controls whether the input is\n added to the queue or not. If it is a scalar and evaluates `True`, then\n `tensors` are all added to the queue. If it is a vector and `enqueue_many`\n is `True`, then each example is added to the queue only if the\n corresponding value in `keep_input` is `True`. This tensor essentially\n acts as a filtering mechanism.\n seed: Seed for the random shuffling within the queue.\n enqueue_many: Whether each tensor in `tensor_list_list` is a single\n example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensors_list[i]`.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same number and types as\n `tensors_list[i]`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors_list`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.MomentumOptimizer", "docs": "Optimizer that implements the Momentum algorithm.\n\n Computes (if `use_nesterov = False`):\n\n ```\n accumulation = momentum * accumulation + gradient\n variable -= learning_rate * accumulation\n ```\n\n Note that in the dense version of this algorithm, `accumulation` is updated\n and applied regardless of a gradient's value, whereas the sparse version (when\n the gradient is an `IndexedSlices`, typically because of `tf.gather` or an\n embedding) only updates variable slices and corresponding `accumulation` terms\n when that part of the variable was used in the forward pass.\n\n @compatibility(TF2)\n tf.compat.v1.train.MomentumOptimizer is compatible with eager mode and\n `tf.function`.\n When eager execution is enabled, `learning_rate`,`momentum`, can each be a\n callable that takes no arguments and returns the actual value to use. This\n can be useful for changing these values across different invocations of\n optimizer functions.\n\n To switch to native TF2 style, please directly use\n [`tf.keras.optimizers.SGD`]\n (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/SGD)\n with the `momentum` argument.\n\n #### Structural mapping to native TF2\n\n Before:\n\n ```python\n optimizer = tf.compat.v1.train.MomentumOptimizer(\n learning_rate=learning_rate,\n momentum=momentum,\n use_nesterov=use_nesterov)\n ```\n\n After:\n\n ```python\n optimizer = tf.keras.optimizers.SGD(\n learning_rate=learning_rate,\n momentum=momentum,\n nesterov=use_nesterov)\n ```\n\n #### How to map arguments\n | TF1 Arg Name | TF2 Arg Name | Note |\n | ------------------ | ------------- | ------------------------------- |\n | `learning_rate` | `learning_rate`| Be careful of setting |\n : : : learning_rate tensor value computed from the global step. :\n : : : In TF1 this was usually meant to imply a dynamic learning rate and :\n : : : would recompute in each step. In TF2 (eager + function) it will :\n : : : treat it as a scalar value that only gets computed once instead of :\n : : : a symbolic placeholder to be computed each time. :\n | `momentum` | `momentum` | - |\n | `use_locking` | - | Not applicable in TF2. |\n | `use_nesterov` | `nesterov` | - |\n\n #### Before & after usage example\n Before:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.compat.v1.train.MomentumOptimizer(\n learning_rate=0.001,\n momentum=0.9,\n use_nesterov=False)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n After:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.keras.optimizers.SGD(\n learning_rate=0.001,\n momentum=0.9,\n nesterov=False)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n @end_compatibility\n\n ", "desc": "Optimizer that implements the Momentum algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.MonitoredSession", "docs": "Session-like object that handles initialization, recovery and hooks.\n\n Example usage:\n\n ```python\n saver_hook = CheckpointSaverHook(...)\n summary_hook = SummarySaverHook(...)\n with MonitoredSession(session_creator=ChiefSessionCreator(...),\n hooks=[saver_hook, summary_hook]) as sess:\n while not sess.should_stop():\n sess.run(train_op)\n ```\n\n Initialization: At creation time the monitored session does following things\n in given order:\n\n * calls `hook.begin()` for each given hook\n * finalizes the graph via `scaffold.finalize()`\n * create session\n * initializes the model via initialization ops provided by `Scaffold`\n * restores variables if a checkpoint exists\n * launches queue runners\n * calls `hook.after_create_session()`\n\n Run: When `run()` is called, the monitored session does following things:\n\n * calls `hook.before_run()`\n * calls TensorFlow `session.run()` with merged fetches and feed_dict\n * calls `hook.after_run()`\n * returns result of `session.run()` asked by user\n * if `AbortedError` or `UnavailableError` occurs, it recovers or\n reinitializes the session before executing the run() call again\n\n\n Exit: At the `close()`, the monitored session does following things in order:\n\n * calls `hook.end()`\n * closes the queue runners and the session\n * suppresses `OutOfRange` error which indicates that all inputs have been\n processed if the monitored_session is used as a context\n\n How to set `tf.compat.v1.Session` arguments:\n\n * In most cases you can set session arguments as follows:\n\n ```python\n MonitoredSession(\n session_creator=ChiefSessionCreator(master=..., config=...))\n ```\n\n * In distributed setting for a non-chief worker, you can use following:\n\n ```python\n MonitoredSession(\n session_creator=WorkerSessionCreator(master=..., config=...))\n ```\n\n See `MonitoredTrainingSession` for an example usage based on chief or worker.\n\n Note: This is not a `tf.compat.v1.Session`. For example, it cannot do\n following:\n\n * it cannot be set as default session.\n * it cannot be sent to saver.save.\n * it cannot be sent to tf.train.start_queue_runners.\n\n @compatibility(TF2)\n This API is not compatible with eager execution and `tf.function`. To migrate\n to TF2, rewrite the code to be compatible with eager execution. Check the\n [migration\n guide](https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls)\n on replacing `Session.run` calls. In Keras, session hooks can be replaced by\n Callbacks e.g. [logging hook notebook](\n https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb)\n For more details please read [Better\n performance with tf.function](https://www.tensorflow.org/guide/function).\n @end_compatibility\n\n Args:\n session_creator: A factory object to create session. Typically a\n `ChiefSessionCreator` which is the default one.\n hooks: An iterable of `SessionRunHook' objects.\n\n Returns:\n A MonitoredSession object.\n ", "desc": "Session-like object that handles initialization, recovery and hooks.", "type": "API"}, {"name": "tf.compat.v1.train.MonitoredSession.StepContext", "docs": "Control flow instrument for the `step_fn` from `run_step_fn()`.\n\n Users of `step_fn` may perform `run()` calls without running hooks\n by accessing the `session`. A `run()` call with hooks may be performed\n using `run_with_hooks()`. Computation flow can be interrupted using\n `request_stop()`.\n ", "desc": "Control flow instrument for the `step_fn` from `run_step_fn()`.", "type": "API"}, {"name": "tf.compat.v1.train.MonitoredTrainingSession", "docs": "Creates a `MonitoredSession` for training.\n\n For a chief, this utility sets proper session initializer/restorer. It also\n creates hooks related to checkpoint and summary saving. For workers, this\n utility sets proper session creator which waits for the chief to\n initialize/restore. Please check `tf.compat.v1.train.MonitoredSession` for\n more\n information.\n\n @compatibility(TF2)\n This API is not compatible with eager execution and `tf.function`. To migrate\n to TF2, rewrite the code to be compatible with eager execution. Check the\n [migration\n guide](https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls)\n on replacing `Session.run` calls. In Keras, session hooks can be replaced by\n Callbacks e.g. [logging hook notebook](\n https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb)\n For more details please read [Better\n performance with tf.function](https://www.tensorflow.org/guide/function).\n @end_compatibility\n\n Args:\n master: `String` the TensorFlow master to use.\n is_chief: If `True`, it will take care of initialization and recovery the\n underlying TensorFlow session. If `False`, it will wait on a chief to\n initialize or recover the TensorFlow session.\n checkpoint_dir: A string. Optional path to a directory where to restore\n variables.\n scaffold: A `Scaffold` used for gathering or building supportive ops. If not\n specified, a default one is created. It's used to finalize the graph.\n hooks: Optional list of `SessionRunHook` objects.\n chief_only_hooks: list of `SessionRunHook` objects. Activate these hooks if\n `is_chief==True`, ignore otherwise.\n save_checkpoint_secs: The frequency, in seconds, that a checkpoint is saved\n using a default checkpoint saver. If both `save_checkpoint_steps` and\n `save_checkpoint_secs` are set to `None`, then the default checkpoint\n saver isn't used. If both are provided, then only `save_checkpoint_secs`\n is used. Default 600.\n save_summaries_steps: The frequency, in number of global steps, that the\n summaries are written to disk using a default summary saver. If both\n `save_summaries_steps` and `save_summaries_secs` are set to `None`, then\n the default summary saver isn't used. Default 100.\n save_summaries_secs: The frequency, in secs, that the summaries are written\n to disk using a default summary saver. If both `save_summaries_steps` and\n `save_summaries_secs` are set to `None`, then the default summary saver\n isn't used. Default not enabled.\n config: an instance of `tf.compat.v1.ConfigProto` proto used to configure\n the session. It's the `config` argument of constructor of\n `tf.compat.v1.Session`.\n stop_grace_period_secs: Number of seconds given to threads to stop after\n `close()` has been called.\n log_step_count_steps: The frequency, in number of global steps, that the\n global step/sec is logged.\n max_wait_secs: Maximum time workers should wait for the session to become\n available. This should be kept relatively short to help detect incorrect\n code, but sometimes may need to be increased if the chief takes a while to\n start up.\n save_checkpoint_steps: The frequency, in number of global steps, that a\n checkpoint is saved using a default checkpoint saver. If both\n `save_checkpoint_steps` and `save_checkpoint_secs` are set to `None`, then\n the default checkpoint saver isn't used. If both are provided, then only\n `save_checkpoint_secs` is used. Default not enabled.\n summary_dir: A string. Optional path to a directory where to save\n summaries. If None, checkpoint_dir is used instead.\n save_graph_def: Whether to save the GraphDef and MetaGraphDef to\n `checkpoint_dir`. The GraphDef is saved after the session is created as\n `graph.pbtxt`. MetaGraphDefs are saved out for every checkpoint as\n `model.ckpt-*.meta`.\n\n Returns:\n A `MonitoredSession` object.\n ", "desc": "Creates a `MonitoredSession` for training.", "type": "API"}, {"name": "tf.compat.v1.train.NanLossDuringTrainingError", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.NanTensorHook", "docs": "Monitors the loss tensor and stops training if loss is NaN.\n\n Can either fail with exception or just stop training.\n ", "desc": "Monitors the loss tensor and stops training if loss is NaN.", "type": "API"}, {"name": "tf.compat.v1.train.natural_exp_decay", "docs": "Applies natural exponential decay to the initial learning rate.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies an exponential decay function\n to a provided initial learning rate. It requires an `global_step` value to\n compute the decayed learning rate. You can just pass a TensorFlow variable\n that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n\n ```python\n decayed_learning_rate = learning_rate * exp(-decay_rate * global_step /\n decay_step)\n ```\n\n or, if `staircase` is `True`, as:\n\n ```python\n decayed_learning_rate = learning_rate * exp(-decay_rate * floor(global_step /\n decay_step))\n ```\n\n Example: decay exponentially with a base of 0.96:\n\n ```python\n ...\n global_step = tf.Variable(0, trainable=False)\n learning_rate = 0.1\n decay_steps = 5\n k = 0.5\n learning_rate = tf.compat.v1.train.natural_exp_decay(learning_rate,\n global_step,\n decay_steps, k)\n\n # Passing global_step to minimize() will increment it at each step.\n learning_step = (\n tf.compat.v1.train.GradientDescentOptimizer(learning_rate)\n .minimize(...my loss..., global_step=global_step)\n )\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` `Tensor` or a Python number.\n The initial learning rate.\n global_step: A Python number. Global step to use for the decay computation.\n Must not be negative.\n decay_steps: How often to apply decay.\n decay_rate: A Python number. The decay rate.\n staircase: Whether to apply decay in a discrete staircase, as opposed to\n continuous, fashion.\n name: String. Optional name of the operation. Defaults to\n 'ExponentialTimeDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n\n Raises:\n ValueError: if `global_step` is not supplied.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies natural exponential decay to the initial learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.NewCheckpointReader", "docs": "A function that returns a CheckPointReader.\n\n Args:\n filepattern: The filename.\n\n Returns:\n A CheckpointReader object.\n ", "desc": "A function that returns a CheckPointReader.", "type": "API"}, {"name": "tf.compat.v1.train.noisy_linear_cosine_decay", "docs": "Applies noisy linear cosine decay to the learning rate.\n\n Note that linear cosine decay is more aggressive than cosine decay and\n larger initial learning rates can typically be used.\n\n When training a model, it is often recommended to lower the learning rate as\n the training progresses. This function applies a noisy linear\n cosine decay function to a provided initial learning rate.\n It requires a `global_step` value to compute the decayed learning rate.\n You can just pass a TensorFlow variable that you increment at each\n training step.\n\n The function returns the decayed learning rate. It is computed as:\n ```python\n global_step = min(global_step, decay_steps)\n linear_decay = (decay_steps - global_step) / decay_steps)\n cosine_decay = 0.5 * (\n 1 + cos(pi * 2 * num_periods * global_step / decay_steps))\n decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta\n decayed_learning_rate = learning_rate * decayed\n ```\n where eps_t is 0-centered gaussian noise with variance\n initial_variance / (1 + global_step) ** variance_decay\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed = noisy_linear_cosine_decay(\n learning_rate, global_step, decay_steps)\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` Tensor or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation.\n decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Number\n of steps to decay over.\n initial_variance: initial variance for the noise. See computation above.\n variance_decay: decay for the noise's variance. See computation above.\n num_periods: Number of periods in the cosine part of the decay. See\n computation above.\n alpha: See computation above.\n beta: See computation above.\n name: String. Optional name of the operation. Defaults to\n 'NoisyLinearCosineDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n Raises:\n ValueError: if `global_step` is not supplied.\n\n References:\n Neural Optimizer Search with Reinforcement Learning:\n [Bello et al., 2017](http://proceedings.mlr.press/v70/bello17a.html)\n ([pdf](http://proceedings.mlr.press/v70/bello17a/bello17a.pdf))\n Stochastic Gradient Descent with Warm Restarts:\n [Loshchilov et al., 2017]\n (https://openreview.net/forum?id=Skq89Scxx¬eId=Skq89Scxx)\n ([pdf](https://openreview.net/pdf?id=Skq89Scxx))\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies noisy linear cosine decay to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.Optimizer", "docs": "Base class for optimizers.\n\n This class defines the API to add Ops to train a model. You never use this\n class directly, but instead instantiate one of its subclasses such as\n `GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.\n\n ### Usage\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = GradientDescentOptimizer(learning_rate=0.1)\n # Add Ops to the graph to minimize a cost by updating a list of variables.\n # \"cost\" is a Tensor, and the list of variables contains tf.Variable\n # objects.\n opt_op = opt.minimize(cost, var_list=)\n ```\n\n In the training program you will just have to run the returned Op.\n\n ```python\n # Execute opt_op to do one step of training:\n opt_op.run()\n ```\n\n ### Processing gradients before applying them.\n\n Calling `minimize()` takes care of both computing the gradients and\n applying them to the variables. If you want to process the gradients\n before applying them you can instead use the optimizer in three steps:\n\n 1. Compute the gradients with `compute_gradients()`.\n 2. Process the gradients as you wish.\n 3. Apply the processed gradients with `apply_gradients()`.\n\n Example:\n\n ```python\n # Create an optimizer.\n opt = GradientDescentOptimizer(learning_rate=0.1)\n\n # Compute the gradients for a list of variables.\n grads_and_vars = opt.compute_gradients(loss, )\n\n # grads_and_vars is a list of tuples (gradient, variable). Do whatever you\n # need to the 'gradient' part, for example cap them, etc.\n capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]\n\n # Ask the optimizer to apply the capped gradients.\n opt.apply_gradients(capped_grads_and_vars)\n ```\n\n ### Gating Gradients\n\n Both `minimize()` and `compute_gradients()` accept a `gate_gradients`\n argument that controls the degree of parallelism during the application of\n the gradients.\n\n The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.\n\n `GATE_NONE`: Compute and apply gradients in parallel. This provides\n the maximum parallelism in execution, at the cost of some non-reproducibility\n in the results. For example the two gradients of `matmul` depend on the input\n values: With `GATE_NONE` one of the gradients could be applied to one of the\n inputs _before_ the other gradient is computed resulting in non-reproducible\n results.\n\n `GATE_OP`: For each Op, make sure all gradients are computed before\n they are used. This prevents race conditions for Ops that generate gradients\n for multiple inputs where the gradients depend on the inputs.\n\n `GATE_GRAPH`: Make sure all gradients for all variables are computed\n before any one of them is used. This provides the least parallelism but can\n be useful if you want to process all gradients before applying any of them.\n\n ### Slots\n\n Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`\n allocate and manage additional variables associated with the variables to\n train. These are called Slots. Slots have names and you can ask the\n optimizer for the names of the slots that it uses. Once you have a slot name\n you can ask the optimizer for the variable it created to hold the slot value.\n\n This can be useful if you want to log debug a training algorithm, report stats\n about the slots, etc.\n\n @compatibility(TF2)\n `tf.compat.v1.train.Optimizer` can be used in eager mode and `tf.function`,\n but it is not recommended. Please use the subclasses of\n `tf.keras.optimizers.Optimizer` instead in TF2. Please see [Basic training\n loops](https://www.tensorflow.org/guide/basic_training_loops) or\n [Writing a training loop from scratch]\n (https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)\n for examples.\n\n If your TF1 code contains a `tf.compat.v1.train.Optimizer` symbol, whether it\n is used with or without a `tf.estimator.Estimator`, you cannot simply replace\n that with the corresponding `tf.keras.optimizers.Optimizer`s. To migrate to\n TF2, it is advised the whole training program used with `Estimator` to be\n migrated to Keras `Model.fit` based or TF2 custom training loops.\n\n #### Structural Mapping to Native TF2\n\n Before:\n\n ```python\n sgd_op = tf.compat.v1.train.GradientDescentOptimizer(3.0)\n opt_op = sgd_op.minimize(cost, global_step, [var0, var1])\n opt_op.run(session=session)\n ```\n\n After:\n\n ```python\n sgd = tf.keras.optimizers.SGD(3.0)\n sgd.minimize(cost_fn, [var0, var1])\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `use_locking` | Not supported | - |\n | `name` | `name. ` | - |\n\n #### Before & After Usage Example\n\n Before:\n\n >>> g = tf.compat.v1.Graph()\n >>> with g.as_default():\n ... var0 = tf.compat.v1.Variable([1.0, 2.0])\n ... var1 = tf.compat.v1.Variable([3.0, 4.0])\n ... cost = 5 * var0 + 3 * var1\n ... global_step = tf.compat.v1.Variable(\n ... tf.compat.v1.zeros([], tf.compat.v1.int64), name='global_step')\n ... init_op = tf.compat.v1.initialize_all_variables()\n ... sgd_op = tf.compat.v1.train.GradientDescentOptimizer(3.0)\n ... opt_op = sgd_op.minimize(cost, global_step, [var0, var1])\n >>> session = tf.compat.v1.Session(graph=g)\n >>> session.run(init_op)\n >>> opt_op.run(session=session)\n >>> print(session.run(var0))\n [-14. -13.]\n\n\n After:\n >>> var0 = tf.Variable([1.0, 2.0])\n >>> var1 = tf.Variable([3.0, 4.0])\n >>> cost_fn = lambda: 5 * var0 + 3 * var1\n >>> sgd = tf.keras.optimizers.SGD(3.0)\n >>> sgd.minimize(cost_fn, [var0, var1])\n >>> print(var0.numpy())\n [-14. -13.]\n\n @end_compatibility\n\n\n ", "desc": "Base class for optimizers.", "type": "API"}, {"name": "tf.compat.v1.train.piecewise_constant", "docs": "Piecewise constant from boundaries and interval values.\n\n Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5\n for the next 10000 steps, and 0.1 for any additional steps.\n\n ```python\n global_step = tf.Variable(0, trainable=False)\n boundaries = [100000, 110000]\n values = [1.0, 0.5, 0.1]\n learning_rate = tf.compat.v1.train.piecewise_constant(global_step, boundaries,\n values)\n\n # Later, whenever we perform an optimization step, we increment global_step.\n ```\n\n Args:\n x: A 0-D scalar `Tensor`. Must be one of the following types: `float32`,\n `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.\n boundaries: A list of `Tensor`s or `int`s or `float`s with strictly\n increasing entries, and with all elements having the same type as `x`.\n values: A list of `Tensor`s or `float`s or `int`s that specifies the values\n for the intervals defined by `boundaries`. It should have one more element\n than `boundaries`, and all elements should have the same type.\n name: A string. Optional name of the operation. Defaults to\n 'PiecewiseConstant'.\n\n Returns:\n A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`,\n `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`, ...,\n and values[-1] when `x > boundaries[-1]`.\n\n Raises:\n ValueError: if types of `x` and `boundaries` do not match, or types of all\n `values` do not match or\n the number of elements in the lists does not match.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Piecewise constant from boundaries and interval values.", "type": "API"}, {"name": "tf.compat.v1.train.piecewise_constant_decay", "docs": "Piecewise constant from boundaries and interval values.\n\n Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5\n for the next 10000 steps, and 0.1 for any additional steps.\n\n ```python\n global_step = tf.Variable(0, trainable=False)\n boundaries = [100000, 110000]\n values = [1.0, 0.5, 0.1]\n learning_rate = tf.compat.v1.train.piecewise_constant(global_step, boundaries,\n values)\n\n # Later, whenever we perform an optimization step, we increment global_step.\n ```\n\n Args:\n x: A 0-D scalar `Tensor`. Must be one of the following types: `float32`,\n `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.\n boundaries: A list of `Tensor`s or `int`s or `float`s with strictly\n increasing entries, and with all elements having the same type as `x`.\n values: A list of `Tensor`s or `float`s or `int`s that specifies the values\n for the intervals defined by `boundaries`. It should have one more element\n than `boundaries`, and all elements should have the same type.\n name: A string. Optional name of the operation. Defaults to\n 'PiecewiseConstant'.\n\n Returns:\n A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`,\n `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`, ...,\n and values[-1] when `x > boundaries[-1]`.\n\n Raises:\n ValueError: if types of `x` and `boundaries` do not match, or types of all\n `values` do not match or\n the number of elements in the lists does not match.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Piecewise constant from boundaries and interval values.", "type": "API"}, {"name": "tf.compat.v1.train.polynomial_decay", "docs": "Applies a polynomial decay to the learning rate.\n\n It is commonly observed that a monotonically decreasing learning rate, whose\n degree of change is carefully chosen, results in a better performing model.\n This function applies a polynomial decay function to a provided initial\n `learning_rate` to reach an `end_learning_rate` in the given `decay_steps`.\n\n It requires a `global_step` value to compute the decayed learning rate. You\n can just pass a TensorFlow variable that you increment at each training step.\n\n The function returns the decayed learning rate. It is computed as:\n\n ```python\n global_step = min(global_step, decay_steps)\n decayed_learning_rate = (learning_rate - end_learning_rate) *\n (1 - global_step / decay_steps) ^ (power) +\n end_learning_rate\n\n ```\n\n If `cycle` is True then a multiple of `decay_steps` is used, the first one\n that is bigger than `global_steps`.\n\n ```python\n decay_steps = decay_steps * ceil(global_step / decay_steps)\n decayed_learning_rate = (learning_rate - end_learning_rate) *\n (1 - global_step / decay_steps) ^ (power) +\n end_learning_rate\n\n ```\n\n Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5):\n\n ```python\n ...\n global_step = tf.Variable(0, trainable=False)\n starter_learning_rate = 0.1\n end_learning_rate = 0.01\n decay_steps = 10000\n learning_rate = tf.compat.v1.train.polynomial_decay(starter_learning_rate,\n global_step,\n decay_steps, end_learning_rate,\n power=0.5)\n # Passing global_step to minimize() will increment it at each step.\n learning_step = (\n tf.compat.v1.train.GradientDescentOptimizer(learning_rate)\n .minimize(...my loss..., global_step=global_step)\n )\n ```\n\n Args:\n learning_rate: A scalar `float32` or `float64` `Tensor` or a Python number.\n The initial learning rate.\n global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global\n step to use for the decay computation. Must not be negative.\n decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Must\n be positive. See the decay computation above.\n end_learning_rate: A scalar `float32` or `float64` `Tensor` or a Python\n number. The minimal end learning rate.\n power: A scalar `float32` or `float64` `Tensor` or a Python number. The\n power of the polynomial. Defaults to linear, 1.0.\n cycle: A boolean, whether or not it should cycle beyond decay_steps.\n name: String. Optional name of the operation. Defaults to\n 'PolynomialDecay'.\n\n Returns:\n A scalar `Tensor` of the same type as `learning_rate`. The decayed\n learning rate.\n\n Raises:\n ValueError: if `global_step` is not supplied.\n\n @compatibility(eager)\n When eager execution is enabled, this function returns a function which in\n turn returns the decayed learning rate Tensor. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n @end_compatibility\n ", "desc": "Applies a polynomial decay to the learning rate.", "type": "API"}, {"name": "tf.compat.v1.train.ProfilerHook", "docs": "Captures CPU/GPU profiling information every N steps or seconds.\n\n This produces files called \"timeline-.json\", which are in Chrome\n Trace format.\n\n For more information see:\n https://github.com/catapult-project/catapult/blob/master/tracing/README.md\n ", "desc": "Captures CPU/GPU profiling information every N steps or seconds.", "type": "API"}, {"name": "tf.compat.v1.train.ProximalAdagradOptimizer", "docs": "Optimizer that implements the Proximal Adagrad algorithm.\n\n References:\n Adaptive Subgradient Methods for Online Learning and Stochastic Optimization:\n [Duchi et al., 2011](http://jmlr.org/papers/v12/duchi11a.html)\n ([pdf](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf))\n Efficient Learning using Forward-Backward Splitting:\n [Duchi et al., 2009](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting)\n ([pdf](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf))\n ", "desc": "Optimizer that implements the Proximal Adagrad algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.ProximalGradientDescentOptimizer", "docs": "Optimizer that implements the proximal gradient descent algorithm.\n\n References:\n Efficient Learning using Forward-Backward Splitting:\n [Duchi et al., 2009](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting)\n ([pdf](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf))\n ", "desc": "Optimizer that implements the proximal gradient descent algorithm.", "type": "API"}, {"name": "tf.compat.v1.train.queue_runner", "docs": "Public API for tf.train.queue_runner namespace.\n", "desc": "Public API for tf.train.queue_runner namespace.", "type": "API"}, {"name": "tf.compat.v1.train.queue_runner.add_queue_runner", "docs": "Adds a `QueueRunner` to a collection in the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\n\nWhen building a complex model that uses many queues it is often difficult to\ngather all the queue runners that need to be run. This convenience function\nallows you to add a queue runner to a well known collection in the graph.\n\nThe companion method `start_queue_runners()` can be used to start threads for\nall the collected queue runners.\n\n@compatibility(TF2)\nQueueRunners are not compatible with eager execution. Instead, please\nuse [tf.data](https://www.tensorflow.org/guide/data) to get data into your\nmodel.\n@end_compatibility\n\nArgs:\n qr: A `QueueRunner`.\n collection: A `GraphKey` specifying the graph collection to add\n the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.", "desc": "Adds a `QueueRunner` to a collection in the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.queue_runner.QueueRunner", "docs": "Holds a list of enqueue operations for a queue, each to be run in a thread.\n\n Queues are a convenient TensorFlow mechanism to compute tensors\n asynchronously using multiple threads. For example in the canonical 'Input\n Reader' setup one set of threads generates filenames in a queue; a second set\n of threads read records from the files, processes them, and enqueues tensors\n on a second queue; a third set of threads dequeues these input records to\n construct batches and runs them through training operations.\n\n There are several delicate issues when running multiple threads that way:\n closing the queues in sequence as the input is exhausted, correctly catching\n and reporting exceptions, etc.\n\n The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.\n\n @compatibility(TF2)\n QueueRunners are not compatible with eager execution. Instead, please\n use [tf.data](https://www.tensorflow.org/guide/data) to get data into your\n model.\n @end_compatibility\n ", "desc": "Holds a list of enqueue operations for a queue, each to be run in a thread.", "type": "API"}, {"name": "tf.compat.v1.train.queue_runner.start_queue_runners", "docs": "Starts all queue runners collected in the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\n\nThis is a companion method to `add_queue_runner()`. It just starts\nthreads for all queue runners collected in the graph. It returns\nthe list of all threads.\n\n@compatibility(TF2)\nQueueRunners are not compatible with eager execution. Instead, please\nuse [tf.data](https://www.tensorflow.org/guide/data) to get data into your\nmodel.\n@end_compatibility\n\nArgs:\n sess: `Session` used to run the queue ops. Defaults to the\n default session.\n coord: Optional `Coordinator` for coordinating the started threads.\n daemon: Whether the threads should be marked as `daemons`, meaning\n they don't block program exit.\n start: Set to `False` to only create the threads, not start them.\n collection: A `GraphKey` specifying the graph collection to\n get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.\n\nRaises:\n ValueError: if `sess` is None and there isn't any default session.\n TypeError: if `sess` is not a `tf.compat.v1.Session` object.\n\nReturns:\n A list of threads.\n\nRaises:\n RuntimeError: If called with eager execution enabled.\n ValueError: If called without a default `tf.compat.v1.Session` registered.", "desc": "Starts all queue runners collected in the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.QueueRunner", "docs": "Holds a list of enqueue operations for a queue, each to be run in a thread.\n\n Queues are a convenient TensorFlow mechanism to compute tensors\n asynchronously using multiple threads. For example in the canonical 'Input\n Reader' setup one set of threads generates filenames in a queue; a second set\n of threads read records from the files, processes them, and enqueues tensors\n on a second queue; a third set of threads dequeues these input records to\n construct batches and runs them through training operations.\n\n There are several delicate issues when running multiple threads that way:\n closing the queues in sequence as the input is exhausted, correctly catching\n and reporting exceptions, etc.\n\n The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.\n\n @compatibility(TF2)\n QueueRunners are not compatible with eager execution. Instead, please\n use [tf.data](https://www.tensorflow.org/guide/data) to get data into your\n model.\n @end_compatibility\n ", "desc": "Holds a list of enqueue operations for a queue, each to be run in a thread.", "type": "API"}, {"name": "tf.compat.v1.train.range_input_producer", "docs": "Produces the integers from 0 to limit-1 in a queue. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n\nNote: if `num_epochs` is not `None`, this function creates local counter\n`epochs`. Use `local_variables_initializer()` to initialize local variables.\n\nArgs:\n limit: An int32 scalar tensor.\n num_epochs: An integer (optional). If specified, `range_input_producer`\n produces each integer `num_epochs` times before generating an\n OutOfRange error. If not specified, `range_input_producer` can cycle\n through the integers an unlimited number of times.\n shuffle: Boolean. If true, the integers are randomly shuffled within each\n epoch.\n seed: An integer (optional). Seed used if shuffle == True.\n capacity: An integer. Sets the queue capacity.\n shared_name: (optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: A name for the operations (optional).\n\nReturns:\n A Queue with the output integers. A `QueueRunner` for the Queue\n is added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Produces the integers from 0 to limit-1 in a queue. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.remove_checkpoint", "docs": "Removes a checkpoint given by `checkpoint_prefix`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse standard file APIs to delete files with this prefix.\n\nArgs:\n checkpoint_prefix: The prefix of a V1 or V2 checkpoint. Typically the result\n of `Saver.save()` or that of `tf.train.latest_checkpoint()`, regardless of\n sharded/non-sharded or V1/V2.\n checkpoint_format_version: `SaverDef.CheckpointFormatVersion`, defaults to\n `SaverDef.V2`.\n meta_graph_suffix: Suffix for `MetaGraphDef` file. Defaults to 'meta'.", "desc": "Removes a checkpoint given by `checkpoint_prefix`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.replica_device_setter", "docs": "Return a `device function` to use when building a Graph for replicas.\n\n Device Functions are used in `with tf.device(device_function):` statement to\n automatically assign devices to `Operation` objects as they are constructed,\n Device constraints are added from the inner-most context first, working\n outwards. The merging behavior adds constraints to fields that are yet unset\n by a more inner context. Currently the fields are (job, task, cpu/gpu).\n\n If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op.\n Otherwise, the value of `ps_tasks` is derived from `cluster`.\n\n By default, only Variable ops are placed on ps tasks, and the placement\n strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used\n to do more intelligent placement, such as\n `tf.contrib.training.GreedyLoadBalancingStrategy`.\n\n For example,\n\n ```python\n # To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker\n # jobs on hosts worker0, worker1 and worker2.\n cluster_spec = {\n \"ps\": [\"ps0:2222\", \"ps1:2222\"],\n \"worker\": [\"worker0:2222\", \"worker1:2222\", \"worker2:2222\"]}\n with\n tf.compat.v1.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)):\n # Build your graph\n v1 = tf.Variable(...) # assigned to /job:ps/task:0\n v2 = tf.Variable(...) # assigned to /job:ps/task:1\n v3 = tf.Variable(...) # assigned to /job:ps/task:0\n # Run compute\n ```\n\n Args:\n ps_tasks: Number of tasks in the `ps` job. Ignored if `cluster` is\n provided.\n ps_device: String. Device of the `ps` job. If empty no `ps` job is used.\n Defaults to `ps`.\n worker_device: String. Device of the `worker` job. If empty no `worker`\n job is used.\n merge_devices: `Boolean`. If `True`, merges or only sets a device if the\n device constraint is completely unset. merges device specification rather\n than overriding them.\n cluster: `ClusterDef` proto or `ClusterSpec`.\n ps_ops: List of strings representing `Operation` types that need to be\n placed on `ps` devices. If `None`, defaults to `STANDARD_PS_OPS`.\n ps_strategy: A callable invoked for every ps `Operation` (i.e. matched by\n `ps_ops`), that takes the `Operation` and returns the ps task index to\n use. If `None`, defaults to a round-robin strategy across all `ps`\n devices.\n\n Returns:\n A function to pass to `tf.device()`.\n\n Raises:\n TypeError if `cluster` is not a dictionary or `ClusterDef` protocol buffer,\n or if `ps_strategy` is provided but not a callable.\n ", "desc": "Return a `device function` to use when building a Graph for replicas.", "type": "API"}, {"name": "tf.compat.v1.train.RMSPropOptimizer", "docs": "Optimizer that implements the RMSProp algorithm (Tielemans et al.\n\n 2012).\n\n References:\n Coursera slide 29:\n Hinton, 2012\n ([pdf](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf))\n\n @compatibility(TF2)\n tf.compat.v1.train.RMSPropOptimizer is compatible with eager mode and\n `tf.function`.\n When eager execution is enabled, `learning_rate`, `decay`, `momentum`,\n and `epsilon` can each be a callable that\n takes no arguments and returns the actual value to use. This can be useful\n for changing these values across different invocations of optimizer\n functions.\n\n To switch to native TF2 style, use [`tf.keras.optimizers.RMSprop`]\n (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/RMSprop)\n instead. Please notice that due to the implementation differences,\n `tf.keras.optimizers.RMSprop` and\n `tf.compat.v1.train.RMSPropOptimizer` may have slight differences in\n floating point numerics even though the formula used for the variable\n updates still matches.\n\n #### Structural mapping to native TF2\n\n Before:\n\n ```python\n optimizer = tf.compat.v1.train.RMSPropOptimizer(\n learning_rate=learning_rate,\n decay=decay,\n momentum=momentum,\n epsilon=epsilon)\n ```\n\n After:\n\n ```python\n optimizer = tf.keras.optimizers.RMSprop(\n learning_rate=learning_rate,\n rho=decay,\n momentum=momentum,\n epsilon=epsilon)\n ```\n\n #### How to map arguments\n | TF1 Arg Name | TF2 Arg Name | Note |\n | ------------------ | ------------- | ------------------------------- |\n | `learning_rate` | `learning_rate`| Be careful of setting |\n : : : learning_rate tensor value computed from the global step. :\n : : : In TF1 this was usually meant to imply a dynamic learning rate and :\n : : : would recompute in each step. In TF2 (eager + function) it will :\n : : : treat it as a scalar value that only gets computed once instead of :\n : : : a symbolic placeholder to be computed each time. :\n | `decay` | `rho` | - |\n | `momentum` | `momentum` | - |\n | `epsilon` | `epsilon` | Default value is 1e-10 in TF1, |\n : : : but 1e-07 in TF2. :\n | `use_locking` | - | Not applicable in TF2. |\n\n #### Before & after usage example\n Before:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.compat.v1.train.RMSPropOptimizer(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n After:\n\n ```python\n x = tf.Variable([1,2,3], dtype=tf.float32)\n grad = tf.constant([0.1, 0.2, 0.3])\n optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)\n optimizer.apply_gradients(zip([grad], [x]))\n ```\n\n @end_compatibility\n ", "desc": "Optimizer that implements the RMSProp algorithm (Tielemans et al.", "type": "API"}, {"name": "tf.compat.v1.train.Saver", "docs": "Saves and restores variables.\n\n @compatibility(TF2)\n `tf.compat.v1.train.Saver` is not supported for saving and restoring\n checkpoints in TF2. Please switch to `tf.train.Checkpoint` or\n `tf.keras.Model.save_weights`, which perform a more robust [object-based\n saving](https://www.tensorflow.org/guide/checkpoint#loading_mechanics).\n\n ### How to Rewrite Checkpoints\n\n Please rewrite your checkpoints immediately using the object-based checkpoint\n APIs.\n\n You can load a name-based checkpoint written by `tf.compat.v1.train.Saver`\n using `tf.train.Checkpoint.restore` or `tf.keras.Model.load_weights`. However,\n you may have to change the names of the variables in your model to match the\n variable names in the name-based checkpoint, which can be viewed with\n `tf.train.list_variables(path)`.\n\n Another option is to create an `assignment_map` that maps the name of the\n variables in the name-based checkpoint to the variables in your model, eg:\n ```\n {\n 'sequential/dense/bias': model.variables[0],\n 'sequential/dense/kernel': model.variables[1]\n }\n ```\n and use `tf.compat.v1.train.init_from_checkpoint(path, assignment_map)` to\n restore the name-based checkpoint.\n\n After restoring, re-encode your checkpoint\n using `tf.train.Checkpoint.save` or `tf.keras.Model.save_weights`.\n\n See the [Checkpoint compatibility](\n https://www.tensorflow.org/guide/migrate#checkpoint_compatibility)\n section of the migration guide for more details.\n\n\n ### Checkpoint Management in TF2\n\n Use `tf.train.CheckpointManager` to manage checkpoints in TF2.\n `tf.train.CheckpointManager` offers equivalent `keep_checkpoint_every_n_hours`\n and `max_to_keep` parameters.\n\n To recover the latest checkpoint,\n\n ```\n checkpoint = tf.train.Checkpoint(model)\n manager = tf.train.CheckpointManager(checkpoint)\n status = checkpoint.restore(manager.latest_checkpoint)\n ```\n\n `tf.train.CheckpointManager` also writes a [`CheckpointState` proto]\n (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/checkpoint_state.proto)\n which contains the timestamp when each checkpoint was created.\n\n ### Writing `MetaGraphDef`s in TF2\n\n To replace, `tf.compat.v1.train.Saver.save(write_meta_graph=True)`, use\n `tf.saved_model.save` to write the `MetaGraphDef` (which is contained in\n `saved_model.pb`).\n\n @end_compatibility\n\n See [Variables](https://tensorflow.org/guide/variables)\n for an overview of variables, saving and restoring.\n\n The `Saver` class adds ops to save and restore variables to and from\n *checkpoints*. It also provides convenience methods to run these ops.\n\n Checkpoints are binary files in a proprietary format which map variable names\n to tensor values. The best way to examine the contents of a checkpoint is to\n load it using a `Saver`.\n\n Savers can automatically number checkpoint filenames with a provided counter.\n This lets you keep multiple checkpoints at different steps while training a\n model. For example you can number the checkpoint filenames with the training\n step number. To avoid filling up disks, savers manage checkpoint files\n automatically. For example, they can keep only the N most recent files, or\n one checkpoint for every N hours of training.\n\n You number checkpoint filenames by passing a value to the optional\n `global_step` argument to `save()`:\n\n ```python\n saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'\n ...\n saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'\n ```\n\n Additionally, optional arguments to the `Saver()` constructor let you control\n the proliferation of checkpoint files on disk:\n\n * `max_to_keep` indicates the maximum number of recent checkpoint files to\n keep. As new files are created, older files are deleted. If None or 0,\n no checkpoints are deleted from the filesystem but only the last one is\n kept in the `checkpoint` file. Defaults to 5 (that is, the 5 most recent\n checkpoint files are kept.)\n\n * `keep_checkpoint_every_n_hours`: In addition to keeping the most recent\n `max_to_keep` checkpoint files, you might want to keep one checkpoint file\n for every N hours of training. This can be useful if you want to later\n analyze how a model progressed during a long training session. For\n example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep\n one checkpoint file for every 2 hours of training. The default value of\n 10,000 hours effectively disables the feature.\n\n Note that you still have to call the `save()` method to save the model.\n Passing these arguments to the constructor will not save variables\n automatically for you.\n\n A training program that saves regularly looks like:\n\n ```python\n ...\n # Create a saver.\n saver = tf.compat.v1.train.Saver(...variables...)\n # Launch the graph and train, saving the model every 1,000 steps.\n sess = tf.compat.v1.Session()\n for step in range(1000000):\n sess.run(..training_op..)\n if step % 1000 == 0:\n # Append the step number to the checkpoint name:\n saver.save(sess, 'my-model', global_step=step)\n ```\n\n In addition to checkpoint files, savers keep a protocol buffer on disk with\n the list of recent checkpoints. This is used to manage numbered checkpoint\n files and by `latest_checkpoint()`, which makes it easy to discover the path\n to the most recent checkpoint. That protocol buffer is stored in a file named\n 'checkpoint' next to the checkpoint files.\n\n If you create several savers, you can specify a different filename for the\n protocol buffer file in the call to `save()`.\n ", "desc": "Saves and restores variables.", "type": "API"}, {"name": "tf.compat.v1.train.SaverDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.Scaffold", "docs": "Structure to create or gather pieces commonly needed to train a model.\n\n When you build a model for training you usually need ops to initialize\n variables, a `Saver` to checkpoint them, an op to collect summaries for\n the visualizer, and so on.\n\n Various libraries built on top of the core TensorFlow library take care of\n creating some or all of these pieces and storing them in well known\n collections in the graph. The `Scaffold` class helps pick these pieces from\n the graph collections, creating and adding them to the collections if needed.\n\n If you call the scaffold constructor without any arguments, it will pick\n pieces from the collections, creating default ones if needed when\n `scaffold.finalize()` is called. You can pass arguments to the constructor to\n provide your own pieces. Pieces that you pass to the constructor are not\n added to the graph collections.\n\n The following pieces are directly accessible as attributes of the `Scaffold`\n object:\n\n * `saver`: A `tf.compat.v1.train.Saver` object taking care of saving the\n variables.\n Picked from and stored into the `SAVERS` collection in the graph by default.\n * `init_op`: An op to run to initialize the variables. Picked from and\n stored into the `INIT_OP` collection in the graph by default.\n * `ready_op`: An op to verify that the variables are initialized. Picked\n from and stored into the `READY_OP` collection in the graph by default.\n * `ready_for_local_init_op`: An op to verify that global state has been\n initialized and it is alright to run `local_init_op`. Picked from and\n stored into the `READY_FOR_LOCAL_INIT_OP` collection in the graph by\n default. This is needed when the initialization of local variables depends\n on the values of global variables.\n * `local_init_op`: An op to initialize the local variables. Picked\n from and stored into the `LOCAL_INIT_OP` collection in the graph by default.\n * `summary_op`: An op to run and merge the summaries in the graph. Picked\n from and stored into the `SUMMARY_OP` collection in the graph by default.\n\n You can also pass the following additional pieces to the constructor:\n\n * `init_feed_dict`: A session feed dictionary that should be used when\n running the init op.\n * `init_fn`: A callable to run after the init op to perform additional\n initializations. The callable will be called as\n `init_fn(scaffold, session)`.\n\n ", "desc": "Structure to create or gather pieces commonly needed to train a model.", "type": "API"}, {"name": "tf.compat.v1.train.sdca_fprint", "docs": "Computes fingerprints of the input strings.\n\n Args:\n input: A `Tensor` of type `string`.\n vector of strings to compute fingerprints on.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Computes fingerprints of the input strings.", "type": "API"}, {"name": "tf.compat.v1.train.sdca_optimizer", "docs": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for\n\n linear models with L1 + L2 regularization. As global optimization objective is\n strongly-convex, the optimizer optimizes the dual objective at each step. The\n optimizer applies each update one example at a time. Examples are sampled\n uniformly, and the optimizer is learning rate free and enjoys linear convergence\n rate.\n\n [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
\n Shai Shalev-Shwartz, Tong Zhang. 2012\n\n $$Loss Objective = \\sum f_{i} (wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$\n\n [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
\n Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan,\n Peter Richtarik, Martin Takac. 2015\n\n [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
\n Dominik Csiba, Zheng Qu, Peter Richtarik. 2015\n\n Args:\n sparse_example_indices: A list of `Tensor` objects with type `int64`.\n a list of vectors which contain example indices.\n sparse_feature_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors which contain feature indices.\n sparse_feature_values: A list of `Tensor` objects with type `float32`.\n a list of vectors which contains feature value\n associated with each feature group.\n dense_features: A list of `Tensor` objects with type `float32`.\n a list of matrices which contains the dense feature values.\n example_weights: A `Tensor` of type `float32`.\n a vector which contains the weight associated with each\n example.\n example_labels: A `Tensor` of type `float32`.\n a vector which contains the label/target associated with each\n example.\n sparse_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors where each value is the indices which has\n corresponding weights in sparse_weights. This field maybe omitted for the\n dense approach.\n sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n a list of vectors where each value is the weight associated with\n a sparse feature group.\n dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n a list of vectors where the values are the weights associated\n with a dense feature group.\n example_state_data: A `Tensor` of type `float32`.\n a list of vectors containing the example state data.\n loss_type: A `string` from: `\"logistic_loss\", \"squared_loss\", \"hinge_loss\", \"smooth_hinge_loss\", \"poisson_loss\"`.\n Type of the primal loss. Currently SdcaSolver supports logistic,\n squared and hinge losses.\n l1: A `float`. Symmetric l1 regularization strength.\n l2: A `float`. Symmetric l2 regularization strength.\n num_loss_partitions: An `int` that is `>= 1`.\n Number of partitions of the global loss function.\n num_inner_iterations: An `int` that is `>= 1`.\n Number of iterations per mini-batch.\n adaptative: An optional `bool`. Defaults to `True`.\n Whether to use Adaptive SDCA for the inner loop.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).\n\n out_example_state_data: A `Tensor` of type `float32`.\n out_delta_sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n out_delta_dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n ", "desc": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for", "type": "API"}, {"name": "tf.compat.v1.train.sdca_shrink_l1", "docs": "Applies L1 regularization shrink step on the parameters.\n\n Args:\n weights: A list of `Tensor` objects with type mutable `float32`.\n a list of vectors where each value is the weight associated with a\n feature group.\n l1: A `float`. Symmetric l1 regularization strength.\n l2: A `float`.\n Symmetric l2 regularization strength. Should be a positive float.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies L1 regularization shrink step on the parameters.", "type": "API"}, {"name": "tf.compat.v1.train.SecondOrStepTimer", "docs": "Timer that triggers at most once every N seconds or once every N steps.\n\n This symbol is also exported to v2 in tf.estimator namespace. See\n https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/hooks/basic_session_run_hooks.py\n ", "desc": "Timer that triggers at most once every N seconds or once every N steps.", "type": "API"}, {"name": "tf.compat.v1.train.SequenceExample", "docs": "A `SequenceExample` is a format a sequences and some context.\n\nIt can be thought of as a proto-implementation of the following python type:\n\n```\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: Dict[str, List[Feature]]\n```\n\nTo implement this as protos it's broken up into sub-messages as follows:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\n# tf.train.SequenceExample\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nTo parse a `SequenceExample` in TensorFlow refer to the\n`tf.io.parse_sequence_example` function.\n\nThe `context` contains features which apply to the entire\nexample. The `feature_lists` contain a key, value map where each key is\nassociated with a repeated set of `tf.train.Features` (a `tf.train.FeatureList`).\nA `FeatureList` represents the values of a feature identified by its key\nover time / frames.\n\nBelow is a `SequenceExample` for a movie recommendation application recording a\nsequence of ratings by a user. The time-independent features (\"locale\",\n\"age\", \"favorites\") describing the user are part of the context. The sequence\nof movies the user rated are part of the feature_lists. For each movie in the\nsequence we have information on its name and actors and the user's rating.\nThis information is recorded in three separate `feature_list`s.\nIn the example below there are only two movies. All three `feature_list`s,\nnamely \"movie_ratings\", \"movie_names\", and \"actors\" have a feature value for\nboth movies. Note, that \"actors\" is itself a `bytes_list` with multiple\nstrings per movie.\n\n```\n context: {\n feature: {\n key : \"locale\"\n value: {\n bytes_list: {\n value: [ \"pt_BR\" ]\n }\n }\n }\n feature: {\n key : \"age\"\n value: {\n float_list: {\n value: [ 19.0 ]\n }\n }\n }\n feature: {\n key : \"favorites\"\n value: {\n bytes_list: {\n value: [ \"Majesty Rose\", \"Savannah Outen\", \"One Direction\" ]\n }\n }\n }\n }\n feature_lists: {\n feature_list: {\n key : \"movie_ratings\"\n value: {\n feature: {\n float_list: {\n value: [ 4.5 ]\n }\n }\n feature: {\n float_list: {\n value: [ 5.0 ]\n }\n }\n }\n }\n feature_list: {\n key : \"movie_names\"\n value: {\n feature: {\n bytes_list: {\n value: [ \"The Shawshank Redemption\" ]\n }\n }\n feature: {\n bytes_list: {\n value: [ \"Fight Club\" ]\n }\n }\n }\n }\n feature_list: {\n key : \"actors\"\n value: {\n feature: {\n bytes_list: {\n value: [ \"Tim Robbins\", \"Morgan Freeman\" ]\n }\n }\n feature: {\n bytes_list: {\n value: [ \"Brad Pitt\", \"Edward Norton\", \"Helena Bonham Carter\" ]\n }\n }\n }\n }\n }\n```\n\nA conformant `SequenceExample` data set obeys the following conventions:\n\n`context`:\n\n - All conformant context features `K` must obey the same conventions as\n a conformant Example's features (see above).\n\n`feature_lists`:\n\n - A `FeatureList L` may be missing in an example; it is up to the\n parser configuration to determine if this is allowed or considered\n an empty list (zero length).\n - If a `FeatureList L` exists, it may be empty (zero length).\n - If a `FeatureList L` is non-empty, all features within the `FeatureList`\n must have the same data type `T`. Even across `SequenceExample`s, the type `T`\n of the `FeatureList` identified by the same key must be the same. An entry\n without any values may serve as an empty feature.\n - If a `FeatureList L` is non-empty, it is up to the parser configuration\n to determine if all features within the `FeatureList` must\n have the same size. The same holds for this `FeatureList` across multiple\n examples.\n - For sequence modeling ([example](https://github.com/tensorflow/nmt)), the\n feature lists represent a sequence of frames. In this scenario, all\n `FeatureList`s in a `SequenceExample` have the same number of `Feature`\n messages, so that the i-th element in each `FeatureList` is part of the\n i-th frame (or time step).\n\n**Examples of conformant and non-conformant examples' `FeatureLists`:**\n\nConformant `FeatureLists`:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n```\n\nNon-conformant `FeatureLists` (mismatched types):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { int64_list: { value: [ 5 ] } } }\n } }\n```\n\nConditionally conformant `FeatureLists`, the parser configuration determines\nif the feature sizes must match:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0, 6.0 ] } } }\n } }\n```\n\n**Examples of conformant and non-conformant `SequenceExample`s:**\n\nConformant pair of SequenceExample:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } }\n feature: { float_list: { value: [ 2.0 ] } } }\n } }\n```\n\nConformant pair of `SequenceExample`s:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { }\n } }\n```\n\nConditionally conformant pair of `SequenceExample`s, the parser configuration\ndetermines if the second `feature_lists` is consistent (zero-length) or\ninvalid (missing \"movie_ratings\"):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { }\n```\n\nNon-conformant pair of `SequenceExample`s (mismatched types):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { int64_list: { value: [ 4 ] } }\n feature: { int64_list: { value: [ 5 ] } }\n feature: { int64_list: { value: [ 2 ] } } }\n } }\n```\n\nConditionally conformant pair of `SequenceExample`s; the parser configuration\ndetermines if the feature sizes must match:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.0 ] } }\n feature: { float_list: { value: [ 5.0, 3.0 ] } }\n } }\n```\n", "desc": "A `SequenceExample` is a format a sequences and some context.", "type": "API"}, {"name": "tf.compat.v1.train.Server", "docs": "An in-process TensorFlow server, for use in distributed training.\n\n A `tf.distribute.Server` instance encapsulates a set of devices and a\n `tf.compat.v1.Session` target that\n can participate in distributed training. A server belongs to a\n cluster (specified by a `tf.train.ClusterSpec`), and\n corresponds to a particular task in a named job. The server can\n communicate with any other server in the same cluster.\n ", "desc": "An in-process TensorFlow server, for use in distributed training.", "type": "API"}, {"name": "tf.compat.v1.train.ServerDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.compat.v1.train.SessionCreator", "docs": "A factory for tf.Session.", "desc": "A factory for tf.Session.", "type": "API"}, {"name": "tf.compat.v1.train.SessionManager", "docs": "Training helper that restores from checkpoint and creates session.\n\n This class is a small wrapper that takes care of session creation and\n checkpoint recovery. It also provides functions that to facilitate\n coordination among multiple training threads or processes.\n\n * Checkpointing trained variables as the training progresses.\n * Initializing variables on startup, restoring them from the most recent\n checkpoint after a crash, or wait for checkpoints to become available.\n\n ### Usage:\n\n ```python\n with tf.Graph().as_default():\n ...add operations to the graph...\n # Create a SessionManager that will checkpoint the model in '/tmp/mydir'.\n sm = SessionManager()\n sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)\n # Use the session to train the graph.\n while True:\n sess.run()\n ```\n\n `prepare_session()` initializes or restores a model. It requires `init_op`\n and `saver` as an argument.\n\n A second process could wait for the model to be ready by doing the following:\n\n ```python\n with tf.Graph().as_default():\n ...add operations to the graph...\n # Create a SessionManager that will wait for the model to become ready.\n sm = SessionManager()\n sess = sm.wait_for_session(master)\n # Use the session to train the graph.\n while True:\n sess.run()\n ```\n\n `wait_for_session()` waits for a model to be initialized by other processes.\n\n ", "desc": "Training helper that restores from checkpoint and creates session.", "type": "API"}, {"name": "tf.compat.v1.train.SessionRunArgs", "docs": "Represents arguments to be added to a `Session.run()` call.\n\n Args:\n fetches: Exactly like the 'fetches' argument to Session.Run().\n Can be a single tensor or op, a list of 'fetches' or a dictionary\n of fetches. For example:\n fetches = global_step_tensor\n fetches = [train_op, summary_op, global_step_tensor]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n Note that this can recurse as expected:\n fetches = {'step': global_step_tensor,\n 'ops': [train_op, check_nan_op]}\n feed_dict: Exactly like the `feed_dict` argument to `Session.Run()`\n options: Exactly like the `options` argument to `Session.run()`, i.e., a\n config_pb2.RunOptions proto.\n ", "desc": "Represents arguments to be added to a `Session.run()` call.", "type": "API"}, {"name": "tf.compat.v1.train.SessionRunContext", "docs": "Provides information about the `session.run()` call being made.\n\n Provides information about original request to `Session.Run()` function.\n SessionRunHook objects can stop the loop by calling `request_stop()` of\n `run_context`. In the future we may use this object to add more information\n about run without changing the Hook API.\n ", "desc": "Provides information about the `session.run()` call being made.", "type": "API"}, {"name": "tf.compat.v1.train.SessionRunHook", "docs": "Hook to extend calls to MonitoredSession.run().", "desc": "Hook to extend calls to MonitoredSession.run().", "type": "API"}, {"name": "tf.compat.v1.train.SessionRunValues", "docs": "Contains the results of `Session.run()`.\n\n In the future we may use this object to add more information about result of\n run without changing the Hook API.\n\n Args:\n results: The return values from `Session.run()` corresponding to the fetches\n attribute returned in the RunArgs. Note that this has the same shape as\n the RunArgs fetches. For example:\n fetches = global_step_tensor\n => results = nparray(int)\n fetches = [train_op, summary_op, global_step_tensor]\n => results = [None, nparray(string), nparray(int)]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n => results = {'step': nparray(int), 'summ': nparray(string)}\n options: `RunOptions` from the `Session.run()` call.\n run_metadata: `RunMetadata` from the `Session.run()` call.\n ", "desc": "Contains the results of `Session.run()`.", "type": "API"}, {"name": "tf.compat.v1.train.shuffle_batch", "docs": "Creates batches by randomly shuffling tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size)`.\n\nThis function adds the following to the current `Graph`:\n\n* A shuffling queue into which tensors from `tensors` are enqueued.\n* A `dequeue_many` operation to create batches from the queue.\n* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors\n from `tensors`.\n\nIf `enqueue_many` is `False`, `tensors` is assumed to represent a\nsingle example. An input tensor with shape `[x, y, z]` will be output\nas a tensor with shape `[batch_size, x, y, z]`.\n\nIf `enqueue_many` is `True`, `tensors` is assumed to represent a\nbatch of examples, where the first dimension is indexed by example,\nand all members of `tensors` should have the same size in the\nfirst dimension. If an input tensor has shape `[*, x, y, z]`, the\noutput will have shape `[batch_size, x, y, z]`.\n\nThe `capacity` argument controls the how long the prefetching is allowed to\ngrow the queues.\n\nThe returned operation is a dequeue operation and will throw\n`tf.errors.OutOfRangeError` if the input queue is exhausted. If this\noperation is feeding another input queue, its queue runner will catch\nthis exception, however, if this operation is used in your main thread\nyou are responsible for catching this yourself.\n\nFor example:\n\n```python\n# Creates batches of 32 images and 32 labels.\nimage_batch, label_batch = tf.compat.v1.train.shuffle_batch(\n [single_image, single_label],\n batch_size=32,\n num_threads=4,\n capacity=50000,\n min_after_dequeue=10000)\n```\n\n*N.B.:* You must ensure that either (i) the `shapes` argument is\npassed, or (ii) all of the tensors in `tensors` must have\nfully-defined shapes. `ValueError` will be raised if neither of\nthese conditions holds.\n\nIf `allow_smaller_final_batch` is `True`, a smaller batch value than\n`batch_size` is returned when the queue is closed and there are not enough\nelements to fill the batch, otherwise the pending elements are discarded.\nIn addition, all output tensors' static shapes, as accessed via the\n`shape` property will have a first `Dimension` value of `None`, and\noperations that depend on fixed batch_size would fail.\n\nArgs:\n tensors: The list or dictionary of tensors to enqueue.\n batch_size: The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n min_after_dequeue: Minimum number elements in the queue after a\n dequeue, used to ensure a level of mixing of elements.\n num_threads: The number of threads enqueuing `tensor_list`.\n seed: Seed for the random shuffling within the queue.\n enqueue_many: Whether each tensor in `tensor_list` is a single example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensor_list`.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (Optional) If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the types as `tensors`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Creates batches by randomly shuffling tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.shuffle_batch_join", "docs": "Create batches by randomly shuffling tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size)`.\n\nThe `tensors_list` argument is a list of tuples of tensors, or a list of\ndictionaries of tensors. Each element in the list is treated similarly\nto the `tensors` argument of `tf.compat.v1.train.shuffle_batch()`.\n\nThis version enqueues a different list of tensors in different threads.\nIt adds the following to the current `Graph`:\n\n* A shuffling queue into which tensors from `tensors_list` are enqueued.\n* A `dequeue_many` operation to create batches from the queue.\n* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors\n from `tensors_list`.\n\n`len(tensors_list)` threads will be started, with thread `i` enqueuing\nthe tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match\n`tensors_list[i2][j]` in type and shape, except in the first dimension if\n`enqueue_many` is true.\n\nIf `enqueue_many` is `False`, each `tensors_list[i]` is assumed\nto represent a single example. An input tensor with shape `[x, y, z]`\nwill be output as a tensor with shape `[batch_size, x, y, z]`.\n\nIf `enqueue_many` is `True`, `tensors_list[i]` is assumed to\nrepresent a batch of examples, where the first dimension is indexed\nby example, and all members of `tensors_list[i]` should have the\nsame size in the first dimension. If an input tensor has shape `[*, x,\ny, z]`, the output will have shape `[batch_size, x, y, z]`.\n\nThe `capacity` argument controls the how long the prefetching is allowed to\ngrow the queues.\n\nThe returned operation is a dequeue operation and will throw\n`tf.errors.OutOfRangeError` if the input queue is exhausted. If this\noperation is feeding another input queue, its queue runner will catch\nthis exception, however, if this operation is used in your main thread\nyou are responsible for catching this yourself.\n\nIf `allow_smaller_final_batch` is `True`, a smaller batch value than\n`batch_size` is returned when the queue is closed and there are not enough\nelements to fill the batch, otherwise the pending elements are discarded.\nIn addition, all output tensors' static shapes, as accessed via the\n`shape` property will have a first `Dimension` value of `None`, and\noperations that depend on fixed batch_size would fail.\n\nArgs:\n tensors_list: A list of tuples or dictionaries of tensors to enqueue.\n batch_size: An integer. The new batch size pulled from the queue.\n capacity: An integer. The maximum number of elements in the queue.\n min_after_dequeue: Minimum number elements in the queue after a\n dequeue, used to ensure a level of mixing of elements.\n seed: Seed for the random shuffling within the queue.\n enqueue_many: Whether each tensor in `tensor_list_list` is a single\n example.\n shapes: (Optional) The shapes for each example. Defaults to the\n inferred shapes for `tensors_list[i]`.\n allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final\n batch to be smaller if there are insufficient items left in the queue.\n shared_name: (optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: (Optional) A name for the operations.\n\nReturns:\n A list or dictionary of tensors with the same number and types as\n `tensors_list[i]`.\n\nRaises:\n ValueError: If the `shapes` are not specified, and cannot be\n inferred from the elements of `tensors_list`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Create batches by randomly shuffling tensors. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.SingularMonitoredSession", "docs": "Session-like object that handles initialization, restoring, and hooks.\n\n Please note that this utility is not recommended for distributed settings.\n For distributed settings, please use `tf.compat.v1.train.MonitoredSession`.\n The\n differences between `MonitoredSession` and `SingularMonitoredSession` are:\n\n * `MonitoredSession` handles `AbortedError` and `UnavailableError` for\n distributed settings, but `SingularMonitoredSession` does not.\n * `MonitoredSession` can be created in `chief` or `worker` modes.\n `SingularMonitoredSession` is always created as `chief`.\n * You can access the raw `tf.compat.v1.Session` object used by\n `SingularMonitoredSession`, whereas in MonitoredSession the raw session is\n private. This can be used:\n - To `run` without hooks.\n - To save and restore.\n * All other functionality is identical.\n\n Example usage:\n ```python\n saver_hook = CheckpointSaverHook(...)\n summary_hook = SummarySaverHook(...)\n with SingularMonitoredSession(hooks=[saver_hook, summary_hook]) as sess:\n while not sess.should_stop():\n sess.run(train_op)\n ```\n\n Initialization: At creation time the hooked session does following things\n in given order:\n\n * calls `hook.begin()` for each given hook\n * finalizes the graph via `scaffold.finalize()`\n * create session\n * initializes the model via initialization ops provided by `Scaffold`\n * restores variables if a checkpoint exists\n * launches queue runners\n\n Run: When `run()` is called, the hooked session does following things:\n\n * calls `hook.before_run()`\n * calls TensorFlow `session.run()` with merged fetches and feed_dict\n * calls `hook.after_run()`\n * returns result of `session.run()` asked by user\n\n Exit: At the `close()`, the hooked session does following things in order:\n\n * calls `hook.end()`\n * closes the queue runners and the session\n * suppresses `OutOfRange` error which indicates that all inputs have been\n processed if the `SingularMonitoredSession` is used as a context.\n\n @compatibility(TF2)\n This API is not compatible with eager execution and `tf.function`. To migrate\n to TF2, rewrite the code to be compatible with eager execution. Check the\n [migration\n guide](https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls)\n on replacing `Session.run` calls. In Keras, session hooks can be replaced by\n Callbacks e.g. [logging hook notebook](\n https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb)\n For more details please read [Better\n performance with tf.function](https://www.tensorflow.org/guide/function).\n @end_compatibility\n ", "desc": "Session-like object that handles initialization, restoring, and hooks.", "type": "API"}, {"name": "tf.compat.v1.train.SingularMonitoredSession.StepContext", "docs": "Control flow instrument for the `step_fn` from `run_step_fn()`.\n\n Users of `step_fn` may perform `run()` calls without running hooks\n by accessing the `session`. A `run()` call with hooks may be performed\n using `run_with_hooks()`. Computation flow can be interrupted using\n `request_stop()`.\n ", "desc": "Control flow instrument for the `step_fn` from `run_step_fn()`.", "type": "API"}, {"name": "tf.compat.v1.train.slice_input_producer", "docs": "Produces a slice of each `Tensor` in `tensor_list`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n\nImplemented using a Queue -- a `QueueRunner` for the Queue\nis added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\nArgs:\n tensor_list: A list of `Tensor` objects. Every `Tensor` in\n `tensor_list` must have the same size in the first dimension.\n num_epochs: An integer (optional). If specified, `slice_input_producer`\n produces each slice `num_epochs` times before generating\n an `OutOfRange` error. If not specified, `slice_input_producer` can cycle\n through the slices an unlimited number of times.\n shuffle: Boolean. If true, the integers are randomly shuffled within each\n epoch.\n seed: An integer (optional). Seed used if shuffle == True.\n capacity: An integer. Sets the queue capacity.\n shared_name: (optional). If set, this queue will be shared under the given\n name across multiple sessions.\n name: A name for the operations (optional).\n\nReturns:\n A list of tensors, one for each element of `tensor_list`. If the tensor\n in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output\n tensor will have shape `[a, b, ..., z]`.\n\nRaises:\n ValueError: if `slice_input_producer` produces nothing from `tensor_list`.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Produces a slice of each `Tensor` in `tensor_list`. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.start_queue_runners", "docs": "Starts all queue runners collected in the graph. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\n\nThis is a companion method to `add_queue_runner()`. It just starts\nthreads for all queue runners collected in the graph. It returns\nthe list of all threads.\n\n@compatibility(TF2)\nQueueRunners are not compatible with eager execution. Instead, please\nuse [tf.data](https://www.tensorflow.org/guide/data) to get data into your\nmodel.\n@end_compatibility\n\nArgs:\n sess: `Session` used to run the queue ops. Defaults to the\n default session.\n coord: Optional `Coordinator` for coordinating the started threads.\n daemon: Whether the threads should be marked as `daemons`, meaning\n they don't block program exit.\n start: Set to `False` to only create the threads, not start them.\n collection: A `GraphKey` specifying the graph collection to\n get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.\n\nRaises:\n ValueError: if `sess` is None and there isn't any default session.\n TypeError: if `sess` is not a `tf.compat.v1.Session` object.\n\nReturns:\n A list of threads.\n\nRaises:\n RuntimeError: If called with eager execution enabled.\n ValueError: If called without a default `tf.compat.v1.Session` registered.", "desc": "Starts all queue runners collected in the graph. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.StepCounterHook", "docs": "Hook that counts steps per second.", "desc": "Hook that counts steps per second.", "type": "API"}, {"name": "tf.compat.v1.train.StopAtStepHook", "docs": "Hook that requests stop at a specified step.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n ", "desc": "Hook that requests stop at a specified step.", "type": "API"}, {"name": "tf.compat.v1.train.string_input_producer", "docs": "Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nQueue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n\nNote: if `num_epochs` is not `None`, this function creates local counter\n`epochs`. Use `local_variables_initializer()` to initialize local variables.\n\nArgs:\n string_tensor: A 1-D string tensor with the strings to produce.\n num_epochs: An integer (optional). If specified, `string_input_producer`\n produces each string from `string_tensor` `num_epochs` times before\n generating an `OutOfRange` error. If not specified,\n `string_input_producer` can cycle through the strings in `string_tensor`\n an unlimited number of times.\n shuffle: Boolean. If true, the strings are randomly shuffled within each\n epoch.\n seed: An integer (optional). Seed used if shuffle == True.\n capacity: An integer. Sets the queue capacity.\n shared_name: (optional). If set, this queue will be shared under the given\n name across multiple sessions. All sessions open to the device which has\n this queue will be able to access it via the shared_name. Using this in\n a distributed setting means each name will only be seen by one of the\n sessions which has access to this operation.\n name: A name for the operations (optional).\n cancel_op: Cancel op for the queue (optional).\n\nReturns:\n A queue with the output strings. A `QueueRunner` for the Queue\n is added to the current `Graph`'s `QUEUE_RUNNER` collection.\n\nRaises:\n ValueError: If the string_tensor is a null Python list. At runtime,\n will fail with an assertion if string_tensor becomes a null tensor.\n\n@compatibility(eager)\nInput pipelines based on Queues are not supported when eager execution is\nenabled. Please use the `tf.data` API to ingest data under eager execution.\n@end_compatibility", "desc": "Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.summary_iterator", "docs": "Returns a iterator for reading `Event` protocol buffers from an event file.\n\n You can use this function to read events written to an event file. It returns\n a Python iterator that yields `Event` protocol buffers.\n\n Example: Print the contents of an events file.\n\n ```python\n for e in tf.compat.v1.train.summary_iterator(path to events file):\n print(e)\n ```\n\n Example: Print selected summary values.\n\n ```python\n # This example supposes that the events file contains summaries with a\n # summary value tag 'loss'. These could have been added by calling\n # `add_summary()`, passing the output of a scalar summary op created with\n # with: `tf.compat.v1.summary.scalar('loss', loss_tensor)`.\n for e in tf.compat.v1.train.summary_iterator(path to events file):\n for v in e.summary.value:\n if v.tag == 'loss':\n print(tf.make_ndarray(v.tensor))\n ```\n Example: Continuously check for new summary values.\n\n ```python\n summaries = tf.compat.v1.train.summary_iterator(path to events file)\n while True:\n for e in summaries:\n for v in e.summary.value:\n if v.tag == 'loss':\n print(tf.make_ndarray(v.tensor))\n # Wait for a bit before checking the file for any new events\n time.sleep(wait time)\n ```\n\n See the protocol buffer definitions of\n [Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)\n and\n [Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)\n for more information about their attributes.\n\n Args:\n path: The path to an event file created by a `SummaryWriter`.\n\n Returns:\n A iterator that yields `Event` protocol buffers\n ", "desc": "Returns a iterator for reading `Event` protocol buffers from an event file.", "type": "API"}, {"name": "tf.compat.v1.train.SummarySaverHook", "docs": "Saves summaries every N steps.", "desc": "Saves summaries every N steps.", "type": "API"}, {"name": "tf.compat.v1.train.Supervisor", "docs": "A training helper that checkpoints models and computes summaries.\n\n This class is deprecated. Please use\n `tf.compat.v1.train.MonitoredTrainingSession` instead.\n\n The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,\n and a `SessionManager` that takes care of common needs of TensorFlow\n training programs.\n\n #### Use for a single program\n\n ```python\n with tf.Graph().as_default():\n ...add operations to the graph...\n # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.\n sv = Supervisor(logdir='/tmp/mydir')\n # Get a TensorFlow session managed by the supervisor.\n with sv.managed_session(FLAGS.master) as sess:\n # Use the session to train the graph.\n while not sv.should_stop():\n sess.run()\n ```\n\n Within the `with sv.managed_session()` block all variables in the graph have\n been initialized. In addition, a few services have been started to\n checkpoint the model and add summaries to the event log.\n\n If the program crashes and is restarted, the managed session automatically\n reinitialize variables from the most recent checkpoint.\n\n The supervisor is notified of any exception raised by one of the services.\n After an exception is raised, `should_stop()` returns `True`. In that case\n the training loop should also stop. This is why the training loop has to\n check for `sv.should_stop()`.\n\n Exceptions that indicate that the training inputs have been exhausted,\n `tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`\n but are not re-raised from the `with` block: they indicate a normal\n termination.\n\n #### Use for multiple replicas\n\n To train with replicas you deploy the same program in a `Cluster`.\n One of the tasks must be identified as the *chief*: the task that handles\n initialization, checkpoints, summaries, and recovery. The other tasks\n depend on the *chief* for these services.\n\n The only change you have to do to the single program code is to indicate\n if the program is running as the *chief*.\n\n ```python\n # Choose a task as the chief. This could be based on server_def.task_index,\n # or job_def.name, or job_def.tasks. It's entirely up to the end user.\n # But there can be only one *chief*.\n is_chief = (server_def.task_index == 0)\n server = tf.distribute.Server(server_def)\n\n with tf.Graph().as_default():\n ...add operations to the graph...\n # Create a Supervisor that uses log directory on a shared file system.\n # Indicate if you are the 'chief'\n sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)\n # Get a Session in a TensorFlow server on the cluster.\n with sv.managed_session(server.target) as sess:\n # Use the session to train the graph.\n while not sv.should_stop():\n sess.run()\n ```\n\n In the *chief* task, the `Supervisor` works exactly as in the first example\n above. In the other tasks `sv.managed_session()` waits for the Model to have\n been initialized before returning a session to the training code. The\n non-chief tasks depend on the chief task for initializing the model.\n\n If one of the tasks crashes and restarts, `managed_session()`\n checks if the Model is initialized. If yes, it just creates a session and\n returns it to the training code that proceeds normally. If the model needs\n to be initialized, the chief task takes care of reinitializing it; the other\n tasks just wait for the model to have been initialized.\n\n NOTE: This modified program still works fine as a single program.\n The single program marks itself as the chief.\n\n #### What `master` string to use\n\n Whether you are running on your machine or in the cluster you can use the\n following values for the --master flag:\n\n * Specifying `''` requests an in-process session that does not use RPC.\n\n * Specifying `'local'` requests a session that uses the RPC-based\n \"Master interface\" to run TensorFlow programs. See\n `tf.train.Server.create_local_server` for\n details.\n\n * Specifying `'grpc://hostname:port'` requests a session that uses\n the RPC interface to a specific host, and also allows the in-process\n master to access remote tensorflow workers. Often, it is\n appropriate to pass `server.target` (for some `tf.distribute.Server`\n named `server).\n\n #### Advanced use\n\n ##### Launching additional services\n\n `managed_session()` launches the Checkpoint and Summary services (threads).\n If you need more services to run you can simply launch them in the block\n controlled by `managed_session()`.\n\n Example: Start a thread to print losses. We want this thread to run\n every 60 seconds, so we launch it with `sv.loop()`.\n\n ```python\n ...\n sv = Supervisor(logdir='/tmp/mydir')\n with sv.managed_session(FLAGS.master) as sess:\n sv.loop(60, print_loss, (sess, ))\n while not sv.should_stop():\n sess.run(my_train_op)\n ```\n\n ##### Launching fewer services\n\n `managed_session()` launches the \"summary\" and \"checkpoint\" threads which use\n either the optionally `summary_op` and `saver` passed to the constructor, or\n default ones created automatically by the supervisor. If you want to run\n your own summary and checkpointing logic, disable these services by passing\n `None` to the `summary_op` and `saver` parameters.\n\n Example: Create summaries manually every 100 steps in the chief.\n\n ```python\n # Create a Supervisor with no automatic summaries.\n sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)\n # As summary_op was None, managed_session() does not start the\n # summary thread.\n with sv.managed_session(FLAGS.master) as sess:\n for step in range(1000000):\n if sv.should_stop():\n break\n if is_chief and step % 100 == 0:\n # Create the summary every 100 chief steps.\n sv.summary_computed(sess, sess.run(my_summary_op))\n else:\n # Train normally\n sess.run(my_train_op)\n ```\n\n ##### Custom model initialization\n\n `managed_session()` only supports initializing the model by running an\n `init_op` or restoring from the latest checkpoint. If you have special\n initialization needs, see how to specify a `local_init_op` when creating the\n supervisor. You can also use the `SessionManager` directly to create a\n session and check if it could be initialized automatically.\n ", "desc": "A training helper that checkpoints models and computes summaries.", "type": "API"}, {"name": "tf.compat.v1.train.SyncReplicasOptimizer", "docs": "Class to synchronize, aggregate gradients and pass them to the optimizer.\n\n This class is deprecated. For synchronous training, please use [Distribution\n Strategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute).\n\n In a typical asynchronous training environment, it's common to have some\n stale gradients. For example, with a N-replica asynchronous training,\n gradients will be applied to the variables N times independently. Depending\n on each replica's training speed, some gradients might be calculated from\n copies of the variable from several steps back (N-1 steps on average). This\n optimizer avoids stale gradients by collecting gradients from all replicas,\n averaging them, then applying them to the variables in one shot, after\n which replicas can fetch the new variables and continue.\n\n The following accumulators/queue are created:\n\n * N `gradient accumulators`, one per variable to train. Gradients are pushed\n to them and the chief worker will wait until enough gradients are collected\n and then average them before applying to variables. The accumulator will\n drop all stale gradients (more details in the accumulator op).\n * 1 `token` queue where the optimizer pushes the new global_step value after\n all variables are updated.\n\n The following local variable is created:\n * `sync_rep_local_step`, one per replica. Compared against the global_step in\n each accumulator to check for staleness of the gradients.\n\n The optimizer adds nodes to the graph to collect gradients and pause the\n trainers until variables are updated.\n For the Parameter Server job:\n\n 1. An accumulator is created for each variable, and each replica pushes the\n gradients into the accumulators instead of directly applying them to the\n variables.\n 2. Each accumulator averages once enough gradients (replicas_to_aggregate)\n have been accumulated.\n 3. Apply the averaged gradients to the variables.\n 4. Only after all variables have been updated, increment the global step.\n 5. Only after step 4, pushes `global_step` in the `token_queue`, once for\n each worker replica. The workers can now fetch the global step, use it to\n update its local_step variable and start the next batch. Please note that\n some workers can consume multiple minibatches, while some may not consume\n even one. This is because each worker fetches minibatches as long as\n a token exists. If one worker is stuck for some reason and does not\n consume a token, another worker can use it.\n\n For the replicas:\n\n 1. Start a step: fetch variables and compute gradients.\n 2. Once the gradients have been computed, push them into gradient\n accumulators. Each accumulator will check the staleness and drop the stale.\n 3. After pushing all the gradients, dequeue an updated value of global_step\n from the token queue and record that step to its local_step variable. Note\n that this is effectively a barrier.\n 4. Start the next batch.\n\n ### Usage\n\n ```python\n # Create any optimizer to update the variables, say a simple SGD:\n opt = GradientDescentOptimizer(learning_rate=0.1)\n\n # Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each\n # step the optimizer collects 50 gradients before applying to variables.\n # Note that if you want to have 2 backup replicas, you can change\n # total_num_replicas=52 and make sure this number matches how many physical\n # replicas you started in your job.\n opt = tf.compat.v1.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,\n total_num_replicas=50)\n\n # Some models have startup_delays to help stabilize the model but when using\n # sync_replicas training, set it to 0.\n\n # Now you can call `minimize()` or `compute_gradients()` and\n # `apply_gradients()` normally\n training_op = opt.minimize(total_loss, global_step=self.global_step)\n\n\n # You can create the hook which handles initialization and queues.\n sync_replicas_hook = opt.make_session_run_hook(is_chief)\n ```\n\n In the training program, every worker will run the train_op as if not\n synchronized.\n\n ```python\n with training.MonitoredTrainingSession(\n master=workers[worker_id].target, is_chief=is_chief,\n hooks=[sync_replicas_hook]) as mon_sess:\n while not mon_sess.should_stop():\n mon_sess.run(training_op)\n ```\n\n To use SyncReplicasOptimizer with an `Estimator`, you need to send\n sync_replicas_hook while calling the fit.\n ```python\n my_estimator = DNNClassifier(..., optimizer=opt)\n my_estimator.fit(..., hooks=[sync_replicas_hook])\n ```\n ", "desc": "Class to synchronize, aggregate gradients and pass them to the optimizer.", "type": "API"}, {"name": "tf.compat.v1.train.update_checkpoint_state", "docs": "Updates the content of the 'checkpoint' file. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.train.CheckpointManager` to manage checkpoints rather than manually editing the Checkpoint proto.\n\nThis updates the checkpoint file containing a CheckpointState\nproto.\n\nArgs:\n save_dir: Directory where the model was saved.\n model_checkpoint_path: The checkpoint file.\n all_model_checkpoint_paths: List of strings. Paths to all not-yet-deleted\n checkpoints, sorted from oldest to newest. If this is a non-empty list,\n the last element must be equal to model_checkpoint_path. These paths\n are also saved in the CheckpointState proto.\n latest_filename: Optional name of the checkpoint file. Default to\n 'checkpoint'.\n all_model_checkpoint_timestamps: Optional list of timestamps (floats,\n seconds since the Epoch) indicating when the checkpoints in\n `all_model_checkpoint_paths` were created.\n last_preserved_timestamp: A float, indicating the number of seconds since\n the Epoch when the last preserved checkpoint was written, e.g. due to a\n `keep_checkpoint_every_n_hours` parameter (see\n `tf.train.CheckpointManager` for an implementation).\nRaises:\n RuntimeError: If any of the model checkpoint paths conflict with the file\n containing CheckpointSate.", "desc": "Updates the content of the 'checkpoint' file. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.train.VocabInfo", "docs": "Vocabulary information for warm-starting.\n\n See `tf.estimator.WarmStartSettings` for examples of using\n VocabInfo to warm-start.\n\n Args:\n new_vocab: [Required] A path to the new vocabulary file (used with the model\n to be trained).\n new_vocab_size: [Required] An integer indicating how many entries of the new\n vocabulary will used in training.\n num_oov_buckets: [Required] An integer indicating how many OOV buckets are\n associated with the vocabulary.\n old_vocab: [Required] A path to the old vocabulary file (used with the\n checkpoint to be warm-started from).\n old_vocab_size: [Optional] An integer indicating how many entries of the old\n vocabulary were used in the creation of the checkpoint. If not provided,\n the entire old vocabulary will be used.\n backup_initializer: [Optional] A variable initializer used for variables\n corresponding to new vocabulary entries and OOV. If not provided, these\n entries will be zero-initialized.\n axis: [Optional] Denotes what axis the vocabulary corresponds to. The\n default, 0, corresponds to the most common use case (embeddings or\n linear weights for binary classification / regression). An axis of 1\n could be used for warm-starting output layers with class vocabularies.\n\n Returns:\n A `VocabInfo` which represents the vocabulary information for warm-starting.\n\n Raises:\n ValueError: `axis` is neither 0 or 1.\n\n Example Usage:\n```python\n embeddings_vocab_info = tf.VocabInfo(\n new_vocab='embeddings_vocab',\n new_vocab_size=100,\n num_oov_buckets=1,\n old_vocab='pretrained_embeddings_vocab',\n old_vocab_size=10000,\n backup_initializer=tf.compat.v1.truncated_normal_initializer(\n mean=0.0, stddev=(1 / math.sqrt(embedding_dim))),\n axis=0)\n\n softmax_output_layer_kernel_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.glorot_uniform_initializer(),\n axis=1)\n\n softmax_output_layer_bias_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.zeros_initializer(),\n axis=0)\n\n #Currently, only axis=0 and axis=1 are supported.\n ```\n ", "desc": "Vocabulary information for warm-starting.", "type": "API"}, {"name": "tf.compat.v1.train.warm_start", "docs": "Warm-starts a model using the given settings.\n\n If you are using a tf.estimator.Estimator, this will automatically be called\n during training.\n\n Args:\n ckpt_to_initialize_from: [Required] A string specifying the directory with\n checkpoint file(s) or path to checkpoint from which to warm-start the\n model parameters.\n vars_to_warm_start: [Optional] One of the following:\n\n - A regular expression (string) that captures which variables to\n warm-start (see tf.compat.v1.get_collection). This expression will only\n consider variables in the TRAINABLE_VARIABLES collection -- if you need\n to warm-start non_TRAINABLE vars (such as optimizer accumulators or\n batch norm statistics), please use the below option.\n - A list of strings, each a regex scope provided to\n tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see\n tf.compat.v1.get_collection). For backwards compatibility reasons,\n this is separate from the single-string argument type.\n - A list of Variables to warm-start. If you do not have access to the\n `Variable` objects at the call site, please use the above option.\n - `None`, in which case only TRAINABLE variables specified in\n `var_name_to_vocab_info` will be warm-started.\n\n Defaults to `'.*'`, which warm-starts all variables in the\n TRAINABLE_VARIABLES collection. Note that this excludes variables such\n as accumulators and moving statistics from batch norm.\n var_name_to_vocab_info: [Optional] Dict of variable names (strings) to\n `tf.estimator.VocabInfo`. The variable names should be \"full\" variables,\n not the names of the partitions. If not explicitly provided, the variable\n is assumed to have no (changes to) vocabulary.\n var_name_to_prev_var_name: [Optional] Dict of variable names (strings) to\n name of the previously-trained variable in `ckpt_to_initialize_from`. If\n not explicitly provided, the name of the variable is assumed to be same\n between previous checkpoint and current model. Note that this has no\n effect on the set of variables that is warm-started, and only controls\n name mapping (use `vars_to_warm_start` for controlling what variables to\n warm-start).\n\n Raises:\n ValueError: If the WarmStartSettings contains prev_var_name or VocabInfo\n configuration for variable names that are not used. This is to ensure\n a stronger check for variable configuration than relying on users to\n examine the logs.\n ", "desc": "Warm-starts a model using the given settings.", "type": "API"}, {"name": "tf.compat.v1.train.WorkerSessionCreator", "docs": "Creates a tf.compat.v1.Session for a worker.", "desc": "Creates a tf.compat.v1.Session for a worker.", "type": "API"}, {"name": "tf.compat.v1.train.write_graph", "docs": "Writes a graph proto to a file.\n\n The graph is written as a text proto unless `as_text` is `False`.\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')\n ```\n\n or\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')\n ```\n\n Args:\n graph_or_graph_def: A `Graph` or a `GraphDef` protocol buffer.\n logdir: Directory where to write the graph. This can refer to remote\n filesystems, such as Google Cloud Storage (GCS).\n name: Filename for the graph.\n as_text: If `True`, writes the graph as an ASCII proto.\n\n Returns:\n The path of the output proto file.\n ", "desc": "Writes a graph proto to a file.", "type": "API"}, {"name": "tf.compat.v1.trainable_variables", "docs": "Returns all variables created with `trainable=True`.\n\n When passed `trainable=True`, the `Variable()` constructor automatically\n adds new variables to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the\n contents of that collection.\n\n @compatibility(TF2)\n Not compatible with eager execution and `tf.function`. In particular, Graph\n collections are deprecated in TF2. Instead please create a `tf.Module`\n container for all your model state, including variables.\n You can then list all the trainable variables in your `tf.Module` through the\n `trainable_variables` attribute.\n @end_compatibility\n\n Args:\n scope: (Optional.) A string. If supplied, the resulting list is filtered to\n include only items whose `name` attribute matches `scope` using\n `re.match`. Items without a `name` attribute are never returned if a scope\n is supplied. The choice of `re.match` means that a `scope` without special\n tokens filters by prefix.\n\n Returns:\n A list of Variable objects.\n ", "desc": "Returns all variables created with `trainable=True`.", "type": "API"}, {"name": "tf.compat.v1.transpose", "docs": "Transposes `a`.\n\n Permutes the dimensions according to `perm`.\n\n The returned tensor's dimension i will correspond to the input dimension\n `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is\n the rank of the input tensor. Hence by default, this operation performs a\n regular matrix transpose on 2-D input Tensors. If conjugate is True and\n `a.dtype` is either `complex64` or `complex128` then the values of `a`\n are conjugated and transposed.\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, so `transpose` returns a new tensor with\n the items permuted.\n @end_compatibility\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.transpose(x) # [[1, 4]\n # [2, 5]\n # [3, 6]]\n\n # Equivalently\n tf.transpose(x, perm=[1, 0]) # [[1, 4]\n # [2, 5]\n # [3, 6]]\n\n # If x is complex, setting conjugate=True gives the conjugate transpose\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n\n # 'perm' is more useful for n-dimensional tensors, for n > 2\n x = tf.constant([[[ 1, 2, 3],\n [ 4, 5, 6]],\n [[ 7, 8, 9],\n [10, 11, 12]]])\n\n # Take the transpose of the matrices in dimension-0\n # (this common operation has a shorthand `linalg.matrix_transpose`)\n tf.transpose(x, perm=[0, 2, 1]) # [[[1, 4],\n # [2, 5],\n # [3, 6]],\n # [[7, 10],\n # [8, 11],\n # [9, 12]]]\n ```\n\n Args:\n a: A `Tensor`.\n perm: A permutation of the dimensions of `a`.\n name: A name for the operation (optional).\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.transpose(input)).\n\n Returns:\n A transposed `Tensor`.\n ", "desc": "Transposes `a`.", "type": "API"}, {"name": "tf.compat.v1.truediv", "docs": "Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 division operator semantics where all integer\n arguments are cast to floating types first. This op is generated by normal\n `x / y` division in Python 3 and in Python 2.7 with\n `from __future__ import division`. If you want integer division that rounds\n down, use `x // y` or `tf.math.floordiv`.\n\n `x` and `y` must have the same numeric type. If the inputs are floating\n point, the output will have the same type. If the inputs are integral, the\n inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`\n and `int64` (matching the behavior of Numpy).\n\n Args:\n x: `Tensor` numerator of numeric type.\n y: `Tensor` denominator of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` evaluated in floating point.\n\n Raises:\n TypeError: If `x` and `y` have different dtypes.\n ", "desc": "Divides x / y elementwise (using Python 3 division operator semantics).", "type": "API"}, {"name": "tf.compat.v1.truncated_normal", "docs": "Outputs random values from a truncated normal distribution.\n\n The values are drawn from a normal distribution with specified mean and\n standard deviation, discarding and re-drawing any samples that are more than\n two standard deviations from the mean.\n\n Examples:\n\n >>> tf.random.truncated_normal(shape=[2])\n \n\n >>> tf.random.truncated_normal(shape=[2], mean=3, stddev=1, dtype=tf.float32)\n \n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the\n truncated normal distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution, before truncation.\n dtype: The type of the output. Restricted to floating-point types:\n `tf.half`, `tf.float`, `tf.double`, etc.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for more information.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.truncated_normal_initializer", "docs": "Initializer that generates a truncated normal distribution.\n\n These values are similar to values from a `random_normal_initializer`\n except that values more than two standard deviations from the mean\n are discarded and re-drawn. This is the recommended initializer for\n neural network weights and filters.\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2, switch to using either\n `tf.initializers.truncated_normal` or `tf.keras.initializers.TruncatedNormal`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer. Keep in mind that\n the default stddev and the behavior of fixed seeds have changed.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.truncated_normal_initializer(\n mean=mean,\n stddev=stddev,\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.initializers.truncated_normal(\n mean=mean,\n seed=seed,\n stddev=stddev)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :-------------------- | :-------------- | :------------------------- |\n | `mean` | `mean` | No change to defaults |\n | `stddev` | `stddev` | Default changes from 1.0 to 0.05 |\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 native api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.compat.v1.truncatediv", "docs": "Returns x / y element-wise for integer types.\n\n Truncation designates that negative numbers will round fractional quantities\n toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different\n than Python semantics. See `FloorDiv` for a division function that matches\n Python Semantics.\n\n *NOTE*: `truncatediv` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for integer types.", "type": "API"}, {"name": "tf.compat.v1.truncatemod", "docs": "Returns element-wise remainder of division. This emulates C semantics in that\n\n the result here is consistent with a truncating divide. E.g. `truncate(x / y) *\n y + truncate_mod(x, y) = x`.\n\n *NOTE*: `truncatemod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. This emulates C semantics in that", "type": "API"}, {"name": "tf.compat.v1.tuple", "docs": "Group tensors together.\n\n This creates a tuple of tensors with the same values as the `tensors`\n argument, except that the value of each tensor is only returned after the\n values of all tensors have been computed.\n\n `control_inputs` contains additional ops that have to finish before this op\n finishes, but whose outputs are not returned.\n\n This can be used as a \"join\" mechanism for parallel computations: all the\n argument tensors can be computed in parallel, but the values of any tensor\n returned by `tuple` are only available after all the parallel computations\n are done.\n\n See also `tf.group` and\n `tf.control_dependencies`.\n\n Args:\n tensors: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.\n name: (optional) A name to use as a `name_scope` for the operation.\n control_inputs: List of additional ops to finish before returning.\n\n Returns:\n Same as `tensors`.\n\n Raises:\n ValueError: If `tensors` does not contain any `Tensor` or `IndexedSlices`.\n TypeError: If `control_inputs` is not a list of `Operation` or `Tensor`\n objects.\n\n ", "desc": "Group tensors together.", "type": "API"}, {"name": "tf.compat.v1.type_spec_from_value", "docs": "Returns a `tf.TypeSpec` that represents the given `value`.\n\n Examples:\n\n >>> tf.type_spec_from_value(tf.constant([1, 2, 3]))\n TensorSpec(shape=(3,), dtype=tf.int32, name=None)\n >>> tf.type_spec_from_value(np.array([4.0, 5.0], np.float64))\n TensorSpec(shape=(2,), dtype=tf.float64, name=None)\n >>> tf.type_spec_from_value(tf.ragged.constant([[1, 2], [3, 4, 5]]))\n RaggedTensorSpec(TensorShape([2, None]), tf.int32, 1, tf.int64)\n\n >>> example_input = tf.ragged.constant([[1, 2], [3]])\n >>> @tf.function(input_signature=[tf.type_spec_from_value(example_input)])\n ... def f(x):\n ... return tf.reduce_sum(x, axis=1)\n\n Args:\n value: A value that can be accepted or returned by TensorFlow APIs. Accepted\n types for `value` include `tf.Tensor`, any value that can be converted to\n `tf.Tensor` using `tf.convert_to_tensor`, and any subclass of\n `CompositeTensor` (such as `tf.RaggedTensor`).\n\n Returns:\n A `TypeSpec` that is compatible with `value`.\n\n Raises:\n TypeError: If a TypeSpec cannot be built for `value`, because its type\n is not supported.\n ", "desc": "Returns a `tf.TypeSpec` that represents the given `value`.", "type": "API"}, {"name": "tf.compat.v1.types", "docs": "Public TensorFlow type definitions.\n\nFor details, see\nhttps://github.com/tensorflow/community/blob/master/rfcs/20200211-tf-types.md.\n\n", "desc": "Public TensorFlow type definitions.", "type": "API"}, {"name": "tf.compat.v1.types.experimental", "docs": "Public API for tf.types.experimental namespace.\n", "desc": "Public API for tf.types.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.types.experimental.TensorLike", "docs": "Union of all types that can be converted to a `tf.Tensor` by `tf.convert_to_tensor`.\n\nThis definition may be used in user code. Additional types may be added\nin the future as more input types are supported.\n\nExample:\n\n```\ndef foo(x: TensorLike):\n pass\n```\n\nThis definition passes static type verification for:\n\n```\nfoo(tf.constant([1, 2, 3]))\nfoo([1, 2, 3])\nfoo(np.array([1, 2, 3]))\n```\n", "desc": "Union of all types that can be converted to a `tf.Tensor` by `tf.convert_to_tensor`.", "type": "API"}, {"name": "tf.compat.v1.TypeSpec", "docs": "Specifies a TensorFlow value type.\n\n A `tf.TypeSpec` provides metadata describing an object accepted or returned\n by TensorFlow APIs. Concrete subclasses, such as `tf.TensorSpec` and\n `tf.RaggedTensorSpec`, are used to describe different value types.\n\n For example, `tf.function`'s `input_signature` argument accepts a list\n (or nested structure) of `TypeSpec`s.\n\n Creating new subclasses of `TypeSpec` (outside of TensorFlow core) is not\n currently supported. In particular, we may make breaking changes to the\n private methods and properties defined by this base class.\n\n Example:\n\n >>> spec = tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)\n >>> @tf.function(input_signature=[spec])\n ... def double(x):\n ... return x * 2\n >>> print(double(tf.ragged.constant([[1, 2], [3]])))\n \n ", "desc": "Specifies a TensorFlow value type.", "type": "API"}, {"name": "tf.compat.v1.UnconnectedGradients", "docs": "Controls how gradient computation behaves when y does not depend on x.\n\n The gradient of y with respect to x can be zero in two different ways: there\n could be no differentiable path in the graph connecting x to y (and so we can\n statically prove that the gradient is zero) or it could be that runtime values\n of tensors in a particular execution lead to a gradient of zero (say, if a\n relu unit happens to not be activated). To allow you to distinguish between\n these two cases you can choose what value gets returned for the gradient when\n there is no path in the graph from x to y:\n\n * `NONE`: Indicates that [None] will be returned if there is no path from x\n to y\n * `ZERO`: Indicates that a zero tensor will be returned in the shape of x.\n ", "desc": "Controls how gradient computation behaves when y does not depend on x.", "type": "API"}, {"name": "tf.compat.v1.uniform_unit_scaling_initializer", "docs": "Initializer that generates tensors without scaling variance.\n\n When initializing a deep network, it is in principle advantageous to keep\n the scale of the input variance constant, so it does not explode or diminish\n by reaching the final layer. If the input is `x` and the operation `x * W`,\n and we want to initialize `W` uniformly at random, we need to pick `W` from\n\n [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]\n\n to keep the scale intact, where `dim = W.shape[0]` (the size of the input).\n A similar calculation for convolutional networks gives an analogous result\n with `dim` equal to the product of the first 3 dimensions. When\n nonlinearities are present, we need to multiply this by a constant `factor`.\n See (Sussillo et al., 2014) for deeper motivation, experiments\n and the calculation of constants. In section 2.3 there, the constants were\n numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.\n\n Args:\n factor: Float. A multiplicative factor by which the values will be scaled.\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n References:\n [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558)\n ([pdf](http://arxiv.org/pdf/1412.6558.pdf))\n ", "desc": "Initializer that generates tensors without scaling variance.", "type": "API"}, {"name": "tf.compat.v1.unique", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`; `x` does not need to be sorted.\n This operation also returns a tensor `idx` the same size as `x` that contains\n the index of each value of `x` in the unique output `y`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n Examples:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx = unique(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n ```\n\n ```\n # tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]\n y, idx = unique(x)\n y ==> [4, 5, 1, 2, 3]\n idx ==> [0, 1, 2, 3, 4, 4, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.compat.v1.unique_with_counts", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`. This operation also returns a\n tensor `idx` the same size as `x` that contains the index of each value of `x`\n in the unique output `y`. Finally, it returns a third tensor `count` that\n contains the count of each element of `y` in `x`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n For example:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx, count = unique_with_counts(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n count ==> [2, 1, 3, 1, 2]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx, count).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n count: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.compat.v1.unravel_index", "docs": "Converts an array of flat indices into a tuple of coordinate arrays.\n\n \n Example:\n\n ```\n y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])\n # 'dims' represent a hypothetical (3, 3) tensor of indices:\n # [[0, 1, *2*],\n # [3, 4, *5*],\n # [6, *7*, 8]]\n # For each entry from 'indices', this operation returns\n # its coordinates (marked with '*'), such as\n # 2 ==> (0, 2)\n # 5 ==> (1, 2)\n # 7 ==> (2, 1)\n y ==> [[0, 1, 2], [2, 2, 1]]\n ```\n\n @compatibility(numpy)\n Equivalent to np.unravel_index\n @end_compatibility\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 0-D or 1-D `int` Tensor whose elements are indices into the\n flattened version of an array of dimensions dims.\n dims: A `Tensor`. Must have the same type as `indices`.\n An 1-D `int` Tensor. The shape of the array to use for unraveling\n indices.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `indices`.\n ", "desc": "Converts an array of flat indices into a tuple of coordinate arrays.", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the maximum such that:\n\n \\\\(output_i = \\max_{j...} data[j...]\\\\) where max is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the maximum is empty for a given segment ID `i`, it outputs the smallest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::lowest()`.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Instead of computing the sum over segments, it computes the mean of all\n entries belonging to a segment such that:\n\n \\\\(output_i = 1/N_i \\sum_{j...} data[j...]\\\\) where the sum is over tuples\n `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the number of\n occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the minimum such that:\n\n \\\\(output_i = \\min_{j...} data_[j...]\\\\) where min is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the minimum is empty for a given segment ID `i`, it outputs the largest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::max()`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the product of all\n entries belonging to a segment such that:\n\n \\\\(output_i = \\prod_{j...} data[j...]\\\\) where the product is over tuples\n `j...` such that `segment_ids[j...] == i`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n If there is no entry for a given segment ID `i`, it outputs 1.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_sqrt_n", "docs": "Computes the sum along segments of a tensor divided by the sqrt(N).\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Additionally to computing the sum over segments, it divides the results by\n sqrt(N).\n\n \\\\(output_i = 1/sqrt(N_i) \\sum_{j...} data[j...]\\\\) where the sum is over\n tuples `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the\n number of occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n Note that this op only supports floating point and complex dtypes,\n due to tf.sqrt only supporting these types.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be in the range `[0, num_segments)`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the sum along segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.compat.v1.unsorted_segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output[i] = \\sum_{j...} data[j...]\\\\) where the sum is over tuples `j...` such\n that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`\n need not be sorted and need not cover all values in the full\n range of valid values.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n If the given segment ID `i` is negative, the value is dropped and will not be\n added to the sum of the segment.\n\n `num_segments` should equal the number of distinct segment IDs.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n >>> c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]]\n >>> tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.compat.v1.unstack", "docs": "Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.\n\n Unpacks tensors from `value` by chipping it along the `axis` dimension.\n\n >>> x = tf.reshape(tf.range(12), (3,4))\n >>>\n >>> p, q, r = tf.unstack(x)\n >>> p.shape.as_list()\n [4]\n\n >>> i, j, k, l = tf.unstack(x, axis=1)\n >>> i.shape.as_list()\n [3]\n\n This is the opposite of stack.\n\n >>> x = tf.stack([i, j, k, l], axis=1)\n\n More generally if you have a tensor of shape `(A, B, C, D)`:\n\n >>> A, B, C, D = [2, 3, 4, 5]\n >>> t = tf.random.normal(shape=[A, B, C, D])\n\n The number of tensor returned is equal to the length of the target `axis`:\n\n >>> axis = 2\n >>> items = tf.unstack(t, axis=axis)\n >>> len(items) == t.shape[axis]\n True\n\n The shape of each result tensor is equal to the shape of the input tensor,\n with the target `axis` removed.\n\n >>> items[0].shape.as_list() # [A, B, D]\n [2, 3, 5]\n\n The value of each tensor `items[i]` is equal to the slice of `input` across\n `axis` at index `i`:\n\n >>> for i in range(len(items)):\n ... slice = t[:,:,i,:]\n ... assert tf.reduce_all(slice == items[i])\n\n #### Python iterable unpacking\n\n With eager execution you _can_ unstack the 0th axis of a tensor using python's\n iterable unpacking:\n\n >>> t = tf.constant([1,2,3])\n >>> a,b,c = t\n\n `unstack` is still necessary because Iterable unpacking doesn't work in\n a `@tf.function`: Symbolic tensors are not iterable.\n\n You need to use `tf.unstack` here:\n\n >>> @tf.function\n ... def bad(t):\n ... a,b,c = t\n ... return a\n >>>\n >>> bad(t)\n Traceback (most recent call last):\n ...\n OperatorNotAllowedInGraphError: ...\n\n >>> @tf.function\n ... def good(t):\n ... a,b,c = tf.unstack(t)\n ... return a\n >>>\n >>> good(t).numpy()\n 1\n\n #### Unknown shapes\n\n Eager tensors have concrete values, so their shape is always known.\n Inside a `tf.function` the symbolic tensors may have unknown shapes.\n If the length of `axis` is unknown `tf.unstack` will fail because it cannot\n handle an unknown number of tensors:\n\n >>> @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])\n ... def bad(t):\n ... tensors = tf.unstack(t)\n ... return tensors[0]\n >>>\n >>> bad(tf.constant([1,2,3]))\n Traceback (most recent call last):\n ...\n ValueError: Cannot infer argument `num` from shape (None,)\n\n If you know the `axis` length you can pass it as the `num` argument. But this\n must be a constant value.\n\n If you actually need a variable number of tensors in a single `tf.function`\n trace, you will need to use exlicit loops and a `tf.TensorArray` instead.\n\n Args:\n value: A rank `R > 0` `Tensor` to be unstacked.\n num: An `int`. The length of the dimension `axis`. Automatically inferred if\n `None` (the default).\n axis: An `int`. The axis to unstack along. Defaults to the first dimension.\n Negative values wrap around, so the valid range is `[-R, R)`.\n name: A name for the operation (optional).\n\n Returns:\n The list of `Tensor` objects unstacked from `value`.\n\n Raises:\n ValueError: If `axis` is out of the range `[-R, R)`.\n ValueError: If `num` is unspecified and cannot be inferred.\n InvalidArgumentError: If `num` does not match the shape of `value`.\n ", "desc": "Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.", "type": "API"}, {"name": "tf.compat.v1.user_ops", "docs": "Public API for tf.user_ops namespace.\n", "desc": "Public API for tf.user_ops namespace.", "type": "API"}, {"name": "tf.compat.v1.user_ops.my_fact", "docs": "Example of overriding the generated code for an Op.", "desc": "Example of overriding the generated code for an Op.", "type": "API"}, {"name": "tf.compat.v1.Variable", "docs": "See the [Variables Guide](https://tensorflow.org/guide/variables).\n\n A variable maintains state in the graph across calls to `run()`. You add a\n variable to the graph by constructing an instance of the class `Variable`.\n\n The `Variable()` constructor requires an initial value for the variable,\n which can be a `Tensor` of any type and shape. The initial value defines the\n type and shape of the variable. After construction, the type and shape of\n the variable are fixed. The value can be changed using one of the assign\n methods.\n\n If you want to change the shape of a variable later you have to use an\n `assign` Op with `validate_shape=False`.\n\n Just like any `Tensor`, variables created with `Variable()` can be used as\n inputs for other Ops in the graph. Additionally, all the operators\n overloaded for the `Tensor` class are carried over to variables, so you can\n also add nodes to the graph by just doing arithmetic on variables.\n\n ```python\n import tensorflow as tf\n\n # Create a variable.\n w = tf.Variable(, name=)\n\n # Use the variable in the graph like any Tensor.\n y = tf.matmul(w, ...another variable or tensor...)\n\n # The overloaded operators are available too.\n z = tf.sigmoid(w + y)\n\n # Assign a new value to the variable with `assign()` or a related method.\n w.assign(w + 1.0)\n w.assign_add(1.0)\n ```\n\n When you launch the graph, variables have to be explicitly initialized before\n you can run Ops that use their value. You can initialize a variable by\n running its *initializer op*, restoring the variable from a save file, or\n simply running an `assign` Op that assigns a value to the variable. In fact,\n the variable *initializer op* is just an `assign` Op that assigns the\n variable's initial value to the variable itself.\n\n ```python\n # Launch the graph in a session.\n with tf.compat.v1.Session() as sess:\n # Run the variable initializer.\n sess.run(w.initializer)\n # ...you now can run ops that use the value of 'w'...\n ```\n\n The most common initialization pattern is to use the convenience function\n `global_variables_initializer()` to add an Op to the graph that initializes\n all the variables. You then run that Op after launching the graph.\n\n ```python\n # Add an Op to initialize global variables.\n init_op = tf.compat.v1.global_variables_initializer()\n\n # Launch the graph in a session.\n with tf.compat.v1.Session() as sess:\n # Run the Op that initializes global variables.\n sess.run(init_op)\n # ...you can now run any Op that uses variable values...\n ```\n\n If you need to create a variable with an initial value dependent on another\n variable, use the other variable's `initialized_value()`. This ensures that\n variables are initialized in the right order.\n\n All variables are automatically collected in the graph where they are\n created. By default, the constructor adds the new variable to the graph\n collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function\n `global_variables()` returns the contents of that collection.\n\n When building a machine learning model it is often convenient to distinguish\n between variables holding the trainable model parameters and other variables\n such as a `global step` variable used to count training steps. To make this\n easier, the variable constructor supports a `trainable=` parameter. If\n `True`, the new variable is also added to the graph collection\n `GraphKeys.TRAINABLE_VARIABLES`. The convenience function\n `trainable_variables()` returns the contents of this collection. The\n various `Optimizer` classes use this collection as the default list of\n variables to optimize.\n\n WARNING: tf.Variable objects by default have a non-intuitive memory model. A\n Variable is represented internally as a mutable Tensor which can\n non-deterministically alias other Tensors in a graph. The set of operations\n which consume a Variable and can lead to aliasing is undetermined and can\n change across TensorFlow versions. Avoid writing code which relies on the\n value of a Variable either changing or not changing as other operations\n happen. For example, using Variable objects or simple functions thereof as\n predicates in a `tf.cond` is dangerous and error-prone:\n\n ```\n v = tf.Variable(True)\n tf.cond(v, lambda: v.assign(False), my_false_fn) # Note: this is broken.\n ```\n\n Here, adding `use_resource=True` when constructing the variable will\n fix any nondeterminism issues:\n ```\n v = tf.Variable(True, use_resource=True)\n tf.cond(v, lambda: v.assign(False), my_false_fn)\n ```\n\n To use the replacement for variables which does\n not have these issues:\n\n * Add `use_resource=True` when constructing `tf.Variable`;\n * Call `tf.compat.v1.get_variable_scope().set_use_resource(True)` inside a\n `tf.compat.v1.variable_scope` before the `tf.compat.v1.get_variable()` call.\n ", "desc": "See the [Variables Guide](https://tensorflow.org/guide/variables).", "type": "API"}, {"name": "tf.compat.v1.Variable.SaveSliceInfo", "docs": "Information on how to save this Variable as a slice.\n\n Provides internal support for saving variables as slices of a larger\n variable. This API is not public and is subject to change.\n\n Available properties:\n\n * full_name\n * full_shape\n * var_offset\n * var_shape\n ", "desc": "Information on how to save this Variable as a slice.", "type": "API"}, {"name": "tf.compat.v1.variable_axis_size_partitioner", "docs": "Get a partitioner for VariableScope to keep shards below `max_shard_bytes`.\n\n This partitioner will shard a Variable along one axis, attempting to keep\n the maximum shard size below `max_shard_bytes`. In practice, this is not\n always possible when sharding along only one axis. When this happens,\n this axis is sharded as much as possible (i.e., every dimension becomes\n a separate shard).\n\n If the partitioner hits the `max_shards` limit, then each shard may end up\n larger than `max_shard_bytes`. By default `max_shards` equals `None` and no\n limit on the number of shards is enforced.\n\n One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost\n `64MB`, to keep below the protobuf byte limit.\n\n Args:\n max_shard_bytes: The maximum size any given shard is allowed to be.\n axis: The axis to partition along. Default: outermost axis.\n bytes_per_string_element: If the `Variable` is of type string, this provides\n an estimate of how large each scalar in the `Variable` is.\n max_shards: The maximum number of shards in int created taking precedence\n over `max_shard_bytes`.\n\n Returns:\n A partition function usable as the `partitioner` argument to\n `variable_scope` and `get_variable`.\n\n Raises:\n ValueError: If any of the byte counts are non-positive.\n ", "desc": "Get a partitioner for VariableScope to keep shards below `max_shard_bytes`.", "type": "API"}, {"name": "tf.compat.v1.variable_creator_scope", "docs": "Scope which defines a variable creation function to be used by variable().\n\n variable_creator is expected to be a function with the following signature:\n\n ```\n def variable_creator(next_creator, **kwargs)\n ```\n\n The creator is supposed to eventually call the next_creator to create a\n variable if it does want to create a variable and not call Variable or\n ResourceVariable directly. This helps make creators composable. A creator may\n choose to create multiple variables, return already existing variables, or\n simply register that a variable was created and defer to the next creators in\n line. Creators can also modify the keyword arguments seen by the next\n creators.\n\n Custom getters in the variable scope will eventually resolve down to these\n custom creators when they do create variables.\n\n The valid keyword arguments in kwds are:\n\n * initial_value: A `Tensor`, or Python object convertible to a `Tensor`,\n which is the initial value for the Variable. The initial value must have\n a shape specified unless `validate_shape` is set to False. Can also be a\n callable with no argument that returns the initial value when called. In\n that case, `dtype` must be specified. (Note that initializer functions\n from init_ops.py must first be bound to a shape before being used here.)\n * trainable: If `True`, the default, also adds the variable to the graph\n collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as\n the default list of variables to use by the `Optimizer` classes.\n `trainable` defaults to `True`, unless `synchronization` is\n set to `ON_READ`, in which case it defaults to `False`.\n * collections: List of graph collections keys. The new variable is added to\n these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.\n * validate_shape: If `False`, allows the variable to be initialized with a\n value of unknown shape. If `True`, the default, the shape of\n `initial_value` must be known.\n * caching_device: Optional device string describing where the Variable\n should be cached for reading. Defaults to the Variable's device.\n If not `None`, caches on another device. Typical use is to cache\n on the device where the Ops using the Variable reside, to deduplicate\n copying through `Switch` and other conditional statements.\n * name: Optional name for the variable. Defaults to `'Variable'` and gets\n uniquified automatically.\n * dtype: If set, initial_value will be converted to the given type.\n If `None`, either the datatype will be kept (if `initial_value` is\n a Tensor), or `convert_to_tensor` will decide.\n * constraint: A constraint function to be applied to the variable after\n updates by some algorithms.\n * use_resource: if True, a ResourceVariable is always created.\n * synchronization: Indicates when a distributed a variable will be\n aggregated. Accepted values are constants defined in the class\n `tf.VariableSynchronization`. By default the synchronization is set to\n `AUTO` and the current `DistributionStrategy` chooses\n when to synchronize.\n * aggregation: Indicates how a distributed variable will be aggregated.\n Accepted values are constants defined in the class\n `tf.VariableAggregation`.\n\n This set may grow over time, so it's important the signature of creators is as\n mentioned above.\n\n Args:\n variable_creator: the passed creator\n\n Yields:\n A scope in which the creator is active\n ", "desc": "Scope which defines a variable creation function to be used by variable().", "type": "API"}, {"name": "tf.compat.v1.variable_op_scope", "docs": "Deprecated: context manager for defining an op that creates variables.", "desc": "Deprecated: context manager for defining an op that creates variables.", "type": "API"}, {"name": "tf.compat.v1.variable_scope", "docs": "A context manager for defining ops that creates variables (layers).\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` api,\n `tf.compat.v1.variable_scope` is mostly compatible with eager\n execution and `tf.function` as long as you combine it with the\n `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator (though\n it will behave as if reuse is always set to `AUTO_REUSE`.)\n\n See the\n [model migration guide](www.tensorflow.org/guide/migrate/model_mapping)\n for more info on\n migrating code that relies on `variable_scope`-based variable reuse.\n\n When you use it with eager execution enabled but without\n `tf.compat.v1.keras.utils.track_tf1_style_variables`,\n `tf.compat.v1.variable_scope` will still be able to prefix the names\n of variables created within the scope but it will not enable variable reuse\n or error-raising checks around variable reuse (`get_variable` calls within\n it would always create new variables).\n\n Once you have switched away from `get_variable`-based variable reuse\n mechanisms, to switch to TF2 APIs you can just use\n `tf.name_scope` to prefix variable names.\n @end_compatibility\n\n This context manager validates that the (optional) `values` are from the same\n graph, ensures that graph is the default graph, and pushes a name scope and a\n variable scope.\n\n If `name_or_scope` is not None, it is used as is. If `name_or_scope` is None,\n then `default_name` is used. In that case, if the same name has been\n previously used in the same scope, it will be made unique by appending `_N`\n to it.\n\n Variable scope allows you to create new variables and to share already created\n ones while providing checks to not create or share by accident. For details,\n see the [Variable Scope How To](https://tensorflow.org/guide/variables), here\n we present only a few basic examples.\n\n The Variable Scope works as expected when the Eager Execution is Disabled.\n\n ```python\n tf.compat.v1.disable_eager_execution()\n ```\n\n Simple example of how to create a new variable:\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\"):\n with tf.compat.v1.variable_scope(\"bar\"):\n v = tf.compat.v1.get_variable(\"v\", [1])\n assert v.name == \"foo/bar/v:0\"\n ```\n\n Simple example of how to reenter a premade variable scope safely:\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\") as vs:\n pass\n\n # Re-enter the variable scope.\n with tf.compat.v1.variable_scope(vs,\n auxiliary_name_scope=False) as vs1:\n # Restore the original name_scope.\n with tf.name_scope(vs1.original_name_scope):\n v = tf.compat.v1.get_variable(\"v\", [1])\n assert v.name == \"foo/v:0\"\n c = tf.constant([1], name=\"c\")\n assert c.name == \"foo/c:0\"\n ```\n\n Keep in mind that the counters for `default_name` are discarded once the\n parent scope is exited. Therefore when the code re-enters the scope (for\n instance by saving it), all nested default_name counters will be restarted.\n\n For instance:\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\") as vs:\n with tf.compat.v1.variable_scope(None, default_name=\"bar\"):\n v = tf.compat.v1.get_variable(\"a\", [1])\n assert v.name == \"foo/bar/a:0\", v.name\n with tf.compat.v1.variable_scope(None, default_name=\"bar\"):\n v = tf.compat.v1.get_variable(\"b\", [1])\n assert v.name == \"foo/bar_1/b:0\"\n\n with tf.compat.v1.variable_scope(vs):\n with tf.compat.v1.variable_scope(None, default_name=\"bar\"):\n v = tf.compat.v1.get_variable(\"c\", [1])\n assert v.name == \"foo/bar/c:0\" # Uses bar instead of bar_2!\n ```\n\n Basic example of sharing a variable AUTO_REUSE:\n\n ```python\n def foo():\n with tf.compat.v1.variable_scope(\"foo\", reuse=tf.compat.v1.AUTO_REUSE):\n v = tf.compat.v1.get_variable(\"v\", [1])\n return v\n\n v1 = foo() # Creates v.\n v2 = foo() # Gets the same, existing v.\n assert v1 == v2\n ```\n\n Basic example of sharing a variable with reuse=True:\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\"):\n v = tf.compat.v1.get_variable(\"v\", [1])\n with tf.compat.v1.variable_scope(\"foo\", reuse=True):\n v1 = tf.compat.v1.get_variable(\"v\", [1])\n assert v1 == v\n ```\n\n Sharing a variable by capturing a scope and setting reuse:\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\") as scope:\n v = tf.compat.v1.get_variable(\"v\", [1])\n scope.reuse_variables()\n v1 = tf.compat.v1.get_variable(\"v\", [1])\n assert v1 == v\n ```\n\n To prevent accidental sharing of variables, we raise an exception when getting\n an existing variable in a non-reusing scope.\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\"):\n v = tf.compat.v1.get_variable(\"v\", [1])\n v1 = tf.compat.v1.get_variable(\"v\", [1])\n # Raises ValueError(\"... v already exists ...\").\n ```\n\n Similarly, we raise an exception when trying to get a variable that does not\n exist in reuse mode.\n\n ```python\n with tf.compat.v1.variable_scope(\"foo\", reuse=True):\n v = tf.compat.v1.get_variable(\"v\", [1])\n # Raises ValueError(\"... v does not exists ...\").\n ```\n\n Note that the `reuse` flag is inherited: if we open a reusing scope, then all\n its sub-scopes become reusing as well.\n\n A note about name scoping: Setting `reuse` does not impact the naming of other\n ops such as mult. See related discussion on\n [github#6189](https://github.com/tensorflow/tensorflow/issues/6189)\n\n Note that up to and including version 1.0, it was allowed (though explicitly\n discouraged) to pass False to the reuse argument, yielding undocumented\n behaviour slightly different from None. Starting at 1.1.0 passing None and\n False as reuse has exactly the same effect.\n\n A note about using variable scopes in multi-threaded environment: Variable\n scopes are thread local, so one thread will not see another thread's current\n scope. Also, when using `default_name`, unique scopes names are also generated\n only on a per thread basis. If the same name was used within a different\n thread, that doesn't prevent a new thread from creating the same scope.\n However, the underlying variable store is shared across threads (within the\n same graph). As such, if another thread tries to create a new variable with\n the same name as a variable created by a previous thread, it will fail unless\n reuse is True.\n\n Further, each thread starts with an empty variable scope. So if you wish to\n preserve name prefixes from a scope from the main thread, you should capture\n the main thread's scope and re-enter it in each thread. For e.g.\n\n ```\n main_thread_scope = variable_scope.get_variable_scope()\n\n # Thread's target function:\n def thread_target_fn(captured_scope):\n with variable_scope.variable_scope(captured_scope):\n # .... regular code for this thread\n\n\n thread = threading.Thread(target=thread_target_fn, args=(main_thread_scope,))\n ```\n ", "desc": "A context manager for defining ops that creates variables (layers).", "type": "API"}, {"name": "tf.compat.v1.VariableAggregation", "docs": "Indicates how a distributed variable will be aggregated.\n\n `tf.distribute.Strategy` distributes a model by making multiple copies\n (called \"replicas\") acting data-parallel on different elements of the input\n batch. When performing some variable-update operation, say\n `var.assign_add(x)`, in a model, we need to resolve how to combine the\n different values for `x` computed in the different replicas.\n\n * `NONE`: This is the default, giving an error if you use a\n variable-update operation with multiple replicas.\n * `SUM`: Add the updates across replicas.\n * `MEAN`: Take the arithmetic mean (\"average\") of the updates across replicas.\n * `ONLY_FIRST_REPLICA`: This is for when every replica is performing the same\n update, but we only want to perform the update once. Used, e.g., for the\n global step counter.\n * `ONLY_FIRST_TOWER`: Deprecated alias for `ONLY_FIRST_REPLICA`.\n ", "desc": "Indicates how a distributed variable will be aggregated.", "type": "API"}, {"name": "tf.compat.v1.variables_initializer", "docs": "Returns an Op that initializes a list of variables.\n\n After you launch the graph in a session, you can run the returned Op to\n initialize all the variables in `var_list`. This Op runs all the\n initializers of the variables in `var_list` in parallel.\n\n Calling `initialize_variables()` is equivalent to passing the list of\n initializers to `Group()`.\n\n If `var_list` is empty, however, the function still returns an Op that can\n be run. That Op just has no effect.\n\n @compatibility(TF2)\n In TF2, variables are initialized immediately when they are created. There is\n no longer a need to run variable initializers before using them.\n @end_compatibility\n\n Args:\n var_list: List of `Variable` objects to initialize.\n name: Optional name for the returned operation.\n\n Returns:\n An Op that run the initializers of all the specified variables.\n ", "desc": "Returns an Op that initializes a list of variables.", "type": "API"}, {"name": "tf.compat.v1.VariableScope", "docs": "Variable scope object to carry defaults to provide to `get_variable`.\n\n Many of the arguments we need for `get_variable` in a variable store are most\n easily handled with a context. This object is used for the defaults.\n\n Attributes:\n name: name of the current scope, used as prefix in get_variable.\n initializer: default initializer passed to get_variable.\n regularizer: default regularizer passed to get_variable.\n reuse: Boolean, None, or tf.compat.v1.AUTO_REUSE, setting the reuse in\n get_variable. When eager execution is enabled this argument is always\n forced to be False.\n caching_device: string, callable, or None: the caching device passed to\n get_variable.\n partitioner: callable or `None`: the partitioner passed to `get_variable`.\n custom_getter: default custom getter passed to get_variable.\n name_scope: The name passed to `tf.name_scope`.\n dtype: default type passed to get_variable (defaults to DT_FLOAT).\n use_resource: if False, create a normal Variable; if True create an\n experimental ResourceVariable with well-defined semantics. Defaults to\n False (will later change to True). When eager execution is enabled this\n argument is always forced to be True.\n constraint: An optional projection function to be applied to the variable\n after being updated by an `Optimizer` (e.g. used to implement norm\n constraints or value constraints for layer weights). The function must\n take as input the unprojected Tensor representing the value of the\n variable and return the Tensor for the projected value (which must have\n the same shape). Constraints are not safe to use when doing asynchronous\n distributed training.\n ", "desc": "Variable scope object to carry defaults to provide to `get_variable`.", "type": "API"}, {"name": "tf.compat.v1.VariableSynchronization", "docs": "Indicates when a distributed variable will be synced.\n\n * `AUTO`: Indicates that the synchronization will be determined by the current\n `DistributionStrategy` (eg. With `MirroredStrategy` this would be\n `ON_WRITE`).\n * `NONE`: Indicates that there will only be one copy of the variable, so\n there is no need to sync.\n * `ON_WRITE`: Indicates that the variable will be updated across devices\n every time it is written.\n * `ON_READ`: Indicates that the variable will be aggregated across devices\n when it is read (eg. when checkpointing or when evaluating an op that uses\n the variable).\n\n Example:\n >>> temp_grad=[tf.Variable([0.], trainable=False,\n ... synchronization=tf.VariableSynchronization.ON_READ,\n ... aggregation=tf.VariableAggregation.MEAN\n ... )]\n ", "desc": "Indicates when a distributed variable will be synced.", "type": "API"}, {"name": "tf.compat.v1.variance_scaling_initializer", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n @compatibility(TF2)\n Although it is a legacy `compat.v1` API, this symbol is compatible with eager\n execution and `tf.function`.\n\n To switch to TF2 APIs, move to using either\n `tf.initializers.variance_scaling` or `tf.keras.initializers.VarianceScaling`\n (neither from `compat.v1`) and\n pass the dtype when calling the initializer.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.variance_scaling_initializer(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed,\n dtype=dtype)\n\n weight_one = tf.Variable(initializer(shape_one))\n weight_two = tf.Variable(initializer(shape_two))\n ```\n\n After:\n\n ```python\n initializer = tf.keras.initializers.VarianceScaling(\n scale=scale,\n mode=mode,\n distribution=distribution\n seed=seed)\n\n weight_one = tf.Variable(initializer(shape_one, dtype=dtype))\n weight_two = tf.Variable(initializer(shape_two, dtype=dtype))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :----------------- | :-------------- | :------------------------- |\n | `scale` | `scale` | No change to defaults |\n | `mode` | `mode` | No change to defaults |\n | `distribution` | `distribution` | No change to defaults. |\n : : : 'normal' maps to 'truncated_normal' :\n | `seed` | `seed` | |\n | `dtype` | `dtype` | The TF2 api only takes it |\n : : : as a `__call__` arg, not a constructor arg. :\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n @end_compatibility\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`,\n samples are drawn from a truncated/untruncated normal\n distribution with a mean of zero and a standard deviation (after truncation,\n if used) `stddev = sqrt(scale / n)`\n where n is:\n - number of input units in the weight tensor, if mode = \"fan_in\"\n - number of output units, if mode = \"fan_out\"\n - average of the numbers of input and output units, if mode = \"fan_avg\"\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within [-limit, limit], with `limit = sqrt(3 * scale / n)`.\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"normal\", \"uniform\".\n seed: A Python integer. Used to create random seeds. See\n `tf.compat.v1.set_random_seed` for behavior.\n dtype: Default data type, used if no `dtype` argument is provided when\n calling the initializer. Only floating point types are supported.\n\n Raises:\n ValueError: In case of an invalid value for the \"scale\", mode\" or\n \"distribution\" arguments.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.compat.v1.VarLenFeature", "docs": "Configuration for parsing a variable-length input feature.\n\n Fields:\n dtype: Data type of input.\n ", "desc": "Configuration for parsing a variable-length input feature.", "type": "API"}, {"name": "tf.compat.v1.vectorized_map", "docs": "Parallel map on the list of tensors unpacked from `elems` on dimension 0.\n\n This method works similar to `tf.map_fn` but is optimized to run much faster,\n possibly with a much larger memory footprint. The speedups are obtained by\n vectorization (see [Auto-Vectorizing TensorFlow Graphs: Jacobians,\n Auto-Batching and Beyond](https://arxiv.org/pdf/1903.04243.pdf)). The idea\n behind vectorization is to semantically launch all the invocations of `fn` in\n parallel and fuse corresponding operations across all these invocations. This\n fusion is done statically at graph generation time and the generated code is\n often similar in performance to a manually fused version.\n\n Because `tf.vectorized_map` fully parallelizes the batch, this method will\n generally be significantly faster than using `tf.map_fn`, especially in eager\n mode. However this is an experimental feature and currently has a lot of\n limitations:\n - There should be no data dependency between the different semantic\n invocations of `fn`, i.e. it should be safe to map the elements of the\n inputs in any order.\n - Stateful kernels may mostly not be supported since these often imply a\n data dependency. We do support a limited set of such stateful kernels\n though (like RandomFoo, Variable operations like reads, etc).\n - `fn` has limited support for control flow operations.\n - `fn` should return nested structure of Tensors or Operations. However\n if an Operation is returned, it should have zero outputs.\n - The shape and dtype of any intermediate or output tensors in the\n computation of `fn` should not depend on the input to `fn`.\n\n Examples:\n ```python\n def outer_product(a):\n return tf.tensordot(a, a, 0)\n\n batch_size = 100\n a = tf.ones((batch_size, 32, 32))\n c = tf.vectorized_map(outer_product, a)\n assert c.shape == (batch_size, 32, 32, 32, 32)\n ```\n\n ```python\n # Computing per-example gradients\n\n batch_size = 10\n num_features = 32\n layer = tf.keras.layers.Dense(1)\n\n def model_fn(arg):\n with tf.GradientTape() as g:\n inp, label = arg\n inp = tf.expand_dims(inp, 0)\n label = tf.expand_dims(label, 0)\n prediction = layer(inp)\n loss = tf.nn.l2_loss(label - prediction)\n return g.gradient(loss, (layer.kernel, layer.bias))\n\n inputs = tf.random.uniform([batch_size, num_features])\n labels = tf.random.uniform([batch_size, 1])\n per_example_gradients = tf.vectorized_map(model_fn, (inputs, labels))\n assert per_example_gradients[0].shape == (batch_size, num_features, 1)\n assert per_example_gradients[1].shape == (batch_size, 1)\n ```\n\n Args:\n fn: The callable to be performed. It accepts one argument, which will have\n the same (possibly nested) structure as `elems`, and returns a possibly\n nested structure of Tensors and Operations, which may be different than\n the structure of `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be mapped over by `fn`. The first dimensions of all\n elements must broadcast to a consistent value; equivalently, each\n element tensor must have first dimension of either `B` or `1`, for some\n common batch size `B >= 1`.\n fallback_to_while_loop: If true, on failing to vectorize an operation,\n the unsupported op is wrapped in a tf.while_loop to execute the map\n iterations. Note that this fallback only happens for unsupported ops and\n other parts of `fn` are still vectorized. If false, on encountering an\n unsupported op, a ValueError is thrown. Note that the fallbacks can result\n in slowdowns since vectorization often yields speedup of one to two orders\n of magnitude.\n\n Returns:\n A tensor or (possibly nested) sequence of tensors. Each tensor packs the\n results of applying fn to tensors unpacked from elems along the first\n dimension, from first to last.\n\n Although they are less common as user-visible inputs and outputs, note that\n tensors of type `tf.variant` which represent tensor lists (for example from\n `tf.raw_ops.TensorListFromTensor`) are vectorized by stacking the list\n contents rather than the variant itself, and so the container tensor will\n have a scalar shape when returned rather than the usual stacked shape. This\n improves the performance of control flow gradient vectorization.\n\n Raises:\n ValueError: If vectorization fails and fallback_to_while_loop is False.\n ", "desc": "Parallel map on the list of tensors unpacked from `elems` on dimension 0.", "type": "API"}, {"name": "tf.compat.v1.verify_tensor_all_finite", "docs": "Assert that the tensor does not contain any NaN's or Inf's.\n\n Args:\n t: Tensor to check.\n msg: Message to log on failure.\n name: A name for this operation (optional).\n x: Alias for t.\n message: Alias for msg.\n\n Returns:\n Same tensor as `t`.\n ", "desc": "Assert that the tensor does not contain any NaN's or Inf's.", "type": "API"}, {"name": "tf.compat.v1.version", "docs": "Public API for tf.version namespace.\n", "desc": "Public API for tf.version namespace.", "type": "API"}, {"name": "tf.compat.v1.where", "docs": "Return the elements, either from `x` or `y`, depending on the `condition`.\n\n If both `x` and `y` are None, then this operation returns the coordinates of\n true elements of `condition`. The coordinates are returned in a 2-D tensor\n where the first dimension (rows) represents the number of true elements, and\n the second dimension (columns) represents the coordinates of the true\n elements. Keep in mind, the shape of the output tensor can vary depending on\n how many true values there are in input. Indices are output in row-major\n order.\n\n If both non-None, `x` and `y` must have the same shape.\n The `condition` tensor must be a scalar if `x` and `y` are scalar.\n If `x` and `y` are tensors of higher rank, then `condition` must be either a\n vector with size matching the first dimension of `x`, or must have the same\n shape as `x`.\n\n The `condition` tensor acts as a mask that chooses, based on the value at each\n element, whether the corresponding element / row in the output should be taken\n from `x` (if true) or `y` (if false).\n\n If `condition` is a vector and `x` and `y` are higher rank matrices, then it\n chooses which row (outer dimension) to copy from `x` and `y`. If `condition`\n has the same shape as `x` and `y`, then it chooses which element to copy from\n `x` and `y`.\n\n Args:\n condition: A `Tensor` of type `bool`\n x: A Tensor which may have the same shape as `condition`. If `condition` is\n rank 1, `x` may have higher rank, but its first dimension must match the\n size of `condition`.\n y: A `tensor` with the same shape and type as `x`.\n name: A name of the operation (optional)\n\n Returns:\n A `Tensor` with the same type and shape as `x`, `y` if they are non-None.\n Otherwise, a `Tensor` with shape `(num_true, rank(condition))`.\n\n Raises:\n ValueError: When exactly one of `x` or `y` is non-None.\n\n @compatibility(TF2)\n\n This API is compatible with eager execution and `tf.function`. However, this\n is still a legacy API endpoint originally designed for TF1. To migrate to\n fully-native TF2, please replace its usage with `tf.where` instead, which is\n directly backwards compatible with `tf.compat.v1.where`.\n\n However,`tf.compat.v1.where` is more restrictive than `tf.where`, requiring\n `x` and `y` to have the same shape, and returning a `Tensor` with the same\n type and shape as `x`, `y` (if they are both non-None).\n\n `tf.where` will accept `x`, `y` that are not the same shape as long as they\n are broadcastable with one another and with `condition`, and will return a\n `Tensor` with shape broadcast from `condition`, `x`, and `y`.\n\n For example, the following works with `tf.where` but not `tf.compat.v1.where`:\n\n >>> tf.where([True, False, False, True], [1,2,3,4], [100])\n \n\n >>> tf.where(True, [1,2,3,4], 100)\n \n\n @end_compatibility\n ", "desc": "Return the elements, either from `x` or `y`, depending on the `condition`.", "type": "API"}, {"name": "tf.compat.v1.where_v2", "docs": "Returns the indices of non-zero elements, or multiplexes `x` and `y`.\n\n This operation has two modes:\n\n 1. **Return the indices of non-zero elements** - When only\n `condition` is provided the result is an `int64` tensor where each row is\n the index of a non-zero element of `condition`. The result's shape\n is `[tf.math.count_nonzero(condition), tf.rank(condition)]`.\n 2. **Multiplex `x` and `y`** - When both `x` and `y` are provided the\n result has the shape of `x`, `y`, and `condition` broadcast together. The\n result is taken from `x` where `condition` is non-zero\n or `y` where `condition` is zero.\n\n #### 1. Return the indices of non-zero elements\n\n Note: In this mode `condition` can have a dtype of `bool` or any numeric\n dtype.\n\n If `x` and `y` are not provided (both are None):\n\n `tf.where` will return the indices of `condition` that are non-zero,\n in the form of a 2-D tensor with shape `[n, d]`, where `n` is the number of\n non-zero elements in `condition` (`tf.count_nonzero(condition)`), and `d` is\n the number of axes of `condition` (`tf.rank(condition)`).\n\n Indices are output in row-major order. The `condition` can have a `dtype` of\n `tf.bool`, or any numeric `dtype`.\n\n Here `condition` is a 1-axis `bool` tensor with 2 `True` values. The result\n has a shape of `[2,1]`\n\n >>> tf.where([True, False, False, True]).numpy()\n array([[0],\n [3]])\n\n Here `condition` is a 2-axis integer tensor, with 3 non-zero values. The\n result has a shape of `[3, 2]`.\n\n >>> tf.where([[1, 0, 0], [1, 0, 1]]).numpy()\n array([[0, 0],\n [1, 0],\n [1, 2]])\n\n Here `condition` is a 3-axis float tensor, with 5 non-zero values. The output\n shape is `[5, 3]`.\n\n >>> float_tensor = [[[0.1, 0], [0, 2.2], [3.5, 1e6]],\n ... [[0, 0], [0, 0], [99, 0]]]\n >>> tf.where(float_tensor).numpy()\n array([[0, 0, 0],\n [0, 1, 1],\n [0, 2, 0],\n [0, 2, 1],\n [1, 2, 0]])\n\n These indices are the same that `tf.sparse.SparseTensor` would use to\n represent the condition tensor:\n\n >>> sparse = tf.sparse.from_dense(float_tensor)\n >>> sparse.indices.numpy()\n array([[0, 0, 0],\n [0, 1, 1],\n [0, 2, 0],\n [0, 2, 1],\n [1, 2, 0]])\n\n A complex number is considered non-zero if either the real or imaginary\n component is non-zero:\n\n >>> tf.where([complex(0.), complex(1.), 0+1j, 1+1j]).numpy()\n array([[1],\n [2],\n [3]])\n\n #### 2. Multiplex `x` and `y`\n\n Note: In this mode `condition` must have a dtype of `bool`.\n\n If `x` and `y` are also provided (both have non-None values) the `condition`\n tensor acts as a mask that chooses whether the corresponding\n element / row in the output should be taken from `x` (if the element in\n `condition` is `True`) or `y` (if it is `False`).\n\n The shape of the result is formed by\n [broadcasting](https://docs.scipy.org/doc/numpy/reference/ufuncs.html)\n together the shapes of `condition`, `x`, and `y`.\n\n When all three inputs have the same size, each is handled element-wise.\n\n >>> tf.where([True, False, False, True],\n ... [1, 2, 3, 4],\n ... [100, 200, 300, 400]).numpy()\n array([ 1, 200, 300, 4], dtype=int32)\n\n There are two main rules for broadcasting:\n\n 1. If a tensor has fewer axes than the others, length-1 axes are added to the\n left of the shape.\n 2. Axes with length-1 are streched to match the coresponding axes of the other\n tensors.\n\n A length-1 vector is streched to match the other vectors:\n\n >>> tf.where([True, False, False, True], [1, 2, 3, 4], [100]).numpy()\n array([ 1, 100, 100, 4], dtype=int32)\n\n A scalar is expanded to match the other arguments:\n\n >>> tf.where([[True, False], [False, True]], [[1, 2], [3, 4]], 100).numpy()\n array([[ 1, 100], [100, 4]], dtype=int32)\n >>> tf.where([[True, False], [False, True]], 1, 100).numpy()\n array([[ 1, 100], [100, 1]], dtype=int32)\n\n A scalar `condition` returns the complete `x` or `y` tensor, with\n broadcasting applied.\n\n >>> tf.where(True, [1, 2, 3, 4], 100).numpy()\n array([1, 2, 3, 4], dtype=int32)\n >>> tf.where(False, [1, 2, 3, 4], 100).numpy()\n array([100, 100, 100, 100], dtype=int32)\n\n For a non-trivial example of broadcasting, here `condition` has a shape of\n `[3]`, `x` has a shape of `[3,3]`, and `y` has a shape of `[3,1]`.\n Broadcasting first expands the shape of `condition` to `[1,3]`. The final\n broadcast shape is `[3,3]`. `condition` will select columns from `x` and `y`.\n Since `y` only has one column, all columns from `y` will be identical.\n\n >>> tf.where([True, False, True],\n ... x=[[1, 2, 3],\n ... [4, 5, 6],\n ... [7, 8, 9]],\n ... y=[[100],\n ... [200],\n ... [300]]\n ... ).numpy()\n array([[ 1, 100, 3],\n [ 4, 200, 6],\n [ 7, 300, 9]], dtype=int32)\n\n Note that if the gradient of either branch of the `tf.where` generates\n a `NaN`, then the gradient of the entire `tf.where` will be `NaN`. This is\n because the gradient calculation for `tf.where` combines the two branches, for\n performance reasons.\n\n A workaround is to use an inner `tf.where` to ensure the function has\n no asymptote, and to avoid computing a value whose gradient is `NaN` by\n replacing dangerous inputs with safe inputs.\n\n Instead of this,\n\n >>> x = tf.constant(0., dtype=tf.float32)\n >>> with tf.GradientTape() as tape:\n ... tape.watch(x)\n ... y = tf.where(x < 1., 0., 1. / x)\n >>> print(tape.gradient(y, x))\n tf.Tensor(nan, shape=(), dtype=float32)\n\n Although, the `1. / x` values are never used, its gradient is a `NaN` when\n `x = 0`. Instead, we should guard that with another `tf.where`\n\n >>> x = tf.constant(0., dtype=tf.float32)\n >>> with tf.GradientTape() as tape:\n ... tape.watch(x)\n ... safe_x = tf.where(tf.equal(x, 0.), 1., x)\n ... y = tf.where(x < 1., 0., 1. / safe_x)\n >>> print(tape.gradient(y, x))\n tf.Tensor(0.0, shape=(), dtype=float32)\n\n See also:\n\n * `tf.sparse` - The indices returned by the first form of `tf.where` can be\n useful in `tf.sparse.SparseTensor` objects.\n * `tf.gather_nd`, `tf.scatter_nd`, and related ops - Given the\n list of indices returned from `tf.where` the `scatter` and `gather` family\n of ops can be used fetch values or insert values at those indices.\n * `tf.strings.length` - `tf.string` is not an allowed dtype for the\n `condition`. Use the string length instead.\n\n Args:\n condition: A `tf.Tensor` of dtype bool, or any numeric dtype. `condition`\n must have dtype `bool` when `x` and `y` are provided.\n x: If provided, a Tensor which is of the same type as `y`, and has a shape\n broadcastable with `condition` and `y`.\n y: If provided, a Tensor which is of the same type as `x`, and has a shape\n broadcastable with `condition` and `x`.\n name: A name of the operation (optional).\n\n Returns:\n If `x` and `y` are provided:\n A `Tensor` with the same type as `x` and `y`, and shape that\n is broadcast from `condition`, `x`, and `y`.\n Otherwise, a `Tensor` with shape `[tf.math.count_nonzero(condition),\n tf.rank(condition)]`.\n\n Raises:\n ValueError: When exactly one of `x` or `y` is non-None, or the shapes\n are not all broadcastable.\n ", "desc": "Returns the indices of non-zero elements, or multiplexes `x` and `y`.", "type": "API"}, {"name": "tf.compat.v1.while_loop", "docs": "Repeat `body` while the condition `cond` is true.\n\n `cond` is a callable returning a boolean scalar tensor. `body` is a callable\n returning a (possibly nested) tuple, namedtuple or list of tensors of the same\n arity (length and structure) and types as `loop_vars`. `loop_vars` is a\n (possibly nested) tuple, namedtuple or list of tensors that is passed to both\n `cond` and `body`. `cond` and `body` both take as many arguments as there are\n `loop_vars`.\n\n In addition to regular Tensors or IndexedSlices, the body may accept and\n return TensorArray objects. The flows of the TensorArray objects will\n be appropriately forwarded between loops and during gradient calculations.\n\n Note that `while_loop` calls `cond` and `body` *exactly once* (inside the\n call to `while_loop`, and not at all during `Session.run()`). `while_loop`\n stitches together the graph fragments created during the `cond` and `body`\n calls with some additional graph nodes to create the graph flow that\n repeats `body` until `cond` returns false.\n\n For correctness, `tf.while_loop()` strictly enforces shape invariants for\n the loop variables. A shape invariant is a (possibly partial) shape that\n is unchanged across the iterations of the loop. An error will be raised\n if the shape of a loop variable after an iteration is determined to be more\n general than or incompatible with its shape invariant. For example, a shape\n of [11, None] is more general than a shape of [11, 17], and [11, 21] is not\n compatible with [11, 17]. By default (if the argument `shape_invariants` is\n not specified), it is assumed that the initial shape of each tensor in\n `loop_vars` is the same in every iteration. The `shape_invariants` argument\n allows the caller to specify a less specific shape invariant for each loop\n variable, which is needed if the shape varies between iterations. The\n `tf.Tensor.set_shape`\n function may also be used in the `body` function to indicate that\n the output loop variable has a particular shape. The shape invariant for\n SparseTensor and IndexedSlices are treated specially as follows:\n\n a) If a loop variable is a SparseTensor, the shape invariant must be\n TensorShape([r]) where r is the rank of the dense tensor represented\n by the sparse tensor. It means the shapes of the three tensors of the\n SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here\n is the shape of the SparseTensor.dense_shape property. It must be the shape of\n a vector.\n\n b) If a loop variable is an IndexedSlices, the shape invariant must be\n a shape invariant of the values tensor of the IndexedSlices. It means\n the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]],\n [shape.ndims]).\n\n `while_loop` implements non-strict semantics, enabling multiple iterations\n to run in parallel. The maximum number of parallel iterations can be\n controlled by `parallel_iterations`, which gives users some control over\n memory consumption and execution order. For correct programs, `while_loop`\n should return the same result for any parallel_iterations > 0.\n\n For training, TensorFlow stores the tensors that are produced in the\n forward inference and are needed in back propagation. These tensors are a\n main source of memory consumption and often cause OOM errors when training\n on GPUs. When the flag swap_memory is true, we swap out these tensors from\n GPU to CPU. This for example allows us to train RNN models with very long\n sequences and large batches.\n\n Args:\n cond: A callable that represents the termination condition of the loop.\n body: A callable that represents the loop body.\n loop_vars: A (possibly nested) tuple, namedtuple or list of numpy array,\n `Tensor`, and `TensorArray` objects.\n shape_invariants: The shape invariants for the loop variables.\n parallel_iterations: The number of iterations allowed to run in parallel. It\n must be a positive integer.\n back_prop: Whether backprop is enabled for this while loop.\n swap_memory: Whether GPU-CPU memory swap is enabled for this loop.\n name: Optional name prefix for the returned tensors.\n maximum_iterations: Optional maximum number of iterations of the while loop\n to run. If provided, the `cond` output is AND-ed with an additional\n condition ensuring the number of iterations executed is no greater than\n `maximum_iterations`.\n return_same_structure: If True, output has same structure as `loop_vars`. If\n eager execution is enabled, this is ignored (and always treated as True).\n\n Returns:\n The output tensors for the loop variables after the loop.\n If `return_same_structure` is True, the return value has the same\n structure as `loop_vars`.\n If `return_same_structure` is False, the return value is a Tensor,\n TensorArray or IndexedSlice if the length of `loop_vars` is 1, or a list\n otherwise.\n\n Raises:\n TypeError: if `cond` or `body` is not callable.\n ValueError: if `loop_vars` is empty.\n\n Example:\n\n ```python\n i = tf.constant(0)\n c = lambda i: tf.less(i, 10)\n b = lambda i: tf.add(i, 1)\n r = tf.while_loop(c, b, [i])\n ```\n\n Example with nesting and a namedtuple:\n\n ```python\n import collections\n Pair = collections.namedtuple('Pair', 'j, k')\n ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))\n c = lambda i, p: i < 10\n b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))\n ijk_final = tf.while_loop(c, b, ijk_0)\n ```\n\n Example using shape_invariants:\n\n ```python\n i0 = tf.constant(0)\n m0 = tf.ones([2, 2])\n c = lambda i, m: i < 10\n b = lambda i, m: [i+1, tf.concat([m, m], axis=0)]\n tf.while_loop(\n c, b, loop_vars=[i0, m0],\n shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])\n ```\n\n Example which demonstrates non-strict semantics: In the following\n example, the final value of the counter `i` does not depend on `x`. So\n the `while_loop` can increment the counter parallel to updates of `x`.\n However, because the loop counter at one loop iteration depends\n on the value at the previous iteration, the loop counter itself cannot\n be incremented in parallel. Hence if we just want the final value of the\n counter (which we print on the line `print(sess.run(i))`), then\n `x` will never be incremented, but the counter will be updated on a\n single thread. Conversely, if we want the value of the output (which we\n print on the line `print(sess.run(out).shape)`), then the counter may be\n incremented on its own thread, while `x` can be incremented in\n parallel on a separate thread. In the extreme case, it is conceivable\n that the thread incrementing the counter runs until completion before\n `x` is incremented even a single time. The only thing that can never\n happen is that the thread updating `x` can never get ahead of the\n counter thread because the thread incrementing `x` depends on the value\n of the counter.\n\n ```python\n import tensorflow as tf\n\n n = 10000\n x = tf.constant(list(range(n)))\n c = lambda i, x: i < n\n b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1,\n [i], \"x:\"))\n i, out = tf.while_loop(c, b, (0, x))\n with tf.compat.v1.Session() as sess:\n print(sess.run(i)) # prints [0] ... [9999]\n\n # The following line may increment the counter and x in parallel.\n # The counter thread may get ahead of the other thread, but not the\n # other way around. So you may see things like\n # [9996] x:[9987]\n # meaning that the counter thread is on iteration 9996,\n # while the other thread is on iteration 9987\n print(sess.run(out).shape)\n ```\n\n ", "desc": "Repeat `body` while the condition `cond` is true.", "type": "API"}, {"name": "tf.compat.v1.WholeFileReader", "docs": "A Reader that outputs the entire contents of a file as a value.\n\n To use, enqueue filenames in a Queue. The output of Read will\n be a filename (key) and the contents of that file (value).\n\n See ReaderBase for supported methods.\n\n @compatibility(eager)\n Readers are not compatible with eager execution. Instead, please\n use `tf.data` to get data into your model.\n @end_compatibility\n ", "desc": "A Reader that outputs the entire contents of a file as a value.", "type": "API"}, {"name": "tf.compat.v1.wrap_function", "docs": "Wraps the TF 1.x function fn into a graph function.\n\n The python function `fn` will be called once with symbolic arguments specified\n in the `signature`, traced, and turned into a graph function. Any variables\n created by `fn` will be owned by the object returned by `wrap_function`. The\n resulting graph function can be called with tensors which match the\n signature.\n\n ```python\n def f(x, do_add):\n v = tf.Variable(5.0)\n if do_add:\n op = v.assign_add(x)\n else:\n op = v.assign_sub(x)\n with tf.control_dependencies([op]):\n return v.read_value()\n\n f_add = tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), True])\n\n assert float(f_add(1.0)) == 6.0\n assert float(f_add(1.0)) == 7.0\n\n # Can call tf.compat.v1.wrap_function again to get a new trace, a new set\n # of variables, and possibly different non-template arguments.\n f_sub= tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), False])\n\n assert float(f_sub(1.0)) == 4.0\n assert float(f_sub(1.0)) == 3.0\n ```\n\n Both `tf.compat.v1.wrap_function` and `tf.function` create a callable\n TensorFlow graph. But while `tf.function` runs all stateful operations\n (e.g. `tf.print`) and sequences operations to provide the same semantics as\n eager execution, `wrap_function` is closer to the behavior of `session.run` in\n TensorFlow 1.x. It will not run any operations unless they are required to\n compute the function's outputs, either through a data dependency or a control\n dependency. Nor will it sequence operations.\n\n Unlike `tf.function`, `wrap_function` will only trace the Python function\n once. As with placeholders in TF 1.x, shapes and dtypes must be provided to\n `wrap_function`'s `signature` argument.\n\n Since it is only traced once, variables and state may be created inside the\n function and owned by the function wrapper object.\n\n Args:\n fn: python function to be wrapped\n signature: the placeholder and python arguments to be passed to the wrapped\n function\n name: Optional. The name of the function.\n\n Returns:\n the wrapped graph function.\n ", "desc": "Wraps the TF 1.x function fn into a graph function.", "type": "API"}, {"name": "tf.compat.v1.write_file", "docs": "Writes `contents` to the file at input `filename`.\n\n Creates the file and recursively creates directory if it does not exist.\n\n Args:\n filename: A `Tensor` of type `string`.\n scalar. The name of the file to which we write the contents.\n contents: A `Tensor` of type `string`.\n scalar. The content to be written to the output file.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes `contents` to the file at input `filename`.", "type": "API"}, {"name": "tf.compat.v1.xla", "docs": "Public API for tf.xla namespace.\n", "desc": "Public API for tf.xla namespace.", "type": "API"}, {"name": "tf.compat.v1.xla.experimental", "docs": "Public API for tf.xla.experimental namespace.\n", "desc": "Public API for tf.xla.experimental namespace.", "type": "API"}, {"name": "tf.compat.v1.xla.experimental.compile", "docs": "Builds an operator that compiles and runs `computation` with XLA. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nxla.experimental.compile is deprecated. Consider using tf.function(jit_compile=True)\n\nNOTE: In eager mode, `computation` will have `@tf.function` semantics.\n\nArgs:\n computation: A Python function that builds a computation to apply to the\n input. If the function takes n inputs, 'inputs' should be a list of n\n tensors.\n\n `computation` may return a list of operations and tensors. Tensors must\n come before operations in the returned list. The return value of\n `compile` is a list of tensors corresponding to the tensors from the\n output of `computation`.\n\n All `Operation`s returned from `computation` will be executed when\n evaluating any of the returned output tensors.\n inputs: A list of inputs or `None` (equivalent to an empty list). Each input\n can be a nested structure containing values that are convertible to\n tensors. Note that passing an N-dimension list of compatible values will\n result in a N-dimension list of scalar tensors rather than a single Rank-N\n tensors. If you need different behavior, convert part of inputs to tensors\n with `tf.convert_to_tensor`.\n\nReturns:\n Same data structure as if computation(*inputs) is called directly with some\n exceptions for correctness. Exceptions include:\n 1) None output: a NoOp would be returned which control-depends on\n computation.\n 2) Single value output: A tuple containing the value would be returned.\n 3) Operation-only outputs: a NoOp would be returned which\n control-depends on computation.\n TODO(b/121383831): Investigate into removing these special cases.\n\nRaises:\n RuntimeError: if called when eager execution is enabled.\n\nKnown issues:\n When a tf.random operation is built with XLA, the implementation doesn't\n pass the user provided seed to the XLA compiler. As such, the XLA compiler\n generates a random number and uses it as a seed when compiling the\n operation. This implementation causes a violation of the Tensorflow\n defined semantics in two aspects. First, changing the value of the user\n defined seed doesn't change the numbers generated by the operation.\n Second, when a seed is not specified, running the program multiple times\n will generate the same numbers.", "desc": "Builds an operator that compiles and runs `computation` with XLA. (deprecated)", "type": "API"}, {"name": "tf.compat.v1.xla.experimental.jit_scope", "docs": "Enable or disable JIT compilation of operators within the scope.\n\n NOTE: This is an experimental feature.\n\n The compilation is a hint and only supported on a best-effort basis.\n\n Example usage:\n\n ```python\n with tf.xla.experimental.jit_scope():\n c = tf.matmul(a, b) # compiled\n with tf.xla.experimental.jit_scope(compile_ops=False):\n d = tf.matmul(a, c) # not compiled\n with tf.xla.experimental.jit_scope(\n compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):\n e = tf.matmul(a, b) + d # matmul is compiled, the addition is not.\n ```\n\n Example of `separate_compiled_gradients`:\n\n ```python\n # In the example below, the computations for f, g and h will all be compiled\n # in separate scopes.\n with tf.xla.experimental.jit_scope(\n separate_compiled_gradients=True):\n f = tf.matmul(a, b)\n g = tf.gradients([f], [a, b], name='mygrads1')\n h = tf.gradients([f], [a, b], name='mygrads2')\n ```\n\n Ops that are not in the scope may be clustered and compiled with ops in\n the scope with `compile_ops=True`, while the ops in the scope with\n `compile_ops=False` will never be compiled.\n\n For example:\n\n ```python\n # In the example below, x and loss may be clustered and compiled together,\n # while y will not be compiled.\n with tf.xla.experimental.jit_scope():\n x = tf.matmul(a, b)\n with tf.xla.experimental.jit_scope(compile_ops=False):\n y = tf.matmul(c, d)\n loss = x + y\n ```\n\n If you want to only compile the ops in the scope with `compile_ops=True`,\n consider adding an outer `jit_scope(compile_ops=False)`:\n\n ```python\n # In the example below, only x will be compiled.\n with tf.xla.experimental.jit_scope(compile_ops=False):\n with tf.xla.experimental.jit_scope():\n x = tf.matmul(a, b)\n y = tf.matmul(c, d)\n loss = x + y\n ```\n\n Args:\n compile_ops: Whether to enable or disable compilation in the scope.\n Either a Python bool, or a callable that accepts the parameter\n `node_def` and returns a python bool.\n separate_compiled_gradients: If true put each gradient subgraph into a\n separate compilation scope. This gives fine-grained control over which\n portions of the graph will be compiled as a single unit. Compiling\n gradients separately may yield better performance for some graphs.\n The scope is named based on the scope of the forward computation as well\n as the name of the gradients. As a result, the gradients will be compiled\n in a scope that is separate from both the forward computation, and from\n other gradients.\n Raises:\n RuntimeError: if called when eager execution is enabled.\n Yields:\n The current scope, enabling or disabling compilation.\n ", "desc": "Enable or disable JIT compilation of operators within the scope.", "type": "API"}, {"name": "tf.compat.v1.zeros", "docs": "Creates a tensor with all elements set to zero.\n\n See also `tf.zeros_like`, `tf.ones`, `tf.fill`, `tf.eye`.\n\n This operation returns a tensor of type `dtype` with shape `shape` and\n all elements set to zero.\n\n >>> tf.zeros([3, 4], tf.int32)\n \n\n Args:\n shape: A `list` of integers, a `tuple` of integers, or\n a 1-D `Tensor` of type `int32`.\n dtype: The DType of an element in the resulting `Tensor`.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor` with all elements set to zero.\n ", "desc": "Creates a tensor with all elements set to zero.", "type": "API"}, {"name": "tf.compat.v1.zeros_initializer", "docs": "Initializer that generates tensors initialized to 0.\n\n @compatibility(TF2)\n `tf.compat.v1.zeros_initializer` is compatible with eager execution\n and `tf.function`.\n\n To migrate to TF2, please use `tf.zeros_initializer` instead. The `dtype`\n argument in `tf.compat.v1.zeros_initializer.__init__()` does not exist in\n `tf.zeros_initializer.__init__()`. However, you can specify the `dtype` in\n `__call__()` in both cases.\n\n #### Structural Mapping to TF2\n\n Before:\n\n ```python\n initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n variable = tf.Variable(initializer(shape=[3, 3]))\n ```\n\n After:\n\n ```python\n initializer = tf.zeros_initializer()\n variable = tf.Variable(initializer(shape=[3, 3], dtype=tf.float32))\n ```\n\n #### How to Map Arguments\n\n | TF1 Arg Name | TF2 Arg Name | Note |\n | :------------------- | :--------------- | :------------------------- |\n | `dtype` | `dtype` | In `__call__()` method |\n | `partition_info` | - | (`__call__` arg in TF1) Not supported |\n\n\n #### Before & After Usage Example\n\n Before:\n\n >>> initializer = tf.compat.v1.zeros_initializer(dtype=tf.float32)\n >>> tf.Variable(initializer(shape=[3])).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3])).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n >>> initializer = tf.compat.v1.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n After:\n\n >>> initializer = tf.zeros_initializer()\n >>> tf.Variable(initializer(shape=[3], dtype=tf.float32)).numpy()\n array([0., 0., 0.], dtype=float32)\n >>> tf.Variable(initializer(shape=[3, 3], dtype=tf.float32)).numpy()\n array([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]], dtype=float32)\n\n @end_compatibility\n ", "desc": "Initializer that generates tensors initialized to 0.", "type": "API"}, {"name": "tf.compat.v1.zeros_like", "docs": "Creates a tensor with all elements set to zero.\n\n See also `tf.zeros`.\n\n Given a single tensor (`tensor`), this operation returns a tensor of the\n same type and shape as `tensor` with all elements set to zero. Optionally,\n you can use `dtype` to specify a new type for the returned tensor.\n\n Examples:\n\n >>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.zeros_like(tensor)\n \n\n >>> tf.zeros_like(tensor, dtype=tf.float32)\n \n\n Args:\n tensor: A `Tensor`.\n dtype: A type for the returned `Tensor`. Must be `float16`, `float32`,\n `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`,\n `complex64`, `complex128`, `bool` or `string`. (optional)\n name: A name for the operation (optional).\n optimize: if `True`, attempt to statically determine the shape of `tensor`\n and encode it as a constant. (optional, defaults to `True`)\n\n Returns:\n A `Tensor` with all elements set to zero.\n ", "desc": "Creates a tensor with all elements set to zero.", "type": "API"}, {"name": "tf.compat.v1.zeta", "docs": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).\n\n The Hurwitz zeta function is defined as:\n\n\n \\\\(\\zeta(x, q) = \\sum_{n=0}^{\\infty} (q + n)^{-x}\\\\)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n q: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).", "type": "API"}, {"name": "tf.complex", "docs": "Converts two real numbers to a complex number.\n\n Given a tensor `real` representing the real part of a complex number, and a\n tensor `imag` representing the imaginary part of a complex number, this\n operation returns complex numbers elementwise of the form \\\\(a + bj\\\\), where\n *a* represents the `real` part and *b* represents the `imag` part.\n\n The input tensors `real` and `imag` must have the same shape.\n\n For example:\n\n ```python\n real = tf.constant([2.25, 3.25])\n imag = tf.constant([4.75, 5.75])\n tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]]\n ```\n\n Args:\n real: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n imag: A `Tensor`. Must have the same type as `real`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64` or `complex128`.\n\n Raises:\n TypeError: Real and imag must be correct types\n ", "desc": "Converts two real numbers to a complex number.", "type": "API"}, {"name": "tf.concat", "docs": "Concatenates tensors along one dimension.\n\n See also `tf.tile`, `tf.stack`, `tf.repeat`.\n\n Concatenates the list of tensors `values` along dimension `axis`. If\n `values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, the concatenated\n result has shape\n\n [D0, D1, ... Raxis, ...Dn]\n\n where\n\n Raxis = sum(Daxis(i))\n\n That is, the data from the input tensors is joined along the `axis`\n dimension.\n\n The number of dimensions of the input tensors must match, and all dimensions\n except `axis` must be equal.\n\n For example:\n\n >>> t1 = [[1, 2, 3], [4, 5, 6]]\n >>> t2 = [[7, 8, 9], [10, 11, 12]]\n >>> tf.concat([t1, t2], 0)\n \n\n >>> tf.concat([t1, t2], 1)\n \n\n As in Python, the `axis` could also be negative numbers. Negative `axis`\n are interpreted as counting from the end of the rank, i.e.,\n `axis + rank(values)`-th dimension.\n\n For example:\n\n >>> t1 = [[[1, 2], [2, 3]], [[4, 4], [5, 3]]]\n >>> t2 = [[[7, 4], [8, 4]], [[2, 10], [15, 11]]]\n >>> tf.concat([t1, t2], -1)\n \n\n Note: If you are concatenating along a new axis consider using stack.\n E.g.\n\n ```python\n tf.concat([tf.expand_dims(t, axis) for t in tensors], axis)\n ```\n\n can be rewritten as\n\n ```python\n tf.stack(tensors, axis=axis)\n ```\n\n Args:\n values: A list of `Tensor` objects or a single `Tensor`.\n axis: 0-D `int32` `Tensor`. Dimension along which to concatenate. Must be\n in the range `[-rank(values), rank(values))`. As in Python, indexing for\n axis is 0-based. Positive axis in the rage of `[0, rank(values))` refers\n to `axis`-th dimension. And negative axis refers to `axis +\n rank(values)`-th dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` resulting from concatenation of the input tensors.\n ", "desc": "Concatenates tensors along one dimension.", "type": "API"}, {"name": "tf.cond", "docs": "Return `true_fn()` if the predicate `pred` is true else `false_fn()`.\n\n `true_fn` and `false_fn` both return lists of output tensors. `true_fn` and\n `false_fn` must have the same non-zero number and type of outputs.\n\n **WARNING**: Any Tensors or Operations created outside of `true_fn` and\n `false_fn` will be executed regardless of which branch is selected at runtime.\n\n Although this behavior is consistent with the dataflow model of TensorFlow,\n it has frequently surprised users who expected a lazier semantics.\n Consider the following simple program:\n\n ```python\n z = tf.multiply(a, b)\n result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))\n ```\n\n If `x < y`, the `tf.add` operation will be executed and `tf.square`\n operation will not be executed. Since `z` is needed for at least one\n branch of the `cond`, the `tf.multiply` operation is always executed,\n unconditionally.\n\n Note that `cond` calls `true_fn` and `false_fn` *exactly once* (inside the\n call to `cond`, and not at all during `Session.run()`). `cond`\n stitches together the graph fragments created during the `true_fn` and\n `false_fn` calls with some additional graph nodes to ensure that the right\n branch gets executed depending on the value of `pred`.\n\n `tf.cond` supports nested structures as implemented in\n `tensorflow.python.util.nest`. Both `true_fn` and `false_fn` must return the\n same (possibly nested) value structure of lists, tuples, and/or named tuples.\n Singleton lists and tuples form the only exceptions to this: when returned by\n `true_fn` and/or `false_fn`, they are implicitly unpacked to single values.\n\n Note: It is illegal to \"directly\" use tensors created inside a cond branch\n outside it, e.g. by storing a reference to a branch tensor in the python\n state. If you need to use a tensor created in a branch function you should\n return it as an output of the branch function and use the output from\n `tf.cond` instead.\n\n Args:\n pred: A scalar determining whether to return the result of `true_fn` or\n `false_fn`.\n true_fn: The callable to be performed if pred is true.\n false_fn: The callable to be performed if pred is false.\n name: Optional name prefix for the returned tensors.\n\n Returns:\n Tensors returned by the call to either `true_fn` or `false_fn`. If the\n callables return a singleton list, the element is extracted from the list.\n\n Raises:\n TypeError: if `true_fn` or `false_fn` is not callable.\n ValueError: if `true_fn` and `false_fn` do not return the same number of\n tensors, or return tensors of different types.\n\n Example:\n\n ```python\n x = tf.constant(2)\n y = tf.constant(5)\n def f1(): return tf.multiply(x, 17)\n def f2(): return tf.add(y, 23)\n r = tf.cond(tf.less(x, y), f1, f2)\n # r is set to f1().\n # Operations in f2 (e.g., tf.add) are not executed.\n ```\n\n ", "desc": "Return `true_fn()` if the predicate `pred` is true else `false_fn()`.", "type": "API"}, {"name": "tf.config", "docs": "Public API for tf.config namespace.\n", "desc": "Public API for tf.config namespace.", "type": "API"}, {"name": "tf.config.experimental", "docs": "Public API for tf.config.experimental namespace.\n", "desc": "Public API for tf.config.experimental namespace.", "type": "API"}, {"name": "tf.config.experimental.ClusterDeviceFilters", "docs": "Represent a collection of device filters for the remote workers in cluster.\n\n NOTE: this is an experimental API and subject to changes.\n\n Set device filters for selective jobs and tasks. For each remote worker, the\n device filters are a list of strings. When any filters are present, the remote\n worker will ignore all devices which do not match any of its filters. Each\n filter can be partially specified, e.g. \"/job:ps\", \"/job:worker/replica:3\",\n etc. Note that a device is always visible to the worker it is located on.\n\n For example, to set the device filters for a parameter server cluster:\n\n ```python\n cdf = tf.config.experimental.ClusterDeviceFilters()\n for i in range(num_workers):\n cdf.set_device_filters('worker', i, ['/job:ps'])\n for i in range(num_ps):\n cdf.set_device_filters('ps', i, ['/job:worker'])\n\n tf.config.experimental_connect_to_cluster(cluster_def,\n cluster_device_filters=cdf)\n ```\n\n The device filters can be partically specified. For remote tasks that do not\n have device filters specified, all devices will be visible to them.\n ", "desc": "Represent a collection of device filters for the remote workers in cluster.", "type": "API"}, {"name": "tf.config.experimental.disable_mlir_bridge", "docs": "Disables experimental MLIR-Based TensorFlow Compiler Bridge.", "desc": "Disables experimental MLIR-Based TensorFlow Compiler Bridge.", "type": "API"}, {"name": "tf.config.experimental.disable_mlir_graph_optimization", "docs": "Disables experimental MLIR-Based TensorFlow Compiler Optimizations.", "desc": "Disables experimental MLIR-Based TensorFlow Compiler Optimizations.", "type": "API"}, {"name": "tf.config.experimental.enable_mlir_bridge", "docs": "Enables experimental MLIR-Based TensorFlow Compiler Bridge.\n\n DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.\n\n NOTE: MLIR-Based TensorFlow Compiler is under active development and has\n missing features, please refrain from using. This API exists for development\n and testing only.\n\n TensorFlow Compiler Bridge (TF Bridge) is responsible for translating parts\n of TensorFlow graph into a form that can be accepted as an input by a backend\n compiler such as XLA.\n ", "desc": "Enables experimental MLIR-Based TensorFlow Compiler Bridge.", "type": "API"}, {"name": "tf.config.experimental.enable_mlir_graph_optimization", "docs": "Enables experimental MLIR-Based TensorFlow Compiler Optimizations.\n\n DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.\n\n NOTE: MLIR-Based TensorFlow Compiler is under active development and has\n missing features, please refrain from using. This API exists for development\n and testing only.\n\n TensorFlow Compiler Optimizations are responsible general graph level\n optimizations that in the current stack mostly done by Grappler graph\n optimizers.\n ", "desc": "Enables experimental MLIR-Based TensorFlow Compiler Optimizations.", "type": "API"}, {"name": "tf.config.experimental.enable_tensor_float_32_execution", "docs": "Enable or disable the use of TensorFloat-32 on supported hardware.\n\n [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format),\n or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32\n execution causes certain float32 ops, such as matrix multiplications and\n convolutions, to run much faster on Ampere GPUs but with reduced precision.\n This reduced precision should not impact convergence of deep learning models\n in practice.\n\n TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on\n Ampere GPUs, so all other hardware will use the full float32 precision\n regardless of whether TensorFloat-32 is enabled or not. If you want to use the\n full float32 precision on Ampere, you can disable TensorFloat-32 execution\n with this function. For example:\n\n ```python\n x = tf.fill((2, 2), 1.0001)\n y = tf.fill((2, 2), 1.)\n # TensorFloat-32 is enabled, so matmul is run with reduced precision\n print(tf.linalg.matmul(x, y)) # [[2., 2.], [2., 2.]]\n tf.config.experimental.enable_tensor_float_32_execution(False)\n # Matmul is run with full precision\n print(tf.linalg.matmul(x, y)) # [[2.0002, 2.0002], [2.0002, 2.0002]]\n ```\n\n To check whether TensorFloat-32 execution is currently enabled, use\n `tf.config.experimental.tensor_float_32_execution_enabled`.\n\n If TensorFloat-32 is enabled, float32 inputs of supported ops, such as\n `tf.linalg.matmul`, will be rounded from 23 bits of precision to 10 bits of\n precision in most cases. This allows the ops to execute much faster by\n utilizing the GPU's tensor cores. TensorFloat-32 has the same dynamic range as\n float32, meaning it is no more likely to underflow or overflow than float32.\n Ops still use float32 accumulation when TensorFloat-32 is enabled. Enabling or\n disabling TensorFloat-32 only affects Ampere GPUs and subsequent GPUs that\n support TensorFloat-32.\n\n Note TensorFloat-32 is not always used in supported ops, as only inputs of\n certain shapes are supported. Support for more input shapes and more ops may\n be added in the future. As a result, precision of float32 ops may decrease in\n minor versions of TensorFlow.\n\n TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32\n is used in fewer cases for complex64 as it is for float32.\n\n Args:\n enabled: Bool indicating whether to enable TensorFloat-32 execution.\n ", "desc": "Enable or disable the use of TensorFloat-32 on supported hardware.", "type": "API"}, {"name": "tf.config.experimental.get_device_details", "docs": "Returns details about a physical devices.\n\n This API takes in a `tf.config.PhysicalDevice` returned by\n `tf.config.list_physical_devices`. It returns a dict with string keys\n containing various details about the device. Each key is only supported by a\n subset of devices, so you should not assume the returned dict will have any\n particular key.\n\n >>> gpu_devices = tf.config.list_physical_devices('GPU')\n >>> if gpu_devices:\n ... details = tf.config.experimental.get_device_details(gpu_devices[0])\n ... details.get('device_name', 'Unknown GPU')\n\n Currently, details are only returned for GPUs. This function returns an\n empty dict if passed a non-GPU device.\n\n The returned dict may have the following keys:\n * `'device_name'`: A human-readable name of the device as a string, e.g.\n \"Titan V\". Unlike `tf.config.PhysicalDevice.name`, this will be the same for\n multiple devices if each device is the same model. Currently only available\n for GPUs.\n * `'compute_capability'`: The\n [compute capability](https://developer.nvidia.com/cuda-gpus) of the device\n as a tuple of two ints, in the form `(major_version, minor_version)`. Only\n available for NVIDIA GPUs\n\n Note: This is similar to `tf.sysconfig.get_build_info` in that both functions\n can return information relating to GPUs. However, this function returns\n run-time information about a specific device (such as a GPU's compute\n capability), while `tf.sysconfig.get_build_info` returns compile-time\n information about how TensorFlow was built (such as what version of CUDA\n TensorFlow was built for).\n\n Args:\n device: A `tf.config.PhysicalDevice` returned by\n `tf.config.list_physical_devices` or `tf.config.get_visible_devices`.\n\n Returns:\n A dict with string keys.\n ", "desc": "Returns details about a physical devices.", "type": "API"}, {"name": "tf.config.experimental.get_device_policy", "docs": "Gets the current device policy.\n\n The device policy controls how operations requiring inputs on a specific\n device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).\n\n This function only gets the device policy for the current thread. Any\n subsequently started thread will again use the default policy.\n\n Returns:\n Current thread device policy\n ", "desc": "Gets the current device policy.", "type": "API"}, {"name": "tf.config.experimental.get_memory_growth", "docs": "Get if memory growth is enabled for a `PhysicalDevice`.\n\n If memory growth is enabled for a `PhysicalDevice`, the runtime initialization\n will not allocate all memory on the device.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.experimental.set_memory_growth(physical_devices[0], True)\n ... assert tf.config.experimental.get_memory_growth(physical_devices[0])\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n A boolean indicating the memory growth setting for the `PhysicalDevice`.\n\n Raises:\n ValueError: Invalid `PhysicalDevice` specified.\n ", "desc": "Get if memory growth is enabled for a `PhysicalDevice`.", "type": "API"}, {"name": "tf.config.experimental.get_memory_info", "docs": "Get memory info for the chosen device, as a dict.\n\n This function returns a dict containing information about the device's memory\n usage. For example:\n\n >>> if tf.config.list_physical_devices('GPU'):\n ... # Returns a dict in the form {'current': ,\n ... # 'peak': }\n ... tf.config.experimental.get_memory_info('GPU:0')\n\n Currently returns the following keys:\n - `'current'`: The current memory used by the device, in bytes.\n - `'peak'`: The peak memory used by the device across the run of the\n program, in bytes. Can be reset with\n `tf.config.experimental.reset_memory_stats`.\n\n More keys may be added in the future, including device-specific keys.\n\n Currently only supports GPU and TPU. If called on a CPU device, an exception\n will be raised.\n\n For GPUs, TensorFlow will allocate all the memory by default, unless changed\n with `tf.config.experimental.set_memory_growth`. The dict specifies only the\n current and peak memory that TensorFlow is actually using, not the memory that\n TensorFlow has allocated on the GPU.\n\n Args:\n device: Device string to get the memory information for, e.g. `\"GPU:0\"`,\n `\"TPU:0\"`. See https://www.tensorflow.org/api_docs/python/tf/device for\n specifying device strings.\n\n Returns:\n A dict with keys `'current'` and `'peak'`, specifying the current and peak\n memory usage respectively.\n\n Raises:\n ValueError: No device found with the device name, like '\"nonexistent\"'.\n ValueError: Invalid device name, like '\"GPU\"', '\"CPU:GPU\"', '\"CPU:\"'.\n ValueError: Multiple devices matched with the device name.\n ValueError: Memory statistics not tracked, like '\"CPU:0\"'.\n ", "desc": "Get memory info for the chosen device, as a dict.", "type": "API"}, {"name": "tf.config.experimental.get_memory_usage", "docs": "Get the current memory usage, in bytes, for the chosen device. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.config.experimental.get_memory_info(device)['current'] instead.\n\nThis function is deprecated in favor of\n`tf.config.experimental.get_memory_info`. Calling this function is equivalent\nto calling `tf.config.experimental.get_memory_info()['current']`.\n\nSee https://www.tensorflow.org/api_docs/python/tf/device for specifying device\nstrings.\n\nFor example:\n\n>>> gpu_devices = tf.config.list_physical_devices('GPU')\n>>> if gpu_devices:\n... tf.config.experimental.get_memory_usage('GPU:0')\n\nDoes not work for CPU.\n\nFor GPUs, TensorFlow will allocate all the memory by default, unless changed\nwith `tf.config.experimental.set_memory_growth`. This function only returns\nthe memory that TensorFlow is actually using, not the memory that TensorFlow\nhas allocated on the GPU.\n\nArgs:\n device: Device string to get the bytes in use for, e.g. `\"GPU:0\"`\n\nReturns:\n Total memory usage in bytes.\n\nRaises:\n ValueError: Non-existent or CPU device specified.", "desc": "Get the current memory usage, in bytes, for the chosen device. (deprecated)", "type": "API"}, {"name": "tf.config.experimental.get_synchronous_execution", "docs": "Gets whether operations are executed synchronously or asynchronously.\n\n TensorFlow can execute operations synchronously or asynchronously. If\n asynchronous execution is enabled, operations may return \"non-ready\" handles.\n\n Returns:\n Current thread execution mode\n ", "desc": "Gets whether operations are executed synchronously or asynchronously.", "type": "API"}, {"name": "tf.config.experimental.get_virtual_device_configuration", "docs": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.\n\n Returns the list of `tf.config.LogicalDeviceConfiguration`\n objects previously configured by a call to\n `tf.config.set_logical_device_configuration`.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n >>> try:\n ... assert configs is None\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n ... assert len(configs) == 2\n ... except:\n ... # Cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n List of `tf.config.LogicalDeviceConfiguration` objects or\n `None` if no virtual device configuration has been set for this physical\n device.\n ", "desc": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.config.experimental.get_visible_devices", "docs": "Get the list of visible physical devices.\n\n Returns the list of `PhysicalDevice`s currently marked as visible to the\n runtime. A visible device will have at least one `LogicalDevice` associated\n with it once the runtime is initialized.\n\n The following example verifies all visible GPUs have been disabled:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable all GPUS\n ... tf.config.set_visible_devices([], 'GPU')\n ... visible_devices = tf.config.get_visible_devices()\n ... for device in visible_devices:\n ... assert device.device_type != 'GPU'\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of visible `PhysicalDevice`s\n ", "desc": "Get the list of visible physical devices.", "type": "API"}, {"name": "tf.config.experimental.list_logical_devices", "docs": "Return a list of logical devices created by runtime.\n\n Logical devices may correspond to physical devices or remote devices in the\n cluster. Operations and tensors may be placed on these devices by using the\n `name` of the `tf.config.LogicalDevice`.\n\n Calling `tf.config.list_logical_devices` triggers the runtime to configure any\n `tf.config.PhysicalDevice` visible to the runtime, thereby preventing\n further configuration. To avoid runtime initialization, call\n `tf.config.list_physical_devices` instead.\n\n For example:\n\n >>> logical_devices = tf.config.list_logical_devices('GPU')\n >>> if len(logical_devices) > 0:\n ... # Allocate on GPU:0\n ... with tf.device(logical_devices[0].name):\n ... one = tf.constant(1)\n ... # Allocate on GPU:1\n ... with tf.device(logical_devices[1].name):\n ... two = tf.constant(2)\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of initialized `LogicalDevice`s\n ", "desc": "Return a list of logical devices created by runtime.", "type": "API"}, {"name": "tf.config.experimental.list_physical_devices", "docs": "Return a list of physical devices visible to the host runtime.\n\n Physical devices are hardware devices present on the host machine. By default\n all discovered CPU and GPU devices are considered visible.\n\n This API allows querying the physical hardware resources prior to runtime\n initialization. Thus, giving an opportunity to call any additional\n configuration APIs. This is in contrast to `tf.config.list_logical_devices`,\n which triggers runtime initialization in order to list the configured devices.\n\n The following example lists the number of visible GPUs on the host.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> print(\"Num GPUs:\", len(physical_devices))\n Num GPUs: ...\n\n However, the number of GPUs available to the runtime may change during runtime\n initialization due to marking certain devices as not visible or configuring\n multiple logical devices.\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of discovered `tf.config.PhysicalDevice` objects\n ", "desc": "Return a list of physical devices visible to the host runtime.", "type": "API"}, {"name": "tf.config.experimental.set_device_policy", "docs": "Sets the current thread device policy.\n\n The device policy controls how operations requiring inputs on a specific\n device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1).\n\n When using the default, an appropriate policy will be picked automatically.\n The default policy may change over time.\n\n This function only sets the device policy for the current thread. Any\n subsequently started thread will again use the default policy.\n\n Args:\n device_policy: A device policy.\n Valid values:\n - None: Switch to a system default.\n - 'warn': Copies the tensors which are not on the right device and logs a\n warning.\n - 'explicit': Raises an error if the placement is not as required.\n - 'silent': Silently copies the tensors. Note that this may hide\n performance problems as there is no notification provided when\n operations are blocked on the tensor being copied between devices.\n - 'silent_for_int32': silently copies `int32` tensors, raising errors on\n the other ones.\n\n Raises:\n ValueError: If an invalid `device_policy` is passed.\n ", "desc": "Sets the current thread device policy.", "type": "API"}, {"name": "tf.config.experimental.set_memory_growth", "docs": "Set if memory growth should be enabled for a `PhysicalDevice`.\n\n If memory growth is enabled for a `PhysicalDevice`, the runtime initialization\n will not allocate all memory on the device. Memory growth cannot be configured\n on a `PhysicalDevice` with virtual devices configured.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.experimental.set_memory_growth(physical_devices[0], True)\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to configure\n enable: (Boolean) Whether to enable or disable memory growth\n\n Raises:\n ValueError: Invalid `PhysicalDevice` specified.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set if memory growth should be enabled for a `PhysicalDevice`.", "type": "API"}, {"name": "tf.config.experimental.set_synchronous_execution", "docs": "Specifies whether operations are executed synchronously or asynchronously.\n\n TensorFlow can execute operations synchronously or asynchronously. If\n asynchronous execution is enabled, operations may return \"non-ready\" handles.\n\n When `enable` is set to None, an appropriate value will be picked\n automatically. The value picked may change between TensorFlow releases.\n\n Args:\n enable: Whether operations should be dispatched synchronously.\n Valid values:\n - None: sets the system default.\n - True: executes each operation synchronously.\n - False: executes each operation asynchronously.\n ", "desc": "Specifies whether operations are executed synchronously or asynchronously.", "type": "API"}, {"name": "tf.config.experimental.set_virtual_device_configuration", "docs": "Set the logical device configuration for a `tf.config.PhysicalDevice`.\n\n A visible `tf.config.PhysicalDevice` will by default have a single\n `tf.config.LogicalDevice` associated with it once the runtime is initialized.\n Specifying a list of `tf.config.LogicalDeviceConfiguration` objects allows\n multiple devices to be created on the same `tf.config.PhysicalDevice`.\n\n Logical device configurations can be modified by calling this function as\n long as the runtime is uninitialized. After the runtime is initialized\n calling this function raises a RuntimeError.\n\n The following example splits the CPU into 2 logical devices:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> # Specify 2 virtual CPUs. Note currently memory limit is not supported.\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... logical_devices = tf.config.list_logical_devices('CPU')\n ... assert len(logical_devices) == 2\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... except:\n ... # Cannot modify logical devices once initialized.\n ... pass\n\n The following example splits the GPU into 2 logical devices with 100 MB each:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=100),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=100)])\n ...\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... assert len(logical_devices) == len(physical_devices) + 1\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=10),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=10)])\n ... except:\n ... # Invalid device or cannot modify logical devices once initialized.\n ... pass\n\n Args:\n device: The `PhysicalDevice` to configure.\n logical_devices: (optional) List of `tf.config.LogicalDeviceConfiguration`\n objects to allocate for the specified `PhysicalDevice`. If None, the\n default configuration will be used.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the logical device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.config.experimental.set_visible_devices", "docs": "Set the list of visible devices.\n\n Specifies which `PhysicalDevice` objects are visible to the runtime.\n TensorFlow will only allocate memory and place operations on visible\n physical devices, as otherwise no `LogicalDevice` will be created on them.\n By default all discovered devices are marked as visible.\n\n The following example demonstrates disabling the first GPU on the machine.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable first GPU\n ... tf.config.set_visible_devices(physical_devices[1:], 'GPU')\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... # Logical device was not created for first GPU\n ... assert len(logical_devices) == len(physical_devices) - 1\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n devices: List of `PhysicalDevice`s to make visible\n device_type: (optional) Only configure devices matching this device type.\n For example \"CPU\" or \"GPU\". Other devices will be left unaltered.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the list of visible devices.", "type": "API"}, {"name": "tf.config.experimental.tensor_float_32_execution_enabled", "docs": "Returns whether TensorFloat-32 is enabled.\n\n By default, TensorFloat-32 is enabled, but this can be changed with\n `tf.config.experimental.enable_tensor_float_32_execution`.\n\n Returns:\n True if TensorFloat-32 is enabled (the default) and False otherwise\n ", "desc": "Returns whether TensorFloat-32 is enabled.", "type": "API"}, {"name": "tf.config.experimental.VirtualDeviceConfiguration", "docs": "Configuration class for a logical devices.\n\n The class specifies the parameters to configure a `tf.config.PhysicalDevice`\n as it is initialized to a `tf.config.LogicalDevice` during runtime\n initialization. Not all fields are valid for all device types.\n\n See `tf.config.get_logical_device_configuration` and\n `tf.config.set_logical_device_configuration` for usage examples.\n\n Fields:\n memory_limit: (optional) Maximum memory (in MB) to allocate on the virtual\n device. Currently only supported for GPUs.\n experimental_priority: (optional) Priority to assign to a virtual device.\n Lower values have higher priorities and 0 is the default.\n Within a physical GPU, the GPU scheduler will prioritize ops on virtual\n devices with higher priority. Currently only supported for Nvidia GPUs.\n ", "desc": "Configuration class for a logical devices.", "type": "API"}, {"name": "tf.config.experimental_connect_to_cluster", "docs": "Connects to the given cluster.\n\n Will make devices on the cluster available to use. Note that calling this more\n than once will work, but will invalidate any tensor handles on the old remote\n devices.\n\n If the given local job name is not present in the cluster specification, it\n will be automatically added, using an unused port on the localhost.\n\n Device filters can be specified to isolate groups of remote tasks to avoid\n undesired accesses between workers. Workers accessing resources or launching\n ops / functions on filtered remote devices will result in errors (unknown\n devices). For any remote task, if no device filter is present, all cluster\n devices will be visible; if any device filter is specified, it can only\n see devices matching at least one filter. Devices on the task itself are\n always visible. Device filters can be particially specified.\n\n For example, for a cluster set up for parameter server training, the following\n device filters might be specified:\n\n ```python\n cdf = tf.config.experimental.ClusterDeviceFilters()\n # For any worker, only the devices on PS nodes and itself are visible\n for i in range(num_workers):\n cdf.set_device_filters('worker', i, ['/job:ps'])\n # Similarly for any ps, only the devices on workers and itself are visible\n for i in range(num_ps):\n cdf.set_device_filters('ps', i, ['/job:worker'])\n\n tf.config.experimental_connect_to_cluster(cluster_def,\n cluster_device_filters=cdf)\n ```\n\n Args:\n cluster_spec_or_resolver: A `ClusterSpec` or `ClusterResolver` describing\n the cluster.\n job_name: The name of the local job.\n task_index: The local task index.\n protocol: The communication protocol, such as `\"grpc\"`. If unspecified, will\n use the default from `python/platform/remote_utils.py`.\n make_master_device_default: If True and a cluster resolver is passed, will\n automatically enter the master task device scope, which indicates the\n master becomes the default device to run ops. It won't do anything if\n a cluster spec is passed. Will throw an error if the caller is currently\n already in some device scope.\n cluster_device_filters: an instance of\n `tf.train.experimental/ClusterDeviceFilters` that specify device filters\n to the remote tasks in cluster.\n ", "desc": "Connects to the given cluster.", "type": "API"}, {"name": "tf.config.experimental_connect_to_host", "docs": "Connects to a single machine to enable remote execution on it.\n\n Will make devices on the remote host available to use. Note that calling this\n more than once will work, but will invalidate any tensor handles on the old\n remote devices.\n\n Using the default job_name of worker, you can schedule ops to run remotely as\n follows:\n ```python\n # When eager execution is enabled, connect to the remote host.\n tf.config.experimental_connect_to_host(\"exampleaddr.com:9876\")\n\n with ops.device(\"job:worker/replica:0/task:1/device:CPU:0\"):\n # The following tensors should be resident on the remote device, and the op\n # will also execute remotely.\n x1 = array_ops.ones([2, 2])\n x2 = array_ops.ones([2, 2])\n y = math_ops.matmul(x1, x2)\n ```\n\n Args:\n remote_host: a single or a list the remote server addr in host-port format.\n job_name: The job name under which the new server will be accessible.\n\n Raises:\n ValueError: if remote_host is None.\n ", "desc": "Connects to a single machine to enable remote execution on it.", "type": "API"}, {"name": "tf.config.experimental_functions_run_eagerly", "docs": "Returns the value of the `experimental_run_functions_eagerly` setting. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse tf.config.functions_run_eagerly instead of the experimental version.", "desc": "Returns the value of the `experimental_run_functions_eagerly` setting. (deprecated)", "type": "API"}, {"name": "tf.config.experimental_run_functions_eagerly", "docs": "Enables / disables eager execution of `tf.function`s. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.config.run_functions_eagerly` instead of the experimental version.\n\nCalling `tf.config.experimental_run_functions_eagerly(True)` will make all\ninvocations of `tf.function` run eagerly instead of running as a traced graph\nfunction.\n\nSee `tf.config.run_functions_eagerly` for an example.\n\nNote: This flag has no effect on functions passed into tf.data transformations\nas arguments. tf.data functions are never executed eagerly and are always\nexecuted as a compiled Tensorflow Graph.\n\nArgs:\n run_eagerly: Boolean. Whether to run functions eagerly.", "desc": "Enables / disables eager execution of `tf.function`s. (deprecated)", "type": "API"}, {"name": "tf.config.functions_run_eagerly", "docs": "Returns the value of the `run_functions_eagerly` setting.", "desc": "Returns the value of the `run_functions_eagerly` setting.", "type": "API"}, {"name": "tf.config.get_logical_device_configuration", "docs": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.\n\n Returns the list of `tf.config.LogicalDeviceConfiguration`\n objects previously configured by a call to\n `tf.config.set_logical_device_configuration`.\n\n For example:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n >>> try:\n ... assert configs is None\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... configs = tf.config.get_logical_device_configuration(\n ... physical_devices[0])\n ... assert len(configs) == 2\n ... except:\n ... # Cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device: `PhysicalDevice` to query\n\n Returns:\n List of `tf.config.LogicalDeviceConfiguration` objects or\n `None` if no virtual device configuration has been set for this physical\n device.\n ", "desc": "Get the virtual device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.config.get_soft_device_placement", "docs": "Return status of soft device placement flag.\n\n If enabled, an op will be placed on CPU if any of the following are true\n 1. there's no GPU implementation for the OP\n 2. no GPU devices are known or registered\n 3. need to co-locate with reftype input(s) which are from CPU\n\n If disabled, the placement is strict and CPU fallback is not allowed.\n An error is raised when an Op cannot be placed onto its intended device.\n\n Returns:\n A boolean indicating if soft placement is enabled.\n ", "desc": "Return status of soft device placement flag.", "type": "API"}, {"name": "tf.config.get_visible_devices", "docs": "Get the list of visible physical devices.\n\n Returns the list of `PhysicalDevice`s currently marked as visible to the\n runtime. A visible device will have at least one `LogicalDevice` associated\n with it once the runtime is initialized.\n\n The following example verifies all visible GPUs have been disabled:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable all GPUS\n ... tf.config.set_visible_devices([], 'GPU')\n ... visible_devices = tf.config.get_visible_devices()\n ... for device in visible_devices:\n ... assert device.device_type != 'GPU'\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of visible `PhysicalDevice`s\n ", "desc": "Get the list of visible physical devices.", "type": "API"}, {"name": "tf.config.list_logical_devices", "docs": "Return a list of logical devices created by runtime.\n\n Logical devices may correspond to physical devices or remote devices in the\n cluster. Operations and tensors may be placed on these devices by using the\n `name` of the `tf.config.LogicalDevice`.\n\n Calling `tf.config.list_logical_devices` triggers the runtime to configure any\n `tf.config.PhysicalDevice` visible to the runtime, thereby preventing\n further configuration. To avoid runtime initialization, call\n `tf.config.list_physical_devices` instead.\n\n For example:\n\n >>> logical_devices = tf.config.list_logical_devices('GPU')\n >>> if len(logical_devices) > 0:\n ... # Allocate on GPU:0\n ... with tf.device(logical_devices[0].name):\n ... one = tf.constant(1)\n ... # Allocate on GPU:1\n ... with tf.device(logical_devices[1].name):\n ... two = tf.constant(2)\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of initialized `LogicalDevice`s\n ", "desc": "Return a list of logical devices created by runtime.", "type": "API"}, {"name": "tf.config.list_physical_devices", "docs": "Return a list of physical devices visible to the host runtime.\n\n Physical devices are hardware devices present on the host machine. By default\n all discovered CPU and GPU devices are considered visible.\n\n This API allows querying the physical hardware resources prior to runtime\n initialization. Thus, giving an opportunity to call any additional\n configuration APIs. This is in contrast to `tf.config.list_logical_devices`,\n which triggers runtime initialization in order to list the configured devices.\n\n The following example lists the number of visible GPUs on the host.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> print(\"Num GPUs:\", len(physical_devices))\n Num GPUs: ...\n\n However, the number of GPUs available to the runtime may change during runtime\n initialization due to marking certain devices as not visible or configuring\n multiple logical devices.\n\n Args:\n device_type: (optional string) Only include devices matching this device\n type. For example \"CPU\" or \"GPU\".\n\n Returns:\n List of discovered `tf.config.PhysicalDevice` objects\n ", "desc": "Return a list of physical devices visible to the host runtime.", "type": "API"}, {"name": "tf.config.LogicalDevice", "docs": "Abstraction for a logical device initialized by the runtime.\n\n A `tf.config.LogicalDevice` corresponds to an initialized logical device on a\n `tf.config.PhysicalDevice` or a remote device visible to the cluster. Tensors\n and operations can be placed on a specific logical device by calling\n `tf.device` with a specified `tf.config.LogicalDevice`.\n\n Fields:\n name: The fully qualified name of the device. Can be used for Op or function\n placement.\n device_type: String declaring the type of device such as \"CPU\" or \"GPU\".\n ", "desc": "Abstraction for a logical device initialized by the runtime.", "type": "API"}, {"name": "tf.config.LogicalDeviceConfiguration", "docs": "Configuration class for a logical devices.\n\n The class specifies the parameters to configure a `tf.config.PhysicalDevice`\n as it is initialized to a `tf.config.LogicalDevice` during runtime\n initialization. Not all fields are valid for all device types.\n\n See `tf.config.get_logical_device_configuration` and\n `tf.config.set_logical_device_configuration` for usage examples.\n\n Fields:\n memory_limit: (optional) Maximum memory (in MB) to allocate on the virtual\n device. Currently only supported for GPUs.\n experimental_priority: (optional) Priority to assign to a virtual device.\n Lower values have higher priorities and 0 is the default.\n Within a physical GPU, the GPU scheduler will prioritize ops on virtual\n devices with higher priority. Currently only supported for Nvidia GPUs.\n ", "desc": "Configuration class for a logical devices.", "type": "API"}, {"name": "tf.config.optimizer", "docs": "Public API for tf.config.optimizer namespace.\n", "desc": "Public API for tf.config.optimizer namespace.", "type": "API"}, {"name": "tf.config.optimizer.get_experimental_options", "docs": "Get experimental optimizer options.\n\n Refer to tf.config.optimizer.set_experimental_options for a list of current\n options.\n\n Note that optimizations are only applied in graph mode, (within tf.function).\n In addition, as these are experimental options, the list is subject to change.\n\n Returns:\n Dictionary of configured experimental optimizer options\n ", "desc": "Get experimental optimizer options.", "type": "API"}, {"name": "tf.config.optimizer.get_jit", "docs": "Returns JIT compilation configuration for code inside `tf.function`.\n\n Possible return values:\n -`\"autoclustering\"` if\n [autoclustering](https://www.tensorflow.org/xla#auto-clustering) is enabled\n - `\"\"` when no default compilation is applied.\n ", "desc": "Returns JIT compilation configuration for code inside `tf.function`.", "type": "API"}, {"name": "tf.config.optimizer.set_experimental_options", "docs": "Set experimental optimizer options.\n\n Note that optimizations are only applied in graph mode, (within tf.function).\n In addition, as these are experimental options, the list is subject to change.\n\n Args:\n options: Dictionary of experimental optimizer options to configure.\n Valid keys:\n - layout_optimizer: Optimize tensor layouts e.g. This will try to use NCHW\n layout on GPU which is faster.\n - constant_folding: Fold constants Statically infer the value of tensors\n when possible, and materialize the result using constants.\n - shape_optimization: Simplify computations made on shapes.\n - remapping: Remap subgraphs onto more efficient implementations.\n - arithmetic_optimization: Simplify arithmetic ops with common\n sub-expression elimination and arithmetic simplification.\n - dependency_optimization: Control dependency optimizations. Remove\n redundant control dependencies, which may enable other optimization.\n This optimizer is also essential for pruning Identity and NoOp nodes.\n - loop_optimization: Loop optimizations.\n - function_optimization: Function optimizations and inlining.\n - debug_stripper: Strips debug-related nodes from the graph.\n - disable_model_pruning: Disable removal of unnecessary ops from the graph\n - scoped_allocator_optimization: Try to allocate some independent Op\n outputs contiguously in order to merge or eliminate downstream Ops.\n - pin_to_host_optimization: Force small ops onto the CPU.\n - implementation_selector: Enable the swap of kernel implementations based\n on the device placement.\n - auto_mixed_precision: Change certain float32 ops to float16 on Volta\n GPUs and above. Without the use of loss scaling, this can cause\n numerical underflow (see\n `keras.mixed_precision.experimental.LossScaleOptimizer`).\n - disable_meta_optimizer: Disable the entire meta optimizer.\n - min_graph_nodes: The minimum number of nodes in a graph to optimizer.\n For smaller graphs, optimization is skipped.\n ", "desc": "Set experimental optimizer options.", "type": "API"}, {"name": "tf.config.optimizer.set_jit", "docs": "Configure JIT compilation. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(jit_config=True)`. They will be removed in a future version.\nInstructions for updating:\n`True` setting is deprecated, use `autoclustering` instead.\n\nNote: compilation is only applied to code that is compiled into a\ngraph (in TF2 that's only a code inside `tf.function`).\n\nArgs:\n enabled: JIT compilation configuration.\n Possible values:\n - `\"autoclustering\"` (`True` is a deprecated alias): perform\n [autoclustering](https://www.tensorflow.org/xla#auto-clustering)\n (automatically identify and compile clusters of nodes) on all graphs\n using\n [XLA](https://www.tensorflow.org/xla).\n - `False`: do not automatically compile any graphs.", "desc": "Configure JIT compilation. (deprecated argument values)", "type": "API"}, {"name": "tf.config.PhysicalDevice", "docs": "Abstraction for a locally visible physical device.\n\n TensorFlow can utilize various devices such as the CPU or multiple GPUs\n for computation. Before initializing a local device for use, the user can\n customize certain properties of the device such as it's visibility or memory\n configuration.\n\n Once a visible `tf.config.PhysicalDevice` is initialized one or more\n `tf.config.LogicalDevice` objects are created. Use\n `tf.config.set_visible_devices` to configure the visibility of a physical\n device and `tf.config.set_logical_device_configuration` to configure multiple\n `tf.config.LogicalDevice` objects for a `tf.config.PhysicalDevice`. This is\n useful when separation between models is needed or to simulate a multi-device\n environment.\n\n Fields:\n name: Unique identifier for device.\n device_type: String declaring the type of device such as \"CPU\" or \"GPU\".\n ", "desc": "Abstraction for a locally visible physical device.", "type": "API"}, {"name": "tf.config.run_functions_eagerly", "docs": "Enables / disables eager execution of `tf.function`s.\n\n Calling `tf.config.run_functions_eagerly(True)` will make all\n invocations of `tf.function` run eagerly instead of running as a traced graph\n function.\n\n This can be useful for debugging.\n\n >>> def my_func(a):\n ... print(\"Python side effect\")\n ... return a + a\n >>> a_fn = tf.function(my_func)\n\n >>> # A side effect the first time the function is traced\n >>> a_fn(tf.constant(1))\n Python side effect\n \n\n >>> # No further side effect, as the traced function is called\n >>> a_fn(tf.constant(2))\n \n\n >>> # Now, switch to eager running\n >>> tf.config.run_functions_eagerly(True)\n >>> # Side effect, as the function is called directly\n >>> a_fn(tf.constant(2))\n Python side effect\n \n\n >>> # Turn this back off\n >>> tf.config.run_functions_eagerly(False)\n\n Note: This flag has no effect on functions passed into tf.data transformations\n as arguments. tf.data functions are never executed eagerly and are always\n executed as a compiled Tensorflow Graph.\n\n Args:\n run_eagerly: Boolean. Whether to run functions eagerly.\n ", "desc": "Enables / disables eager execution of `tf.function`s.", "type": "API"}, {"name": "tf.config.set_logical_device_configuration", "docs": "Set the logical device configuration for a `tf.config.PhysicalDevice`.\n\n A visible `tf.config.PhysicalDevice` will by default have a single\n `tf.config.LogicalDevice` associated with it once the runtime is initialized.\n Specifying a list of `tf.config.LogicalDeviceConfiguration` objects allows\n multiple devices to be created on the same `tf.config.PhysicalDevice`.\n\n Logical device configurations can be modified by calling this function as\n long as the runtime is uninitialized. After the runtime is initialized\n calling this function raises a RuntimeError.\n\n The following example splits the CPU into 2 logical devices:\n\n >>> physical_devices = tf.config.list_physical_devices('CPU')\n >>> assert len(physical_devices) == 1, \"No CPUs found\"\n >>> # Specify 2 virtual CPUs. Note currently memory limit is not supported.\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... logical_devices = tf.config.list_logical_devices('CPU')\n ... assert len(logical_devices) == 2\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration(),\n ... tf.config.LogicalDeviceConfiguration()])\n ... except:\n ... # Cannot modify logical devices once initialized.\n ... pass\n\n The following example splits the GPU into 2 logical devices with 100 MB each:\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=100),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=100)])\n ...\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... assert len(logical_devices) == len(physical_devices) + 1\n ...\n ... tf.config.set_logical_device_configuration(\n ... physical_devices[0],\n ... [tf.config.LogicalDeviceConfiguration(memory_limit=10),\n ... tf.config.LogicalDeviceConfiguration(memory_limit=10)])\n ... except:\n ... # Invalid device or cannot modify logical devices once initialized.\n ... pass\n\n Args:\n device: The `PhysicalDevice` to configure.\n logical_devices: (optional) List of `tf.config.LogicalDeviceConfiguration`\n objects to allocate for the specified `PhysicalDevice`. If None, the\n default configuration will be used.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the logical device configuration for a `tf.config.PhysicalDevice`.", "type": "API"}, {"name": "tf.config.set_soft_device_placement", "docs": "Enable or disable soft device placement.\n\n If enabled, an op will be placed on CPU if any of the following are true\n 1. there's no GPU implementation for the OP\n 2. no GPU devices are known or registered\n 3. need to co-locate with reftype input(s) which are from CPU\n\n Note: by default soft device placement is enabled when running in eager mode\n (for convenience) and disabled in graph mode (for performance).\n\n Args:\n enabled: A boolean indicating whether to enable soft placement.\n ", "desc": "Enable or disable soft device placement.", "type": "API"}, {"name": "tf.config.set_visible_devices", "docs": "Set the list of visible devices.\n\n Specifies which `PhysicalDevice` objects are visible to the runtime.\n TensorFlow will only allocate memory and place operations on visible\n physical devices, as otherwise no `LogicalDevice` will be created on them.\n By default all discovered devices are marked as visible.\n\n The following example demonstrates disabling the first GPU on the machine.\n\n >>> physical_devices = tf.config.list_physical_devices('GPU')\n >>> try:\n ... # Disable first GPU\n ... tf.config.set_visible_devices(physical_devices[1:], 'GPU')\n ... logical_devices = tf.config.list_logical_devices('GPU')\n ... # Logical device was not created for first GPU\n ... assert len(logical_devices) == len(physical_devices) - 1\n ... except:\n ... # Invalid device or cannot modify virtual devices once initialized.\n ... pass\n\n Args:\n devices: List of `PhysicalDevice`s to make visible\n device_type: (optional) Only configure devices matching this device type.\n For example \"CPU\" or \"GPU\". Other devices will be left unaltered.\n\n Raises:\n ValueError: If argument validation fails.\n RuntimeError: Runtime is already initialized.\n ", "desc": "Set the list of visible devices.", "type": "API"}, {"name": "tf.config.threading", "docs": "Public API for tf.config.threading namespace.\n", "desc": "Public API for tf.config.threading namespace.", "type": "API"}, {"name": "tf.config.threading.get_inter_op_parallelism_threads", "docs": "Get number of threads used for parallelism between independent operations.\n\n Determines the number of threads used by independent non-blocking operations.\n 0 means the system picks an appropriate number.\n\n Returns:\n Number of parallel threads\n ", "desc": "Get number of threads used for parallelism between independent operations.", "type": "API"}, {"name": "tf.config.threading.get_intra_op_parallelism_threads", "docs": "Get number of threads used within an individual op for parallelism.\n\n Certain operations like matrix multiplication and reductions can utilize\n parallel threads for speed ups. A value of 0 means the system picks an\n appropriate number.\n\n Returns:\n Number of parallel threads\n ", "desc": "Get number of threads used within an individual op for parallelism.", "type": "API"}, {"name": "tf.config.threading.set_inter_op_parallelism_threads", "docs": "Set number of threads used for parallelism between independent operations.\n\n Determines the number of threads used by independent non-blocking operations.\n 0 means the system picks an appropriate number.\n\n Args:\n num_threads: Number of parallel threads\n ", "desc": "Set number of threads used for parallelism between independent operations.", "type": "API"}, {"name": "tf.config.threading.set_intra_op_parallelism_threads", "docs": "Set number of threads used within an individual op for parallelism.\n\n Certain operations like matrix multiplication and reductions can utilize\n parallel threads for speed ups. A value of 0 means the system picks an\n appropriate number.\n\n Args:\n num_threads: Number of parallel threads\n ", "desc": "Set number of threads used within an individual op for parallelism.", "type": "API"}, {"name": "tf.constant", "docs": "Creates a constant tensor from a tensor-like object.\n\n Note: All eager `tf.Tensor` values are immutable (in contrast to\n `tf.Variable`). There is nothing especially _constant_ about the value\n returned from `tf.constant`. This function is not fundamentally different from\n `tf.convert_to_tensor`. The name `tf.constant` comes from the `value` being\n embedded in a `Const` node in the `tf.Graph`. `tf.constant` is useful\n for asserting that the value can be embedded that way.\n\n If the argument `dtype` is not specified, then the type is inferred from\n the type of `value`.\n\n >>> # Constant 1-D Tensor from a python list.\n >>> tf.constant([1, 2, 3, 4, 5, 6])\n \n >>> # Or a numpy array\n >>> a = np.array([[1, 2, 3], [4, 5, 6]])\n >>> tf.constant(a)\n \n\n If `dtype` is specified, the resulting tensor values are cast to the requested\n `dtype`.\n\n >>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64)\n \n\n If `shape` is set, the `value` is reshaped to match. Scalars are expanded to\n fill the `shape`:\n\n >>> tf.constant(0, shape=(2, 3))\n \n >>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n \n\n `tf.constant` has no effect if an eager Tensor is passed as the `value`, it\n even transmits gradients:\n\n >>> v = tf.Variable([0.0])\n >>> with tf.GradientTape() as g:\n ... loss = tf.constant(v + v)\n >>> g.gradient(loss, v).numpy()\n array([2.], dtype=float32)\n\n But, since `tf.constant` embeds the value in the `tf.Graph` this fails for\n symbolic tensors:\n\n >>> with tf.compat.v1.Graph().as_default():\n ... i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)\n ... t = tf.constant(i)\n Traceback (most recent call last):\n ...\n TypeError: ...\n\n `tf.constant` will create tensors on the current device. Inputs which are\n already tensors maintain their placements unchanged.\n\n Related Ops:\n\n * `tf.convert_to_tensor` is similar but:\n * It has no `shape` argument.\n * Symbolic tensors are allowed to pass through.\n\n >>> with tf.compat.v1.Graph().as_default():\n ... i = tf.compat.v1.placeholder(shape=[None, None], dtype=tf.float32)\n ... t = tf.convert_to_tensor(i)\n\n * `tf.fill`: differs in a few ways:\n * `tf.constant` supports arbitrary constants, not just uniform scalar\n Tensors like `tf.fill`.\n * `tf.fill` creates an Op in the graph that is expanded at runtime, so it\n can efficiently represent large tensors.\n * Since `tf.fill` does not embed the value, it can produce dynamically\n sized outputs.\n\n Args:\n value: A constant value (or list) of output type `dtype`.\n dtype: The type of the elements of the resulting tensor.\n shape: Optional dimensions of resulting tensor.\n name: Optional name for the tensor.\n\n Returns:\n A Constant Tensor.\n\n Raises:\n TypeError: if shape is incorrectly specified or unsupported.\n ValueError: if called on a symbolic tensor.\n ", "desc": "Creates a constant tensor from a tensor-like object.", "type": "API"}, {"name": "tf.constant_initializer", "docs": "Initializer that generates tensors with constant values.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n `tf.constant_initializer` returns an object which when called returns a tensor\n populated with the `value` specified in the constructor. This `value` must be\n convertible to the requested `dtype`.\n\n The argument `value` can be a scalar constant value, or a list of\n values. Scalars broadcast to whichever shape is requested from the\n initializer.\n\n If `value` is a list, then the length of the list must be equal to the number\n of elements implied by the desired shape of the tensor. If the total number of\n elements in `value` is not equal to the number of elements required by the\n tensor shape, the initializer will raise a `TypeError`.\n\n Examples:\n\n >>> def make_variables(k, initializer):\n ... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),\n ... tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))\n >>> v1, v2 = make_variables(3, tf.constant_initializer(2.))\n >>> v1\n \n >>> v2\n \n >>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))\n (, >> value = [0, 1, 2, 3, 4, 5, 6, 7]\n >>> init = tf.constant_initializer(value)\n >>> # Fitting shape\n >>> tf.Variable(init(shape=[2, 4], dtype=tf.float32))\n \n >>> # Larger shape\n >>> tf.Variable(init(shape=[3, 4], dtype=tf.float32))\n Traceback (most recent call last):\n ...\n TypeError: ...value has 8 elements, shape is (3, 4) with 12 elements...\n >>> # Smaller shape\n >>> tf.Variable(init(shape=[2, 3], dtype=tf.float32))\n Traceback (most recent call last):\n ...\n TypeError: ...value has 8 elements, shape is (2, 3) with 6 elements...\n\n Args:\n value: A Python scalar, list or tuple of values, or a N-dimensional numpy\n array. All elements of the initialized variable will be set to the\n corresponding value in the `value` argument.\n\n Raises:\n TypeError: If the input `value` is not one of the expected types.\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.control_dependencies", "docs": "Wrapper for `Graph.control_dependencies()` using the default graph.\n\n See `tf.Graph.control_dependencies` for more details.\n\n Note: *In TensorFlow 2 with eager and/or Autograph, you should not require\n this method, as ops execute in the expected order thanks to automatic control\n dependencies.* Only use `tf.control_dependencies` when working with v1\n `tf.Graph` code.\n\n When eager execution is enabled, any callable object in the `control_inputs`\n list will be called.\n\n Args:\n control_inputs: A list of `Operation` or `Tensor` objects which must be\n executed or computed before running the operations defined in the context.\n Can also be `None` to clear the control dependencies. If eager execution\n is enabled, any callable object in the `control_inputs` list will be\n called.\n\n Returns:\n A context manager that specifies control dependencies for all\n operations constructed within the context.\n ", "desc": "Wrapper for `Graph.control_dependencies()` using the default graph.", "type": "API"}, {"name": "tf.convert_to_tensor", "docs": "Converts the given `value` to a `Tensor`.\n\n This function converts Python objects of various types to `Tensor`\n objects. It accepts `Tensor` objects, numpy arrays, Python lists,\n and Python scalars.\n\n For example:\n\n >>> import numpy as np\n >>> def my_func(arg):\n ... arg = tf.convert_to_tensor(arg, dtype=tf.float32)\n ... return arg\n\n >>> # The following calls are equivalent.\n ...\n >>> value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))\n >>> print(value_1)\n tf.Tensor(\n [[1. 2.]\n [3. 4.]], shape=(2, 2), dtype=float32)\n >>> value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])\n >>> print(value_2)\n tf.Tensor(\n [[1. 2.]\n [3. 4.]], shape=(2, 2), dtype=float32)\n >>> value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))\n >>> print(value_3)\n tf.Tensor(\n [[1. 2.]\n [3. 4.]], shape=(2, 2), dtype=float32)\n\n This function can be useful when composing a new operation in Python\n (such as `my_func` in the example above). All standard Python op\n constructors apply this function to each of their Tensor-valued\n inputs, which allows those ops to accept numpy arrays, Python lists,\n and scalars in addition to `Tensor` objects.\n\n Note: This function diverges from default Numpy behavior for `float` and\n `string` types when `None` is present in a Python list or scalar. Rather\n than silently converting `None` values, an error will be thrown.\n\n Args:\n value: An object whose type has a registered `Tensor` conversion function.\n dtype: Optional element type for the returned tensor. If missing, the type\n is inferred from the type of `value`.\n dtype_hint: Optional element type for the returned tensor, used when dtype\n is None. In some cases, a caller may not have a dtype in mind when\n converting to a tensor, so dtype_hint can be used as a soft preference.\n If the conversion to `dtype_hint` is not possible, this argument has no\n effect.\n name: Optional name to use if a new `Tensor` is created.\n\n Returns:\n A `Tensor` based on `value`.\n\n Raises:\n TypeError: If no conversion function is registered for `value` to `dtype`.\n RuntimeError: If a registered conversion function returns an invalid value.\n ValueError: If the `value` is a tensor not of given `dtype` in graph mode.\n ", "desc": "Converts the given `value` to a `Tensor`.", "type": "API"}, {"name": "tf.cos", "docs": "Computes cos of x element-wise.\n\n Given an input tensor, this function computes cosine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes cos of x element-wise.", "type": "API"}, {"name": "tf.cosh", "docs": "Computes hyperbolic cosine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic cosine of every\n element in the tensor. Input range is `[-inf, inf]` and output range\n is `[1, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.CriticalSection", "docs": "Critical section.\n\n A `CriticalSection` object is a resource in the graph which executes subgraphs\n in **serial** order. A common example of a subgraph one may wish to run\n exclusively is the one given by the following function:\n\n ```python\n v = resource_variable_ops.ResourceVariable(0.0, name=\"v\")\n\n def count():\n value = v.read_value()\n with tf.control_dependencies([value]):\n with tf.control_dependencies([v.assign_add(1)]):\n return tf.identity(value)\n ```\n\n Here, a snapshot of `v` is captured in `value`; and then `v` is updated.\n The snapshot value is returned.\n\n If multiple workers or threads all execute `count` in parallel, there is no\n guarantee that access to the variable `v` is atomic at any point within\n any thread's calculation of `count`. In fact, even implementing an atomic\n counter that guarantees that the user will see each value `0, 1, ...,` is\n currently impossible.\n\n The solution is to ensure any access to the underlying resource `v` is\n only processed through a critical section:\n\n ```python\n cs = CriticalSection()\n f1 = cs.execute(count)\n f2 = cs.execute(count)\n output = f1 + f2\n session.run(output)\n ```\n The functions `f1` and `f2` will be executed serially, and updates to `v`\n will be atomic.\n\n **NOTES**\n\n All resource objects, including the critical section and any captured\n variables of functions executed on that critical section, will be\n colocated to the same device (host and cpu/gpu).\n\n When using multiple critical sections on the same resources, there is no\n guarantee of exclusive access to those resources. This behavior is disallowed\n by default (but see the kwarg `exclusive_resource_access`).\n\n For example, running the same function in two separate critical sections\n will not ensure serial execution:\n\n ```python\n v = tf.compat.v1.get_variable(\"v\", initializer=0.0, use_resource=True)\n def accumulate(up):\n x = v.read_value()\n with tf.control_dependencies([x]):\n with tf.control_dependencies([v.assign_add(up)]):\n return tf.identity(x)\n ex1 = CriticalSection().execute(\n accumulate, 1.0, exclusive_resource_access=False)\n ex2 = CriticalSection().execute(\n accumulate, 1.0, exclusive_resource_access=False)\n bad_sum = ex1 + ex2\n sess.run(v.initializer)\n sess.run(bad_sum) # May return 0.0\n ```\n ", "desc": "Critical section.", "type": "API"}, {"name": "tf.cumsum", "docs": "Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:\n For example:\n\n >>> # tf.cumsum([a, b, c]) # [a, a + b, a + b + c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x)\n \n\n >>> # using varying `axis` values\n >>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]])\n >>> tf.cumsum(y, axis=0)\n \n >>> tf.cumsum(y, axis=1)\n \n\n By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed\n instead:\n\n >>> # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True)\n \n\n By setting the `reverse` kwarg to `True`, the cumsum is performed in the\n opposite direction:\n\n >>> # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, reverse=True)\n \n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n >>> # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True, reverse=True)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumsum.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative sum of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.custom_gradient", "docs": "Decorator to define a function with a custom gradient.\n\n This decorator allows fine grained control over the gradients of a sequence\n for operations. This may be useful for multiple reasons, including providing\n a more efficient or numerically stable gradient for a sequence of operations.\n\n For example, consider the following function that commonly occurs in the\n computation of cross entropy and log likelihoods:\n\n ```python\n def log1pexp(x):\n return tf.math.log(1 + tf.exp(x))\n ```\n\n Due to numerical instability, the gradient of this function evaluated at x=100\n is NaN. For example:\n\n ```python\n x = tf.constant(100.)\n y = log1pexp(x)\n dy_dx = tf.gradients(y, x) # Will be NaN when evaluated.\n ```\n\n The gradient expression can be analytically simplified to provide numerical\n stability:\n\n ```python\n @tf.custom_gradient\n def log1pexp(x):\n e = tf.exp(x)\n def grad(upstream):\n return upstream * (1 - 1 / (1 + e))\n return tf.math.log(1 + e), grad\n ```\n\n With this definition, the gradient `dy_dx` at `x = 100` will be correctly\n evaluated as 1.0.\n\n The variable `upstream` is defined as the upstream gradient. i.e. the gradient\n from all the layers or functions originating from this layer. The above\n example has no upstream functions, therefore `upstream = dy/dy = 1.0`.\n\n Assume that `x_i` is `log1pexp` in the forward pass `x_1 = x_1(x_0)`,\n `x_2 = x_2(x_1)`, ..., `x_i = x_i(x_i-1)`, ..., `x_n = x_n(x_n-1)`. By\n chain rule we know that `dx_n/dx_0 = dx_n/dx_n-1 * dx_n-1/dx_n-2 * ... *\n dx_i/dx_i-1 * ... * dx_1/dx_0`.\n\n In this case the gradient of our current function defined as\n `dx_i/dx_i-1 = (1 - 1 / (1 + e))`. The upstream gradient `upstream` would be\n `dx_n/dx_n-1 * dx_n-1/dx_n-2 * ... * dx_i+1/dx_i`. The upstream gradient\n multiplied by the current gradient is then passed downstream.\n\n In case the function takes multiple variables as input, the `grad`\n function must also return the same number of variables.\n We take the function `z = x * y` as an example.\n\n >>> @tf.custom_gradient\n ... def bar(x, y):\n ... def grad(upstream):\n ... dz_dx = y\n ... dz_dy = x\n ... return upstream * dz_dx, upstream * dz_dy\n ... z = x * y\n ... return z, grad\n >>> x = tf.constant(2.0, dtype=tf.float32)\n >>> y = tf.constant(3.0, dtype=tf.float32)\n >>> with tf.GradientTape(persistent=True) as tape:\n ... tape.watch(x)\n ... tape.watch(y)\n ... z = bar(x, y)\n >>> z\n \n >>> tape.gradient(z, x)\n \n >>> tape.gradient(z, y)\n \n\n Nesting custom gradients can lead to unintuitive results. The default\n behavior does not correspond to n-th order derivatives. For example\n\n ```python\n @tf.custom_gradient\n def op(x):\n y = op1(x)\n @tf.custom_gradient\n def grad_fn(dy):\n gdy = op2(x, y, dy)\n def grad_grad_fn(ddy): # Not the 2nd order gradient of op w.r.t. x.\n return op3(x, y, dy, ddy)\n return gdy, grad_grad_fn\n return y, grad_fn\n ```\n\n The function `grad_grad_fn` will be calculating the first order gradient\n of `grad_fn` with respect to `dy`, which is used to generate forward-mode\n gradient graphs from backward-mode gradient graphs, but is not the same as\n the second order gradient of `op` with respect to `x`.\n\n Instead, wrap nested `@tf.custom_gradients` in another function:\n\n ```python\n @tf.custom_gradient\n def op_with_fused_backprop(x):\n y, x_grad = fused_op(x)\n def first_order_gradient(dy):\n @tf.custom_gradient\n def first_order_custom(unused_x):\n def second_order_and_transpose(ddy):\n return second_order_for_x(...), gradient_wrt_dy(...)\n return x_grad, second_order_and_transpose\n return dy * first_order_custom(x)\n return y, first_order_gradient\n ```\n\n Additional arguments to the inner `@tf.custom_gradient`-decorated function\n control the expected return values of the innermost function.\n\n The examples above illustrate how to specify custom gradients for functions\n which do not read from variables. The following example uses variables, which\n require special handling because they are effectively inputs of the forward\n function.\n\n >>> weights = tf.Variable(tf.ones([2])) # Trainable variable weights\n >>> @tf.custom_gradient\n ... def linear_poly(x):\n ... # Creating polynomial\n ... poly = weights[1] * x + weights[0]\n ...\n ... def grad_fn(dpoly, variables):\n ... # dy/dx = weights[1] and we need to left multiply dpoly\n ... grad_xs = dpoly * weights[1] # Scalar gradient\n ...\n ... grad_vars = [] # To store gradients of passed variables\n ... assert variables is not None\n ... assert len(variables) == 1\n ... assert variables[0] is weights\n ... # Manually computing dy/dweights\n ... dy_dw = dpoly * tf.stack([x ** 1, x ** 0])\n ... grad_vars.append(\n ... tf.reduce_sum(tf.reshape(dy_dw, [2, -1]), axis=1)\n ... )\n ... return grad_xs, grad_vars\n ... return poly, grad_fn\n >>> x = tf.constant([1., 2., 3.])\n >>> with tf.GradientTape(persistent=True) as tape:\n ... tape.watch(x)\n ... poly = linear_poly(x)\n >>> poly # poly = x + 1\n \n >>> tape.gradient(poly, x) # conventional scalar gradient dy/dx\n \n >>> tape.gradient(poly, weights)\n \n\n Above example illustrates usage of trainable variable `weights`.\n In the example, the inner `grad_fn` accepts an extra `variables` input\n parameter and also returns an extra `grad_vars` output. That extra argument\n is passed if the forward function reads any variables. You need to\n compute the gradient w.r.t. each of those `variables` and output it as a list\n of `grad_vars`. Note here that default value of `variables` is set to `None`\n when no variables are used in the forward function.\n\n It should be noted `tf.GradientTape` is still watching the forward pass of a\n `tf.custom_gradient`, and will use the ops it watches. As a consequence,\n calling `tf.function` while the tape is still watching leads\n to a gradient graph being built. If an op is used in `tf.function` without\n registered gradient, a `LookupError` will be raised.\n\n Users can insert `tf.stop_gradient` to customize this behavior. This\n is demonstrated in the example below. `tf.random.shuffle` does not have a\n registered gradient. As a result `tf.stop_gradient` is used to avoid the\n `LookupError`.\n\n ```python\n x = tf.constant([0.3, 0.5], dtype=tf.float32)\n\n @tf.custom_gradient\n def test_func_with_stop_grad(x):\n @tf.function\n def _inner_func():\n # Avoid exception during the forward pass\n return tf.stop_gradient(tf.random.shuffle(x))\n # return tf.random.shuffle(x) # This will raise\n\n res = _inner_func()\n def grad(upstream):\n return upstream # Arbitrarily defined custom gradient\n return res, grad\n\n with tf.GradientTape() as g:\n g.watch(x)\n res = test_func_with_stop_grad(x)\n\n g.gradient(res, x)\n ```\n\n See also `tf.RegisterGradient` which registers a gradient function for a\n primitive TensorFlow operation. `tf.custom_gradient` on the other hand allows\n for fine grained control over the gradient computation of a sequence of\n operations.\n\n Note that if the decorated function uses `Variable`s, the enclosing variable\n scope must be using `ResourceVariable`s.\n\n Args:\n f: function `f(*x)` that returns a tuple `(y, grad_fn)` where:\n - `x` is a sequence of (nested structures of) `Tensor` inputs to the\n function.\n - `y` is a (nested structure of) `Tensor` outputs of applying TensorFlow\n operations in `f` to `x`.\n - `grad_fn` is a function with the signature `g(*grad_ys)` which returns\n a list of `Tensor`s the same size as (flattened) `x` - the derivatives\n of `Tensor`s in `y` with respect to the `Tensor`s in `x`. `grad_ys` is\n a sequence of `Tensor`s the same size as (flattened) `y` holding the\n initial value gradients for each `Tensor` in `y`.\n\n In a pure mathematical sense, a vector-argument vector-valued function\n `f`'s derivatives should be its Jacobian matrix `J`. Here we are\n expressing the Jacobian `J` as a function `grad_fn` which defines how\n `J` will transform a vector `grad_ys` when left-multiplied with it\n (`grad_ys * J`, the vector-Jacobian product, or VJP). This functional\n representation of a matrix is convenient to use for chain-rule\n calculation (in e.g. the back-propagation algorithm).\n\n If `f` uses `Variable`s (that are not part of the\n inputs), i.e. through `get_variable`, then `grad_fn` should have\n signature `g(*grad_ys, variables=None)`, where `variables` is a list of\n the `Variable`s, and return a 2-tuple `(grad_xs, grad_vars)`, where\n `grad_xs` is the same as above, and `grad_vars` is a `list`\n with the derivatives of `Tensor`s in `y` with respect to the variables\n (that is, grad_vars has one Tensor per variable in variables).\n\n Returns:\n A function `h(x)` which returns the same value as `f(x)[0]` and whose\n gradient (as calculated by `tf.gradients`) is determined by `f(x)[1]`.\n ", "desc": "Decorator to define a function with a custom gradient.", "type": "API"}, {"name": "tf.data", "docs": "`tf.data.Dataset` API for input pipelines.\n\nSee [Importing Data](https://tensorflow.org/guide/data) for an overview.\n\n", "desc": "`tf.data.Dataset` API for input pipelines.", "type": "API"}, {"name": "tf.data.Dataset", "docs": "Represents a potentially large set of elements.\n\n The `tf.data.Dataset` API supports writing descriptive and efficient input\n pipelines. `Dataset` usage follows a common pattern:\n\n 1. Create a source dataset from your input data.\n 2. Apply dataset transformations to preprocess the data.\n 3. Iterate over the dataset and process the elements.\n\n Iteration happens in a streaming fashion, so the full dataset does not need to\n fit into memory.\n\n Source Datasets:\n\n The simplest way to create a dataset is to create it from a python `list`:\n\n >>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n >>> for element in dataset:\n ... print(element)\n tf.Tensor(1, shape=(), dtype=int32)\n tf.Tensor(2, shape=(), dtype=int32)\n tf.Tensor(3, shape=(), dtype=int32)\n\n To process lines from files, use `tf.data.TextLineDataset`:\n\n >>> dataset = tf.data.TextLineDataset([\"file1.txt\", \"file2.txt\"])\n\n To process records written in the `TFRecord` format, use `TFRecordDataset`:\n\n >>> dataset = tf.data.TFRecordDataset([\"file1.tfrecords\", \"file2.tfrecords\"])\n\n To create a dataset of all files matching a pattern, use\n `tf.data.Dataset.list_files`:\n\n ```python\n dataset = tf.data.Dataset.list_files(\"/path/*.txt\")\n ```\n\n See `tf.data.FixedLengthRecordDataset` and `tf.data.Dataset.from_generator`\n for more ways to create datasets.\n\n Transformations:\n\n Once you have a dataset, you can apply transformations to prepare the data for\n your model:\n\n >>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n >>> dataset = dataset.map(lambda x: x*2)\n >>> list(dataset.as_numpy_iterator())\n [2, 4, 6]\n\n Common Terms:\n\n **Element**: A single output from calling `next()` on a dataset iterator.\n Elements may be nested structures containing multiple components. For\n example, the element `(1, (3, \"apple\"))` has one tuple nested in another\n tuple. The components are `1`, `3`, and `\"apple\"`.\n\n **Component**: The leaf in the nested structure of an element.\n\n Supported types:\n\n Elements can be nested structures of tuples, named tuples, and dictionaries.\n Note that Python lists are *not* treated as nested structures of components.\n Instead, lists are converted to tensors and treated as components. For\n example, the element `(1, [1, 2, 3])` has only two components; the tensor `1`\n and the tensor `[1, 2, 3]`. Element components can be of any type\n representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.data.Dataset`,\n `tf.sparse.SparseTensor`, `tf.RaggedTensor`, and `tf.TensorArray`.\n\n ```python\n a = 1 # Integer element\n b = 2.0 # Float element\n c = (1, 2) # Tuple element with 2 components\n d = {\"a\": (2, 2), \"b\": 3} # Dict element with 3 components\n Point = collections.namedtuple(\"Point\", [\"x\", \"y\"])\n e = Point(1, 2) # Named tuple\n f = tf.data.Dataset.range(10) # Dataset element\n ```\n\n For more information,\n read [this guide](https://www.tensorflow.org/guide/data).\n ", "desc": "Represents a potentially large set of elements.", "type": "API"}, {"name": "tf.data.DatasetSpec", "docs": "Type specification for `tf.data.Dataset`.\n\n See `tf.TypeSpec` for more information about TensorFlow type specifications.\n\n >>> dataset = tf.data.Dataset.range(3)\n >>> tf.data.DatasetSpec.from_value(dataset)\n DatasetSpec(TensorSpec(shape=(), dtype=tf.int64, name=None), TensorShape([]))\n ", "desc": "Type specification for `tf.data.Dataset`.", "type": "API"}, {"name": "tf.data.experimental", "docs": "Experimental API for building input pipelines.\n\nThis module contains experimental `Dataset` sources and transformations that can\nbe used in conjunction with the `tf.data.Dataset` API. Note that the\n`tf.data.experimental` API is not subject to the same backwards compatibility\nguarantees as `tf.data`, but we will provide deprecation advice in advance of\nremoving existing functionality.\n\nSee [Importing Data](https://tensorflow.org/guide/datasets) for an overview.\n\n@@AutoShardPolicy\n@@AutotuneAlgorithm\n@@AutotuneOptions\n@@CheckpointInputPipelineHook\n@@Counter\n@@CsvDataset\n@@DatasetInitializer\n@@DatasetStructure\n@@DistributeOptions\n@@ExternalStatePolicy\n@@OptimizationOptions\n@@Optional\n@@OptionalStructure\n@@RaggedTensorStructure\n@@RandomDataset\n@@Reducer\n@@SparseTensorStructure\n@@SqlDataset\n@@Structure\n@@TFRecordWriter\n@@TensorArrayStructure\n@@TensorStructure\n@@ThreadingOptions\n\n@@assert_cardinality\n@@bucket_by_sequence_length\n@@cardinality\n@@choose_from_datasets\n@@copy_to_device\n@@dense_to_ragged_batch\n@@dense_to_sparse_batch\n@@distribute\n@@enable_debug_mode\n@@enumerate_dataset\n@@from_variant\n@@get_next_as_optional\n@@get_single_element\n@@get_structure\n@@group_by_reducer\n@@group_by_window\n@@ignore_errors\n@@index_table_from_dataset\n@@load\n@@make_batched_features_dataset\n@@make_csv_dataset\n@@make_saveable_from_iterator\n@@map_and_batch\n@@map_and_batch_with_legacy_function\n@@parallel_interleave\n@@parse_example_dataset\n@@prefetch_to_device\n@@rejection_resample\n@@sample_from_datasets\n@@save\n@@scan\n@@shuffle_and_repeat\n@@snapshot\n@@table_from_dataset\n@@take_while\n@@to_variant\n@@unbatch\n@@unique\n\n@@AUTOTUNE\n@@INFINITE_CARDINALITY\n@@SHARD_HINT\n@@UNKNOWN_CARDINALITY\n\n", "desc": "Experimental API for building input pipelines.", "type": "API"}, {"name": "tf.data.experimental.assert_cardinality", "docs": "Asserts the cardinality of the input dataset.\n\n NOTE: The following assumes that \"examples.tfrecord\" contains 42 records.\n\n >>> dataset = tf.data.TFRecordDataset(\"examples.tfrecord\")\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())\n True\n >>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42))\n >>> print(tf.data.experimental.cardinality(dataset).numpy())\n 42\n\n Args:\n expected_cardinality: The expected cardinality of the input dataset.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\n Raises:\n FailedPreconditionError: The assertion is checked at runtime (when iterating\n the dataset) and an error is raised if the actual and expected cardinality\n differ.\n ", "desc": "Asserts the cardinality of the input dataset.", "type": "API"}, {"name": "tf.data.experimental.AutoShardPolicy", "docs": "Represents the type of auto-sharding to use.\n\n OFF: No sharding will be performed.\n\n AUTO: Attempts FILE-based sharding, falling back to DATA-based sharding.\n\n FILE: Shards by input files (i.e. each worker will get a set of files to\n process). When this option is selected, make sure that there is at least as\n many files as workers. If there are fewer input files than workers, a runtime\n error will be raised.\n\n DATA: Shards by elements produced by the dataset. Each worker will process the\n whole dataset and discard the portion that is not for itself. Note that for\n this mode to correctly partitions the dataset elements, the dataset needs to\n produce elements in a deterministic order.\n\n HINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a\n placeholder to replace with `shard(num_workers, worker_index)`.\n ", "desc": "Represents the type of auto-sharding to use.", "type": "API"}, {"name": "tf.data.experimental.bucket_by_sequence_length", "docs": "A transformation that buckets elements in a `Dataset` by length. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.bucket_by_sequence_length(...)`.\n\nElements of the `Dataset` are grouped together by length and then are padded\nand batched.\n\nThis is useful for sequence tasks in which the elements have variable length.\nGrouping together elements that have similar lengths reduces the total\nfraction of padding in a batch which increases training step efficiency.\n\nBelow is an example to bucketize the input data to the 3 buckets\n\"[0, 3), [3, 5), [5, inf)\" based on sequence length, with batch size 2.\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int64, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[3, 5],\n... bucket_batch_sizes=[2, 2, 2]))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[1 2 3 4]\n [5 6 7 0]]\n[[ 7 8 9 10 11 0]\n [13 14 15 16 19 20]]\n[[ 0 0]\n [21 22]]\n\nThere is also a possibility to pad the dataset till the bucket boundary.\nYou can also provide which value to be used while padding the data.\nBelow example uses `-1` as padding and it also shows the input data\nbeing bucketizied to two buckets \"[0,3], [4,6]\".\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int32, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[4, 7],\n... bucket_batch_sizes=[2, 2, 2],\n... pad_to_bucket_boundary=True,\n... padding_values=-1))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[ 0 -1 -1]\n [ 5 6 7]]\n[[ 1 2 3 4 -1 -1]\n [ 7 8 9 10 11 -1]]\n[[21 22 -1]]\n[[13 14 15 16 19 20]]\n\nWhen using `pad_to_bucket_boundary` option, it can be seen that it is\nnot always possible to maintain the bucket batch size.\nYou can drop the batches that do not maintain the bucket batch size by\nusing the option `drop_remainder`. Using the same input data as in the\nabove example you get the following result.\n\n>>> elements = [\n... [0], [1, 2, 3, 4], [5, 6, 7],\n... [7, 8, 9, 10, 11], [13, 14, 15, 16, 19, 20], [21, 22]]\n\n>>> dataset = tf.data.Dataset.from_generator(\n... lambda: elements, tf.int32, output_shapes=[None])\n\n>>> dataset = dataset.apply(\n... tf.data.experimental.bucket_by_sequence_length(\n... element_length_func=lambda elem: tf.shape(elem)[0],\n... bucket_boundaries=[4, 7],\n... bucket_batch_sizes=[2, 2, 2],\n... pad_to_bucket_boundary=True,\n... padding_values=-1,\n... drop_remainder=True))\n\n>>> for elem in dataset.as_numpy_iterator():\n... print(elem)\n[[ 0 -1 -1]\n [ 5 6 7]]\n[[ 1 2 3 4 -1 -1]\n [ 7 8 9 10 11 -1]]\n\nArgs:\n element_length_func: function from element in `Dataset` to `tf.int32`,\n determines the length of the element, which will determine the bucket it\n goes into.\n bucket_boundaries: `list`, upper length boundaries of the buckets.\n bucket_batch_sizes: `list`, batch size per bucket. Length should be\n `len(bucket_boundaries) + 1`.\n padded_shapes: Nested structure of `tf.TensorShape` to pass to\n `tf.data.Dataset.padded_batch`. If not provided, will use\n `dataset.output_shapes`, which will result in variable length dimensions\n being padded out to the maximum length in each batch.\n padding_values: Values to pad with, passed to\n `tf.data.Dataset.padded_batch`. Defaults to padding with 0.\n pad_to_bucket_boundary: bool, if `False`, will pad dimensions with unknown\n size to maximum length in batch. If `True`, will pad dimensions with\n unknown size to bucket boundary minus 1 (i.e., the maximum length in each\n bucket), and caller must ensure that the source `Dataset` does not contain\n any elements with length longer than `max(bucket_boundaries)`.\n no_padding: `bool`, indicates whether to pad the batch features (features\n need to be either of type `tf.sparse.SparseTensor` or of same shape).\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in the case it has fewer than\n `batch_size` elements; the default behavior is not to drop the smaller\n batch.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`.", "desc": "A transformation that buckets elements in a `Dataset` by length. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.cardinality", "docs": "Returns the cardinality of `dataset`, if known.\n\n The operation returns the cardinality of `dataset`. The operation may return\n `tf.data.experimental.INFINITE_CARDINALITY` if `dataset` contains an infinite\n number of elements or `tf.data.experimental.UNKNOWN_CARDINALITY` if the\n analysis fails to determine the number of elements in `dataset` (e.g. when the\n dataset source is a file).\n\n >>> dataset = tf.data.Dataset.range(42)\n >>> print(tf.data.experimental.cardinality(dataset).numpy())\n 42\n >>> dataset = dataset.repeat()\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy())\n True\n >>> dataset = dataset.filter(lambda x: True)\n >>> cardinality = tf.data.experimental.cardinality(dataset)\n >>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())\n True\n\n Args:\n dataset: A `tf.data.Dataset` for which to determine cardinality.\n\n Returns:\n A scalar `tf.int64` `Tensor` representing the cardinality of `dataset`. If\n the cardinality is infinite or unknown, the operation returns the named\n constant `INFINITE_CARDINALITY` and `UNKNOWN_CARDINALITY` respectively.\n ", "desc": "Returns the cardinality of `dataset`, if known.", "type": "API"}, {"name": "tf.data.experimental.CheckpointInputPipelineHook", "docs": "Checkpoints input pipeline state every N steps or seconds.\n\n This hook saves the state of the iterators in the `Graph` so that when\n training is resumed the input pipeline continues from where it left off.\n This could potentially avoid overfitting in certain pipelines where the\n number of training steps per eval are small compared to the dataset\n size or if the training pipeline is pre-empted.\n\n Differences from `CheckpointSaverHook`:\n 1. Saves only the input pipelines in the \"iterators\" collection and not the\n global variables or other saveable objects.\n 2. Does not write the `GraphDef` and `MetaGraphDef` to the summary.\n\n Example of checkpointing the training pipeline:\n\n ```python\n est = tf.estimator.Estimator(model_fn)\n while True:\n est.train(\n train_input_fn,\n hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)],\n steps=train_steps_per_eval)\n # Note: We do not pass the hook here.\n metrics = est.evaluate(eval_input_fn)\n if should_stop_the_training(metrics):\n break\n ```\n\n This hook should be used if the input pipeline state needs to be saved\n separate from the model checkpoint. Doing so may be useful for a few reasons:\n 1. The input pipeline checkpoint may be large, if there are large shuffle\n or prefetch buffers for instance, and may bloat the checkpoint size.\n 2. If the input pipeline is shared between training and validation, restoring\n the checkpoint during validation may override the validation input\n pipeline.\n\n For saving the input pipeline checkpoint alongside the model weights use\n `tf.data.experimental.make_saveable_from_iterator` directly to create a\n `SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however,\n that you will need to be careful not to restore the training iterator during\n eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS\n collector when building the eval graph.\n ", "desc": "Checkpoints input pipeline state every N steps or seconds.", "type": "API"}, {"name": "tf.data.experimental.choose_from_datasets", "docs": "Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.choose_from_datasets(...)` instead. Note that, unlike the experimental endpoint, the non-experimental endpoint sets `stop_on_empty_dataset=True` by default. You should set this argument explicitly in case you would like to match the behavior of the experimental endpoint.\n\nFor example, given the following datasets:\n\n```python\ndatasets = [tf.data.Dataset.from_tensors(\"foo\").repeat(),\n tf.data.Dataset.from_tensors(\"bar\").repeat(),\n tf.data.Dataset.from_tensors(\"baz\").repeat()]\n\n# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.\nchoice_dataset = tf.data.Dataset.range(3).repeat(3)\n\nresult = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)\n```\n\nThe elements of `result` will be:\n\n```\n\"foo\", \"bar\", \"baz\", \"foo\", \"bar\", \"baz\", \"foo\", \"bar\", \"baz\"\n```\n\nArgs:\n datasets: A non-empty list of `tf.data.Dataset` objects with compatible\n structure.\n choice_dataset: A `tf.data.Dataset` of scalar `tf.int64` tensors between `0`\n and `len(datasets) - 1`.\n stop_on_empty_dataset: If `True`, selection stops if it encounters an empty\n dataset. If `False`, it skips empty datasets. It is recommended to set it\n to `True`. Otherwise, the selected elements start off as the user intends,\n but may change as input datasets become empty. This can be difficult to\n detect since the dataset starts off looking correct. Default to `False`\n for backward compatibility.\n\nReturns:\n A dataset that interleaves elements from `datasets` according to the values\n of `choice_dataset`.\n\nRaises:\n TypeError: If `datasets` or `choice_dataset` has the wrong type.\n ValueError: If `datasets` is empty.", "desc": "Creates a dataset that deterministically chooses elements from `datasets`. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.copy_to_device", "docs": "A transformation that copies dataset elements to the given `target_device`.\n\n Args:\n target_device: The name of a device to which elements will be copied.\n source_device: The original device on which `input_dataset` will be placed.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that copies dataset elements to the given `target_device`.", "type": "API"}, {"name": "tf.data.experimental.Counter", "docs": "Creates a `Dataset` that counts from `start` in steps of size `step`.\n\n Unlike `tf.data.Dataset.range` which will stop at some ending number,\n `Counter` will produce elements indefinitely.\n\n >>> dataset = tf.data.experimental.Counter().take(5)\n >>> list(dataset.as_numpy_iterator())\n [0, 1, 2, 3, 4]\n >>> dataset.element_spec\n TensorSpec(shape=(), dtype=tf.int64, name=None)\n >>> dataset = tf.data.experimental.Counter(dtype=tf.int32)\n >>> dataset.element_spec\n TensorSpec(shape=(), dtype=tf.int32, name=None)\n >>> dataset = tf.data.experimental.Counter(start=2).take(5)\n >>> list(dataset.as_numpy_iterator())\n [2, 3, 4, 5, 6]\n >>> dataset = tf.data.experimental.Counter(start=2, step=5).take(5)\n >>> list(dataset.as_numpy_iterator())\n [2, 7, 12, 17, 22]\n >>> dataset = tf.data.experimental.Counter(start=10, step=-1).take(5)\n >>> list(dataset.as_numpy_iterator())\n [10, 9, 8, 7, 6]\n\n Args:\n start: (Optional.) The starting value for the counter. Defaults to 0.\n step: (Optional.) The step size for the counter. Defaults to 1.\n dtype: (Optional.) The data type for counter elements. Defaults to\n `tf.int64`.\n\n Returns:\n A `Dataset` of scalar `dtype` elements.\n ", "desc": "Creates a `Dataset` that counts from `start` in steps of size `step`.", "type": "API"}, {"name": "tf.data.experimental.CsvDataset", "docs": "A Dataset comprising lines from one or more CSV files.\n\n The `tf.data.experimental.CsvDataset` class provides a minimal CSV Dataset\n interface. There is also a richer `tf.data.experimental.make_csv_dataset`\n function which provides additional convenience features such as column header\n parsing, column type-inference, automatic shuffling, and file interleaving.\n\n The elements of this dataset correspond to records from the file(s).\n RFC 4180 format is expected for CSV files\n (https://tools.ietf.org/html/rfc4180)\n Note that we allow leading and trailing spaces for int or float fields.\n\n For example, suppose we have a file 'my_file0.csv' with four CSV columns of\n different data types:\n\n >>> with open('/tmp/my_file0.csv', 'w') as f:\n ... f.write('abcdefg,4.28E10,5.55E6,12\\n')\n ... f.write('hijklmn,-5.3E14,,2\\n')\n\n We can construct a CsvDataset from it as follows:\n\n >>> dataset = tf.data.experimental.CsvDataset(\n ... \"/tmp/my_file0.csv\",\n ... [tf.float32, # Required field, use dtype or empty tensor\n ... tf.constant([0.0], dtype=tf.float32), # Optional field, default to 0.0\n ... tf.int32, # Required field, use dtype or empty tensor\n ... ],\n ... select_cols=[1,2,3] # Only parse last three columns\n ... )\n\n The expected output of its iterations is:\n\n >>> for element in dataset.as_numpy_iterator():\n ... print(element)\n (4.28e10, 5.55e6, 12)\n (-5.3e14, 0.0, 2)\n\n See\n https://www.tensorflow.org/tutorials/load_data/csv#tfdataexperimentalcsvdataset\n for more in-depth example usage.\n ", "desc": "A Dataset comprising lines from one or more CSV files.", "type": "API"}, {"name": "tf.data.experimental.dense_to_ragged_batch", "docs": "A transformation that batches ragged elements into `tf.RaggedTensor`s.\n\n This transformation combines multiple consecutive elements of the input\n dataset into a single element.\n\n Like `tf.data.Dataset.batch`, the components of the resulting element will\n have an additional outer dimension, which will be `batch_size` (or\n `N % batch_size` for the last element if `batch_size` does not divide the\n number of input elements `N` evenly and `drop_remainder` is `False`). If\n your program depends on the batches having the same outer dimension, you\n should set the `drop_remainder` argument to `True` to prevent the smaller\n batch from being produced.\n\n Unlike `tf.data.Dataset.batch`, the input elements to be batched may have\n different shapes:\n\n * If an input element is a `tf.Tensor` whose static `tf.TensorShape` is\n fully defined, then it is batched as normal.\n * If an input element is a `tf.Tensor` whose static `tf.TensorShape` contains\n one or more axes with unknown size (i.e., `shape[i]=None`), then the output\n will contain a `tf.RaggedTensor` that is ragged up to any of such\n dimensions.\n * If an input element is a `tf.RaggedTensor` or any other type, then it is\n batched as normal.\n\n Example:\n\n >>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6))\n >>> dataset = dataset.map(lambda x: tf.range(x))\n >>> dataset.element_spec.shape\n TensorShape([None])\n >>> dataset = dataset.apply(\n ... tf.data.experimental.dense_to_ragged_batch(batch_size=2))\n >>> for batch in dataset:\n ... print(batch)\n \n \n \n\n Args:\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in the case it has fewer than\n `batch_size` elements; the default behavior is not to drop the smaller\n batch.\n row_splits_dtype: The dtype that should be used for the `row_splits` of any\n new ragged tensors. Existing `tf.RaggedTensor` elements do not have their\n row_splits dtype changed.\n\n Returns:\n Dataset: A `Dataset`.\n ", "desc": "A transformation that batches ragged elements into `tf.RaggedTensor`s.", "type": "API"}, {"name": "tf.data.experimental.dense_to_sparse_batch", "docs": "A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.\n\n Like `Dataset.padded_batch()`, this transformation combines multiple\n consecutive elements of the dataset, which might have different\n shapes, into a single element. The resulting element has three\n components (`indices`, `values`, and `dense_shape`), which\n comprise a `tf.sparse.SparseTensor` that represents the same data. The\n `row_shape` represents the dense shape of each row in the\n resulting `tf.sparse.SparseTensor`, to which the effective batch size is\n prepended. For example:\n\n ```python\n # NOTE: The following examples use `{ ... }` to represent the\n # contents of a dataset.\n a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }\n\n a.apply(tf.data.experimental.dense_to_sparse_batch(\n batch_size=2, row_shape=[6])) ==\n {\n ([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices\n ['a', 'b', 'c', 'a', 'b'], # values\n [2, 6]), # dense_shape\n ([[0, 0], [0, 1], [0, 2], [0, 3]],\n ['a', 'b', 'c', 'd'],\n [1, 6])\n }\n ```\n\n Args:\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like object\n representing the equivalent dense shape of a row in the resulting\n `tf.sparse.SparseTensor`. Each element of this dataset must have the same\n rank as `row_shape`, and must have size less than or equal to `row_shape`\n in each dimension.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.", "type": "API"}, {"name": "tf.data.experimental.DistributeOptions", "docs": "Represents options for distributed data processing.\n\n You can set the distribution options of a dataset through the\n `experimental_distribute` property of `tf.data.Options`; the property is\n an instance of `tf.data.experimental.DistributeOptions`.\n\n ```python\n options = tf.data.Options()\n options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for distributed data processing.", "type": "API"}, {"name": "tf.data.experimental.enable_debug_mode", "docs": "Enables debug mode for tf.data.\n\n Example usage with pdb module:\n ```\n import tensorflow as tf\n import pdb\n\n tf.data.experimental.enable_debug_mode()\n\n def func(x):\n # Python 3.7 and older requires `pdb.Pdb(nosigint=True).set_trace()`\n pdb.set_trace()\n x = x + 1\n return x\n\n dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n dataset = dataset.map(func)\n\n for item in dataset:\n print(item)\n ```\n\n The effect of debug mode is two-fold:\n\n 1) Any transformations that would introduce asynchrony, parallelism, or\n non-determinism to the input pipeline execution will be forced to execute\n synchronously, sequentially, and deterministically.\n\n 2) Any user-defined functions passed into tf.data transformations such as\n `map` will be wrapped in `tf.py_function` so that their body is executed\n \"eagerly\" as a Python function as opposed to a traced TensorFlow graph, which\n is the default behavior. Note that even when debug mode is enabled, the\n user-defined function is still traced to infer the shape and type of its\n outputs; as a consequence, any `print` statements or breakpoints will be\n triggered once during the tracing before the actual execution of the input\n pipeline.\n\n NOTE: As the debug mode setting affects the construction of the tf.data input\n pipeline, it should be enabled before any tf.data definitions.\n\n Raises:\n ValueError: When invoked from graph mode.\n ", "desc": "Enables debug mode for tf.data.", "type": "API"}, {"name": "tf.data.experimental.enumerate_dataset", "docs": "A transformation that enumerates the elements of a dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.enumerate()`.\n\nIt is similar to python's `enumerate`.\nFor example:\n\n```python\n# NOTE: The following examples use `{ ... }` to represent the\n# contents of a dataset.\na = { 1, 2, 3 }\nb = { (7, 8), (9, 10) }\n\n# The nested structure of the `datasets` argument determines the\n# structure of elements in the resulting dataset.\na.apply(tf.data.experimental.enumerate_dataset(start=5))\n=> { (5, 1), (6, 2), (7, 3) }\nb.apply(tf.data.experimental.enumerate_dataset())\n=> { (0, (7, 8)), (1, (9, 10)) }\n```\n\nArgs:\n start: A `tf.int64` scalar `tf.Tensor`, representing the start value for\n enumeration.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that enumerates the elements of a dataset. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.ExternalStatePolicy", "docs": "Represents how to handle external state during serialization.\n\n See the `tf.data.Options.experimental_external_state_policy` documentation\n for more information.\n ", "desc": "Represents how to handle external state during serialization.", "type": "API"}, {"name": "tf.data.experimental.from_variant", "docs": "Constructs a dataset from the given variant and (nested) structure.\n\n Args:\n variant: A scalar `tf.variant` tensor representing a dataset.\n structure: A (nested) structure of `tf.TypeSpec` objects representing the\n structure of each element in the dataset.\n\n Returns:\n A `tf.data.Dataset` instance.\n ", "desc": "Constructs a dataset from the given variant and (nested) structure.", "type": "API"}, {"name": "tf.data.experimental.get_next_as_optional", "docs": "Returns a `tf.experimental.Optional` with the next element of the iterator. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Iterator.get_next_as_optional()` instead.\n\nIf the iterator has reached the end of the sequence, the returned\n`tf.experimental.Optional` will have no value.\n\nArgs:\n iterator: A `tf.data.Iterator`.\n\nReturns:\n A `tf.experimental.Optional` object which either contains the next element\n of the iterator (if it exists) or no value.", "desc": "Returns a `tf.experimental.Optional` with the next element of the iterator. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.get_single_element", "docs": "Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.get_single_element()`.\n\nThe function enables you to use a `tf.data.Dataset` in a stateless\n\"tensor-in tensor-out\" expression, without creating an iterator.\nThis facilitates the ease of data transformation on tensors using the\noptimized `tf.data.Dataset` abstraction on top of them.\n\nFor example, lets consider a `preprocessing_fn` which would take as an\ninput the raw features and returns the processed feature along with\nit's label.\n\n```python\ndef preprocessing_fn(raw_feature):\n # ... the raw_feature is preprocessed as per the use-case\n return feature\n\nraw_features = ... # input batch of BATCH_SIZE elements.\ndataset = (tf.data.Dataset.from_tensor_slices(raw_features)\n .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n .batch(BATCH_SIZE))\n\nprocessed_features = tf.data.experimental.get_single_element(dataset)\n```\n\nIn the above example, the `raw_features` tensor of length=BATCH_SIZE\nwas converted to a `tf.data.Dataset`. Next, each of the `raw_feature` was\nmapped using the `preprocessing_fn` and the processed features were\ngrouped into a single batch. The final `dataset` contains only one element\nwhich is a batch of all the processed features.\n\nNOTE: The `dataset` should contain only one element.\n\nNow, instead of creating an iterator for the `dataset` and retrieving the\nbatch of features, the `tf.data.experimental.get_single_element()` function\nis used to skip the iterator creation process and directly output the batch\nof features.\n\nThis can be particularly useful when your tensor transformations are\nexpressed as `tf.data.Dataset` operations, and you want to use those\ntransformations while serving your model.\n\n# Keras\n\n```python\n\nmodel = ... # A pre-built or custom model\n\nclass PreprocessingModel(tf.keras.Model):\n def __init__(self, model):\n super().__init__(self)\n self.model = model\n\n @tf.function(input_signature=[...])\n def serving_fn(self, data):\n ds = tf.data.Dataset.from_tensor_slices(data)\n ds = ds.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n ds = ds.batch(batch_size=BATCH_SIZE)\n return tf.argmax(\n self.model(tf.data.experimental.get_single_element(ds)),\n axis=-1\n )\n\npreprocessing_model = PreprocessingModel(model)\nyour_exported_model_dir = ... # save the model to this path.\ntf.saved_model.save(preprocessing_model, your_exported_model_dir,\n signatures={'serving_default': preprocessing_model.serving_fn})\n```\n\n# Estimator\n\nIn the case of estimators, you need to generally define a `serving_input_fn`\nwhich would require the features to be processed by the model while\ninferencing.\n\n```python\ndef serving_input_fn():\n\n raw_feature_spec = ... # Spec for the raw_features\n input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n raw_feature_spec, default_batch_size=None)\n )\n serving_input_receiver = input_fn()\n raw_features = serving_input_receiver.features\n\n def preprocessing_fn(raw_feature):\n # ... the raw_feature is preprocessed as per the use-case\n return feature\n\n dataset = (tf.data.Dataset.from_tensor_slices(raw_features)\n .map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)\n .batch(BATCH_SIZE))\n\n processed_features = tf.data.experimental.get_single_element(dataset)\n\n # Please note that the value of `BATCH_SIZE` should be equal to\n # the size of the leading dimension of `raw_features`. This ensures\n # that `dataset` has only element, which is a pre-requisite for\n # using `tf.data.experimental.get_single_element(dataset)`.\n\n return tf.estimator.export.ServingInputReceiver(\n processed_features, serving_input_receiver.receiver_tensors)\n\nestimator = ... # A pre-built or custom estimator\nestimator.export_saved_model(your_exported_model_dir, serving_input_fn)\n```\n\nArgs:\n dataset: A `tf.data.Dataset` object containing a single element.\n\nReturns:\n A nested structure of `tf.Tensor` objects, corresponding to the single\n element of `dataset`.\n\nRaises:\n TypeError: if `dataset` is not a `tf.data.Dataset` object.\n InvalidArgumentError: (at runtime) if `dataset` does not contain exactly\n one element.", "desc": "Returns the single element of the `dataset` as a nested structure of tensors. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.get_structure", "docs": "Returns the type signature for elements of the input dataset / iterator.\n\n Args:\n dataset_or_iterator: A `tf.data.Dataset` or an `tf.data.Iterator`.\n\n Returns:\n A (nested) structure of `tf.TypeSpec` objects matching the structure of an\n element of `dataset_or_iterator` and specifying the type of individual\n components.\n\n Raises:\n TypeError: If input is not a `tf.data.Dataset` or an `tf.data.Iterator`\n object.\n ", "desc": "Returns the type signature for elements of the input dataset / iterator.", "type": "API"}, {"name": "tf.data.experimental.group_by_reducer", "docs": "A transformation that groups elements and performs a reduction.\n\n This transformation maps element of a dataset to a key using `key_func` and\n groups the elements by key. The `reducer` is used to process each group; its\n `init_func` is used to initialize state for each group when it is created, the\n `reduce_func` is used to update the state every time an element is mapped to\n the matching group, and the `finalize_func` is used to map the final state to\n an output value.\n\n Args:\n key_func: A function mapping a nested structure of tensors\n (having shapes and types defined by `self.output_shapes` and\n `self.output_types`) to a scalar `tf.int64` tensor.\n reducer: An instance of `Reducer`, which captures the reduction logic using\n the `init_func`, `reduce_func`, and `finalize_func` functions.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that groups elements and performs a reduction.", "type": "API"}, {"name": "tf.data.experimental.group_by_window", "docs": "A transformation that groups windows of elements by key and reduces them. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.group_by_window(...)`.\n\nThis transformation maps each consecutive element in a dataset to a key\nusing `key_func` and groups the elements by key. It then applies\n`reduce_func` to at most `window_size_func(key)` elements matching the same\nkey. All except the final window for each key will contain\n`window_size_func(key)` elements; the final window may be smaller.\n\nYou may provide either a constant `window_size` or a window size determined by\nthe key through `window_size_func`.\n\nArgs:\n key_func: A function mapping a nested structure of tensors\n (having shapes and types defined by `self.output_shapes` and\n `self.output_types`) to a scalar `tf.int64` tensor.\n reduce_func: A function mapping a key and a dataset of up to `window_size`\n consecutive elements matching that key to another dataset.\n window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements matching the same key to combine in a single\n batch, which will be passed to `reduce_func`. Mutually exclusive with\n `window_size_func`.\n window_size_func: A function mapping a key to a `tf.int64` scalar\n `tf.Tensor`, representing the number of consecutive elements matching\n the same key to combine in a single batch, which will be passed to\n `reduce_func`. Mutually exclusive with `window_size`.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: if neither or both of {`window_size`, `window_size_func`} are\n passed.", "desc": "A transformation that groups windows of elements by key and reduces them. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.ignore_errors", "docs": "Creates a `Dataset` from another `Dataset` and silently ignores any errors.\n\n Use this transformation to produce a dataset that contains the same elements\n as the input, but silently drops any elements that caused an error. For\n example:\n\n ```python\n dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])\n\n # Computing `tf.debugging.check_numerics(1. / 0.)` will raise an\n InvalidArgumentError.\n dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, \"error\"))\n\n # Using `ignore_errors()` will drop the element that causes an error.\n dataset =\n dataset.apply(tf.data.experimental.ignore_errors()) # ==> {1., 0.5, 0.2}\n ```\n Args:\n log_warning: (Optional.) A 'tf.bool' scalar indicating whether ignored\n errors should be logged to stderr. Defaults to 'False'.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "Creates a `Dataset` from another `Dataset` and silently ignores any errors.", "type": "API"}, {"name": "tf.data.experimental.load", "docs": "Loads a previously saved dataset.\n\n Example usage:\n\n >>> import tempfile\n >>> path = os.path.join(tempfile.gettempdir(), \"saved_data\")\n >>> # Save a dataset\n >>> dataset = tf.data.Dataset.range(2)\n >>> tf.data.experimental.save(dataset, path)\n >>> new_dataset = tf.data.experimental.load(path)\n >>> for elem in new_dataset:\n ... print(elem)\n tf.Tensor(0, shape=(), dtype=int64)\n tf.Tensor(1, shape=(), dtype=int64)\n\n\n Note that to load a previously saved dataset, you need to specify\n `element_spec` -- a type signature of the elements of the saved dataset, which\n can be obtained via `tf.data.Dataset.element_spec`. This requirement exists so\n that shape inference of the loaded dataset does not need to perform I/O.\n\n If the default option of sharding the saved dataset was used, the element\n order of the saved dataset will be preserved when loading it.\n\n The `reader_func` argument can be used to specify a custom order in which\n elements should be loaded from the individual shards. The `reader_func` is\n expected to take a single argument -- a dataset of datasets, each containing\n elements of one of the shards -- and return a dataset of elements. For\n example, the order of shards can be shuffled when loading them as follows:\n\n ```python\n def custom_reader_func(datasets):\n datasets = datasets.shuffle(NUM_SHARDS)\n return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)\n\n dataset = tf.data.experimental.load(\n path=\"/path/to/data\", ..., reader_func=custom_reader_func)\n ```\n\n Args:\n path: Required. A path pointing to a previously saved dataset.\n element_spec: Optional. A nested structure of `tf.TypeSpec` objects matching\n the structure of an element of the saved dataset and specifying the type\n of individual element components. If not provided, the nested structure of\n `tf.TypeSpec` saved with the saved dataset is used. This argument needs to\n be provided if the method is executed in graph mode.\n compression: Optional. The algorithm to use to decompress the data when\n reading it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.\n reader_func: Optional. A function to control how to read data from shards.\n If present, the function will be traced and executed as graph computation.\n\n Returns:\n A `tf.data.Dataset` instance.\n\n Raises:\n FileNotFoundError: If `element_spec` is not specified and the saved nested\n structure of `tf.TypeSpec` can not be located with the saved dataset.\n ValueError: If `element_spec` is not specified and the method is executed\n in graph mode.\n ", "desc": "Loads a previously saved dataset.", "type": "API"}, {"name": "tf.data.experimental.make_batched_features_dataset", "docs": "Returns a `Dataset` of feature dictionaries from `Example` protos.\n\n If label_key argument is provided, returns a `Dataset` of tuple\n comprising of feature dictionaries and label.\n\n Example:\n\n ```\n serialized_examples = [\n features {\n feature { key: \"age\" value { int64_list { value: [ 0 ] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n feature { key: \"kws\" value { bytes_list { value: [ \"code\", \"art\" ] } } }\n },\n features {\n feature { key: \"age\" value { int64_list { value: [] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n feature { key: \"kws\" value { bytes_list { value: [ \"sports\" ] } } }\n }\n ]\n ```\n\n We can use arguments:\n\n ```\n features: {\n \"age\": FixedLenFeature([], dtype=tf.int64, default_value=-1),\n \"gender\": FixedLenFeature([], dtype=tf.string),\n \"kws\": VarLenFeature(dtype=tf.string),\n }\n ```\n\n And the expected output is:\n\n ```python\n {\n \"age\": [[0], [-1]],\n \"gender\": [[\"f\"], [\"f\"]],\n \"kws\": SparseTensor(\n indices=[[0, 0], [0, 1], [1, 0]],\n values=[\"code\", \"art\", \"sports\"]\n dense_shape=[2, 2]),\n }\n ```\n\n Args:\n file_pattern: List of files or patterns of file paths containing\n `Example` records. See `tf.io.gfile.glob` for pattern rules.\n batch_size: An int representing the number of records to combine\n in a single batch.\n features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` values. See `tf.io.parse_example`.\n reader: A function or class that can be\n called with a `filenames` tensor and (optional) `reader_args` and returns\n a `Dataset` of `Example` tensors. Defaults to `tf.data.TFRecordDataset`.\n label_key: (Optional) A string corresponding to the key labels are stored in\n `tf.Examples`. If provided, it must be one of the `features` key,\n otherwise results in `ValueError`.\n reader_args: Additional arguments to pass to the reader class.\n num_epochs: Integer specifying the number of times to read through the\n dataset. If None, cycles through the dataset forever. Defaults to `None`.\n shuffle: A boolean, indicates whether the input should be shuffled. Defaults\n to `True`.\n shuffle_buffer_size: Buffer size of the ShuffleDataset. A large capacity\n ensures better shuffling but would increase memory usage and startup time.\n shuffle_seed: Randomization seed to use for shuffling.\n prefetch_buffer_size: Number of feature batches to prefetch in order to\n improve performance. Recommended value is the number of batches consumed\n per training step. Defaults to auto-tune.\n reader_num_threads: Number of threads used to read `Example` records. If >1,\n the results will be interleaved. Defaults to `1`.\n parser_num_threads: Number of threads to use for parsing `Example` tensors\n into a dictionary of `Feature` tensors. Defaults to `2`.\n sloppy_ordering: If `True`, reading performance will be improved at\n the cost of non-deterministic ordering. If `False`, the order of elements\n produced is deterministic prior to shuffling (elements are still\n randomized if `shuffle=True`. Note that if the seed is set, then order\n of elements after shuffling is deterministic). Defaults to `False`.\n drop_final_batch: If `True`, and the batch size does not evenly divide the\n input dataset size, the final smaller batch will be dropped. Defaults to\n `False`.\n\n Returns:\n A dataset of `dict` elements, (or a tuple of `dict` elements and label).\n Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects.\n\n Raises:\n TypeError: If `reader` is of the wrong type.\n ValueError: If `label_key` is not one of the `features` keys.\n ", "desc": "Returns a `Dataset` of feature dictionaries from `Example` protos.", "type": "API"}, {"name": "tf.data.experimental.make_csv_dataset", "docs": "Reads CSV files into a dataset.\n\n Reads CSV files into a dataset, where each element of the dataset is a\n (features, labels) tuple that corresponds to a batch of CSV rows. The features\n dictionary maps feature column names to `Tensor`s containing the corresponding\n feature data, and labels is a `Tensor` containing the batch's label data.\n\n By default, the first rows of the CSV files are expected to be headers listing\n the column names. If the first rows are not headers, set `header=False` and\n provide the column names with the `column_names` argument.\n\n By default, the dataset is repeated indefinitely, reshuffling the order each\n time. This behavior can be modified by setting the `num_epochs` and `shuffle`\n arguments.\n\n For example, suppose you have a CSV file containing\n\n | Feature_A | Feature_B |\n | --------- | --------- |\n | 1 | \"a\" |\n | 2 | \"b\" |\n | 3 | \"c\" |\n | 4 | \"d\" |\n\n ```\n # No label column specified\n dataset = tf.data.experimental.make_csv_dataset(filename, batch_size=2)\n iterator = dataset.as_numpy_iterator()\n print(dict(next(iterator)))\n # prints a dictionary of batched features:\n # OrderedDict([('Feature_A', array([1, 4], dtype=int32)),\n # ('Feature_B', array([b'a', b'd'], dtype=object))])\n ```\n\n ```\n # Set Feature_B as label column\n dataset = tf.data.experimental.make_csv_dataset(\n filename, batch_size=2, label_name=\"Feature_B\")\n iterator = dataset.as_numpy_iterator()\n print(next(iterator))\n # prints (features, labels) tuple:\n # (OrderedDict([('Feature_A', array([1, 2], dtype=int32))]),\n # array([b'a', b'b'], dtype=object))\n ```\n\n See the\n [Load CSV data guide](https://www.tensorflow.org/tutorials/load_data/csv) for\n more examples of using `make_csv_dataset` to read CSV data.\n\n Args:\n file_pattern: List of files or patterns of file paths containing CSV\n records. See `tf.io.gfile.glob` for pattern rules.\n batch_size: An int representing the number of records to combine\n in a single batch.\n column_names: An optional list of strings that corresponds to the CSV\n columns, in order. One per column of the input record. If this is not\n provided, infers the column names from the first row of the records.\n These names will be the keys of the features dict of each dataset element.\n column_defaults: A optional list of default values for the CSV fields. One\n item per selected column of the input record. Each item in the list is\n either a valid CSV dtype (float32, float64, int32, int64, or string), or a\n `Tensor` with one of the aforementioned types. The tensor can either be\n a scalar default value (if the column is optional), or an empty tensor (if\n the column is required). If a dtype is provided instead of a tensor, the\n column is also treated as required. If this list is not provided, tries\n to infer types based on reading the first num_rows_for_inference rows of\n files specified, and assumes all columns are optional, defaulting to `0`\n for numeric values and `\"\"` for string values. If both this and\n `select_columns` are specified, these must have the same lengths, and\n `column_defaults` is assumed to be sorted in order of increasing column\n index.\n label_name: A optional string corresponding to the label column. If\n provided, the data for this column is returned as a separate `Tensor` from\n the features dictionary, so that the dataset complies with the format\n expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input\n function.\n select_columns: An optional list of integer indices or string column\n names, that specifies a subset of columns of CSV data to select. If\n column names are provided, these must correspond to names provided in\n `column_names` or inferred from the file header lines. When this argument\n is specified, only a subset of CSV columns will be parsed and returned,\n corresponding to the columns specified. Using this results in faster\n parsing and lower memory usage. If both this and `column_defaults` are\n specified, these must have the same lengths, and `column_defaults` is\n assumed to be sorted in order of increasing column index.\n field_delim: An optional `string`. Defaults to `\",\"`. Char delimiter to\n separate fields in a record.\n use_quote_delim: An optional bool. Defaults to `True`. If false, treats\n double quotation marks as regular characters inside of the string fields.\n na_value: Additional string to recognize as NA/NaN.\n header: A bool that indicates whether the first rows of provided CSV files\n correspond to header lines with column names, and should not be included\n in the data.\n num_epochs: An int specifying the number of times this dataset is repeated.\n If None, cycles through the dataset forever.\n shuffle: A bool that indicates whether the input should be shuffled.\n shuffle_buffer_size: Buffer size to use for shuffling. A large buffer size\n ensures better shuffling, but increases memory usage and startup time.\n shuffle_seed: Randomization seed to use for shuffling.\n prefetch_buffer_size: An int specifying the number of feature\n batches to prefetch for performance improvement. Recommended value is the\n number of batches consumed per training step. Defaults to auto-tune.\n num_parallel_reads: Number of threads used to read CSV records from files.\n If >1, the results will be interleaved. Defaults to `1`.\n sloppy: If `True`, reading performance will be improved at\n the cost of non-deterministic ordering. If `False`, the order of elements\n produced is deterministic prior to shuffling (elements are still\n randomized if `shuffle=True`. Note that if the seed is set, then order\n of elements after shuffling is deterministic). Defaults to `False`.\n num_rows_for_inference: Number of rows of a file to use for type inference\n if record_defaults is not provided. If None, reads all the rows of all\n the files. Defaults to 100.\n compression_type: (Optional.) A `tf.string` scalar evaluating to one of\n `\"\"` (no compression), `\"ZLIB\"`, or `\"GZIP\"`. Defaults to no compression.\n ignore_errors: (Optional.) If `True`, ignores errors with CSV file parsing,\n such as malformed data or empty lines, and moves on to the next valid\n CSV record. Otherwise, the dataset raises an error and stops processing\n when encountering any invalid records. Defaults to `False`.\n\n Returns:\n A dataset, where each element is a (features, labels) tuple that corresponds\n to a batch of `batch_size` CSV rows. The features dictionary maps feature\n column names to `Tensor`s containing the corresponding column data, and\n labels is a `Tensor` containing the column data for the label column\n specified by `label_name`.\n\n Raises:\n ValueError: If any of the arguments is malformed.\n ", "desc": "Reads CSV files into a dataset.", "type": "API"}, {"name": "tf.data.experimental.make_saveable_from_iterator", "docs": "Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\n`make_saveable_from_iterator` is intended for use in TF1 with `tf.compat.v1.Saver`. In TF2, use `tf.train.Checkpoint` instead.\n\nArgs:\n iterator: Iterator.\n external_state_policy: A string that identifies how to handle input\n pipelines that depend on external state. Possible values are\n 'ignore': The external state is silently ignored.\n 'warn': The external state is ignored, logging a warning.\n 'fail': The operation fails upon encountering external state.\n By default we set it to 'fail'.\n\nReturns:\n A SaveableObject for saving/restoring iterator state using Saver.\n\nRaises:\n ValueError: If iterator does not support checkpointing.\n ValueError: If `external_state_policy` is not one of 'warn', 'ignore' or\n 'fail'.\n\nFor example:\n\n```python\nwith tf.Graph().as_default():\n ds = tf.data.Dataset.range(10)\n iterator = ds.make_initializable_iterator()\n # Build the iterator SaveableObject.\n saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)\n # Add the SaveableObject to the SAVEABLE_OBJECTS collection so\n # it can be automatically saved using Saver.\n tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj)\n saver = tf.compat.v1.train.Saver()\n\n while continue_training:\n ... Perform training ...\n if should_save_checkpoint:\n saver.save()\n```\n\nNote: When restoring the iterator, the existing iterator state is completely\ndiscarded. This means that any changes you may have made to the Dataset\ngraph will be discarded as well! This includes the new Dataset graph\nthat you may have built during validation. So, while running validation,\nmake sure to run the initializer for the validation input pipeline after\nrestoring the checkpoint.\n\nNote: Not all iterators support checkpointing yet. Attempting to save the\nstate of an unsupported iterator will throw an error.", "desc": "Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.map_and_batch", "docs": "Fused implementation of `map` and `batch`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.map(map_func, num_parallel_calls)` followed by `tf.data.Dataset.batch(batch_size, drop_remainder)`. Static tf.data optimizations will take care of using the fused implementation.\n\nMaps `map_func` across `batch_size` consecutive elements of this dataset\nand then combines them into a batch. Functionally, it is equivalent to `map`\nfollowed by `batch`. This API is temporary and deprecated since input pipeline\noptimization now fuses consecutive `map` and `batch` operations automatically.\n\nArgs:\n map_func: A function mapping a nested structure of tensors to another\n nested structure of tensors.\n batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of\n consecutive elements of this dataset to combine in a single batch.\n num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,\n representing the number of batches to create in parallel. On one hand,\n higher values can help mitigate the effect of stragglers. On the other\n hand, higher values can increase contention if CPU is scarce.\n drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing\n whether the last batch should be dropped in case its size is smaller than\n desired; the default behavior is not to drop the smaller batch.\n num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,\n representing the number of elements to process in parallel. If not\n specified, `batch_size * num_parallel_batches` elements will be processed\n in parallel. If the value `tf.data.AUTOTUNE` is used, then\n the number of parallel calls is set dynamically based on available CPU.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\nRaises:\n ValueError: If both `num_parallel_batches` and `num_parallel_calls` are\n specified.", "desc": "Fused implementation of `map` and `batch`. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.OptimizationOptions", "docs": "Represents options for dataset optimizations.\n\n You can set the optimization options of a dataset through the\n `experimental_optimization` property of `tf.data.Options`; the property is\n an instance of `tf.data.experimental.OptimizationOptions`.\n\n ```python\n options = tf.data.Options()\n options.experimental_optimization.noop_elimination = True\n options.experimental_optimization.apply_default_optimizations = False\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for dataset optimizations.", "type": "API"}, {"name": "tf.data.experimental.Optional", "docs": "Represents a value that may or may not be present.\n\n A `tf.experimental.Optional` can represent the result of an operation that may\n fail as a value, rather than raising an exception and halting execution. For\n example, `tf.data.Iterator.get_next_as_optional()` returns a\n `tf.experimental.Optional` that either contains the next element of an\n iterator if one exists, or an \"empty\" value that indicates the end of the\n sequence has been reached.\n\n `tf.experimental.Optional` can only be used with values that are convertible\n to `tf.Tensor` or `tf.CompositeTensor`.\n\n One can create a `tf.experimental.Optional` from a value using the\n `from_value()` method:\n\n >>> optional = tf.experimental.Optional.from_value(42)\n >>> print(optional.has_value())\n tf.Tensor(True, shape=(), dtype=bool)\n >>> print(optional.get_value())\n tf.Tensor(42, shape=(), dtype=int32)\n\n or without a value using the `empty()` method:\n\n >>> optional = tf.experimental.Optional.empty(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))\n >>> print(optional.has_value())\n tf.Tensor(False, shape=(), dtype=bool)\n ", "desc": "Represents a value that may or may not be present.", "type": "API"}, {"name": "tf.data.experimental.parallel_interleave", "docs": "A parallel version of the `Dataset.interleave()` transformation. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.\n\n`parallel_interleave()` maps `map_func` across its input to produce nested\ndatasets, and outputs their elements interleaved. Unlike\n`tf.data.Dataset.interleave`, it gets elements from `cycle_length` nested\ndatasets in parallel, which increases the throughput, especially in the\npresence of stragglers. Furthermore, the `sloppy` argument can be used to\nimprove performance, by relaxing the requirement that the outputs are produced\nin a deterministic order, and allowing the implementation to skip over nested\ndatasets whose elements are not readily available when requested.\n\nExample usage:\n\n```python\n# Preprocess 4 files concurrently.\nfilenames = tf.data.Dataset.list_files(\"/path/to/data/train*.tfrecords\")\ndataset = filenames.apply(\n tf.data.experimental.parallel_interleave(\n lambda filename: tf.data.TFRecordDataset(filename),\n cycle_length=4))\n```\n\nWARNING: If `sloppy` is `True`, the order of produced elements is not\ndeterministic.\n\nArgs:\n map_func: A function mapping a nested structure of tensors to a `Dataset`.\n cycle_length: The number of input `Dataset`s to interleave from in parallel.\n block_length: The number of consecutive elements to pull from an input\n `Dataset` before advancing to the next input `Dataset`.\n sloppy: A boolean controlling whether determinism should be traded for\n performance by allowing elements to be produced out of order. If `sloppy`\n is `None`, the `tf.data.Options.deterministic` dataset option (`True` by\n default) is used to decide whether to enforce a deterministic order.\n buffer_output_elements: The number of elements each iterator being\n interleaved should buffer (similar to the `.prefetch()` transformation for\n each interleaved iterator).\n prefetch_input_elements: The number of input elements to transform to\n iterators before they are needed for interleaving.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A parallel version of the `Dataset.interleave()` transformation. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.parse_example_dataset", "docs": "A transformation that parses `Example` protos into a `dict` of tensors.\n\n Parses a number of serialized `Example` protos given in `serialized`. We refer\n to `serialized` as a batch with `batch_size` many entries of individual\n `Example` protos.\n\n This op parses serialized examples into a dictionary mapping keys to `Tensor`,\n `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to\n `VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature`\n objects. Each `VarLenFeature` and `SparseFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenFeature` is mapped to a `Tensor`. See `tf.io.parse_example` for more\n details about feature dictionaries.\n\n Args:\n features: A `dict` mapping feature keys to `FixedLenFeature`,\n `VarLenFeature`, `RaggedFeature`, and `SparseFeature` values.\n num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,\n representing the number of parsing processes to call in parallel.\n deterministic: (Optional.) A boolean controlling whether determinism\n should be traded for performance by allowing elements to be produced out\n of order if some parsing calls complete faster than others. If\n `deterministic` is `None`, the\n `tf.data.Options.deterministic` dataset option (`True` by default) is used\n to decide whether to produce elements deterministically.\n\n Returns:\n A dataset transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n\n Raises:\n ValueError: if features argument is None.\n ", "desc": "A transformation that parses `Example` protos into a `dict` of tensors.", "type": "API"}, {"name": "tf.data.experimental.prefetch_to_device", "docs": "A transformation that prefetches dataset values to the given `device`.\n\n NOTE: Although the transformation creates a `tf.data.Dataset`, the\n transformation must be the final `Dataset` in the input pipeline.\n\n For example,\n >>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n >>> dataset = dataset.apply(tf.data.experimental.prefetch_to_device(\"/cpu:0\"))\n >>> for element in dataset:\n ... print(f'Tensor {element} is on device {element.device}')\n Tensor 1 is on device /job:localhost/replica:0/task:0/device:CPU:0\n Tensor 2 is on device /job:localhost/replica:0/task:0/device:CPU:0\n Tensor 3 is on device /job:localhost/replica:0/task:0/device:CPU:0\n\n Args:\n device: A string. The name of a device to which elements will be prefetched.\n buffer_size: (Optional.) The number of elements to buffer on `device`.\n Defaults to an automatically chosen value.\n\n Returns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.\n ", "desc": "A transformation that prefetches dataset values to the given `device`.", "type": "API"}, {"name": "tf.data.experimental.RandomDataset", "docs": "A `Dataset` of pseudorandom values. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.random(...)`.", "desc": "A `Dataset` of pseudorandom values. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.Reducer", "docs": "A reducer is used for reducing a set of elements.\n\n A reducer is represented as a tuple of the three functions:\n - init_func - to define initial value: key => initial state\n - reducer_func - operation to perform on values with same key: (old state, input) => new state\n - finalize_func - value to return in the end: state => result\n \n For example,\n \n ```\n def init_func(_):\n return (0.0, 0.0)\n\n def reduce_func(state, value):\n return (state[0] + value['features'], state[1] + 1)\n\n def finalize_func(s, n):\n return s / n\n\n reducer = tf.data.experimental.Reducer(init_func, reduce_func, finalize_func)\n ```\n ", "desc": "A reducer is used for reducing a set of elements.", "type": "API"}, {"name": "tf.data.experimental.rejection_resample", "docs": "A transformation that resamples a dataset to achieve a target distribution. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.rejection_resample(...)`.\n\n**NOTE** Resampling is performed via rejection sampling; some fraction\nof the input values will be dropped.\n\nArgs:\n class_func: A function mapping an element of the input dataset to a scalar\n `tf.int32` tensor. Values should be in `[0, num_classes)`.\n target_dist: A floating point type tensor, shaped `[num_classes]`.\n initial_dist: (Optional.) A floating point type tensor, shaped\n `[num_classes]`. If not provided, the true class distribution is\n estimated live in a streaming fashion.\n seed: (Optional.) Python integer seed for the resampler.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that resamples a dataset to achieve a target distribution. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.sample_from_datasets", "docs": "Samples elements at random from the datasets in `datasets`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.sample_from_datasets(...)`.\n\nCreates a dataset by interleaving elements of `datasets` with `weight[i]`\nprobability of picking an element from dataset `i`. Sampling is done without\nreplacement. For example, suppose we have 2 datasets:\n\n```python\ndataset1 = tf.data.Dataset.range(0, 3)\ndataset2 = tf.data.Dataset.range(100, 103)\n```\n\nSuppose also that we sample from these 2 datasets with the following weights:\n\n```python\nsample_dataset = tf.data.Dataset.sample_from_datasets(\n [dataset1, dataset2], weights=[0.5, 0.5])\n```\n\nOne possible outcome of elements in sample_dataset is:\n\n```\nprint(list(sample_dataset.as_numpy_iterator()))\n# [100, 0, 1, 101, 2, 102]\n```\n\nArgs:\n datasets: A non-empty list of `tf.data.Dataset` objects with compatible\n structure.\n weights: (Optional.) A list or Tensor of `len(datasets)` floating-point\n values where `weights[i]` represents the probability to sample from\n `datasets[i]`, or a `tf.data.Dataset` object where each element is such a\n list. Defaults to a uniform distribution across `datasets`.\n seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random\n seed that will be used to create the distribution. See\n `tf.random.set_seed` for behavior.\n stop_on_empty_dataset: If `True`, sampling stops if it encounters an empty\n dataset. If `False`, it skips empty datasets. It is recommended to set it\n to `True`. Otherwise, the distribution of samples starts off as the user\n intends, but may change as input datasets become empty. This can be\n difficult to detect since the dataset starts off looking correct. Default\n to `False` for backward compatibility.\n\nReturns:\n A dataset that interleaves elements from `datasets` at random, according to\n `weights` if provided, otherwise with uniform probability.\n\nRaises:\n TypeError: If the `datasets` or `weights` arguments have the wrong type.\n ValueError:\n - If `datasets` is empty, or\n - If `weights` is specified and does not match the length of `datasets`.", "desc": "Samples elements at random from the datasets in `datasets`. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.save", "docs": "Saves the content of the given dataset.\n\n Example usage:\n\n >>> import tempfile\n >>> path = os.path.join(tempfile.gettempdir(), \"saved_data\")\n >>> # Save a dataset\n >>> dataset = tf.data.Dataset.range(2)\n >>> tf.data.experimental.save(dataset, path)\n >>> new_dataset = tf.data.experimental.load(path)\n >>> for elem in new_dataset:\n ... print(elem)\n tf.Tensor(0, shape=(), dtype=int64)\n tf.Tensor(1, shape=(), dtype=int64)\n\n The saved dataset is saved in multiple file \"shards\". By default, the dataset\n output is divided to shards in a round-robin fashion but custom sharding can\n be specified via the `shard_func` function. For example, you can save the\n dataset to using a single shard as follows:\n\n ```python\n dataset = make_dataset()\n def custom_shard_func(element):\n return 0\n dataset = tf.data.experimental.save(\n path=\"/path/to/data\", ..., shard_func=custom_shard_func)\n ```\n\n To enable checkpointing, pass in `checkpoint_args` to the `save` method\n as follows:\n\n ```python\n dataset = tf.data.Dataset.range(100)\n save_dir = \"...\"\n checkpoint_prefix = \"...\"\n step_counter = tf.Variable(0, trainable=False)\n checkpoint_args = {\n \"checkpoint_interval\": 50,\n \"step_counter\": step_counter,\n \"directory\": checkpoint_prefix,\n \"max_to_keep\": 20,\n }\n dataset.save(dataset, save_dir, checkpoint_args=checkpoint_args)\n ```\n\n NOTE: The directory layout and file format used for saving the dataset is\n considered an implementation detail and may change. For this reason, datasets\n saved through `tf.data.experimental.save` should only be consumed through\n `tf.data.experimental.load`, which is guaranteed to be backwards compatible.\n\n Args:\n dataset: The dataset to save.\n path: Required. A directory to use for saving the dataset.\n compression: Optional. The algorithm to use to compress data when writing\n it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.\n shard_func: Optional. A function to control the mapping of dataset elements\n to file shards. The function is expected to map elements of the input\n dataset to int64 shard IDs. If present, the function will be traced and\n executed as graph computation.\n checkpoint_args: Optional args for checkpointing which will be passed into\n the `tf.train.CheckpointManager`. If `checkpoint_args` are not specified,\n then checkpointing will not be performed. The `save()` implementation\n creates a `tf.train.Checkpoint` object internally, so users should not\n set the `checkpoint` argument in `checkpoint_args`.\n Raises:\n ValueError if `checkpoint` is passed into `checkpoint_args`.\n ", "desc": "Saves the content of the given dataset.", "type": "API"}, {"name": "tf.data.experimental.scan", "docs": "A transformation that scans a function across an input dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.scan(...) instead\n\nThis transformation is a stateful relative of `tf.data.Dataset.map`.\nIn addition to mapping `scan_func` across the elements of the input dataset,\n`scan()` accumulates one or more state tensors, whose initial values are\n`initial_state`.\n\nArgs:\n initial_state: A nested structure of tensors, representing the initial state\n of the accumulator.\n scan_func: A function that maps `(old_state, input_element)` to\n `(new_state, output_element)`. It must take two arguments and return a\n pair of nested structures of tensors. The `new_state` must match the\n structure of `initial_state`.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that scans a function across an input dataset. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.service", "docs": "API for using the tf.data service.\n\nThis module contains:\n\n1. tf.data server implementations for running the tf.data service.\n2. APIs for registering datasets with the tf.data service and reading from\n the registered datasets.\n\nThe tf.data service provides the following benefits:\n\n- Horizontal scaling of tf.data input pipeline processing to solve input\n bottlenecks.\n- Data coordination for distributed training. Coordinated reads\n enable all replicas to train on similar-length examples across each global\n training step, improving step times in synchronous training.\n- Dynamic balancing of data across training replicas.\n\n>>> dispatcher = tf.data.experimental.service.DispatchServer()\n>>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n>>> worker = tf.data.experimental.service.WorkerServer(\n... tf.data.experimental.service.WorkerConfig(\n... dispatcher_address=dispatcher_address))\n>>> dataset = tf.data.Dataset.range(10)\n>>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n... processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n... service=dispatcher.target))\n>>> print(list(dataset.as_numpy_iterator()))\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n## Setup\n\nThis section goes over how to set up the tf.data service.\n\n### Run tf.data servers\n\nThe tf.data service consists of one dispatch server and `n` worker servers.\ntf.data servers should be brought up alongside your training jobs, then brought\ndown when the jobs are finished.\nUse `tf.data.experimental.service.DispatchServer` to start a dispatch server,\nand `tf.data.experimental.service.WorkerServer` to start worker servers. Servers\ncan be run in the same process for testing purposes, or scaled up on separate\nmachines.\n\nSee https://github.com/tensorflow/ecosystem/tree/master/data_service for an\nexample of using Google Kubernetes Engine (GKE) to manage the tf.data service.\nNote that the server implementation in\n[tf_std_data_server.py](https://github.com/tensorflow/ecosystem/blob/master/data_service/tf_std_data_server.py)\nis not GKE-specific, and can be used to run the tf.data service in other\ncontexts.\n\n### Custom ops\n\nIf your dataset uses custom ops, these ops need to be made available to tf.data\nservers by calling\n[load_op_library](https://www.tensorflow.org/api_docs/python/tf/load_op_library)\nfrom the dispatcher and worker processes at startup.\n\n## Usage\n\nUsers interact with tf.data service by programmatically registering their\ndatasets with tf.data service, then creating datasets that read from the\nregistered datasets. The\n[register_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/register_dataset)\nfunction registers a dataset, then the\n[from_dataset_id](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/from_dataset_id)\nfunction creates a new dataset which reads from the registered dataset.\nThe\n[distribute](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service/distribute)\nfunction wraps `register_dataset` and `from_dataset_id` into a single convenient\ntransformation which registers its input dataset and then reads from it.\n`distribute` enables tf.data service to be used with a one-line code change.\nHowever, it assumes that the dataset is created and consumed by the same entity\nand this assumption might not always be valid or desirable. In particular, in\ncertain scenarios, such as distributed training, it might be desirable to\ndecouple the creation and consumption of the dataset (via `register_dataset`\nand `from_dataset_id` respectively) to avoid having to create the dataset on\neach of the training workers.\n\n### Example\n\n#### `distribute`\n\nTo use the `distribute` transformation, apply the transformation after the\nprefix of your input pipeline that you would like to be executed using tf.data\nservice (typically at the end).\n\n```\ndataset = ... # Define your dataset here.\n# Move dataset processing from the local machine to the tf.data service\ndataset = dataset.apply(\n tf.data.experimental.service.distribute(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n job_name=\"shared_job\"))\n# Any transformations added after `distribute` will be run on the local machine.\ndataset = dataset.prefetch(1)\n```\n\nThe above code will create a tf.data service \"job\", which iterates through the\ndataset to generate data. To share the data from a job across multiple clients\n(e.g. when using TPUStrategy or MultiWorkerMirroredStrategy), set a common\n`job_name` across all clients.\n\n#### `register_dataset` and `from_dataset_id`\n\n`register_dataset` registers a dataset with the tf.data service, returning a\ndataset id for the registered dataset. `from_dataset_id` creates a dataset that\nreads from the registered dataset. These APIs can be used to reduce dataset\nbuilding time for distributed training. Instead of building the dataset on all\ntraining workers, we can build the dataset just once and then register the\ndataset using `register_dataset`. Then all workers can call `from_dataset_id`\nwithout needing to build the dataset themselves.\n\n```\ndataset = ... # Define your dataset here.\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n# Use `from_dataset_id` to create per-worker datasets.\nper_worker_datasets = {}\nfor worker in workers:\n per_worker_datasets[worker] = tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n```\n\n### Processing Modes\n\n`processing_mode` specifies how to shard a dataset among tf.data service\nworkers. tf.data service supports `OFF`, `DYNAMIC`, `FILE`, `DATA`,\n`FILE_OR_DATA`, `HINT` sharding policies.\n\nOFF: No sharding will be performed. The entire input dataset will be processed\nindependently by each of the tf.data service workers. For this reason, it is\nimportant to shuffle data (e.g. filenames) non-deterministically, so that each\nworker will process the elements of the dataset in a different order. This mode\ncan be used to distribute datasets that aren't splittable.\n\nIf a worker is added or restarted during ShardingPolicy.OFF processing, the\nworker will instantiate a new copy of the dataset and begin producing data from\nthe beginning.\n\n#### Dynamic Sharding\n\nDYNAMIC: In this mode, tf.data service divides the dataset into two components:\na source component that generates \"splits\" such as filenames, and a processing\ncomponent that takes splits and outputs dataset elements. The source component\nis executed in a centralized fashion by the tf.data service dispatcher, which\ngenerates different splits of input data. The processing component is executed\nin a parallel fashion by the tf.data service workers, each operating on a\ndifferent set of input data splits.\n\nFor example, consider the following dataset:\n\n```\ndataset = tf.data.Dataset.from_tensor_slices(filenames)\ndataset = dataset.interleave(TFRecordDataset)\ndataset = dataset.map(preprocess_fn)\ndataset = dataset.batch(batch_size)\ndataset = dataset.apply(\n tf.data.experimental.service.distribute(\n processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,\n ...))\n```\n\nThe `from_tensor_slices` will be run on the dispatcher, while the `interleave`,\n`map`, and `batch` will be run on tf.data service workers. The workers will pull\nfilenames from the dispatcher for processing. To process a dataset with\ndynamic sharding, the dataset must have a splittable source, and all of\nits transformations must be compatible with splitting. While most sources and\ntransformations support splitting, there are exceptions, such as custom datasets\nwhich may not implement the splitting API. Please file a Github issue if you\nwould like to use distributed epoch processing for a currently unsupported\ndataset source or transformation.\n\nIf no workers are restarted during training, dynamic sharding mode will visit\nevery example exactly once. If workers are restarted during training, the splits\nthey were processing will not be fully visited. The dispatcher maintains a\ncursor through the dataset's splits. Assuming fault tolerance is enabled (See\n\"Fault Tolerance\" below), the dispatcher will store cursor state in write-ahead\nlogs so that the cursor can be restored in case the dispatcher is restarted\nmid-training. This provides an at-most-once visitation guarantee in the presence\nof server restarts.\n\n#### Static Sharding\n\nThe following are static sharding policies. The semantics are similar to\n`tf.data.experimental.AutoShardPolicy`. These policies require:\n\n * The tf.data service cluster is configured with a fixed list of workers\n in DispatcherConfig.\n * Each client only reads from the local tf.data service worker.\n\nIf a worker is restarted while performing static sharding, the worker will\nbegin processing its shard again from the beginning.\n\nFILE: Shards by input files (i.e. each worker will get a fixed set of files to\nprocess). When this option is selected, make sure that there is at least as\nmany files as workers. If there are fewer input files than workers, a runtime\nerror will be raised.\n\nDATA: Shards by elements produced by the dataset. Each worker will process the\nwhole dataset and discard the portion that is not for itself. Note that for\nthis mode to correctly partition the dataset elements, the dataset needs to\nproduce elements in a deterministic order.\n\nFILE_OR_DATA: Attempts FILE-based sharding, falling back to DATA-based\nsharding on failure.\n\nHINT: Looks for the presence of `shard(SHARD_HINT, ...)` which is treated as a\nplaceholder to replace with `shard(num_workers, worker_index)`.\n\nFor backwards compatibility, `processing_mode` may also be set to the strings\n`\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively equivalent\nto `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n\n### Coordinated Data Read\n\nBy default, when multiple consumers read from the same job, they receive data on\na first-come first-served basis. In some use cases, it is advantageous to\ncoordinate the consumers. At each step, consumers read data from the same\nworker.\n\nFor example, the tf.data service can be used to coordinate example sizes across\na cluster during synchronous training, so that during each step all replicas\ntrain on similar-sized elements. To achieve this, define a dataset which\ngenerates rounds of `num_consumers` consecutive similar-sized batches, then\nenable coordinated reads by setting `consumer_index` and `num_consumers`.\n\nNOTE: To keep consumers in sync, coordinated reads require that the dataset have\ninfinite cardinality. You can get this by adding `.repeat()` at the end of the\ndataset definition.\n\n### Jobs\n\nA tf.data service \"job\" refers to the process of reading from a dataset managed\nby the tf.data service, using one or more data consumers. Jobs are created when\niterating over datasets that read from tf.data service. The data produced by a\njob is determined by (1) dataset associated with the job and (2) the job's\nprocessing mode. For example, if a job is created for the dataset\n`Dataset.range(5)`, and the processing mode is `ShardingPolicy.OFF`, each\ntf.data worker will produce the elements `{0, 1, 2, 3, 4}` for the job,\nresulting in the\njob producing `5 * num_workers` elements. If the processing mode is\n`ShardingPolicy.DYNAMIC`, the job will only produce `5` elements.\n\nOne or more consumers can consume data from a job. By default, jobs are\n\"anonymous\", meaning that only the consumer which created the job can read from\nit. To share the output of a job across multiple consumers, you can set a common\n`job_name`.\n\n### Fault Tolerance\n\nBy default, the tf.data dispatch server stores its state in-memory, making it a\nsingle point of failure during training. To avoid this, pass\n`fault_tolerant_mode=True` when creating your `DispatchServer`. Dispatcher\nfault tolerance requires `work_dir` to be configured and accessible from the\ndispatcher both before and after restart (e.g. a GCS path). With fault tolerant\nmode enabled, the dispatcher will journal its state to the work directory so\nthat no state is lost when the dispatcher is restarted.\n\nWorkerServers may be freely restarted, added, or removed during training. At\nstartup, workers will register with the dispatcher and begin processing all\noutstanding jobs from the beginning.\n\n### Usage with tf.distribute\n\ntf.distribute is the TensorFlow API for distributed training. There are\nseveral ways to use tf.data with tf.distribute:\n`strategy.experimental_distribute_dataset`,\n`strategy.distribute_datasets_from_function`, and (for PSStrategy)\n`coordinator.create_per_worker_dataset`. The following sections give code\nexamples for each.\n\nIn general we recommend using\n`tf.data.experimental.service.{register_dataset,from_dataset_id}` over\n`tf.data.experimental.service.distribute` for two reasons:\n\n- The dataset only needs to be constructed and optimized once, instead of once\n per worker. This can significantly reduce startup time, because the current\n `experimental_distribute_dataset` and `distribute_datasets_from_function`\n implementations create and optimize worker datasets sequentially.\n- If a dataset depends on lookup tables or variables that are only present on\n one host, the dataset needs to be registered from that host. Typically this\n only happens when resources are placed on the chief or worker 0. Registering\n the dataset from the chief will avoid issues with depending on remote\n resources.\n\n#### strategy.experimental_distribute_dataset\n\nNothing special is required when using\n`strategy.experimental_distribute_dataset`, just apply `register_dataset` and\n`from_dataset_id` as above, making sure to specify a `job_name` so that all\nworkers consume from the same tf.data service job.\n\n```\ndataset = ... # Define your dataset here.\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\ndataset = tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = strategy.experimental_distribute_dataset(dataset)\n```\n\n#### strategy.distribute_datasets_from_function\n\nFirst, make sure the dataset produced by the `dataset_fn` does not depend on the\n`input_context` for the training worker on which it is run. Instead of each\nworker building its own (sharded) dataset, one worker should register an\nunsharded dataset, and the remaining workers should consume data from that\ndataset.\n\n```\ndataset = dataset_fn()\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n\ndef new_dataset_fn(input_context):\n del input_context\n return tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = strategy.distribute_datasets_from_function(new_dataset_fn)\n```\n\n#### coordinator.create_per_worker_dataset\n\n`create_per_worker_dataset` works the same as\n`distribute_datasets_from_function`.\n\n```\ndataset = dataset_fn()\ndataset_id = tf.data.experimental.service.register_dataset(\n service=FLAGS.tf_data_service_address,\n dataset=dataset)\n\ndef new_dataset_fn(input_context):\n del input_context\n return tf.data.experimental.service.from_dataset_id(\n processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,\n service=FLAGS.tf_data_service_address,\n dataset_id=dataset_id,\n job_name=\"shared_job\")\n\ndataset = coordinator.create_per_worker_dataset(new_dataset_fn)\n```\n\n## Limitations\n\n- Python-based data processing: Datasets which use Python-based data processing\n (e.g. `tf.py_function`, `tf.numpy_function`, or\n `tf.data.Dataset.from_generator`) are currently not supported.\n- Non-Serializable Resources: Datasets may only depend on TF resources that\n support serialization. Serialization is currently supported for lookup\n tables and variables. If your dataset depends on a TF resource that cannot be\n serialized, please file a Github issue.\n- Remote Resources: If a dataset depends on a resource, the dataset must be\n registered from the same process that created the resource (e.g. the \"chief\"\n job of ParameterServerStrategy).\n\n", "desc": "API for using the tf.data service.", "type": "API"}, {"name": "tf.data.experimental.service.DispatcherConfig", "docs": "Configuration class for tf.data service dispatchers.\n\n Fields:\n port: Specifies the port to bind to. A value of 0 indicates that the server\n may bind to any available port.\n protocol: The protocol to use for communicating with the tf.data service,\n e.g. \"grpc\".\n work_dir: A directory to store dispatcher state in. This\n argument is required for the dispatcher to be able to recover from\n restarts.\n fault_tolerant_mode: Whether the dispatcher should write its state to a\n journal so that it can recover from restarts. Dispatcher state, including\n registered datasets and created jobs, is synchronously written to the\n journal before responding to RPCs. If `True`, `work_dir` must also be\n specified.\n worker_addresses: If the job uses auto-sharding, it needs to specify a fixed\n list of worker addresses that will register with the dispatcher. The\n worker addresses should be in the format `\"host\"` or `\"host:port\"`, where\n `\"port\"` is an integer, named port, or `%port%` to match any port.\n job_gc_check_interval_ms: How often the dispatcher should scan through to\n delete old and unused jobs, in milliseconds. If not set, the runtime will\n select a reasonable default. A higher value will reduce load on the\n dispatcher, while a lower value will reduce the time it takes for the\n dispatcher to garbage collect expired jobs.\n job_gc_timeout_ms: How long a job needs to be unused before it becomes a\n candidate for garbage collection, in milliseconds. A value of -1 indicates\n that jobs should never be garbage collected. If not set, the runtime will\n select a reasonable default. A higher value will cause jobs to stay around\n longer with no consumers. This is useful if there is a large gap in\n time between when consumers read from the job. A lower value will reduce\n the time it takes to reclaim the resources from expired jobs.\n ", "desc": "Configuration class for tf.data service dispatchers.", "type": "API"}, {"name": "tf.data.experimental.service.DispatchServer", "docs": "An in-process tf.data service dispatch server.\n\n A `tf.data.experimental.service.DispatchServer` coordinates a cluster of\n `tf.data.experimental.service.WorkerServer`s. When the workers start, they\n register themselves with the dispatcher.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"parallel_epochs\", service=dispatcher.target))\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n When starting a dedicated tf.data dispatch process, use join() to block\n indefinitely after starting up the server.\n\n ```\n dispatcher = tf.data.experimental.service.DispatchServer(\n tf.data.experimental.service.DispatcherConfig(port=5050))\n dispatcher.join()\n ```\n\n To start a `DispatchServer` in fault-tolerant mode, set `work_dir` and\n `fault_tolerant_mode` like below:\n\n ```\n dispatcher = tf.data.experimental.service.DispatchServer(\n tf.data.experimental.service.DispatcherConfig(\n port=5050,\n work_dir=\"gs://my-bucket/dispatcher/work_dir\",\n fault_tolerant_mode=True))\n ```\n ", "desc": "An in-process tf.data service dispatch server.", "type": "API"}, {"name": "tf.data.experimental.service.distribute", "docs": "A transformation that moves dataset processing to the tf.data service.\n\n When you iterate over a dataset containing the `distribute` transformation,\n the tf.data service creates a \"job\" which produces data for the dataset\n iteration.\n\n The tf.data service uses a cluster of workers to prepare data for training\n your model.\n The `processing_mode` argument to `tf.data.experimental.service.distribute`\n describes how to leverage multiple workers to process the input dataset.\n Currently, there are two processing modes to choose from: \"distributed_epoch\"\n and \"parallel_epochs\".\n\n \"distributed_epoch\" means that the dataset will be split across all tf.data\n service workers.\n The dispatcher produces \"splits\" for the dataset and sends them to workers for\n further processing. For example, if a dataset begins with a list of filenames,\n the dispatcher will iterate through the filenames and send the filenames to\n tf.data workers, which will perform the rest of the dataset transformations on\n those files. \"distributed_epoch\" is useful when your model needs to see each\n element of the dataset exactly once, or if it needs to see the data in a\n generally-sequential order. \"distributed_epoch\" only works for datasets with\n splittable sources, such as `Dataset.from_tensor_slices`,\n `Dataset.list_files`, or `Dataset.range`.\n\n \"parallel_epochs\" means that the entire input dataset will be processed\n independently by each of the tf.data service workers.\n For this reason, it is important to shuffle data (e.g. filenames)\n non-deterministically, so that each worker will process the elements of the\n dataset in a different order. \"parallel_epochs\" can be used to distribute\n datasets that aren't splittable.\n\n With two workers, \"parallel_epochs\" will produce every element of the dataset\n twice:\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> # Start two workers\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"parallel_epochs\", service=dispatcher.target))\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]\n\n \"distributed_epoch\", on the other hand, will still produce each element once:\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"distributed_epoch\", service=dispatcher.target))\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n When using `apply(tf.data.experimental.service.distribute(...))`, the dataset\n before the `apply` transformation executes within the tf.data service, while\n the operations after `apply` happen within the local process.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> workers = [\n ... tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address)) for _ in range(2)\n ... ]\n >>> dataset = tf.data.Dataset.range(5)\n >>> dataset = dataset.map(lambda x: x*x)\n >>> dataset = dataset.apply(\n ... tf.data.experimental.service.distribute(\"parallel_epochs\",\n ... dispatcher.target))\n >>> dataset = dataset.map(lambda x: x+1)\n >>> print(sorted(list(dataset.as_numpy_iterator())))\n [1, 1, 2, 2, 5, 5, 10, 10, 17, 17]\n\n In the above example, the dataset operations (before applying the `distribute`\n function on the elements) will be executed on the tf.data workers,\n and the elements are provided over RPC. The remaining transformations\n (after the call to `distribute`) will be executed locally. The dispatcher\n and the workers will bind to usused free ports (which are chosen at random),\n in order to communicate with each other. However, to bind them to specific\n ports, the `port` parameter can be passed.\n\n The `job_name` argument allows jobs to be shared across multiple\n datasets. Instead of each dataset creating its own job, all\n datasets with the same `job_name` will consume from the same job. A new job\n will be created for each iteration of the dataset (with each repetition of\n `Dataset.repeat` counting as a new iteration). Suppose the `DispatchServer`\n is serving on `localhost:5000` and two training workers (in either a single\n client or multi-client setup) iterate over the below dataset, and there is a\n single tf.data worker:\n\n ```\n range5_dataset = tf.data.Dataset.range(5)\n dataset = range5_dataset.apply(tf.data.experimental.service.distribute(\n \"parallel_epochs\", \"localhost:5000\", job_name=\"my_job_name\"))\n for iteration in range(3):\n print(list(dataset))\n ```\n\n The elements of each job will be split between the two processes, with\n elements being consumed by the processes on a first-come first-served basis.\n One possible result is that process 1 prints\n\n ```\n [0, 2, 4]\n [0, 1, 3]\n [1]\n ```\n\n and process 2 prints\n\n ```\n [1, 3]\n [2, 4]\n [0, 2, 3, 4]\n ```\n\n Job names must not be re-used across different training jobs within the\n lifetime of the tf.data service. In general, the tf.data service is expected\n to live for the duration of a single training job.\n To use the tf.data service with multiple training jobs, make sure to use\n different job names to avoid conflicts. For example, suppose a training job\n calls `distribute` with `job_name=\"job\"` and reads until end of input. If\n another independent job connects to the same tf.data service and tries to read\n from `job_name=\"job\"`, it will immediately receive end of input, without\n getting any data.\n\n **Coordinated data read**\n\n By default, when multiple consumers read from the same job, they receive data\n on a first-come first-served basis. In some use cases, it is advantageous to\n coordinate the consumers. At each step, consumers read data from the same\n worker.\n\n For example, the tf.data service can be used to coordinate example sizes\n across a cluster during synchronous training, so that during each step all\n replicas train on similar-sized elements. To achieve this, define a dataset\n which generates rounds of `num_consumers` consecutive similar-sized batches,\n then enable coordinated reads by setting `consumer_index` and `num_consumers`.\n\n NOTE: To keep consumers in sync, round robin data consumption requires that\n the dataset have infinite cardinality. You can get this by adding `.repeat()`\n at the end of the dataset definition.\n\n **Keras and Distribution Strategies**\n\n The dataset produced by the `distribute` transformation can be passed to\n Keras' `Model.fit` or Distribution Strategy's\n `tf.distribute.Strategy.experimental_distribute_dataset` like any other\n `tf.data.Dataset`. We recommend setting a `job_name` on the call to\n `distribute` so that if there are multiple workers, they read data from the\n same job. Note that the autosharding normally performed by\n `experimental_distribute_dataset` will be disabled when setting a `job_name`,\n since sharing the job already results in splitting data across the workers.\n When using a shared job, data will be dynamically balanced across workers, so\n that they reach end of input about the same time. This results in better\n worker utilization than with autosharding, where each worker processes an\n independent set of files, and some workers may run out of data earlier than\n others.\n\n Args:\n processing_mode: A `tf.data.experimental.service.ShardingPolicy` specifying\n how to shard the dataset among tf.data workers. See\n `tf.data.experimental.service.ShardingPolicy` for details. For backwards\n compatibility, `processing_mode` may also be set to the strings\n `\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively\n equivalent to `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n job_name: (Optional.) The name of the job. If provided, it must be a\n non-empty string. This argument makes it possible for multiple datasets to\n share the same job. The default behavior is that the dataset creates\n anonymous, exclusively owned jobs.\n consumer_index: (Optional.) The index of the consumer in the range from `0`\n to `num_consumers`. Must be specified alongside `num_consumers`. When\n specified, consumers will read from the job in a strict round-robin order,\n instead of the default first-come-first-served order.\n num_consumers: (Optional.) The number of consumers which will consume from\n the job. Must be specified alongside `consumer_index`. When specified,\n consumers will read from the job in a strict round-robin order, instead of\n the default first-come-first-served order. When `num_consumers` is\n specified, the dataset must have infinite cardinality to prevent a\n producer from running out of data early and causing consumers to go out of\n sync.\n max_outstanding_requests: (Optional.) A limit on how many elements may be\n requested at the same time. You can use this option to control the amount\n of memory used, since `distribute` won't use more than `element_size` *\n `max_outstanding_requests` of memory.\n data_transfer_protocol: (Optional.) The protocol to use for transferring\n data with the tf.data service. By default, data is transferred using gRPC.\n compression: How to compress the dataset's elements before transferring them\n over the network. \"AUTO\" leaves the decision of how to compress up to the\n tf.data service runtime. `None` indicates not to compress.\n target_workers: (Optional.) Which workers to read from. If `\"AUTO\"`, tf.data\n runtime decides which workers to read from. If `\"ANY\"`, reads from any\n tf.data service workers. If `\"LOCAL\"`, only reads from local in-processs\n tf.data service workers. `\"AUTO\"` works well for most cases, while users\n can specify other targets. For example, `\"LOCAL\"` helps avoid RPCs and\n data copy if every TF worker colocates with a tf.data service worker.\n Consumers of a shared job must use the same `target_workers`. Defaults to\n `\"AUTO\"`.\n\n Returns:\n Dataset: A `Dataset` of the elements produced by the data service.\n ", "desc": "A transformation that moves dataset processing to the tf.data service.", "type": "API"}, {"name": "tf.data.experimental.service.from_dataset_id", "docs": "Creates a dataset which reads data from the tf.data service.\n\n This is useful when the dataset is registered by one process, then used in\n another process. When the same process is both registering and reading from\n the dataset, it is simpler to use `tf.data.experimental.service.distribute`\n instead.\n\n Before using `from_dataset_id`, the dataset must have been registered with the\n tf.data service using `tf.data.experimental.service.register_dataset`.\n `register_dataset` returns a dataset id for the registered dataset. That is\n the `dataset_id` which should be passed to `from_dataset_id`.\n\n The `element_spec` argument indicates the `tf.TypeSpec`s for the elements\n produced by the dataset. Currently `element_spec` must be explicitly\n specified, and match the dataset registered under `dataset_id`. `element_spec`\n defaults to `None` so that in the future we can support automatically\n discovering the `element_spec` by querying the tf.data service.\n\n `tf.data.experimental.service.distribute` is a convenience method which\n combines `register_dataset` and `from_dataset_id` into a dataset\n transformation.\n See the documentation for `tf.data.experimental.service.distribute` for more\n detail about how `from_dataset_id` works.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset_id = tf.data.experimental.service.register_dataset(\n ... dispatcher.target, dataset)\n >>> dataset = tf.data.experimental.service.from_dataset_id(\n ... processing_mode=\"parallel_epochs\",\n ... service=dispatcher.target,\n ... dataset_id=dataset_id,\n ... element_spec=dataset.element_spec)\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n Args:\n processing_mode: A `tf.data.experimental.service.ShardingPolicy` specifying\n how to shard the dataset among tf.data workers. See\n `tf.data.experimental.service.ShardingPolicy` for details. For backwards\n compatibility, `processing_mode` may also be set to the strings\n `\"parallel_epochs\"` or `\"distributed_epoch\"`, which are respectively\n equivalent to `ShardingPolicy.OFF` and `ShardingPolicy.DYNAMIC`.\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n dataset_id: The id of the dataset to read from. This id is returned by\n `register_dataset` when the dataset is registered with the tf.data\n service.\n element_spec: A nested structure of `tf.TypeSpec`s representing the type of\n elements produced by the dataset. This argument is only required inside a\n tf.function. Use `tf.data.Dataset.element_spec` to get the element spec\n for a given dataset.\n job_name: (Optional.) The name of the job. If provided, it must be a\n non-empty string. This argument makes it possible for multiple datasets to\n share the same job. The default behavior is that the dataset creates\n anonymous, exclusively owned jobs.\n consumer_index: (Optional.) The index of the consumer in the range from `0`\n to `num_consumers`. Must be specified alongside `num_consumers`. When\n specified, consumers will read from the job in a strict round-robin order,\n instead of the default first-come-first-served order.\n num_consumers: (Optional.) The number of consumers which will consume from\n the job. Must be specified alongside `consumer_index`. When specified,\n consumers will read from the job in a strict round-robin order, instead of\n the default first-come-first-served order. When `num_consumers` is\n specified, the dataset must have infinite cardinality to prevent a\n producer from running out of data early and causing consumers to go out of\n sync.\n max_outstanding_requests: (Optional.) A limit on how many elements may be\n requested at the same time. You can use this option to control the amount\n of memory used, since `distribute` won't use more than `element_size` *\n `max_outstanding_requests` of memory.\n data_transfer_protocol: (Optional.) The protocol to use for transferring\n data with the tf.data service. By default, data is transferred using gRPC.\n target_workers: (Optional.) Which workers to read from. If `\"AUTO\"`, tf.data\n runtime decides which workers to read from. If `\"ANY\"`, reads from any\n tf.data service workers. If `\"LOCAL\"`, only reads from local in-processs\n tf.data service workers. `\"AUTO\"` works well for most cases, while users\n can specify other targets. For example, `\"LOCAL\"` helps avoid RPCs and\n data copy if every TF worker colocates with a tf.data service worker.\n Consumers of a shared job must use the same `target_workers`. Defaults to\n `\"AUTO\"`.\n\n Returns:\n A `tf.data.Dataset` which reads from the tf.data service.\n ", "desc": "Creates a dataset which reads data from the tf.data service.", "type": "API"}, {"name": "tf.data.experimental.service.register_dataset", "docs": "Registers a dataset with the tf.data service.\n\n `register_dataset` registers a dataset with the tf.data service so that\n datasets can be created later with\n `tf.data.experimental.service.from_dataset_id`. This is useful when the\n dataset\n is registered by one process, then used in another process. When the same\n process is both registering and reading from the dataset, it is simpler to use\n `tf.data.experimental.service.distribute` instead.\n\n If the dataset is already registered with the tf.data service,\n `register_dataset` returns the already-registered dataset's id.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset_id = tf.data.experimental.service.register_dataset(\n ... dispatcher.target, dataset)\n >>> dataset = tf.data.experimental.service.from_dataset_id(\n ... processing_mode=\"parallel_epochs\",\n ... service=dispatcher.target,\n ... dataset_id=dataset_id,\n ... element_spec=dataset.element_spec)\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n Args:\n service: A string or a tuple indicating how to connect to the tf.data\n service. If it's a string, it should be in the format\n `[://]
`, where `
` identifies the dispatcher\n address and `` can optionally be used to override the default\n protocol to use. If it's a tuple, it should be (protocol, address).\n dataset: A `tf.data.Dataset` to register with the tf.data service.\n compression: (Optional.) How to compress the dataset's elements before\n transferring them over the network. \"AUTO\" leaves the decision of how to\n compress up to the tf.data service runtime. `None` indicates not to\n compress.\n\n Returns:\n A scalar int64 tensor of the registered dataset's id.\n ", "desc": "Registers a dataset with the tf.data service.", "type": "API"}, {"name": "tf.data.experimental.service.WorkerConfig", "docs": "Configuration class for tf.data service dispatchers.\n\n Fields:\n dispatcher_address: Specifies the address of the dispatcher.\n worker_address: Specifies the address of the worker server. This address is\n passed to the dispatcher so that the dispatcher can tell clients how to\n connect to this worker.\n port: Specifies the port to bind to. A value of 0 indicates that the worker\n can bind to any available port.\n protocol: (Optional.) Specifies the protocol to be used by the server, e.g.\n \"grpc\".\n heartbeat_interval_ms: How often the worker should heartbeat to the\n dispatcher, in milliseconds. If not set, the runtime will select a\n reasonable default. A higher value will reduce the load on the dispatcher,\n while a lower value will reduce the time it takes to reclaim resources\n from finished jobs.\n dispatcher_timeout_ms: How long, in milliseconds, to retry requests to the\n dispatcher before giving up and reporting an error. Defaults to 1 hour.\n ", "desc": "Configuration class for tf.data service dispatchers.", "type": "API"}, {"name": "tf.data.experimental.service.WorkerServer", "docs": "An in-process tf.data service worker server.\n\n A `tf.data.experimental.service.WorkerServer` performs `tf.data.Dataset`\n processing for user-defined datasets, and provides the resulting elements over\n RPC. A worker is associated with a single\n `tf.data.experimental.service.DispatchServer`.\n\n >>> dispatcher = tf.data.experimental.service.DispatchServer()\n >>> dispatcher_address = dispatcher.target.split(\"://\")[1]\n >>> worker = tf.data.experimental.service.WorkerServer(\n ... tf.data.experimental.service.WorkerConfig(\n ... dispatcher_address=dispatcher_address))\n >>> dataset = tf.data.Dataset.range(10)\n >>> dataset = dataset.apply(tf.data.experimental.service.distribute(\n ... processing_mode=\"parallel_epochs\", service=dispatcher.target))\n >>> print(list(dataset.as_numpy_iterator()))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n When starting a dedicated tf.data worker process, use join() to block\n indefinitely after starting up the server.\n\n ```\n worker = tf.data.experimental.service.WorkerServer(\n port=5051, dispatcher_address=\"localhost:5050\")\n worker.join()\n ```\n ", "desc": "An in-process tf.data service worker server.", "type": "API"}, {"name": "tf.data.experimental.shuffle_and_repeat", "docs": "Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.\n\n>>> d = tf.data.Dataset.from_tensor_slices([1, 2, 3])\n>>> d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))\n>>> [elem.numpy() for elem in d] # doctest: +SKIP\n[2, 3, 1, 1, 3, 2]\n\n```python\ndataset.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))\n```\n\nproduces the same output as\n\n```python\ndataset.shuffle(\n buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)\n```\n\nIn each repetition, this dataset fills a buffer with `buffer_size` elements,\nthen randomly samples elements from this buffer, replacing the selected\nelements with new elements. For perfect shuffling, set the buffer size equal\nto the full size of the dataset.\n\nFor instance, if your dataset contains 10,000 elements but `buffer_size` is\nset to 1,000, then `shuffle` will initially select a random element from\nonly the first 1,000 elements in the buffer. Once an element is selected,\nits space in the buffer is replaced by the next (i.e. 1,001-st) element,\nmaintaining the 1,000 element buffer.\n\nArgs:\n buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the maximum\n number elements that will be buffered when prefetching.\n count: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number\n of times the dataset should be repeated. The default behavior (if `count`\n is `None` or `-1`) is for the dataset be repeated indefinitely.\n seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random\n seed that will be used to create the distribution. See\n `tf.random.set_seed` for behavior.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.snapshot", "docs": "API to persist the output of the input dataset. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.snapshot(...)`.\n\nThe snapshot API allows users to transparently persist the output of their\npreprocessing pipeline to disk, and materialize the pre-processed data on a\ndifferent training run.\n\nThis API enables repeated preprocessing steps to be consolidated, and allows\nre-use of already processed data, trading off disk storage and network\nbandwidth for freeing up more valuable CPU resources and accelerator compute\ntime.\n\nhttps://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md\nhas detailed design documentation of this feature.\n\nUsers can specify various options to control the behavior of snapshot,\nincluding how snapshots are read from and written to by passing in\nuser-defined functions to the `reader_func` and `shard_func` parameters.\n\n`shard_func` is a user specified function that maps input elements to snapshot\nshards.\n\nUsers may want to specify this function to control how snapshot files should\nbe written to disk. Below is an example of how a potential shard_func could\nbe written.\n\n```python\ndataset = ...\ndataset = dataset.enumerate()\ndataset = dataset.apply(tf.data.experimental.snapshot(\"/path/to/snapshot/dir\",\n shard_func=lambda x, y: x % NUM_SHARDS, ...))\ndataset = dataset.map(lambda x, y: y)\n```\n\n`reader_func` is a user specified function that accepts a single argument:\n(1) a Dataset of Datasets, each representing a \"split\" of elements of the\noriginal dataset. The cardinality of the input dataset matches the\nnumber of the shards specified in the `shard_func` (see above). The function\nshould return a Dataset of elements of the original dataset.\n\nUsers may want specify this function to control how snapshot files should be\nread from disk, including the amount of shuffling and parallelism.\n\nHere is an example of a standard reader function a user can define. This\nfunction enables both dataset shuffling and parallel reading of datasets:\n\n```python\ndef user_reader_func(datasets):\n # shuffle the datasets splits\n datasets = datasets.shuffle(NUM_CORES)\n # read datasets in parallel and interleave their elements\n return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)\n\ndataset = dataset.apply(tf.data.experimental.snapshot(\"/path/to/snapshot/dir\",\n reader_func=user_reader_func))\n```\n\nBy default, snapshot parallelizes reads by the number of cores available on\nthe system, but will not attempt to shuffle the data.\n\nArgs:\n path: Required. A directory to use for storing / loading the snapshot to /\n from.\n compression: Optional. The type of compression to apply to the snapshot\n written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None.\n Defaults to AUTO, which attempts to pick an appropriate compression\n algorithm for the dataset.\n reader_func: Optional. A function to control how to read data from snapshot\n shards.\n shard_func: Optional. A function to control how to shard data when writing a\n snapshot.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "API to persist the output of the input dataset. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.SqlDataset", "docs": "A `Dataset` consisting of the results from a SQL query.\n\n `SqlDataset` allows a user to read data from the result set of a SQL query.\n For example:\n\n ```python\n dataset = tf.data.experimental.SqlDataset(\"sqlite\", \"/foo/bar.sqlite3\",\n \"SELECT name, age FROM people\",\n (tf.string, tf.int32))\n # Prints the rows of the result set of the above query.\n for element in dataset:\n print(element)\n ```\n ", "desc": "A `Dataset` consisting of the results from a SQL query.", "type": "API"}, {"name": "tf.data.experimental.take_while", "docs": "A transformation that stops dataset iteration based on a `predicate`. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.take_while(...)\n\nArgs:\n predicate: A function that maps a nested structure of tensors (having shapes\n and types defined by `self.output_shapes` and `self.output_types`) to a\n scalar `tf.bool` tensor.\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "A transformation that stops dataset iteration based on a `predicate`. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.TFRecordWriter", "docs": "Writes a dataset to a TFRecord file. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nTo write TFRecords to disk, use `tf.io.TFRecordWriter`. To save and load the contents of a dataset, use `tf.data.experimental.save` and `tf.data.experimental.load`\n\nThe elements of the dataset must be scalar strings. To serialize dataset\nelements as strings, you can use the `tf.io.serialize_tensor` function.\n\n```python\ndataset = tf.data.Dataset.range(3)\ndataset = dataset.map(tf.io.serialize_tensor)\nwriter = tf.data.experimental.TFRecordWriter(\"/path/to/file.tfrecord\")\nwriter.write(dataset)\n```\n\nTo read back the elements, use `TFRecordDataset`.\n\n```python\ndataset = tf.data.TFRecordDataset(\"/path/to/file.tfrecord\")\ndataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64))\n```\n\nTo shard a `dataset` across multiple TFRecord files:\n\n```python\ndataset = ... # dataset to be written\n\ndef reduce_func(key, dataset):\n filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)])\n writer = tf.data.experimental.TFRecordWriter(filename)\n writer.write(dataset.map(lambda _, x: x))\n return tf.data.Dataset.from_tensors(filename)\n\ndataset = dataset.enumerate()\ndataset = dataset.apply(tf.data.experimental.group_by_window(\n lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max\n))\n\n# Iterate through the dataset to trigger data writing.\nfor _ in dataset:\n pass\n```", "desc": "Writes a dataset to a TFRecord file. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.ThreadingOptions", "docs": "Represents options for dataset threading.\n\n You can set the threading options of a dataset through the\n `experimental_threading` property of `tf.data.Options`; the property is\n an instance of `tf.data.ThreadingOptions`.\n\n ```python\n options = tf.data.Options()\n options.threading.private_threadpool_size = 10\n dataset = dataset.with_options(options)\n ```\n ", "desc": "Represents options for dataset threading.", "type": "API"}, {"name": "tf.data.experimental.to_variant", "docs": "Returns a variant representing the given dataset.\n\n Args:\n dataset: A `tf.data.Dataset`.\n\n Returns:\n A scalar `tf.variant` tensor representing the given dataset.\n ", "desc": "Returns a variant representing the given dataset.", "type": "API"}, {"name": "tf.data.experimental.unbatch", "docs": "Splits elements of a dataset into multiple elements on the batch dimension. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.unbatch()`.\n\nFor example, if elements of the dataset are shaped `[B, a0, a1, ...]`,\nwhere `B` may vary for each input element, then for each element in the\ndataset, the unbatched dataset will contain `B` consecutive elements\nof shape `[a0, a1, ...]`.\n\n```python\n# NOTE: The following example uses `{ ... }` to represent the contents\n# of a dataset.\na = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }\n\na.unbatch() == {\n 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'}\n```\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Splits elements of a dataset into multiple elements on the batch dimension. (deprecated)", "type": "API"}, {"name": "tf.data.experimental.unique", "docs": "Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.unique(...)\n\nUse this transformation to produce a dataset that contains one instance of\neach unique element in the input. For example:\n\n```python\ndataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])\n\n# Using `unique()` will drop the duplicate elements.\ndataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 }\n```\n\nReturns:\n A `Dataset` transformation function, which can be passed to\n `tf.data.Dataset.apply`.", "desc": "Creates a `Dataset` from another `Dataset`, discarding duplicates. (deprecated)", "type": "API"}, {"name": "tf.data.FixedLengthRecordDataset", "docs": "A `Dataset` of fixed-length records from one or more binary files.\n\n The `tf.data.FixedLengthRecordDataset` reads fixed length records from binary\n files and creates a dataset where each record becomes an element of the\n dataset. The binary files can have a fixed length header and a fixed length\n footer, which will both be skipped.\n\n For example, suppose we have 2 files \"fixed_length0.bin\" and\n \"fixed_length1.bin\" with the following content:\n\n >>> with open('/tmp/fixed_length0.bin', 'wb') as f:\n ... f.write(b'HEADER012345FOOTER')\n >>> with open('/tmp/fixed_length1.bin', 'wb') as f:\n ... f.write(b'HEADER6789abFOOTER')\n\n We can construct a `FixedLengthRecordDataset` from them as follows:\n\n >>> dataset1 = tf.data.FixedLengthRecordDataset(\n ... filenames=['/tmp/fixed_length0.bin', '/tmp/fixed_length1.bin'],\n ... record_bytes=2, header_bytes=6, footer_bytes=6)\n\n The elements of the dataset are:\n\n >>> for element in dataset1.as_numpy_iterator():\n ... print(element)\n b'01'\n b'23'\n b'45'\n b'67'\n b'89'\n b'ab'\n ", "desc": "A `Dataset` of fixed-length records from one or more binary files.", "type": "API"}, {"name": "tf.data.Iterator", "docs": "Represents an iterator of a `tf.data.Dataset`.\n\n `tf.data.Iterator` is the primary mechanism for enumerating elements of a\n `tf.data.Dataset`. It supports the Python Iterator protocol, which means\n it can be iterated over using a for-loop:\n\n >>> dataset = tf.data.Dataset.range(2)\n >>> for element in dataset:\n ... print(element)\n tf.Tensor(0, shape=(), dtype=int64)\n tf.Tensor(1, shape=(), dtype=int64)\n\n or by fetching individual elements explicitly via `get_next()`:\n\n >>> dataset = tf.data.Dataset.range(2)\n >>> iterator = iter(dataset)\n >>> print(iterator.get_next())\n tf.Tensor(0, shape=(), dtype=int64)\n >>> print(iterator.get_next())\n tf.Tensor(1, shape=(), dtype=int64)\n\n In addition, non-raising iteration is supported via `get_next_as_optional()`,\n which returns the next element (if available) wrapped in a\n `tf.experimental.Optional`.\n\n >>> dataset = tf.data.Dataset.from_tensors(42)\n >>> iterator = iter(dataset)\n >>> optional = iterator.get_next_as_optional()\n >>> print(optional.has_value())\n tf.Tensor(True, shape=(), dtype=bool)\n >>> optional = iterator.get_next_as_optional()\n >>> print(optional.has_value())\n tf.Tensor(False, shape=(), dtype=bool)\n ", "desc": "Represents an iterator of a `tf.data.Dataset`.", "type": "API"}, {"name": "tf.data.IteratorSpec", "docs": "Type specification for `tf.data.Iterator`.\n\n For instance, `tf.data.IteratorSpec` can be used to define a tf.function that\n takes `tf.data.Iterator` as an input argument:\n\n >>> @tf.function(input_signature=[tf.data.IteratorSpec(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])\n ... def square(iterator):\n ... x = iterator.get_next()\n ... return x * x\n >>> dataset = tf.data.Dataset.from_tensors(5)\n >>> iterator = iter(dataset)\n >>> print(square(iterator))\n tf.Tensor(25, shape=(), dtype=int32)\n\n Attributes:\n element_spec: A (nested) structure of `tf.TypeSpec` objects that represents\n the type specification of the iterator elements.\n ", "desc": "Type specification for `tf.data.Iterator`.", "type": "API"}, {"name": "tf.data.Options", "docs": "Represents options for `tf.data.Dataset`.\n\n A `tf.data.Options` object can be, for instance, used to control which static\n optimizations to apply to the input pipeline graph or whether to use\n performance modeling to dynamically tune the parallelism of operations such as\n `tf.data.Dataset.map` or `tf.data.Dataset.interleave`.\n\n The options are set for the entire dataset and are carried over to datasets\n created through tf.data transformations.\n\n The options can be set by constructing an `Options` object and using the\n `tf.data.Dataset.with_options(options)` transformation, which returns a\n dataset with the options set.\n\n >>> dataset = tf.data.Dataset.range(42)\n >>> options = tf.data.Options()\n >>> options.deterministic = False\n >>> dataset = dataset.with_options(options)\n >>> print(dataset.options().deterministic)\n False\n\n Note: A known limitation of the `tf.data.Options` implementation is that the\n options are not preserved across tf.function boundaries. In particular, to\n set options for a dataset that is iterated within a tf.function, the options\n need to be set within the same tf.function.\n ", "desc": "Represents options for `tf.data.Dataset`.", "type": "API"}, {"name": "tf.data.TextLineDataset", "docs": "Creates a `Dataset` comprising lines from one or more text files.\n\n The `tf.data.TextLineDataset` loads text from text files and creates a dataset\n where each line of the files becomes an element of the dataset.\n\n For example, suppose we have 2 files \"text_lines0.txt\" and \"text_lines1.txt\"\n with the following lines:\n\n >>> with open('/tmp/text_lines0.txt', 'w') as f:\n ... f.write('the cow\\n')\n ... f.write('jumped over\\n')\n ... f.write('the moon\\n')\n >>> with open('/tmp/text_lines1.txt', 'w') as f:\n ... f.write('jack and jill\\n')\n ... f.write('went up\\n')\n ... f.write('the hill\\n')\n\n We can construct a TextLineDataset from them as follows:\n\n >>> dataset = tf.data.TextLineDataset(['/tmp/text_lines0.txt',\n ... '/tmp/text_lines1.txt'])\n\n The elements of the dataset are expected to be:\n\n >>> for element in dataset.as_numpy_iterator():\n ... print(element)\n b'the cow'\n b'jumped over'\n b'the moon'\n b'jack and jill'\n b'went up'\n b'the hill'\n ", "desc": "Creates a `Dataset` comprising lines from one or more text files.", "type": "API"}, {"name": "tf.data.TFRecordDataset", "docs": "A `Dataset` comprising records from one or more TFRecord files.\n\n This dataset loads TFRecords from the files as bytes, exactly as they were\n written.`TFRecordDataset` does not do any parsing or decoding on its own.\n Parsing and decoding can be done by applying `Dataset.map` transformations\n after the `TFRecordDataset`.\n\n A minimal example is given below:\n\n >>> import tempfile\n >>> example_path = os.path.join(tempfile.gettempdir(), \"example.tfrecords\")\n >>> np.random.seed(0)\n\n >>> # Write the records to a file.\n ... with tf.io.TFRecordWriter(example_path) as file_writer:\n ... for _ in range(4):\n ... x, y = np.random.random(), np.random.random()\n ...\n ... record_bytes = tf.train.Example(features=tf.train.Features(feature={\n ... \"x\": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),\n ... \"y\": tf.train.Feature(float_list=tf.train.FloatList(value=[y])),\n ... })).SerializeToString()\n ... file_writer.write(record_bytes)\n\n >>> # Read the data back out.\n >>> def decode_fn(record_bytes):\n ... return tf.io.parse_single_example(\n ... # Data\n ... record_bytes,\n ...\n ... # Schema\n ... {\"x\": tf.io.FixedLenFeature([], dtype=tf.float32),\n ... \"y\": tf.io.FixedLenFeature([], dtype=tf.float32)}\n ... )\n\n >>> for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn):\n ... print(\"x = {x:.4f}, y = {y:.4f}\".format(**batch))\n x = 0.5488, y = 0.7152\n x = 0.6028, y = 0.5449\n x = 0.4237, y = 0.6459\n x = 0.4376, y = 0.8918\n ", "desc": "A `Dataset` comprising records from one or more TFRecord files.", "type": "API"}, {"name": "tf.debugging", "docs": "Public API for tf.debugging namespace.\n", "desc": "Public API for tf.debugging namespace.", "type": "API"}, {"name": "tf.debugging.Assert", "docs": "Asserts that the given condition is true.\n\nIf `condition` evaluates to false, print the list of tensors in `data`.\n`summarize` determines how many entries of the tensors to print.\n\nArgs:\n condition: The condition to evaluate.\n data: The tensors to print out when condition is false.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional).\n\nReturns:\n assert_op: An `Operation` that, when executed, raises a\n `tf.errors.InvalidArgumentError` if `condition` is not true.\n @compatibility(eager)\n returns None\n @end_compatibility\n\nRaises:\n @compatibility(TF1)\n When in TF V1 mode (that is, outside `tf.function`) Assert needs a control\n dependency on the output to ensure the assertion executes:\n\n```python\n# Ensure maximum element of x is smaller or equal to 1\nassert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])\nwith tf.control_dependencies([assert_op]):\n ... code using x ...\n```\n\n @end_compatibility\n\n\nNote: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.", "desc": "Asserts that the given condition is true.", "type": "API"}, {"name": "tf.debugging.assert_all_finite", "docs": "Assert that the tensor does not contain any NaN's or Inf's.\n\n Args:\n x: Tensor to check.\n message: Message to log on failure.\n name: A name for this operation (optional).\n\n Returns:\n Same tensor as `x`.\n ", "desc": "Assert that the tensor does not contain any NaN's or Inf's.", "type": "API"}, {"name": "tf.debugging.assert_equal", "docs": "Assert the condition `x == y` holds element-wise.\n\n This Op checks that `x[i] == y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` and `y` are not equal, `message`, as well as the first `summarize`\n entries of `x` and `y` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x == y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x == y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x == y` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_greater", "docs": "Assert the condition `x > y` holds element-wise.\n\n This Op checks that `x[i] > y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not greater than `y` element-wise, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is\n raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_greater\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x > y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x > y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x > y` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_greater_equal", "docs": "Assert the condition `x >= y` holds element-wise.\n\n This Op checks that `x[i] >= y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not greater or equal to `y` element-wise, `message`, as well as the\n first `summarize` entries of `x` and `y` are printed, and\n `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to\n \"assert_greater_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x >= y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x >= y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x >= y` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_integer", "docs": "Assert that `x` is of integer dtype.\n\n If `x` has a non-integer type, `message`, as well as the dtype of `x` are\n printed, and `InvalidArgumentError` is raised.\n\n This can always be checked statically, so this method returns nothing.\n\n Args:\n x: A `Tensor`.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_integer\".\n\n Raises:\n TypeError: If `x.dtype` is not a non-quantized integer type.\n ", "desc": "Assert that `x` is of integer dtype.", "type": "API"}, {"name": "tf.debugging.assert_less", "docs": "Assert the condition `x < y` holds element-wise.\n\n This Op checks that `x[i] < y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not less than `y` element-wise, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError` is\n raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_less\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x < y` is False.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x < y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x < y` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_less_equal", "docs": "Assert the condition `x <= y` holds element-wise.\n\n This Op checks that `x[i] <= y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If `x` is not less or equal than `y` element-wise, `message`, as well as the\n first `summarize` entries of `x` and `y` are printed, and\n `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_less_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x <= y` is False. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x <= y` is False. The check can be performed immediately during eager\n execution or if `x` and `y` are statically known.\n ", "desc": "Assert the condition `x <= y` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_near", "docs": "Assert the condition `x` and `y` are close element-wise.\n\n This Op checks that `x[i] - y[i] < atol + rtol * tf.abs(y[i])` holds for every\n pair of (possibly broadcast) elements of `x` and `y`. If both `x` and `y` are\n empty, this is trivially satisfied.\n\n If any elements of `x` and `y` are not close, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError`\n is raised.\n\n The default `atol` and `rtol` is `10 * eps`, where `eps` is the smallest\n representable positive number such that `1 + eps != 1`. This is about\n `1.2e-6` in `32bit`, `2.22e-15` in `64bit`, and `0.00977` in `16bit`.\n See `numpy.finfo`.\n\n Args:\n x: Float or complex `Tensor`.\n y: Float or complex `Tensor`, same dtype as and broadcastable to `x`.\n rtol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The relative tolerance. Default is `10 * eps`.\n atol: `Tensor`. Same `dtype` as, and broadcastable to, `x`.\n The absolute tolerance. Default is `10 * eps`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_near\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x` and `y` are not close enough.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x != y` is False for any pair of elements in `x` and `y`. The check can\n be performed immediately during eager execution or if `x` and `y` are\n statically known.\n\n @compatibility(numpy)\n Similar to `numpy.testing.assert_allclose`, except tolerance depends on data\n type. This is due to the fact that `TensorFlow` is often used with `32bit`,\n `64bit`, and even `16bit` data.\n @end_compatibility\n ", "desc": "Assert the condition `x` and `y` are close element-wise.", "type": "API"}, {"name": "tf.debugging.assert_negative", "docs": "Assert the condition `x < 0` holds element-wise.\n\n This Op checks that `x[i] < 0` holds for every element of `x`. If `x` is\n empty, this is trivially satisfied.\n\n If `x` is not negative everywhere, `message`, as well as the first `summarize`\n entries of `x` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_negative\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` is all negative. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x[i] < 0` is False. The check can be performed immediately during eager\n execution or if `x` is statically known.\n ", "desc": "Assert the condition `x < 0` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_non_negative", "docs": "Assert the condition `x >= 0` holds element-wise.\n\n This Op checks that `x[i] >= 0` holds for every element of `x`. If `x` is\n empty, this is trivially satisfied.\n\n If `x` is not >= 0 everywhere, `message`, as well as the first `summarize`\n entries of `x` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to\n \"assert_non_negative\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` is all non-negative. This can\n be used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x[i] >= 0` is False. The check can be performed immediately during eager\n execution or if `x` is statically known.\n ", "desc": "Assert the condition `x >= 0` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_non_positive", "docs": "Assert the condition `x <= 0` holds element-wise.\n\n This Op checks that `x[i] <= 0` holds for every element of `x`. If `x` is\n empty, this is trivially satisfied.\n\n If `x` is not <= 0 everywhere, `message`, as well as the first `summarize`\n entries of `x` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to\n \"assert_non_positive\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` is all non-positive. This can\n be used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x[i] <= 0` is False. The check can be performed immediately during eager\n execution or if `x` is statically known.\n ", "desc": "Assert the condition `x <= 0` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_none_equal", "docs": "Assert the condition `x != y` holds for all elements.\n\n This Op checks that `x[i] != y[i]` holds for every pair of (possibly\n broadcast) elements of `x` and `y`. If both `x` and `y` are empty, this is\n trivially satisfied.\n\n If any elements of `x` and `y` are equal, `message`, as well as the first\n `summarize` entries of `x` and `y` are printed, and `InvalidArgumentError`\n is raised.\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to\n \"assert_none_equal\".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x != y` is ever False. This can\n be used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x != y` is False for any pair of elements in `x` and `y`. The check can\n be performed immediately during eager execution or if `x` and `y` are\n statically known.\n ", "desc": "Assert the condition `x != y` holds for all elements.", "type": "API"}, {"name": "tf.debugging.assert_positive", "docs": "Assert the condition `x > 0` holds element-wise.\n\n This Op checks that `x[i] > 0` holds for every element of `x`. If `x` is\n empty, this is trivially satisfied.\n\n If `x` is not positive everywhere, `message`, as well as the first `summarize`\n entries of `x` are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: Numeric `Tensor`.\n message: A string to prefix to the default message.\n summarize: Print this many entries of each tensor.\n name: A name for this operation (optional). Defaults to \"assert_positive\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` is all positive. This can be\n used with `tf.control_dependencies` inside of `tf.function`s to block\n followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x[i] > 0` is False. The check can be performed immediately during eager\n execution or if `x` is statically known.\n ", "desc": "Assert the condition `x > 0` holds element-wise.", "type": "API"}, {"name": "tf.debugging.assert_proper_iterable", "docs": "Static assert that values is a \"proper\" iterable.\n\n `Ops` that expect iterables of `Tensor` can call this to validate input.\n Useful since `Tensor`, `ndarray`, byte/text type are all iterables themselves.\n\n Args:\n values: Object to be checked.\n\n Raises:\n TypeError: If `values` is not iterable or is one of\n `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.\n ", "desc": "Static assert that values is a \"proper\" iterable.", "type": "API"}, {"name": "tf.debugging.assert_rank", "docs": "Assert that `x` has rank equal to `rank`.\n\n This Op checks that the rank of `x` is equal to `rank`.\n\n If `x` has a different rank, `message`, as well as the shape of `x` are\n printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: `Tensor`.\n rank: Scalar integer `Tensor`.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to\n \"assert_rank\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x` does not have rank `rank`. The check can be performed immediately\n during eager execution or if the shape of `x` is statically known.\n ", "desc": "Assert that `x` has rank equal to `rank`.", "type": "API"}, {"name": "tf.debugging.assert_rank_at_least", "docs": "Assert that `x` has rank of at least `rank`.\n\n This Op checks that the rank of `x` is greater or equal to `rank`.\n\n If `x` has a rank lower than `rank`, `message`, as well as the shape of `x`\n are printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: `Tensor`.\n rank: Scalar integer `Tensor`.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to\n \"assert_rank_at_least\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank or higher.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: `x` does not have rank at least `rank`, but the rank\n cannot be statically determined.\n ValueError: If static checks determine `x` has mismatched rank.\n ", "desc": "Assert that `x` has rank of at least `rank`.", "type": "API"}, {"name": "tf.debugging.assert_rank_in", "docs": "Assert that `x` has a rank in `ranks`.\n\n This Op checks that the rank of `x` is in `ranks`.\n\n If `x` has a different rank, `message`, as well as the shape of `x` are\n printed, and `InvalidArgumentError` is raised.\n\n Args:\n x: `Tensor`.\n ranks: `Iterable` of scalar `Tensor` objects.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_rank_in\".\n\n Returns:\n Op raising `InvalidArgumentError` unless rank of `x` is in `ranks`.\n If static checks determine `x` has matching rank, a `no_op` is returned.\n This can be used with `tf.control_dependencies` inside of `tf.function`s\n to block followup computation until the check has executed.\n @compatibility(eager)\n returns None\n @end_compatibility\n\n Raises:\n InvalidArgumentError: `x` does not have rank in `ranks`, but the rank cannot\n be statically determined.\n ValueError: If static checks determine `x` has mismatched rank.\n ", "desc": "Assert that `x` has a rank in `ranks`.", "type": "API"}, {"name": "tf.debugging.assert_same_float_dtype", "docs": "Validate and return float type based on `tensors` and `dtype`.\n\n For ops such as matrix multiplication, inputs and weights must be of the\n same float type. This function validates that all `tensors` are the same type,\n validates that type is `dtype` (if supplied), and returns the type. Type must\n be a floating point type. If neither `tensors` nor `dtype` is supplied,\n the function will return `dtypes.float32`.\n\n Args:\n tensors: Tensors of input values. Can include `None` elements, which will be\n ignored.\n dtype: Expected type.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if neither `tensors` nor `dtype` is supplied, or result is not\n float, or the common type of the inputs is not a floating point type.\n ", "desc": "Validate and return float type based on `tensors` and `dtype`.", "type": "API"}, {"name": "tf.debugging.assert_scalar", "docs": "Asserts that the given `tensor` is a scalar.\n\n This function raises `ValueError` unless it can be certain that the given\n `tensor` is a scalar. `ValueError` is also raised if the shape of `tensor` is\n unknown.\n\n This is always checked statically, so this method returns nothing.\n\n Args:\n tensor: A `Tensor`.\n message: A string to prefix to the default message.\n name: A name for this operation. Defaults to \"assert_scalar\"\n\n Raises:\n ValueError: If the tensor is not scalar (rank 0), or if its shape is\n unknown.\n ", "desc": "Asserts that the given `tensor` is a scalar.", "type": "API"}, {"name": "tf.debugging.assert_shapes", "docs": "Assert tensor shapes and dimension size relationships between tensors.\n\n This Op checks that a collection of tensors shape relationships\n satisfies given constraints.\n\n Example:\n\n >>> n = 10\n >>> q = 3\n >>> d = 7\n >>> x = tf.zeros([n,q])\n >>> y = tf.ones([n,d])\n >>> param = tf.Variable([1.0, 2.0, 3.0])\n >>> scalar = 1.0\n >>> tf.debugging.assert_shapes([\n ... (x, ('N', 'Q')),\n ... (y, ('N', 'D')),\n ... (param, ('Q',)),\n ... (scalar, ()),\n ... ])\n\n >>> tf.debugging.assert_shapes([\n ... (x, ('N', 'D')),\n ... (y, ('N', 'D'))\n ... ])\n Traceback (most recent call last):\n ...\n ValueError: ...\n\n If `x`, `y`, `param` or `scalar` does not have a shape that satisfies\n all specified constraints, `message`, as well as the first `summarize` entries\n of the first encountered violating tensor are printed, and\n `InvalidArgumentError` is raised.\n\n Size entries in the specified shapes are checked against other entries by\n their __hash__, except:\n - a size entry is interpreted as an explicit size if it can be parsed as an\n integer primitive.\n - a size entry is interpreted as *any* size if it is None or '.'.\n\n If the first entry of a shape is `...` (type `Ellipsis`) or '*' that indicates\n a variable number of outer dimensions of unspecified size, i.e. the constraint\n applies to the inner-most dimensions only.\n\n Scalar tensors and specified shapes of length zero (excluding the 'inner-most'\n prefix) are both treated as having a single dimension of size one.\n\n Args:\n shapes: dictionary with (`Tensor` to shape) items, or a list of\n (`Tensor`, shape) tuples. A shape must be an iterable.\n data: The tensors to print out if the condition is False. Defaults to error\n message and first few entries of the violating tensor.\n summarize: Print this many entries of the tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to \"assert_shapes\".\n\n Raises:\n ValueError: If static checks determine any shape constraint is violated.\n ", "desc": "Assert tensor shapes and dimension size relationships between tensors.", "type": "API"}, {"name": "tf.debugging.assert_type", "docs": "Asserts that the given `Tensor` is of the specified type.\n\n This can always be checked statically, so this method returns nothing.\n\n Example:\n\n >>> a = tf.Variable(1.0)\n >>> tf.debugging.assert_type(a, tf_type= tf.float32)\n\n >>> b = tf.constant(21)\n >>> tf.debugging.assert_type(b, tf_type=tf.bool)\n Traceback (most recent call last):\n ...\n TypeError: ...\n\n >>> c = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2],\n ... dense_shape=[3, 4])\n >>> tf.debugging.assert_type(c, tf_type= tf.int32)\n\n Args:\n tensor: A `Tensor`, `SparseTensor` or `tf.Variable` .\n tf_type: A tensorflow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,\n etc).\n message: A string to prefix to the default message.\n name: A name for this operation. Defaults to \"assert_type\"\n\n Raises:\n TypeError: If the tensor's data type doesn't match `tf_type`.\n ", "desc": "Asserts that the given `Tensor` is of the specified type.", "type": "API"}, {"name": "tf.debugging.check_numerics", "docs": "Checks a tensor for NaN and Inf values.\n\n When run, reports an `InvalidArgument` error if `tensor` has any values\n that are not a number (NaN) or infinity (Inf). Otherwise, returns the input\n tensor.\n\n Example usage:\n\n ``` python\n a = tf.Variable(1.0)\n tf.debugging.check_numerics(a, message='')\n\n b = tf.Variable(np.nan)\n try:\n tf.debugging.check_numerics(b, message='Checking b')\n except Exception as e:\n assert \"Checking b : Tensor had NaN values\" in e.message\n\n c = tf.Variable(np.inf)\n try:\n tf.debugging.check_numerics(c, message='Checking c')\n except Exception as e:\n assert \"Checking c : Tensor had Inf values\" in e.message\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n message: A `string`. Prefix of the error message.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Checks a tensor for NaN and Inf values.", "type": "API"}, {"name": "tf.debugging.disable_check_numerics", "docs": "Disable the eager/graph unified numerics checking mechanism.\n\n This method can be used after a call to `tf.debugging.enable_check_numerics()`\n to disable the numerics-checking mechanism that catches infinity and NaN\n values output by ops executed eagerly or in tf.function-compiled graphs.\n\n This method is idempotent. Calling it multiple times has the same effect\n as calling it once.\n\n This method takes effect only on the thread in which it is called.\n ", "desc": "Disable the eager/graph unified numerics checking mechanism.", "type": "API"}, {"name": "tf.debugging.enable_check_numerics", "docs": "Enable tensor numerics checking in an eager/graph unified fashion.\n\n The numerics checking mechanism will cause any TensorFlow eager execution or\n graph execution to error out as soon as an op's output tensor contains\n infinity or NaN.\n\n This method is idempotent. Calling it multiple times has the same effect\n as calling it once.\n\n This method takes effect only on the thread in which it is called.\n\n When a op's float-type output tensor contains any Infinity or NaN, an\n `tf.errors.InvalidArgumentError` will be thrown, with an error message that\n reveals the following information:\n - The type of the op that generated the tensor with bad numerics.\n - Data type (dtype) of the tensor.\n - Shape of the tensor (to the extent known at the time of eager execution\n or graph construction).\n - Name of the containing graph (if available).\n - (Graph mode only): The stack trace of the intra-graph op's creation,\n with a stack-height limit and a path-length limit for visual clarity.\n The stack frames that belong to the user's code (as opposed to\n tensorflow's internal code) are highlighted with a text arrow (\"->\").\n - (Eager mode only): How many of the offending tensor's elements are\n `Infinity` and `NaN`, respectively.\n\n Once enabled, the check-numerics mechanism can be disabled by using\n `tf.debugging.disable_check_numerics()`.\n\n Example usage:\n\n 1. Catching infinity during the execution of a `tf.function` graph:\n\n ```py\n import tensorflow as tf\n\n tf.debugging.enable_check_numerics()\n\n @tf.function\n def square_log_x_plus_1(x):\n v = tf.math.log(x + 1)\n return tf.math.square(v)\n\n x = -1.0\n\n # When the following line runs, a function graph will be compiled\n # from the Python function `square_log_x_plus_1()`. Due to the\n # `enable_check_numerics()` call above, the graph will contain\n # numerics checking ops that will run during the function graph's\n # execution. The function call generates an -infinity when the Log\n # (logarithm) op operates on the output tensor of the Add op.\n # The program errors out at this line, printing an error message.\n y = square_log_x_plus_1(x)\n z = -y\n ```\n\n 2. Catching NaN during eager execution:\n\n ```py\n import numpy as np\n import tensorflow as tf\n\n tf.debugging.enable_check_numerics()\n\n x = np.array([[0.0, -1.0], [4.0, 3.0]])\n\n # The following line executes the Sqrt op eagerly. Due to the negative\n # element in the input array, a NaN is generated. Due to the\n # `enable_check_numerics()` call above, the program errors immediately\n # at this line, printing an error message.\n y = tf.math.sqrt(x)\n z = tf.matmul(y, y)\n ```\n\n NOTE: If your code is running on TPUs, be sure to call\n `tf.config.set_soft_device_placement(True)` before calling\n `tf.debugging.enable_check_numerics()` as this API uses automatic outside\n compilation on TPUs. For example:\n\n ```py\n tf.config.set_soft_device_placement(True)\n tf.debugging.enable_check_numerics()\n\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n strategy = tf.distribute.TPUStrategy(resolver)\n with strategy.scope():\n # ...\n ```\n\n Args:\n stack_height_limit: Limit to the height of the printed stack trace.\n Applicable only to ops in `tf.function`s (graphs).\n path_length_limit: Limit to the file path included in the printed stack\n trace. Applicable only to ops in `tf.function`s (graphs).\n ", "desc": "Enable tensor numerics checking in an eager/graph unified fashion.", "type": "API"}, {"name": "tf.debugging.experimental", "docs": "Public API for tf.debugging.experimental namespace.\n", "desc": "Public API for tf.debugging.experimental namespace.", "type": "API"}, {"name": "tf.debugging.experimental.disable_dump_debug_info", "docs": "Disable the currently-enabled debugging dumping.\n\n If the `enable_dump_debug_info()` method under the same Python namespace\n has been invoked before, calling this method disables it. If no call to\n `enable_dump_debug_info()` has been made, calling this method is a no-op.\n Calling this method more than once is idempotent.\n ", "desc": "Disable the currently-enabled debugging dumping.", "type": "API"}, {"name": "tf.debugging.experimental.enable_dump_debug_info", "docs": "Enable dumping debugging information from a TensorFlow program.\n\n The debugging information is dumped to a directory on the file system\n specified as `dump_root`.\n\n The dumped debugging information can be ingested by debugger UIs.\n\n The files in the dump directory contain the following information:\n - TensorFlow Function construction (e.g., compilation of Python functions\n decorated with @tf.function), the op types, names (if available), context,\n the input and output tensors, and the associated stack traces.\n - Execution of TensorFlow operations (ops) and Functions and their stack\n traces, op types, names (if available) and contexts. In addition,\n depending on the value of the `tensor_debug_mode` argument (see Args\n section below), the value(s) of the output tensors or more concise\n summaries of the tensor values will be dumped.\n - A snapshot of Python source files involved in the execution of the\n TensorFlow program.\n\n Once enabled, the dumping can be disabled with the corresponding\n `disable_dump_debug_info()` method under the same Python namespace.\n Calling this method more than once with the same `dump_root` is idempotent.\n Calling this method more than once with different `tensor_debug_mode`s\n leads to a `ValueError`.\n Calling this method more than once with different `circular_buffer_size`s\n leads to a `ValueError`.\n Calling this method with a different `dump_root` abolishes the\n previously-enabled `dump_root`.\n\n Usage example:\n\n ```py\n tf.debugging.experimental.enable_dump_debug_info('/tmp/my-tfdbg-dumps')\n\n # Code to build, train and run your TensorFlow model...\n ```\n\n NOTE: If your code is running on TPUs, be sure to call\n `tf.config.set_soft_device_placement(True)` before calling\n `tf.debugging.experimental.enable_dump_debug_info()` as this API uses\n automatic outside compilation on TPUs. For example:\n\n ```py\n tf.config.set_soft_device_placement(True)\n tf.debugging.experimental.enable_dump_debug_info(\n logdir, tensor_debug_mode=\"FULL_HEALTH\")\n\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n strategy = tf.distribute.TPUStrategy(resolver)\n with strategy.scope():\n # ...\n ```\n\n Args:\n dump_root: The directory path where the dumping information will be written.\n tensor_debug_mode: Debug mode for tensor values, as a string.\n The currently supported options are:\n - \"NO_TENSOR\": (Default) Only traces the output tensors of all executed\n ops (including those executed eagerly at the Python level or as a part\n of a TensorFlow graph) and functions, while not extracting any\n information from the values of the tensors.\n - \"CURT_HEALTH\": For each floating-dtype tensor (e.g., tensors of dtypes\n such as `float32`, `float64` and `bfloat16`), extracts a binary bit\n indicating whether it contains any -infinity, +infinity or NaN.\n - \"CONCISE_HEALTH\": For each floating-dtype tensor, extract total\n element count, and counts of -infinity, +infinity and NaN elements.\n - \"FULL_HEALTH\": For each floating-dtype tensor, extracts the dtype,\n rank (number of dimensions), total element count, and counts of\n -infinity, +infinity and NaN elements.\n - \"SHAPE\": For each tensor (regardless of dtype), extracts its dtype,\n rank, total element count and shape.\n circular_buffer_size: Size of the circular buffers for execution events.\n These circular buffers are designed to reduce the overhead of debugging\n dumping. They hold the most recent debug events concerning eager execution\n of ops and `tf.function`s and traces of tensor values computed inside\n `tf.function`s. They are written to the file system only when the proper\n flushing method is called (see description of return values below).\n Expected to be an integer. If <= 0, the circular-buffer behavior will be\n disabled, i.e., the execution debug events will be written to the file\n writers in the same way as non-execution events such as op creations and\n source-file snapshots.\n op_regex: Dump data from only the tensors from op types that matches to the\n regular expression (through Python's `re.match()`).\n \"Op type\" refers to the names of the TensorFlow operations (e.g.,\n \"MatMul\", \"LogSoftmax\"), which may repeat in a TensorFlow\n function. It does *not* refer to the names of nodes (e.g.,\n \"dense/MatMul\", \"dense_1/MatMul_1\") which are unique within a function.\n - Example 1: Dump tensor data from only MatMul and Relu ops\n `op_regex=\"^(MatMul|Relu)$\"`.\n - Example 2: Dump tensors from all ops *except* Relu:\n `op_regex=\"(?!^Relu$)\"`.\n This filter operates in a logical AND relation with `tensor_dtypes`.\n tensor_dtypes: Dump data from only the tensors of which the specified\n dtypes. This optional argument can be in any of the following format:\n - a list or tuple of `DType` objects or strings that can be converted\n to `DType` objects via `tf.as_dtype()`. Examples:\n - `tensor_dtype=[tf.float32, tf.float64]`,\n - `tensor_dtype=[\"float32\", \"float64\"]`,\n - `tensor_dtypes=(tf.int32, tf.bool)`,\n - `tensor_dtypes=(\"int32\", \"bool\")`\n - a callable that takes a single `DType` argument and returns a Python\n `boolean` indicating whether the dtype is to be included in the data\n dumping. Examples:\n - `tensor_dtype=lambda dtype: dtype.is_integer`.\n This filter operates in a logical AND relation with `op_regex`.\n Returns:\n A DebugEventsWriter instance used by the dumping callback. The caller\n may use its flushing methods, including `FlushNonExecutionFiles()` and\n `FlushExecutionFiles()`.\n ", "desc": "Enable dumping debugging information from a TensorFlow program.", "type": "API"}, {"name": "tf.debugging.get_log_device_placement", "docs": "Get if device placements are logged.\n\n Returns:\n If device placements are logged.\n ", "desc": "Get if device placements are logged.", "type": "API"}, {"name": "tf.debugging.is_numeric_tensor", "docs": "Returns `True` if the elements of `tensor` are numbers.\n\n Specifically, returns `True` if the dtype of `tensor` is one of the following:\n\n * `tf.float16`\n * `tf.float32`\n * `tf.float64`\n * `tf.int8`\n * `tf.int16`\n * `tf.int32`\n * `tf.int64`\n * `tf.uint8`\n * `tf.uint16`\n * `tf.uint32`\n * `tf.uint64`\n * `tf.qint8`\n * `tf.qint16`\n * `tf.qint32`\n * `tf.quint8`\n * `tf.quint16`\n * `tf.complex64`\n * `tf.complex128`\n * `tf.bfloat16`\n\n Returns `False` if `tensor` is of a non-numeric type or if `tensor` is not\n a `tf.Tensor` object.\n ", "desc": "Returns `True` if the elements of `tensor` are numbers.", "type": "API"}, {"name": "tf.debugging.set_log_device_placement", "docs": "Turns logging for device placement decisions on or off.\n\n Operations execute on a particular device, producing and consuming tensors on\n that device. This may change the performance of the operation or require\n TensorFlow to copy data to or from an accelerator, so knowing where operations\n execute is useful for debugging performance issues.\n\n For more advanced profiling, use the [TensorFlow\n profiler](https://www.tensorflow.org/guide/profiler).\n\n Device placement for operations is typically controlled by a `tf.device`\n scope, but there are exceptions, for example operations on a `tf.Variable`\n which follow the initial placement of the variable. Turning off soft device\n placement (with `tf.config.set_soft_device_placement`) provides more explicit\n control.\n\n >>> tf.debugging.set_log_device_placement(True)\n >>> tf.ones([])\n >>> # [...] op Fill in device /job:localhost/replica:0/task:0/device:GPU:0\n >>> with tf.device(\"CPU\"):\n ... tf.ones([])\n >>> # [...] op Fill in device /job:localhost/replica:0/task:0/device:CPU:0\n >>> tf.debugging.set_log_device_placement(False)\n\n Turning on `tf.debugging.set_log_device_placement` also logs the placement of\n ops inside `tf.function` when the function is called.\n\n Args:\n enabled: Whether to enabled device placement logging.\n ", "desc": "Turns logging for device placement decisions on or off.", "type": "API"}, {"name": "tf.device", "docs": "Specifies the device for ops created/executed in this context.\n\n This function specifies the device to be used for ops created/executed in a\n particular context. Nested contexts will inherit and also create/execute\n their ops on the specified device. If a specific device is not required,\n consider not using this function so that a device can be automatically\n assigned. In general the use of this function is optional. `device_name` can\n be fully specified, as in \"/job:worker/task:1/device:cpu:0\", or partially\n specified, containing only a subset of the \"/\"-separated fields. Any fields\n which are specified will override device annotations from outer scopes.\n\n For example:\n\n ```python\n with tf.device('/job:foo'):\n # ops created here have devices with /job:foo\n with tf.device('/job:bar/task:0/device:gpu:2'):\n # ops created here have the fully specified device above\n with tf.device('/device:gpu:1'):\n # ops created here have the device '/job:foo/device:gpu:1'\n ```\n\n Args:\n device_name: The device name to use in the context.\n\n Returns:\n A context manager that specifies the default device to use for newly\n created ops.\n\n Raises:\n RuntimeError: If a function is passed in.\n ", "desc": "Specifies the device for ops created/executed in this context.", "type": "API"}, {"name": "tf.DeviceSpec", "docs": "Represents a (possibly partial) specification for a TensorFlow device.\n\n `DeviceSpec`s are used throughout TensorFlow to describe where state is stored\n and computations occur. Using `DeviceSpec` allows you to parse device spec\n strings to verify their validity, merge them or compose them programmatically.\n\n Example:\n\n ```python\n # Place the operations on device \"GPU:0\" in the \"ps\" job.\n device_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n with tf.device(device_spec.to_string()):\n # Both my_var and squared_var will be placed on /job:ps/device:GPU:0.\n my_var = tf.Variable(..., name=\"my_variable\")\n squared_var = tf.square(my_var)\n ```\n\n With eager execution disabled (by default in TensorFlow 1.x and by calling\n disable_eager_execution() in TensorFlow 2.x), the following syntax\n can be used:\n\n ```python\n tf.compat.v1.disable_eager_execution()\n\n # Same as previous\n device_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n # No need of .to_string() method.\n with tf.device(device_spec):\n my_var = tf.Variable(..., name=\"my_variable\")\n squared_var = tf.square(my_var)\n ```\n\n If a `DeviceSpec` is partially specified, it will be merged with other\n `DeviceSpec`s according to the scope in which it is defined. `DeviceSpec`\n components defined in inner scopes take precedence over those defined in\n outer scopes.\n\n ```python\n gpu0_spec = DeviceSpec(job=\"ps\", device_type=\"GPU\", device_index=0)\n with tf.device(DeviceSpec(job=\"train\").to_string()):\n with tf.device(gpu0_spec.to_string()):\n # Nodes created here will be assigned to /job:ps/device:GPU:0.\n with tf.device(DeviceSpec(device_type=\"GPU\", device_index=1).to_string()):\n # Nodes created here will be assigned to /job:train/device:GPU:1.\n ```\n\n A `DeviceSpec` consists of 5 components -- each of\n which is optionally specified:\n\n * Job: The job name.\n * Replica: The replica index.\n * Task: The task index.\n * Device type: The device type string (e.g. \"CPU\" or \"GPU\").\n * Device index: The device index.\n ", "desc": "Represents a (possibly partial) specification for a TensorFlow device.", "type": "API"}, {"name": "tf.distribute", "docs": "Library for running a computation across multiple devices.\n\nThe intent of this library is that you can write an algorithm in a stylized way\nand it will be usable with a variety of different `tf.distribute.Strategy`\nimplementations. Each descendant will implement a different strategy for\ndistributing the algorithm across multiple devices/machines. Furthermore, these\nchanges can be hidden inside the specific layers and other library classes that\nneed special treatment to run in a distributed setting, so that most users'\nmodel definition code can run unchanged. The `tf.distribute.Strategy` API works\nthe same way with eager and graph execution.\n\n*Guides*\n\n* [TensorFlow v2.x](https://www.tensorflow.org/guide/distributed_training)\n* [TensorFlow v1.x](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb)\n\n*Tutorials*\n\n* [Distributed Training Tutorials](https://www.tensorflow.org/tutorials/distribute/)\n\n The tutorials cover how to use `tf.distribute.Strategy` to do distributed\n training with native Keras APIs, custom training loops,\n and Estimator APIs. They also cover how to save/load model when using\n `tf.distribute.Strategy`.\n\n*Glossary*\n\n* _Data parallelism_ is where we run multiple copies of the model\n on different slices of the input data. This is in contrast to\n _model parallelism_ where we divide up a single copy of a model\n across multiple devices.\n Note: we only support data parallelism for now, but\n hope to add support for model parallelism in the future.\n* A _device_ is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that\n TensorFlow can run operations on (see e.g. `tf.device`). You may have multiple\n devices on a single machine, or be connected to devices on multiple\n machines. Devices used to run computations are called _worker devices_.\n Devices used to store variables are _parameter devices_. For some strategies,\n such as `tf.distribute.MirroredStrategy`, the worker and parameter devices\n will be the same (see mirrored variables below). For others they will be\n different. For example, `tf.distribute.experimental.CentralStorageStrategy`\n puts the variables on a single device (which may be a worker device or may be\n the CPU), and `tf.distribute.experimental.ParameterServerStrategy` puts the\n variables on separate machines called _parameter servers_ (see below).\n* A _replica_ is one copy of the model, running on one slice of the\n input data. Right now each replica is executed on its own\n worker device, but once we add support for model parallelism\n a replica may span multiple worker devices.\n* A _host_ is the CPU device on a machine with worker devices, typically\n used for running input pipelines.\n* A _worker_ is defined to be the physical machine(s) containing the physical\n devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A\n worker may contain one or more replicas, but contains at least one\n replica. Typically one worker will correspond to one machine, but in the case\n of very large models with model parallelism, one worker may span multiple\n machines. We typically run one input pipeline per worker, feeding all the\n replicas on that worker.\n* _Synchronous_, or more commonly _sync_, training is where the updates from\n each replica are aggregated together before updating the model variables. This\n is in contrast to _asynchronous_, or _async_ training, where each replica\n updates the model variables independently. You may also have replicas\n partitioned into groups which are in sync within each group but async between\n groups.\n* _Parameter servers_: These are machines that hold a single copy of\n parameters/variables, used by some strategies (right now just\n `tf.distribute.experimental.ParameterServerStrategy`). All replicas that want\n to operate on a variable retrieve it at the beginning of a step and send an\n update to be applied at the end of the step. These can in principle support\n either sync or async training, but right now we only have support for async\n training with parameter servers. Compare to\n `tf.distribute.experimental.CentralStorageStrategy`, which puts all variables\n on a single device on the same machine (and does sync training), and\n `tf.distribute.MirroredStrategy`, which mirrors variables to multiple devices\n (see below).\n\n* _Replica context_ vs. _Cross-replica context_ vs _Update context_\n\n A _replica context_ applies\n when you execute the computation function that was called with `strategy.run`.\n Conceptually, you're in replica context when executing the computation\n function that is being replicated.\n\n An _update context_ is entered in a `tf.distribute.StrategyExtended.update`\n call.\n\n An _cross-replica context_ is entered when you enter a `strategy.scope`. This\n is useful for calling `tf.distribute.Strategy` methods which operate across\n the replicas (like `reduce_to()`). By default you start in a _replica context_\n (the \"default single _replica context_\") and then some methods can switch you\n back and forth.\n\n* _Distributed value_: Distributed value is represented by the base class\n `tf.distribute.DistributedValues`. `tf.distribute.DistributedValues` is useful\n to represent values on multiple devices, and it contains a map from replica id\n to values. Two representative kinds of `tf.distribute.DistributedValues` are\n \"PerReplica\" and \"Mirrored\" values.\n\n \"PerReplica\" values exist on the worker\n devices, with a different value for each replica. They are produced by\n iterating through a distributed dataset returned by\n `tf.distribute.Strategy.experimental_distribute_dataset` and\n `tf.distribute.Strategy.distribute_datasets_from_function`. They\n are also the typical result returned by\n `tf.distribute.Strategy.run`.\n\n \"Mirrored\" values are like \"PerReplica\" values, except we know that the value\n on all replicas are the same. We can safely read a \"Mirrored\" value in a\n cross-replica context by using the value on any replica.\n\n* _Unwrapping_ and _merging_: Consider calling a function `fn` on multiple\n replicas, like `strategy.run(fn, args=[w])` with an\n argument `w` that is a `tf.distribute.DistributedValues`. This means `w` will\n have a map taking replica id `0` to `w0`, replica id `1` to `w1`, etc.\n `strategy.run()` unwraps `w` before calling `fn`, so it calls `fn(w0)` on\n device `d0`, `fn(w1)` on device `d1`, etc. It then merges the return\n values from `fn()`, which leads to one common object if the returned values\n are the same object from every replica, or a `DistributedValues` object\n otherwise.\n\n* _Reductions_ and _all-reduce_: A _reduction_ is a method of aggregating\n multiple values into one value, like \"sum\" or \"mean\". If a strategy is doing\n sync training, we will perform a reduction on the gradients to a parameter\n from all replicas before applying the update. _All-reduce_ is an algorithm for\n performing a reduction on values from multiple devices and making the result\n available on all of those devices.\n\n* _Mirrored variables_: These are variables that are created on multiple\n devices, where we keep the variables in sync by applying the same\n updates to every copy. Mirrored variables are created with\n `tf.Variable(...synchronization=tf.VariableSynchronization.ON_WRITE...)`.\n Normally they are only used in synchronous training.\n\n* _SyncOnRead variables_\n\n _SyncOnRead variables_ are created by\n `tf.Variable(...synchronization=tf.VariableSynchronization.ON_READ...)`, and\n they are created on multiple devices. In replica context, each\n component variable on the local replica can perform reads and writes without\n synchronization with each other. When the\n _SyncOnRead variable_ is read in cross-replica context, the values from\n component variables are aggregated and returned.\n\n _SyncOnRead variables_ bring a lot of custom configuration difficulty to the\n underlying logic, so we do not encourage users to instantiate and use\n _SyncOnRead variable_ on their own. We have mainly used _SyncOnRead\n variables_ for use cases such as batch norm and metrics. For performance\n reasons, we often don't need to keep these statistics in sync every step and\n they can be accumulated on each replica independently. The only time we want\n to sync them is reporting or checkpointing, which typically happens in\n cross-replica context. _SyncOnRead variables_ are also often used by advanced\n users who want to control when variable values are aggregated. For example,\n users sometimes want to maintain gradients independently on each replica for a\n couple of steps without aggregation.\n\n* _Distribute-aware layers_\n\n Layers are generally called in a replica context, except when defining a\n Keras functional model. `tf.distribute.in_cross_replica_context` will let you\n determine which case you are in. If in a replica context,\n the `tf.distribute.get_replica_context` function will return the default\n replica context outside a strategy scope, `None` within a strategy scope, and\n a `tf.distribute.ReplicaContext` object inside a strategy scope and within a\n `tf.distribute.Strategy.run` function. The `ReplicaContext` object has an\n `all_reduce` method for aggregating across all replicas.\n\n\nNote that we provide a default version of `tf.distribute.Strategy` that is\nused when no other strategy is in scope, that provides the same API with\nreasonable default behavior.\n\n", "desc": "Library for running a computation across multiple devices.", "type": "API"}, {"name": "tf.distribute.cluster_resolver", "docs": "Library imports for ClusterResolvers.\n\n This library contains all implementations of ClusterResolvers.\n ClusterResolvers are a way of specifying cluster information for distributed\n execution. Built on top of existing `ClusterSpec` framework, ClusterResolvers\n are a way for TensorFlow to communicate with various cluster management\n systems (e.g. GCE, AWS, etc...).\n\n", "desc": "Library imports for ClusterResolvers.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.ClusterResolver", "docs": "Abstract class for all implementations of ClusterResolvers.\n\n This defines the skeleton for all implementations of ClusterResolvers.\n ClusterResolvers are a way for TensorFlow to communicate with various cluster\n management systems (e.g. GCE, AWS, etc...) and gives TensorFlow necessary\n information to set up distributed training.\n\n By letting TensorFlow communicate with these systems, we will be able to\n automatically discover and resolve IP addresses for various TensorFlow\n workers. This will eventually allow us to automatically recover from\n underlying machine failures and scale TensorFlow worker clusters up and down.\n\n Note to Implementors of `tf.distribute.cluster_resolver.ClusterResolver`\n subclass: In addition to these abstract methods, when task_type, task_id, and\n rpc_layer attributes are applicable, you should also implement them either as\n properties with getters or setters, or directly set the attributes\n `self._task_type`, `self._task_id`, or `self._rpc_layer` so the base class'\n getters and setters are used. See\n `tf.distribute.cluster_resolver.SimpleClusterResolver.__init__` for an\n example.\n\n In general, multi-client tf.distribute strategies such as\n `tf.distribute.experimental.MultiWorkerMirroredStrategy` require task_type and\n task_id properties to be available in the `ClusterResolver` they are using. On\n the other hand, these concepts are not applicable in single-client strategies,\n such as `tf.distribute.experimental.TPUStrategy`, because the program is only\n expected to be run on one task, so there should not be a need to have code\n branches according to task type and task id.\n\n - task_type is the name of the server's current named job (e.g. 'worker',\n 'ps' in a distributed parameterized training job).\n - task_id is the ordinal index of the server within the task type.\n - rpc_layer is the protocol used by TensorFlow to communicate with other\n TensorFlow servers in a distributed environment.\n ", "desc": "Abstract class for all implementations of ClusterResolvers.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.GCEClusterResolver", "docs": "ClusterResolver for Google Compute Engine.\n\n This is an implementation of cluster resolvers for the Google Compute Engine\n instance group platform. By specifying a project, zone, and instance group,\n this will retrieve the IP address of all the instances within the instance\n group and return a ClusterResolver object suitable for use for distributed\n TensorFlow.\n\n Note: this cluster resolver cannot retrieve `task_type`, `task_id` or\n `rpc_layer`. To use it with some distribution strategies like\n `tf.distribute.experimental.MultiWorkerMirroredStrategy`, you will need to\n specify `task_type` and `task_id` in the constructor.\n\n Usage example with tf.distribute.Strategy:\n\n ```Python\n # On worker 0\n cluster_resolver = GCEClusterResolver(\"my-project\", \"us-west1\",\n \"my-instance-group\",\n task_type=\"worker\", task_id=0)\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = GCEClusterResolver(\"my-project\", \"us-west1\",\n \"my-instance-group\",\n task_type=\"worker\", task_id=1)\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "ClusterResolver for Google Compute Engine.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.KubernetesClusterResolver", "docs": "ClusterResolver for Kubernetes.\n\n This is an implementation of cluster resolvers for Kubernetes. When given the\n the Kubernetes namespace and label selector for pods, we will retrieve the\n pod IP addresses of all running pods matching the selector, and return a\n ClusterSpec based on that information.\n\n Note: it cannot retrieve `task_type`, `task_id` or `rpc_layer`. To use it\n with some distribution strategies like\n `tf.distribute.experimental.MultiWorkerMirroredStrategy`, you will need to\n specify `task_type` and `task_id` by setting these attributes.\n\n Usage example with tf.distribute.Strategy:\n\n ```Python\n # On worker 0\n cluster_resolver = KubernetesClusterResolver(\n {\"worker\": [\"job-name=worker-cluster-a\", \"job-name=worker-cluster-b\"]})\n cluster_resolver.task_type = \"worker\"\n cluster_resolver.task_id = 0\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = KubernetesClusterResolver(\n {\"worker\": [\"job-name=worker-cluster-a\", \"job-name=worker-cluster-b\"]})\n cluster_resolver.task_type = \"worker\"\n cluster_resolver.task_id = 1\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "ClusterResolver for Kubernetes.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.SimpleClusterResolver", "docs": "Simple implementation of ClusterResolver that accepts all attributes.\n\n Please see the base class for documentation of arguments of its constructor.\n\n It is useful if you want to specify some or all attributes.\n\n Usage example with `tf.distribute.Strategy`:\n\n ```Python\n cluster = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\"]})\n\n # On worker 0\n cluster_resolver = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=0,\n num_accelerators={\"GPU\": 8},\n rpc_layer=\"grpc\")\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n\n # On worker 1\n cluster_resolver = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=1,\n num_accelerators={\"GPU\": 8},\n rpc_layer=\"grpc\")\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=cluster_resolver)\n ```\n ", "desc": "Simple implementation of ClusterResolver that accepts all attributes.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.SlurmClusterResolver", "docs": "ClusterResolver for system with Slurm workload manager.\n\n This is an implementation of ClusterResolver for Slurm clusters. This allows\n the specification of jobs and task counts, number of tasks per node, number\n of GPUs on each node and number of GPUs for each task. It retrieves system\n attributes by Slurm environment variables, resolves allocated computing node\n names, constructs a cluster and returns a ClusterResolver object which can be\n used for distributed TensorFlow.\n ", "desc": "ClusterResolver for system with Slurm workload manager.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.TFConfigClusterResolver", "docs": "Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar.\n\n This is an implementation of cluster resolvers when using TF_CONFIG to set\n information about the cluster. The cluster spec returned will be\n initialized from the TF_CONFIG environment variable.\n\n An example to set TF_CONFIG is:\n\n ```Python\n os.environ['TF_CONFIG'] = json.dumps({\n 'cluster': {\n 'worker': [\"localhost:12345\", \"localhost:23456\"]\n },\n 'task': {'type': 'worker', 'index': 0}\n })\n ```\n\n However, sometimes the container orchestration framework will set TF_CONFIG\n for you. In this case, you can just create an instance without passing in any\n arguments. You can find an example here to let Kuburnetes set TF_CONFIG for\n you: https://github.com/tensorflow/ecosystem/tree/master/kubernetes. Then you\n can use it with `tf.distribute.Strategy` as:\n\n ```Python\n # `TFConfigClusterResolver` is already the default one in the following\n # strategy.\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(\n cluster_resolver=TFConfigClusterResolver())\n ```\n ", "desc": "Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.TPUClusterResolver", "docs": "Cluster Resolver for Google Cloud TPUs.\n\n This is an implementation of cluster resolvers for the Google Cloud TPU\n service.\n\n TPUClusterResolver supports the following distinct environments:\n Google Compute Engine\n Google Kubernetes Engine\n Google internal\n\n It can be passed into `tf.distribute.TPUStrategy` to support TF2 training on\n Cloud TPUs.\n ", "desc": "Cluster Resolver for Google Cloud TPUs.", "type": "API"}, {"name": "tf.distribute.cluster_resolver.UnionResolver", "docs": "Performs a union on underlying ClusterResolvers.\n\n This class performs a union given two or more existing ClusterResolvers. It\n merges the underlying ClusterResolvers, and returns one unified ClusterSpec\n when cluster_spec is called. The details of the merge function is\n documented in the cluster_spec function.\n\n For additional ClusterResolver properties such as task type, task index,\n rpc layer, environment, etc..., we will return the value from the first\n ClusterResolver in the union.\n\n An example to combine two cluster resolvers:\n\n ```Python\n cluster_0 = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\"]})\n cluster_resolver_0 = SimpleClusterResolver(cluster, task_type=\"worker\",\n task_id=0,\n rpc_layer=\"grpc\")\n\n cluster_1 = tf.train.ClusterSpec({\"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n cluster_resolver_1 = SimpleClusterResolver(cluster, task_type=\"ps\",\n task_id=0,\n rpc_layer=\"grpc\")\n\n # Its task type would be \"worker\".\n cluster_resolver = UnionClusterResolver(cluster_resolver_0,\n cluster_resolver_1)\n ```\n\n An example to override the number of GPUs in a TFConfigClusterResolver\n instance:\n\n ```Python\n tf_config = TFConfigClusterResolver()\n gpu_override = SimpleClusterResolver(tf_config.cluster_spec(),\n num_accelerators={\"GPU\": 1})\n cluster_resolver = UnionResolver(gpu_override, tf_config)\n ```\n ", "desc": "Performs a union on underlying ClusterResolvers.", "type": "API"}, {"name": "tf.distribute.CrossDeviceOps", "docs": "Base class for cross-device reduction and broadcasting algorithms.\n\n The main purpose of this class is to be passed to\n `tf.distribute.MirroredStrategy` in order to choose among different cross\n device communication implementations. Prefer using the methods of\n `tf.distribute.Strategy` instead of the ones of this class.\n\n Implementations:\n * `tf.distribute.ReductionToOneDevice`\n * `tf.distribute.NcclAllReduce`\n * `tf.distribute.HierarchicalCopyAllReduce`\n ", "desc": "Base class for cross-device reduction and broadcasting algorithms.", "type": "API"}, {"name": "tf.distribute.DistributedDataset", "docs": "Represents a dataset distributed among devices and machines.\n\n A `tf.distribute.DistributedDataset` could be thought of as a \"distributed\"\n dataset. When you use `tf.distribute` API to scale training to multiple\n devices or machines, you also need to distribute the input data, which leads\n to a `tf.distribute.DistributedDataset` instance, instead of a\n `tf.data.Dataset` instance in the non-distributed case. In TF 2.x,\n `tf.distribute.DistributedDataset` objects are Python iterables.\n\n Note: `tf.distribute.DistributedDataset` instances are *not* of type\n `tf.data.Dataset`. It only supports two usages we will mention below:\n iteration and `element_spec`. We don't support any other APIs to transform or\n inspect the dataset.\n\n There are two APIs to create a `tf.distribute.DistributedDataset` object:\n `tf.distribute.Strategy.experimental_distribute_dataset(dataset)`and\n `tf.distribute.Strategy.distribute_datasets_from_function(dataset_fn)`.\n *When to use which?* When you have a `tf.data.Dataset` instance, and the\n regular batch splitting (i.e. re-batch the input `tf.data.Dataset` instance\n with a new batch size that is equal to the global batch size divided by the\n number of replicas in sync) and autosharding (i.e. the\n `tf.data.experimental.AutoShardPolicy` options) work for you, use the former\n API. Otherwise, if you are *not* using a canonical `tf.data.Dataset` instance,\n or you would like to customize the batch splitting or sharding, you can wrap\n these logic in a `dataset_fn` and use the latter API. Both API handles\n prefetch to device for the user. For more details and examples, follow the\n links to the APIs.\n\n\n There are two main usages of a `DistributedDataset` object:\n\n 1. Iterate over it to generate the input for a single device or multiple\n devices, which is a `tf.distribute.DistributedValues` instance. To do this,\n you can:\n\n * use a pythonic for-loop construct:\n\n >>> global_batch_size = 4\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(4).batch(global_batch_size)\n >>> dist_dataset = strategy.experimental_distribute_dataset(dataset)\n >>> @tf.function\n ... def train_step(input):\n ... features, labels = input\n ... return labels - 0.3 * features\n >>> for x in dist_dataset:\n ... # train_step trains the model using the dataset elements\n ... loss = strategy.run(train_step, args=(x,))\n ... print(\"Loss is\", loss)\n Loss is PerReplica:{\n 0: tf.Tensor(\n [[0.7]\n [0.7]], shape=(2, 1), dtype=float32),\n 1: tf.Tensor(\n [[0.7]\n [0.7]], shape=(2, 1), dtype=float32)\n }\n\n Placing the loop inside a `tf.function` will give a performance boost.\n However `break` and `return` are currently not supported if the loop is\n placed inside a `tf.function`. We also don't support placing the loop\n inside a `tf.function` when using\n `tf.distribute.experimental.MultiWorkerMirroredStrategy` or\n `tf.distribute.experimental.TPUStrategy` with multiple workers.\n\n * use `__iter__` to create an explicit iterator, which is of type\n `tf.distribute.DistributedIterator`\n\n >>> global_batch_size = 4\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> train_dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(50).batch(global_batch_size)\n >>> train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)\n >>> @tf.function\n ... def distributed_train_step(dataset_inputs):\n ... def train_step(input):\n ... loss = tf.constant(0.1)\n ... return loss\n ... per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))\n ... return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,axis=None)\n >>> EPOCHS = 2\n >>> STEPS = 3\n >>> for epoch in range(EPOCHS):\n ... total_loss = 0.0\n ... num_batches = 0\n ... dist_dataset_iterator = iter(train_dist_dataset)\n ... for _ in range(STEPS):\n ... total_loss += distributed_train_step(next(dist_dataset_iterator))\n ... num_batches += 1\n ... average_train_loss = total_loss / num_batches\n ... template = (\"Epoch {}, Loss: {:.4f}\")\n ... print (template.format(epoch+1, average_train_loss))\n Epoch 1, Loss: 0.2000\n Epoch 2, Loss: 0.2000\n\n\n To achieve a performance improvement, you can also wrap the `strategy.run`\n call with a `tf.range` inside a `tf.function`. This runs multiple steps in a\n `tf.function`. Autograph will convert it to a `tf.while_loop` on the worker.\n However, it is less flexible comparing with running a single step inside\n `tf.function`. For example, you cannot run things eagerly or arbitrary\n python code within the steps.\n\n\n 2. Inspect the `tf.TypeSpec` of the data generated by `DistributedDataset`.\n\n `tf.distribute.DistributedDataset` generates\n `tf.distribute.DistributedValues` as input to the devices. If you pass the\n input to a `tf.function` and would like to specify the shape and type of\n each Tensor argument to the function, you can pass a `tf.TypeSpec` object to\n the `input_signature` argument of the `tf.function`. To get the\n `tf.TypeSpec` of the input, you can use the `element_spec` property of the\n `tf.distribute.DistributedDataset` or `tf.distribute.DistributedIterator`\n object.\n\n For example:\n\n >>> global_batch_size = 4\n >>> epochs = 1\n >>> steps_per_epoch = 1\n >>> mirrored_strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensors(([2.])).repeat(100).batch(global_batch_size)\n >>> dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)\n >>> @tf.function(input_signature=[dist_dataset.element_spec])\n ... def train_step(per_replica_inputs):\n ... def step_fn(inputs):\n ... return tf.square(inputs)\n ... return mirrored_strategy.run(step_fn, args=(per_replica_inputs,))\n >>> for _ in range(epochs):\n ... iterator = iter(dist_dataset)\n ... for _ in range(steps_per_epoch):\n ... output = train_step(next(iterator))\n ... print(output)\n PerReplica:{\n 0: tf.Tensor(\n [[4.]\n [4.]], shape=(2, 1), dtype=float32),\n 1: tf.Tensor(\n [[4.]\n [4.]], shape=(2, 1), dtype=float32)\n }\n\n\n Visit the [tutorial](https://www.tensorflow.org/tutorials/distribute/input)\n on distributed input for more examples and caveats.\n ", "desc": "Represents a dataset distributed among devices and machines.", "type": "API"}, {"name": "tf.distribute.DistributedIterator", "docs": "An iterator over `tf.distribute.DistributedDataset`.\n\n `tf.distribute.DistributedIterator` is the primary mechanism for enumerating\n elements of a `tf.distribute.DistributedDataset`. It supports the Python\n Iterator protocol, which means it can be iterated over using a for-loop or by\n fetching individual elements explicitly via `get_next()`.\n\n You can create a `tf.distribute.DistributedIterator` by calling `iter` on\n a `tf.distribute.DistributedDataset` or creating a python loop over a\n `tf.distribute.DistributedDataset`.\n\n Visit the [tutorial](https://www.tensorflow.org/tutorials/distribute/input)\n on distributed input for more examples and caveats.\n ", "desc": "An iterator over `tf.distribute.DistributedDataset`.", "type": "API"}, {"name": "tf.distribute.DistributedValues", "docs": "Base class for representing distributed values.\n\n A subclass instance of `tf.distribute.DistributedValues` is created when\n creating variables within a distribution strategy, iterating a\n `tf.distribute.DistributedDataset` or through `tf.distribute.Strategy.run`.\n This base class should never be instantiated directly.\n `tf.distribute.DistributedValues` contains a value per replica. Depending on\n the subclass, the values could either be synced on update, synced on demand,\n or never synced.\n\n `tf.distribute.DistributedValues` can be reduced to obtain single value across\n replicas, as input into `tf.distribute.Strategy.run` or the per-replica values\n inspected using `tf.distribute.Strategy.experimental_local_results`.\n\n Example usage:\n\n 1. Created from a `tf.distribute.DistributedDataset`:\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)\n >>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))\n >>> distributed_values = next(dataset_iterator)\n\n 2. Returned by `run`:\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> @tf.function\n ... def run():\n ... ctx = tf.distribute.get_replica_context()\n ... return ctx.replica_id_in_sync_group\n >>> distributed_values = strategy.run(run)\n\n 3. As input into `run`:\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)\n >>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))\n >>> distributed_values = next(dataset_iterator)\n >>> @tf.function\n ... def run(input):\n ... return input + 1.0\n >>> updated_value = strategy.run(run, args=(distributed_values,))\n\n 4. Reduce value:\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)\n >>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))\n >>> distributed_values = next(dataset_iterator)\n >>> reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM,\n ... distributed_values,\n ... axis = 0)\n\n 5. Inspect local replica values:\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)\n >>> dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))\n >>> per_replica_values = strategy.experimental_local_results(\n ... distributed_values)\n >>> per_replica_values\n (,\n )\n\n ", "desc": "Base class for representing distributed values.", "type": "API"}, {"name": "tf.distribute.experimental", "docs": "Experimental Distribution Strategy library.\n", "desc": "Experimental Distribution Strategy library.", "type": "API"}, {"name": "tf.distribute.experimental.CentralStorageStrategy", "docs": "A one-machine strategy that puts all variables on a single device.\n\n Variables are assigned to local CPU or the only GPU. If there is more\n than one GPU, compute operations (other than variable update operations)\n will be replicated across all GPUs.\n\n For Example:\n ```\n strategy = tf.distribute.experimental.CentralStorageStrategy()\n # Create a dataset\n ds = tf.data.Dataset.range(5).batch(2)\n # Distribute that dataset\n dist_dataset = strategy.experimental_distribute_dataset(ds)\n\n with strategy.scope():\n @tf.function\n def train_step(val):\n return val + 1\n\n # Iterate over the distributed dataset\n for x in dist_dataset:\n # process dataset elements\n strategy.run(train_step, args=(x,))\n ```\n ", "desc": "A one-machine strategy that puts all variables on a single device.", "type": "API"}, {"name": "tf.distribute.experimental.CollectiveCommunication", "docs": "Cross device communication implementation.\n\n Warning: The alias `tf.distribute.experimental.CollectiveCommunication` is\n deprecated and will be removed in a future version. Use\n `tf.distribute.experimental.CommunicationImplementation` instead.\n\n * `AUTO`: Automatically chosen by Tensorflow.\n * `RING`: TensorFlow's ring algorithms for all-reduce and\n all-gather.\n * `NCCL`: NVIDIA\u00ae's NCCL library. This is now only used for all-reduce on\n GPUs; all-reduce on CPU, all-gather and broadcast fallbacks to RING.\n ", "desc": "Cross device communication implementation.", "type": "API"}, {"name": "tf.distribute.experimental.CollectiveHints", "docs": "Hints for collective operations like AllReduce.\n\n This can be passed to methods like\n `tf.distribute.get_replica_context().all_reduce()` to optimize collective\n operation performance. Note that these are only hints, which may or may not\n change the actual behavior. Some options only apply to certain strategy and\n are ignored by others.\n\n One common optimization is to break gradients all-reduce into multiple packs\n so that weight updates can overlap with gradient all-reduce.\n\n Examples:\n\n - bytes_per_pack\n\n ```python\n hints = tf.distribute.experimental.CollectiveHints(\n bytes_per_pack=50 * 1024 * 1024)\n grads = tf.distribute.get_replica_context().all_reduce(\n 'sum', grads, experimental_hints=hints)\n optimizer.apply_gradients(zip(grads, vars),\n experimental_aggregate_gradients=False)\n ```\n\n - timeout_seconds\n\n ```python\n strategy = tf.distribute.MirroredStrategy()\n hints = tf.distribute.experimental.CollectiveHints(\n timeout_seconds=120.0)\n try:\n strategy.reduce(\"sum\", v, axis=None, experimental_hints=hints)\n except tf.errors.DeadlineExceededError:\n do_something()\n ```\n\n ", "desc": "Hints for collective operations like AllReduce.", "type": "API"}, {"name": "tf.distribute.experimental.CommunicationImplementation", "docs": "Cross device communication implementation.\n\n Warning: The alias `tf.distribute.experimental.CollectiveCommunication` is\n deprecated and will be removed in a future version. Use\n `tf.distribute.experimental.CommunicationImplementation` instead.\n\n * `AUTO`: Automatically chosen by Tensorflow.\n * `RING`: TensorFlow's ring algorithms for all-reduce and\n all-gather.\n * `NCCL`: NVIDIA\u00ae's NCCL library. This is now only used for all-reduce on\n GPUs; all-reduce on CPU, all-gather and broadcast fallbacks to RING.\n ", "desc": "Cross device communication implementation.", "type": "API"}, {"name": "tf.distribute.experimental.CommunicationOptions", "docs": "Options for cross device communications like All-reduce.\n\n This can be passed to methods like\n `tf.distribute.get_replica_context().all_reduce()` to optimize collective\n operation performance. Note that these are only hints, which may or may not\n change the actual behavior. Some options only apply to certain strategy and\n are ignored by others.\n\n One common optimization is to break gradients all-reduce into multiple packs\n so that weight updates can overlap with gradient all-reduce.\n\n Examples:\n\n ```python\n options = tf.distribute.experimental.CommunicationOptions(\n bytes_per_pack=50 * 1024 * 1024,\n timeout_seconds=120.0,\n implementation=tf.distribute.experimental.CommunicationImplementation.NCCL\n )\n grads = tf.distribute.get_replica_context().all_reduce(\n 'sum', grads, options=options)\n optimizer.apply_gradients(zip(grads, vars),\n experimental_aggregate_gradients=False)\n ```\n\n ", "desc": "Options for cross device communications like All-reduce.", "type": "API"}, {"name": "tf.distribute.experimental.coordinator", "docs": "Public API for tf.distribute.experimental.coordinator namespace.\n", "desc": "Public API for tf.distribute.experimental.coordinator namespace.", "type": "API"}, {"name": "tf.distribute.experimental.coordinator.ClusterCoordinator", "docs": "An object to schedule and coordinate remote function execution.\n\n This class is used to create fault-tolerant resources and dispatch functions\n to remote TensorFlow servers.\n\n Currently, this class is not supported to be used in a standalone manner. It\n should be used in conjunction with a `tf.distribute` strategy that is designed\n to work with it. The `ClusterCoordinator` class currently only works\n `tf.distribute.experimental.ParameterServerStrategy`.\n\n __The `schedule`/`join` APIs__\n\n The most important APIs provided by this class is the `schedule`/`join` pair.\n The `schedule` API is non-blocking in that it queues a `tf.function` and\n returns a `RemoteValue` immediately. The queued functions will be dispatched\n to remote workers in background threads and their `RemoteValue`s will be\n filled asynchronously. Since `schedule` doesn\u2019t require worker assignment, the\n `tf.function` passed in can be executed on any available worker. If the worker\n it is executed on becomes unavailable before its completion, it will be\n migrated to another worker. Because of this fact and function execution is not\n atomic, a function may be executed more than once.\n\n __Handling Task Failure__\n\n This class when used with\n `tf.distribute.experimental.ParameterServerStrategy`, comes with built-in\n fault tolerance for worker failures. That is, when some workers are not\n available for any reason to be reached from the coordinator, the training\n progress continues to be made with the remaining workers. Upon recovery of a\n failed worker, it will be added for function execution after datasets created\n by `create_per_worker_dataset` are re-built on it.\n\n When a parameter server fails, a `tf.errors.UnavailableError` is raised by\n `schedule`, `join` or `done`. In this case, in addition to bringing back the\n failed parameter server, users should restart the coordinator so that it\n reconnects to workers and parameter servers, re-creates the variables, and\n loads checkpoints. If the coordinator fails, after the user brings it back,\n the program will automatically connect to workers and parameter servers, and\n continue the progress from a checkpoint.\n\n It is thus essential that in user's program, a checkpoint file is periodically\n saved, and restored at the start of the program. If an\n `tf.keras.optimizers.Optimizer` is checkpointed, after restoring from a\n checkpoiont, its `iterations` property roughly indicates the number of steps\n that have been made. This can be used to decide how many epochs and steps are\n needed before the training completion.\n\n See `tf.distribute.experimental.ParameterServerStrategy` docstring for an\n example usage of this API.\n\n This is currently under development, and the API as well as implementation\n are subject to changes.\n ", "desc": "An object to schedule and coordinate remote function execution.", "type": "API"}, {"name": "tf.distribute.experimental.coordinator.PerWorkerValues", "docs": "A container that holds a list of values, one value per worker.\n\n `tf.distribute.experimental.coordinator.PerWorkerValues` contains a collection\n of values, where each of the values is located on its corresponding worker,\n and upon being used as one of the `args` or `kwargs` of\n `tf.distribute.experimental.coordinator.ClusterCoordinator.schedule()`, the\n value specific to a worker will be passed into the function being executed at\n that corresponding worker.\n\n Currently, the only supported path to create an object of\n `tf.distribute.experimental.coordinator.PerWorkerValues` is through calling\n `iter` on a `ClusterCoordinator.create_per_worker_dataset`-returned\n distributed dataset instance. The mechanism to create a custom\n `tf.distribute.experimental.coordinator.PerWorkerValues` is not yet supported.\n ", "desc": "A container that holds a list of values, one value per worker.", "type": "API"}, {"name": "tf.distribute.experimental.coordinator.RemoteValue", "docs": "An asynchronously available value of a scheduled function.\n\n This class is used as the return value of\n `tf.distribute.experimental.coordinator.ClusterCoordinator.schedule` where\n the underlying value becomes available at a later time once the function has\n been executed.\n\n Using `tf.distribute.experimental.coordinator.RemoteValue` as an input to\n a subsequent function scheduled with\n `tf.distribute.experimental.coordinator.ClusterCoordinator.schedule` is\n currently not supported.\n\n Example:\n\n ```python\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=...)\n coordinator = (\n tf.distribute.experimental.coordinator.ClusterCoordinator(strategy))\n\n with strategy.scope():\n v1 = tf.Variable(initial_value=0.0)\n v2 = tf.Variable(initial_value=1.0)\n\n @tf.function\n def worker_fn():\n v1.assign_add(0.1)\n v2.assign_sub(0.2)\n return v1.read_value() / v2.read_value()\n\n result = coordinator.schedule(worker_fn)\n # Note that `fetch()` gives the actual result instead of a `tf.Tensor`.\n assert result.fetch() == 0.125\n\n for _ in range(10):\n # `worker_fn` will be run on arbitrary workers that are available. The\n # `result` value will be available later.\n result = coordinator.schedule(worker_fn)\n ```\n ", "desc": "An asynchronously available value of a scheduled function.", "type": "API"}, {"name": "tf.distribute.experimental.MultiWorkerMirroredStrategy", "docs": "A distribution strategy for synchronous training on multiple workers.\n\n This strategy implements synchronous distributed training across multiple\n workers, each with potentially multiple GPUs. Similar to\n `tf.distribute.MirroredStrategy`, it replicates all variables and computations\n to each local device. The difference is that it uses a distributed collective\n implementation (e.g. all-reduce), so that multiple workers can work together.\n\n You need to launch your program on each worker and configure\n `cluster_resolver` correctly. For example, if you are using\n `tf.distribute.cluster_resolver.TFConfigClusterResolver`, each worker needs to\n have its corresponding `task_type` and `task_id` set in the `TF_CONFIG`\n environment variable. An example TF_CONFIG on worker-0 of a two worker cluster\n is:\n\n ```\n TF_CONFIG = '{\"cluster\": {\"worker\": [\"localhost:12345\", \"localhost:23456\"]}, \"task\": {\"type\": \"worker\", \"index\": 0} }'\n ```\n\n Your program runs on each worker as-is. Note that collectives require each\n worker to participate. All `tf.distribute` and non `tf.distribute` API may use\n collectives internally, e.g. checkpointing and saving since reading a\n `tf.Variable` with `tf.VariableSynchronization.ON_READ` all-reduces the value.\n Therefore it's recommended to run exactly the same program on each worker.\n Dispatching based on `task_type` or `task_id` of the worker is error-prone.\n\n `cluster_resolver.num_accelerators()` determines the number of GPUs the\n strategy uses. If it's zero, the strategy uses the CPU. All workers need to\n use the same number of devices, otherwise the behavior is undefined.\n\n This strategy is not intended for TPU. Use `tf.distribute.TPUStrategy`\n instead.\n\n After setting up TF_CONFIG, using this strategy is similar to using\n `tf.distribute.MirroredStrategy` and `tf.distribute.TPUStrategy`.\n\n ```\n strategy = tf.distribute.MultiWorkerMirroredStrategy()\n\n with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(2, input_shape=(5,)),\n ])\n optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n def dataset_fn(ctx):\n x = np.random.random((2, 5)).astype(np.float32)\n y = np.random.randint(2, size=(2, 1))\n dataset = tf.data.Dataset.from_tensor_slices((x, y))\n return dataset.repeat().batch(1, drop_remainder=True)\n dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)\n\n model.compile()\n model.fit(dist_dataset)\n ```\n\n You can also write your own training loop:\n\n ```\n @tf.function\n def train_step(iterator):\n\n def step_fn(inputs):\n features, labels = inputs\n with tf.GradientTape() as tape:\n logits = model(features, training=True)\n loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, logits)\n\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\n\n strategy.run(step_fn, args=(next(iterator),))\n\n for _ in range(NUM_STEP):\n train_step(iterator)\n ```\n\n See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras)\n for a detailed tutorial.\n\n __Saving__\n\n You need to save and checkpoint on all workers instead of just one. This is\n because variables whose synchronization=ON_READ triggers aggregation during\n saving. It's recommended to save to a different path on each worker to avoid\n race conditions. Each worker saves the same thing. See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading)\n tutorial for examples.\n\n __Known Issues__\n\n * `tf.distribute.cluster_resolver.TFConfigClusterResolver` does not return the\n correct number of accelerators. The strategy uses all available GPUs if\n `cluster_resolver` is `tf.distribute.cluster_resolver.TFConfigClusterResolver`\n or `None`.\n * In eager mode, the strategy needs to be created before calling any other\n Tensorflow API.\n\n ", "desc": "A distribution strategy for synchronous training on multiple workers.", "type": "API"}, {"name": "tf.distribute.experimental.ParameterServerStrategy", "docs": "An multi-worker tf.distribute strategy with parameter servers.\n\n Parameter server training is a common data-parallel method to scale up a\n machine learning model on multiple machines. A parameter server training\n cluster consists of workers and parameter servers. Variables are created on\n parameter servers and they are read and updated by workers in each step.\n By default, workers read and update these variables independently without\n synchronizing with each other. Under this configuration, it is known as\n asynchronous training.\n\n In TensorFlow 2, we recommend an architecture based on central coordination\n for parameter server training. Each worker and parameter server runs a\n `tf.distribute.Server`, and on top of that, a coordinator task is responsible\n for creating resources on workers and parameter servers, dispatching\n functions, and coordinating the training. The coordinator uses a\n `tf.distribute.experimental.coordinator.ClusterCoordinator` to coordinate the\n cluster, and a `tf.distribute.experimental.ParameterServerStrategy` to define\n variables on parameter servers and computation on workers.\n\n For the training to work, the coordinator dispatches `tf.function`s to be\n executed on remote workers. Upon receiving requests from the coordinator, a\n worker executes the `tf.function` by reading the variables from parameter\n servers, executing the ops, and updating the variables on the parameter\n servers. Each of the worker only processes the requests from the coordinator,\n and communicates with parameter servers, without direct interactions with\n other workers in the cluster.\n\n As a result, failures of some workers do not prevent the cluster from\n continuing the work, and this allows the cluster to train with instances that\n can be occasionally unavailable (e.g. preemptible or spot instances). The\n coordinator and parameter servers though, must be available at all times for\n the cluster to make progress.\n\n Note that the coordinator is not one of the training workers. Instead, it\n creates resources such as variables and datasets, dispatchs `tf.function`s,\n saves checkpoints and so on. In addition to workers, parameter servers and\n the coordinator, an optional evaluator can be run on the side that\n periodically reads the checkpoints saved by the coordinator and runs\n evaluations against each checkpoint.\n\n `ParameterServerStrategy` is supported with two training APIs: [Custom\n Training Loop (CTL)]\n (https://www.tensorflow.org/tutorials/distribute/custom_training)\n and [Keras Training API, also known as `Model.fit`]\n (https://www.tensorflow.org/tutorials/distribute/keras). CTL is recommended\n when users prefer to define the details of their training loop, and\n `Model.fit` is recommended when users prefer a high-level abstraction and\n handling of training.\n\n When using a CTL, `ParameterServerStrategy` has to work in conjunction with a\n `tf.distribute.experimental.coordinator.ClusterCoordinator` object.\n\n When using `Model.fit`, currently only the\n `tf.keras.utils.experimental.DatasetCreator` input type is supported.\n\n __Example code for coordinator__\n\n This section provides code snippets that are intended to be run on (the only)\n one task that is designated as the coordinator. Note that `cluster_resolver`,\n `variable_partitioner`, and `dataset_fn` arguments are explained in the\n following \"Cluster setup\", \"Variable partitioning\", and \"Dataset preparation\"\n sections.\n\n With a CTL,\n\n ```python\n # Prepare a strategy to use with the cluster and variable partitioning info.\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=...,\n variable_partitioner=...)\n coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(\n strategy=strategy)\n\n # Prepare a distribute dataset that will place datasets on the workers.\n distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn=...)\n\n with strategy.scope():\n model = ...\n optimizer, metrics = ... # Keras optimizer/metrics are great choices\n checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)\n checkpoint_manager = tf.train.CheckpointManager(\n checkpoint, checkpoint_dir, max_to_keep=2)\n # `load_checkpoint` infers initial epoch from `optimizer.iterations`.\n initial_epoch = load_checkpoint(checkpoint_manager) or 0\n\n @tf.function\n def worker_fn(iterator):\n\n def replica_fn(inputs):\n batch_data, labels = inputs\n # calculate gradient, applying gradient, metrics update etc.\n\n strategy.run(replica_fn, args=(next(iterator),))\n\n for epoch in range(initial_epoch, num_epoch):\n distributed_iterator = iter(distributed_dataset) # Reset iterator state.\n for step in range(steps_per_epoch):\n\n # Asynchronously schedule the `worker_fn` to be executed on an arbitrary\n # worker. This call returns immediately.\n coordinator.schedule(worker_fn, args=(distributed_iterator,))\n\n # `join` blocks until all scheduled `worker_fn`s finish execution. Once it\n # returns, we can read the metrics and save checkpoints as needed.\n coordinator.join()\n logging.info('Metric result: %r', metrics.result())\n train_accuracy.reset_states()\n checkpoint_manager.save()\n ```\n\n With `Model.fit`,\n\n ```python\n # Prepare a strategy to use with the cluster and variable partitioning info.\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=...,\n variable_partitioner=...)\n\n # A dataset function takes a `input_context` and returns a `Dataset`\n def dataset_fn(input_context):\n dataset = tf.data.Dataset.from_tensors(...)\n return dataset.repeat().shard(...).batch(...).prefetch(...)\n\n # With `Model.fit`, a `DatasetCreator` needs to be used.\n input = tf.keras.utils.experimental.DatasetCreator(dataset_fn=...)\n\n with strategy.scope():\n model = ... # Make sure the `Model` is created within scope.\n model.compile(optimizer=\"rmsprop\", loss=\"mse\", steps_per_execution=..., ...)\n\n # Optional callbacks to checkpoint the model, back up the progress, etc.\n callbacks = [tf.keras.callbacks.ModelCheckpoint(...), ...]\n\n # `steps_per_epoch` is required with `ParameterServerStrategy`.\n model.fit(input, epochs=..., steps_per_epoch=..., callbacks=callbacks)\n ```\n\n __Example code for worker and parameter servers__\n\n In addition to the coordinator, there should be tasks designated as\n \"worker\" or \"ps\". They should run the following code to start a TensorFlow\n server, waiting for coordinator's requests:\n\n ```python\n # Provide a `tf.distribute.cluster_resolver.ClusterResolver` that serves\n # the cluster information. See below \"Cluster setup\" section.\n cluster_resolver = ...\n\n server = tf.distribute.Server(\n cluster_resolver.cluster_spec(),\n job_name=cluster_resolver.task_type,\n task_index=cluster_resolver.task_id,\n protocol=\"grpc\")\n\n # Blocking the process that starts a server from exiting.\n server.join()\n ```\n\n __Cluster setup__\n\n In order for the tasks in the cluster to know other tasks' addresses,\n a `tf.distribute.cluster_resolver.ClusterResolver` is required to be used\n in coordinator, worker, and ps. The\n `tf.distribute.cluster_resolver.ClusterResolver` is responsible for providing\n the cluster information, as well as the task type and id of the current task.\n See `tf.distribute.cluster_resolver.ClusterResolver` for more information.\n\n If `TF_CONFIG` environment variable is set, a\n `tf.distribute.cluster_resolver.TFConfigClusterResolver` should be used as\n well.\n\n Since there are assumptions in\n `tf.distribute.experimental.ParameterServerStrategy` around the naming of the\n task types, \"chief\", \"ps\", and \"worker\" should be used in the\n `tf.distribute.cluster_resolver.ClusterResolver` to refer to the coordinator,\n parameter servers, and workers, respectively.\n\n The following example demonstrates setting `TF_CONFIG` for the task designated\n as a parameter server (task type \"ps\") and index 1 (the second task), in a\n cluster with 1 chief, 2 parameter servers, and 3 workers. Note that it needs\n to be set before the use of\n `tf.distribute.cluster_resolver.TFConfigClusterResolver`.\n\n Example code for cluster setup:\n ```python\n os.environ['TF_CONFIG'] = '''\n {\n \"cluster\": {\n \"chief\": [\"chief.example.com:2222\"],\n \"ps\": [\"ps0.example.com:2222\", \"ps1.example.com:2222\"],\n \"worker\": [\"worker0.example.com:2222\", \"worker1.example.com:2222\",\n \"worker2.example.com:2222\"]\n },\n \"task\": {\n \"type\": \"ps\",\n \"index\": 1\n }\n }\n '''\n ```\n\n If you prefer to run the same binary for all tasks, you will need to let the\n binary branch into different roles at the beginning of the program:\n ```python\n # If coordinator, create a strategy and start the training program.\n if cluster_resolver.task_type == 'chief':\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver)\n ...\n\n # If worker/ps, create a server\n elif cluster_resolver.task_type in (\"worker\", \"ps\"):\n server = tf.distribute.Server(...)\n ...\n ```\n Alternatively, you can also start a bunch of TensorFlow servers in advance and\n connect to them later. The coordinator can be in the same cluster or on any\n machine that has connectivity to workers and parameter servers. This is\n covered in our guide and tutorial.\n\n __Variable creation with `strategy.scope()`__\n\n `tf.distribute.experimental.ParameterServerStrategy` follows the\n `tf.distribute` API contract where variable creation is expected to be inside\n the context manager returned by `strategy.scope()`, in order to be correctly\n placed on parameter servers in a round-robin manner:\n\n ```python\n # In this example, we're assuming having 3 ps.\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=...)\n coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(\n strategy=strategy)\n\n # Variables should be created inside scope to be placed on parameter servers.\n # If created outside scope such as `v1` here, it would be placed on the\n # coordinator.\n v1 = tf.Variable(initial_value=0.0)\n\n with strategy.scope():\n v2 = tf.Variable(initial_value=1.0)\n v3 = tf.Variable(initial_value=2.0)\n v4 = tf.Variable(initial_value=3.0)\n v5 = tf.Variable(initial_value=4.0)\n\n # v2 through v5 are created in scope and are distributed on parameter servers.\n # Default placement is round-robin but the order should not be relied on.\n assert v2.device == \"/job:ps/replica:0/task:0/device:CPU:0\"\n assert v3.device == \"/job:ps/replica:0/task:1/device:CPU:0\"\n assert v4.device == \"/job:ps/replica:0/task:2/device:CPU:0\"\n assert v5.device == \"/job:ps/replica:0/task:0/device:CPU:0\"\n ```\n\n See `distribute.Strategy.scope` for more information.\n\n __Variable partitioning__\n\n Having dedicated servers to store variables means being able to divide up, or\n \"shard\" the variables across the ps. Partitioning large variable among ps is a\n commonly used technique to boost training throughput and mitigate memory\n constraints. It enables parallel computations and updates on different shards\n of a variable, and often yields better load balancing across parameter\n servers. Without sharding, models with large variables (e.g, embeddings) that\n can't fit into one machine's memory would otherwise be unable to train.\n\n With `tf.distribute.experimental.ParameterServerStrategy`, if a\n `variable_partitioner` is provided to `__init__` and certain conditions are\n satisfied, the resulting variables created in scope are sharded across the\n parameter servers, in a round-robin fashion. The variable reference returned\n from `tf.Variable` becomes a type that serves as the container of the sharded\n variables. One can access `variables` attribute of this container for the\n actual variable components. If building model with `tf.Module` or Keras,\n the variable components are collected in the `variables` alike attributes.\n\n It is recommended to use size-based partitioners like\n `tf.distribute.experimental.partitioners.MinSizePartitioner` to avoid\n partitioning small variables, which could have negative impact on model\n training speed.\n\n ```python\n # Partition the embedding layer into 2 shards.\n variable_partitioner = (\n tf.distribute.experimental.partitioners.MinSizePartitioner(\n min_shard_bytes=(256 << 10),\n max_shards = 2))\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver=...,\n variable_partitioner = variable_partitioner)\n with strategy.scope():\n embedding = tf.keras.layers.Embedding(input_dim=1024, output_dim=1024)\n assert len(embedding.variables) == 2\n assert isinstance(embedding.variables[0], tf.Variable)\n assert isinstance(embedding.variables[1], tf.Variable)\n assert embedding.variables[0].shape == (512, 1024)\n assert embedding.variables[1].shape == (512, 1024)\n ```\n\n The sharded variable container can be converted to a `Tensor` via\n `tf.convert_to_tensor`. This means the container can be directly used in most\n Python Ops where such `Tensor` conversion automatically happens. For example,\n in the above code snippet, `x * self.w` would implicitly apply the said tensor\n conversion. Note that such conversion can be expensive, as the variable\n components need to be transferred from multiple parameter servers to where\n the value is used.\n\n `tf.nn.embedding_lookup` on the other hand doesn't apply the tensor\n conversion, and performs parallel lookups on the variable components instead.\n This is crucial to scale up embedding lookups when the embedding table\n variable is large.\n\n When a partitioned variable is saved to a `SavedModel`, it will be saved as if\n it is one single variable. This improves serving efficiency by eliminating\n a number of Ops that handle the partiton aspects.\n\n Known limitations of variable partitioning:\n\n * Number of partitions must not change across Checkpoint saving/loading.\n\n * After saving partitioned variables to a SavedModel, the SavedModel can't be\n loaded via `tf.saved_model.load`.\n\n * Partition variable doesn't directly work with `tf.GradientTape`, please use\n the `variables` attributes to get the actual variable components and use\n them in gradient APIs instead.\n\n __Dataset preparation__\n\n With `tf.distribute.experimental.ParameterServerStrategy`, a dataset is\n created in each of the workers to be used for training. This is done by\n creating a `dataset_fn` that takes no argument and returns a\n `tf.data.Dataset`, and passing the `dataset_fn` into\n `tf.distribute.experimental.coordinator.\n ClusterCoordinator.create_per_worker_dataset`. We recommend the dataset to be\n shuffled and repeated to have the examples run through the training as evenly\n as possible.\n\n ```python\n def dataset_fn():\n filenames = ...\n dataset = tf.data.Dataset.from_tensor_slices(filenames)\n\n # Dataset is recommended to be shuffled, and repeated.\n return dataset.shuffle(buffer_size=...).repeat().batch(batch_size=...)\n\n coordinator =\n tf.distribute.experimental.coordinator.ClusterCoordinator(strategy=...)\n distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn)\n ```\n\n __Limitations__\n\n * `tf.distribute.experimental.ParameterServerStrategy` in TF2 is experimental,\n and the API is subject to further changes.\n\n * When using `Model.fit`, `tf.distribute.experimental.ParameterServerStrategy`\n must be used with a `tf.keras.utils.experimental.DatasetCreator`, and\n `steps_per_epoch` must be specified.\n ", "desc": "An multi-worker tf.distribute strategy with parameter servers.", "type": "API"}, {"name": "tf.distribute.experimental.partitioners", "docs": "Public API for tf.distribute.experimental.partitioners namespace.\n", "desc": "Public API for tf.distribute.experimental.partitioners namespace.", "type": "API"}, {"name": "tf.distribute.experimental.partitioners.FixedShardsPartitioner", "docs": "Partitioner that allocates a fixed number of shards.\n\n Examples:\n\n >>> # standalone usage:\n >>> partitioner = FixedShardsPartitioner(num_shards=2)\n >>> partitions = partitioner(tf.TensorShape([10, 3]), tf.float32)\n >>> [2, 1]\n >>>\n >>> # use in ParameterServerStrategy\n >>> # strategy = tf.distribute.experimental.ParameterServerStrategy(\n >>> # cluster_resolver=cluster_resolver, variable_partitioner=partitioner)\n\n ", "desc": "Partitioner that allocates a fixed number of shards.", "type": "API"}, {"name": "tf.distribute.experimental.partitioners.MaxSizePartitioner", "docs": "Partitioner that keeps shards below `max_shard_bytes`.\n\n This partitioner ensures each shard has at most `max_shard_bytes`, and tries\n to allocate as few shards as possible, i.e., keeping shard size as large\n as possible.\n\n If the partitioner hits the `max_shards` limit, then each shard may end up\n larger than `max_shard_bytes`. By default `max_shards` equals `None` and no\n limit on the number of shards is enforced.\n\n Examples:\n\n >>> partitioner = MaxSizePartitioner(max_shard_bytes=4)\n >>> partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)\n >>> [6, 1]\n >>> partitioner = MaxSizePartitioner(max_shard_bytes=4, max_shards=2)\n >>> partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)\n >>> [2, 1]\n >>> partitioner = MaxSizePartitioner(max_shard_bytes=1024)\n >>> partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)\n >>> [1, 1]\n >>>\n >>> # use in ParameterServerStrategy\n >>> # strategy = tf.distribute.experimental.ParameterServerStrategy(\n >>> # cluster_resolver=cluster_resolver, variable_partitioner=partitioner)\n ", "desc": "Partitioner that keeps shards below `max_shard_bytes`.", "type": "API"}, {"name": "tf.distribute.experimental.partitioners.MinSizePartitioner", "docs": "Partitioner that allocates a minimum size per shard.\n\n This partitioner ensures each shard has at least `min_shard_bytes`, and tries\n to allocate as many shards as possible, i.e., keeping shard size as small as\n possible. The maximum number of such shards (upper bound) is given by\n `max_shards`.\n\n Examples:\n\n >>> partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=2)\n >>> partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)\n >>> [2, 1]\n >>> partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=10)\n >>> partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)\n >>> [6, 1]\n >>>\n >>> # use in ParameterServerStrategy\n >>> # strategy = tf.distribute.experimental.ParameterServerStrategy(\n >>> # cluster_resolver=cluster_resolver, variable_partitioner=partitioner)\n ", "desc": "Partitioner that allocates a minimum size per shard.", "type": "API"}, {"name": "tf.distribute.experimental.partitioners.Partitioner", "docs": "Partitioner base class: all partitiners inherit from this class.\n\n Partitioners should implement a `__call__` method with the following\n signature:\n\n ```python\n def __call__(self, shape, dtype, axis=0):\n # Partitions the given `shape` and returns the partition results.\n # See docstring of `__call__` method for the format of partition results.\n ```\n ", "desc": "Partitioner base class: all partitiners inherit from this class.", "type": "API"}, {"name": "tf.distribute.experimental.TPUStrategy", "docs": "Synchronous training on TPUs and TPU Pods.\n\n To construct a TPUStrategy object, you need to run the\n initialization code as below:\n\n >>> resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n >>> tf.config.experimental_connect_to_cluster(resolver)\n >>> tf.tpu.experimental.initialize_tpu_system(resolver)\n >>> strategy = tf.distribute.experimental.TPUStrategy(resolver)\n\n While using distribution strategies, the variables created within the\n strategy's scope will be replicated across all the replicas and can be kept in\n sync using all-reduce algorithms.\n\n To run TF2 programs on TPUs, you can either use `.compile` and\n `.fit` APIs in `tf.keras` with TPUStrategy, or write your own customized\n training loop by calling `strategy.run` directly. Note that\n TPUStrategy doesn't support pure eager execution, so please make sure the\n function passed into `strategy.run` is a `tf.function` or\n `strategy.run` is called inside a `tf.function` if eager\n behavior is enabled.\n ", "desc": "Synchronous training on TPUs and TPU Pods.", "type": "API"}, {"name": "tf.distribute.experimental.ValueContext", "docs": "A class wrapping information needed by a distribute function.\n\n This is a context class that is passed to the `value_fn` in\n `strategy.experimental_distribute_values_from_function` and contains\n information about the compute replicas. The `num_replicas_in_sync` and\n `replica_id` can be used to customize the value on each replica.\n\n Example usage:\n\n 1. Directly constructed.\n\n >>> def value_fn(context):\n ... return context.replica_id_in_sync_group/context.num_replicas_in_sync\n >>> context = tf.distribute.experimental.ValueContext(\n ... replica_id_in_sync_group=2, num_replicas_in_sync=4)\n >>> per_replica_value = value_fn(context)\n >>> per_replica_value\n 0.5\n\n 2. Passed in by `experimental_distribute_values_from_function`.\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> def value_fn(value_context):\n ... return value_context.num_replicas_in_sync\n >>> distributed_values = (\n ... strategy.experimental_distribute_values_from_function(\n ... value_fn))\n >>> local_result = strategy.experimental_local_results(distributed_values)\n >>> local_result\n (2, 2)\n\n ", "desc": "A class wrapping information needed by a distribute function.", "type": "API"}, {"name": "tf.distribute.experimental_set_strategy", "docs": "Set a `tf.distribute.Strategy` as current without `with strategy.scope()`.\n\n ```\n tf.distribute.experimental_set_strategy(strategy1)\n f()\n tf.distribute.experimental_set_strategy(strategy2)\n g()\n tf.distribute.experimental_set_strategy(None)\n h()\n ```\n\n is equivalent to:\n\n ```\n with strategy1.scope():\n f()\n with strategy2.scope():\n g()\n h()\n ```\n\n In general, you should use the `with strategy.scope():` API, but this\n alternative may be convenient in notebooks where you would have to put\n each cell in a `with strategy.scope():` block.\n\n Note: This should only be called outside of any TensorFlow scope to\n avoid improper nesting.\n\n Args:\n strategy: A `tf.distribute.Strategy` object or None.\n\n Raises:\n RuntimeError: If called inside a `with strategy.scope():`.\n ", "desc": "Set a `tf.distribute.Strategy` as current without `with strategy.scope()`.", "type": "API"}, {"name": "tf.distribute.get_replica_context", "docs": "Returns the current `tf.distribute.ReplicaContext` or `None`.\n\n Returns `None` if in a cross-replica context.\n\n Note that execution:\n\n 1. starts in the default (single-replica) replica context (this function\n will return the default `ReplicaContext` object);\n 2. switches to cross-replica context (in which case this will return\n `None`) when entering a `with tf.distribute.Strategy.scope():` block;\n 3. switches to a (non-default) replica context inside `strategy.run(fn, ...)`;\n 4. if `fn` calls `get_replica_context().merge_call(merge_fn, ...)`, then\n inside `merge_fn` you are back in the cross-replica context (and again\n this function will return `None`).\n\n Most `tf.distribute.Strategy` methods may only be executed in\n a cross-replica context, in a replica context you should use the\n API of the `tf.distribute.ReplicaContext` object returned by this\n method instead.\n\n ```\n assert tf.distribute.get_replica_context() is not None # default\n with strategy.scope():\n assert tf.distribute.get_replica_context() is None\n\n def f():\n replica_context = tf.distribute.get_replica_context() # for strategy\n assert replica_context is not None\n tf.print(\"Replica id: \", replica_context.replica_id_in_sync_group,\n \" of \", replica_context.num_replicas_in_sync)\n\n strategy.run(f)\n ```\n\n Returns:\n The current `tf.distribute.ReplicaContext` object when in a replica context\n scope, else `None`.\n\n Within a particular block, exactly one of these two things will be true:\n\n * `get_replica_context()` returns non-`None`, or\n * `tf.distribute.is_cross_replica_context()` returns True.\n ", "desc": "Returns the current `tf.distribute.ReplicaContext` or `None`.", "type": "API"}, {"name": "tf.distribute.get_strategy", "docs": "Returns the current `tf.distribute.Strategy` object.\n\n Typically only used in a cross-replica context:\n\n ```\n if tf.distribute.in_cross_replica_context():\n strategy = tf.distribute.get_strategy()\n ...\n ```\n\n Returns:\n A `tf.distribute.Strategy` object. Inside a `with strategy.scope()` block,\n it returns `strategy`, otherwise it returns the default (single-replica)\n `tf.distribute.Strategy` object.\n ", "desc": "Returns the current `tf.distribute.Strategy` object.", "type": "API"}, {"name": "tf.distribute.has_strategy", "docs": "Return if there is a current non-default `tf.distribute.Strategy`.\n\n ```\n assert not tf.distribute.has_strategy()\n with strategy.scope():\n assert tf.distribute.has_strategy()\n ```\n\n Returns:\n True if inside a `with strategy.scope():`.\n ", "desc": "Return if there is a current non-default `tf.distribute.Strategy`.", "type": "API"}, {"name": "tf.distribute.HierarchicalCopyAllReduce", "docs": "Hierarchical copy all-reduce implementation of CrossDeviceOps.\n\n It reduces to one GPU along edges in some hierarchy and broadcasts back to\n each GPU along the same path. For the batch API, tensors will be repacked or\n aggregated for more efficient cross-device transportation.\n\n This is a reduction created for Nvidia DGX-1 which assumes GPUs connects like\n that on DGX-1 machine. If you have different GPU inter-connections, it is\n likely that it would be slower than `tf.distribute.ReductionToOneDevice`.\n\n For reduces that are not all-reduce, it falls back to\n `tf.distribute.ReductionToOneDevice`.\n\n Here is how you can use `HierarchicalCopyAllReduce` in\n `tf.distribute.MirroredStrategy`:\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())\n ```\n ", "desc": "Hierarchical copy all-reduce implementation of CrossDeviceOps.", "type": "API"}, {"name": "tf.distribute.in_cross_replica_context", "docs": "Returns `True` if in a cross-replica context.\n\n See `tf.distribute.get_replica_context` for details.\n\n ```\n assert not tf.distribute.in_cross_replica_context()\n with strategy.scope():\n assert tf.distribute.in_cross_replica_context()\n\n def f():\n assert not tf.distribute.in_cross_replica_context()\n\n strategy.run(f)\n ```\n\n Returns:\n `True` if in a cross-replica context (`get_replica_context()` returns\n `None`), or `False` if in a replica context (`get_replica_context()` returns\n non-`None`).\n ", "desc": "Returns `True` if in a cross-replica context.", "type": "API"}, {"name": "tf.distribute.InputContext", "docs": "A class wrapping information needed by an input function.\n\n This is a context class that is passed to the user's input function and\n contains information about the compute replicas and input pipelines. The\n number of compute replicas (in sync training) helps compute the local batch\n size from the desired global batch size for each replica. The input pipeline\n information can be used to return a different subset of the input in each\n replica (for e.g. shard the input pipeline, use a different input\n source etc).\n ", "desc": "A class wrapping information needed by an input function.", "type": "API"}, {"name": "tf.distribute.InputOptions", "docs": "Run options for `experimental_distribute_dataset(s_from_function)`.\n\n This can be used to hold some strategy specific configs.\n\n ```python\n # Setup TPUStrategy\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n tf.config.experimental_connect_to_cluster(resolver)\n tf.tpu.experimental.initialize_tpu_system(resolver)\n strategy = tf.distribute.TPUStrategy(resolver)\n\n dataset = tf.data.Dataset.range(16)\n distributed_dataset_on_host = (\n strategy.experimental_distribute_dataset(\n dataset,\n tf.distribute.InputOptions(\n experimental_replication_mode=\n experimental_replication_mode.PER_WORKER,\n experimental_place_dataset_on_device=False,\n experimental_per_replica_buffer_size=1)))\n ```\n\n Attributes:\n experimental_fetch_to_device: Boolean. If True, dataset\n elements will be prefetched to accelerator device memory. When False,\n dataset elements are prefetched to host device memory. Must be False when\n using TPUEmbedding API. experimental_fetch_to_device can only be used\n with experimental_replication_mode=PER_WORKER. Default behavior is same as\n setting it to True.\n experimental_replication_mode: Replication mode for the input function.\n Currently, the InputReplicationMode.PER_REPLICA is only supported with\n tf.distribute.MirroredStrategy.\n experimental_distribute_datasets_from_function.\n The default value is InputReplicationMode.PER_WORKER.\n experimental_place_dataset_on_device: Boolean. Default to False. When True,\n dataset will be placed on the device, otherwise it will remain on the\n host. experimental_place_dataset_on_device=True can only be used with\n experimental_replication_mode=PER_REPLICA\n experimental_per_replica_buffer_size: Integer. Default to 1. Indicates the\n prefetch buffer size in the replica device memory. Users can set it\n to 0 to completely disable prefetching behavior, or a number greater than\n 1 to enable larger buffer size. Note that this option is still\n valid with `experimental_fetch_to_device=False`.\n ", "desc": "Run options for `experimental_distribute_dataset(s_from_function)`.", "type": "API"}, {"name": "tf.distribute.InputReplicationMode", "docs": "Replication mode for input function.\n\n * `PER_WORKER`: The input function will be called on each worker\n independently, creating as many input pipelines as number of workers.\n Replicas will dequeue from the local Dataset on their worker.\n `tf.distribute.Strategy` doesn't manage any state sharing between such\n separate input pipelines.\n * `PER_REPLICA`: The input function will be called on each replica separately.\n `tf.distribute.Strategy` doesn't manage any state sharing between such\n separate input pipelines.\n ", "desc": "Replication mode for input function.", "type": "API"}, {"name": "tf.distribute.MirroredStrategy", "docs": "Synchronous training across multiple replicas on one machine.\n\n This strategy is typically used for training on one\n machine with multiple GPUs. For TPUs, use\n `tf.distribute.TPUStrategy`. To use `MirroredStrategy` with multiple workers,\n please refer to `tf.distribute.experimental.MultiWorkerMirroredStrategy`.\n\n For example, a variable created under a `MirroredStrategy` is a\n `MirroredVariable`. If no devices are specified in the constructor argument of\n the strategy then it will use all the available GPUs. If no GPUs are found, it\n will use the available CPUs. Note that TensorFlow treats all CPUs on a\n machine as a single device, and uses threads internally for parallelism.\n\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> with strategy.scope():\n ... x = tf.Variable(1.)\n >>> x\n MirroredVariable:{\n 0: ,\n 1: \n }\n\n While using distribution strategies, all the variable creation should be done\n within the strategy's scope. This will replicate the variables across all the\n replicas and keep them in sync using an all-reduce algorithm.\n\n Variables created inside a `MirroredStrategy` which is wrapped with a\n `tf.function` are still `MirroredVariables`.\n\n >>> x = []\n >>> @tf.function # Wrap the function with tf.function.\n ... def create_variable():\n ... if not x:\n ... x.append(tf.Variable(1.))\n ... return x[0]\n >>> strategy = tf.distribute.MirroredStrategy([\"GPU:0\", \"GPU:1\"])\n >>> with strategy.scope():\n ... _ = create_variable()\n ... print(x[0])\n MirroredVariable:{\n 0: ,\n 1: \n }\n\n `experimental_distribute_dataset` can be used to distribute the dataset across\n the replicas when writing your own training loop. If you are using `.fit` and\n `.compile` methods available in `tf.keras`, then `tf.keras` will handle the\n distribution for you.\n\n For example:\n\n ```python\n my_strategy = tf.distribute.MirroredStrategy()\n with my_strategy.scope():\n @tf.function\n def distribute_train_epoch(dataset):\n def replica_fn(input):\n # process input and return result\n return result\n\n total_result = 0\n for x in dataset:\n per_replica_result = my_strategy.run(replica_fn, args=(x,))\n total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,\n per_replica_result, axis=None)\n return total_result\n\n dist_dataset = my_strategy.experimental_distribute_dataset(dataset)\n for _ in range(EPOCHS):\n train_result = distribute_train_epoch(dist_dataset)\n ```\n\n Args:\n devices: a list of device strings such as `['/gpu:0', '/gpu:1']`. If\n `None`, all available GPUs are used. If no GPUs are found, CPU is used.\n cross_device_ops: optional, a descedant of `CrossDeviceOps`. If this is not\n set, `NcclAllReduce()` will be used by default. One would customize this\n if NCCL isn't available or if a special implementation that exploits\n the particular hardware is available.\n ", "desc": "Synchronous training across multiple replicas on one machine.", "type": "API"}, {"name": "tf.distribute.MultiWorkerMirroredStrategy", "docs": "A distribution strategy for synchronous training on multiple workers.\n\n This strategy implements synchronous distributed training across multiple\n workers, each with potentially multiple GPUs. Similar to\n `tf.distribute.MirroredStrategy`, it replicates all variables and computations\n to each local device. The difference is that it uses a distributed collective\n implementation (e.g. all-reduce), so that multiple workers can work together.\n\n You need to launch your program on each worker and configure\n `cluster_resolver` correctly. For example, if you are using\n `tf.distribute.cluster_resolver.TFConfigClusterResolver`, each worker needs to\n have its corresponding `task_type` and `task_id` set in the `TF_CONFIG`\n environment variable. An example TF_CONFIG on worker-0 of a two worker cluster\n is:\n\n ```\n TF_CONFIG = '{\"cluster\": {\"worker\": [\"localhost:12345\", \"localhost:23456\"]}, \"task\": {\"type\": \"worker\", \"index\": 0} }'\n ```\n\n Your program runs on each worker as-is. Note that collectives require each\n worker to participate. All `tf.distribute` and non `tf.distribute` API may use\n collectives internally, e.g. checkpointing and saving since reading a\n `tf.Variable` with `tf.VariableSynchronization.ON_READ` all-reduces the value.\n Therefore it's recommended to run exactly the same program on each worker.\n Dispatching based on `task_type` or `task_id` of the worker is error-prone.\n\n `cluster_resolver.num_accelerators()` determines the number of GPUs the\n strategy uses. If it's zero, the strategy uses the CPU. All workers need to\n use the same number of devices, otherwise the behavior is undefined.\n\n This strategy is not intended for TPU. Use `tf.distribute.TPUStrategy`\n instead.\n\n After setting up TF_CONFIG, using this strategy is similar to using\n `tf.distribute.MirroredStrategy` and `tf.distribute.TPUStrategy`.\n\n ```\n strategy = tf.distribute.MultiWorkerMirroredStrategy()\n\n with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(2, input_shape=(5,)),\n ])\n optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n def dataset_fn(ctx):\n x = np.random.random((2, 5)).astype(np.float32)\n y = np.random.randint(2, size=(2, 1))\n dataset = tf.data.Dataset.from_tensor_slices((x, y))\n return dataset.repeat().batch(1, drop_remainder=True)\n dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)\n\n model.compile()\n model.fit(dist_dataset)\n ```\n\n You can also write your own training loop:\n\n ```\n @tf.function\n def train_step(iterator):\n\n def step_fn(inputs):\n features, labels = inputs\n with tf.GradientTape() as tape:\n logits = model(features, training=True)\n loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, logits)\n\n grads = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\n\n strategy.run(step_fn, args=(next(iterator),))\n\n for _ in range(NUM_STEP):\n train_step(iterator)\n ```\n\n See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras)\n for a detailed tutorial.\n\n __Saving__\n\n You need to save and checkpoint on all workers instead of just one. This is\n because variables whose synchronization=ON_READ triggers aggregation during\n saving. It's recommended to save to a different path on each worker to avoid\n race conditions. Each worker saves the same thing. See\n [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading)\n tutorial for examples.\n\n __Known Issues__\n\n * `tf.distribute.cluster_resolver.TFConfigClusterResolver` does not return the\n correct number of accelerators. The strategy uses all available GPUs if\n `cluster_resolver` is `tf.distribute.cluster_resolver.TFConfigClusterResolver`\n or `None`.\n * In eager mode, the strategy needs to be created before calling any other\n Tensorflow API.\n\n ", "desc": "A distribution strategy for synchronous training on multiple workers.", "type": "API"}, {"name": "tf.distribute.NcclAllReduce", "docs": "NCCL all-reduce implementation of CrossDeviceOps.\n\n It uses Nvidia NCCL for all-reduce. For the batch API, tensors will be\n repacked or aggregated for more efficient cross-device transportation.\n\n For reduces that are not all-reduce, it falls back to\n `tf.distribute.ReductionToOneDevice`.\n\n Here is how you can use `NcclAllReduce` in `tf.distribute.MirroredStrategy`:\n\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.NcclAllReduce())\n ```\n ", "desc": "NCCL all-reduce implementation of CrossDeviceOps.", "type": "API"}, {"name": "tf.distribute.OneDeviceStrategy", "docs": "A distribution strategy for running on a single device.\n\n Using this strategy will place any variables created in its scope on the\n specified device. Input distributed through this strategy will be\n prefetched to the specified device. Moreover, any functions called via\n `strategy.run` will also be placed on the specified device\n as well.\n\n Typical usage of this strategy could be testing your code with the\n tf.distribute.Strategy API before switching to other strategies which\n actually distribute to multiple devices/machines.\n\n For example:\n ```\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n\n with strategy.scope():\n v = tf.Variable(1.0)\n print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0\n\n def step_fn(x):\n return x * 2\n\n result = 0\n for i in range(10):\n result += strategy.run(step_fn, args=(i,))\n print(result) # 90\n ```\n ", "desc": "A distribution strategy for running on a single device.", "type": "API"}, {"name": "tf.distribute.ReduceOp", "docs": "Indicates how a set of values should be reduced.\n\n * `SUM`: Add all the values.\n * `MEAN`: Take the arithmetic mean (\"average\") of the values.\n ", "desc": "Indicates how a set of values should be reduced.", "type": "API"}, {"name": "tf.distribute.ReductionToOneDevice", "docs": "A CrossDeviceOps implementation that copies values to one device to reduce.\n\n This implementation always copies values to one device to reduce them, then\n broadcast reduced values to the destinations. It doesn't support efficient\n batching.\n\n Here is how you can use `ReductionToOneDevice` in\n `tf.distribute.MirroredStrategy`:\n\n ```\n strategy = tf.distribute.MirroredStrategy(\n cross_device_ops=tf.distribute.ReductionToOneDevice())\n ```\n ", "desc": "A CrossDeviceOps implementation that copies values to one device to reduce.", "type": "API"}, {"name": "tf.distribute.ReplicaContext", "docs": "A class with a collection of APIs that can be called in a replica context.\n\n You can use `tf.distribute.get_replica_context` to get an instance of\n `ReplicaContext`, which can only be called inside the function passed to\n `tf.distribute.Strategy.run`.\n\n >>> strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1'])\n >>> def func():\n ... replica_context = tf.distribute.get_replica_context()\n ... return replica_context.replica_id_in_sync_group\n >>> strategy.run(func)\n PerReplica:{\n 0: ,\n 1: \n }\n ", "desc": "A class with a collection of APIs that can be called in a replica context.", "type": "API"}, {"name": "tf.distribute.RunOptions", "docs": "Run options for `strategy.run`.\n\n This can be used to hold some strategy specific configs.\n\n Attributes:\n experimental_enable_dynamic_batch_size: Boolean. Only applies to\n TPUStrategy. Default to True. If True, TPUStrategy will enable dynamic\n padder to support dynamic batch size for the inputs. Otherwise only static\n shape inputs are allowed.\n experimental_bucketizing_dynamic_shape: Boolean. Only applies to\n TPUStrategy. Default to False. If True, TPUStrategy will automatic\n bucketize inputs passed into `run` if the input shape is\n dynamic. This is a performance optimization to reduce XLA recompilation,\n which should not have impact on correctness.\n experimental_xla_options: A `tf.tpu.XLAOptions` instance. Only applies to\n TPUStrategy. Controls the XLA compiling options on TPUs. Default to None.\n ", "desc": "Run options for `strategy.run`.", "type": "API"}, {"name": "tf.distribute.Server", "docs": "An in-process TensorFlow server, for use in distributed training.\n\n A `tf.distribute.Server` instance encapsulates a set of devices and a\n `tf.compat.v1.Session` target that\n can participate in distributed training. A server belongs to a\n cluster (specified by a `tf.train.ClusterSpec`), and\n corresponds to a particular task in a named job. The server can\n communicate with any other server in the same cluster.\n ", "desc": "An in-process TensorFlow server, for use in distributed training.", "type": "API"}, {"name": "tf.distribute.Strategy", "docs": "A state & compute distribution policy on a list of devices.\n\n See [the guide](https://www.tensorflow.org/guide/distributed_training)\n for overview and examples. See `tf.distribute.StrategyExtended` and\n [`tf.distribute`](https://www.tensorflow.org/api_docs/python/tf/distribute)\n for a glossary of concepts mentioned on this page such as \"per-replica\",\n _replica_, and _reduce_.\n\n In short:\n\n * To use it with Keras `compile`/`fit`,\n [please\n read](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_keras).\n * You may pass descendant of `tf.distribute.Strategy` to\n `tf.estimator.RunConfig` to specify how a `tf.estimator.Estimator`\n should distribute its computation. See\n [guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_estimator_limited_support).\n * Otherwise, use `tf.distribute.Strategy.scope` to specify that a\n strategy should be used when building an executing your model.\n (This puts you in the \"cross-replica context\" for this strategy, which\n means the strategy is put in control of things like variable placement.)\n * If you are writing a custom training loop, you will need to call a few more\n methods,\n [see the\n guide](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_custom_training_loops):\n\n * Start by creating a `tf.data.Dataset` normally.\n * Use `tf.distribute.Strategy.experimental_distribute_dataset` to convert\n a `tf.data.Dataset` to something that produces \"per-replica\" values.\n If you want to manually specify how the dataset should be partitioned\n across replicas, use\n `tf.distribute.Strategy.distribute_datasets_from_function`\n instead.\n * Use `tf.distribute.Strategy.run` to run a function\n once per replica, taking values that may be \"per-replica\" (e.g.\n from a `tf.distribute.DistributedDataset` object) and returning\n \"per-replica\" values.\n This function is executed in \"replica context\", which means each\n operation is performed separately on each replica.\n * Finally use a method (such as `tf.distribute.Strategy.reduce`) to\n convert the resulting \"per-replica\" values into ordinary `Tensor`s.\n\n A custom training loop can be as simple as:\n\n ```\n with my_strategy.scope():\n @tf.function\n def distribute_train_epoch(dataset):\n def replica_fn(input):\n # process input and return result\n return result\n\n total_result = 0\n for x in dataset:\n per_replica_result = my_strategy.run(replica_fn, args=(x,))\n total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,\n per_replica_result, axis=None)\n return total_result\n\n dist_dataset = my_strategy.experimental_distribute_dataset(dataset)\n for _ in range(EPOCHS):\n train_result = distribute_train_epoch(dist_dataset)\n ```\n\n This takes an ordinary `dataset` and `replica_fn` and runs it\n distributed using a particular `tf.distribute.Strategy` named\n `my_strategy` above. Any variables created in `replica_fn` are created\n using `my_strategy`'s policy, and library functions called by\n `replica_fn` can use the `get_replica_context()` API to implement\n distributed-specific behavior.\n\n You can use the `reduce` API to aggregate results across replicas and use\n this as a return value from one iteration over a\n `tf.distribute.DistributedDataset`. Or\n you can use `tf.keras.metrics` (such as loss, accuracy, etc.) to\n accumulate metrics across steps in a given epoch.\n\n See the\n [custom training loop\n tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training)\n for a more detailed example.\n\n Note: `tf.distribute.Strategy` currently does not support TensorFlow's\n partitioned variables (where a single variable is split across multiple\n devices) at this time.\n ", "desc": "A state & compute distribution policy on a list of devices.", "type": "API"}, {"name": "tf.distribute.StrategyExtended", "docs": "Additional APIs for algorithms that need to be distribution-aware.\n\n Note: For most usage of `tf.distribute.Strategy`, there should be no need to\n call these methods, since TensorFlow libraries (such as optimizers) already\n call these methods when needed on your behalf.\n\n\n Some common use cases of functions on this page:\n\n * _Locality_\n\n `tf.distribute.DistributedValues` can have the same _locality_ as a\n _distributed variable_, which leads to a mirrored value residing on the same\n devices as the variable (as opposed to the compute devices). Such values may\n be passed to a call to `tf.distribute.StrategyExtended.update` to update the\n value of a variable. You may use\n `tf.distribute.StrategyExtended.colocate_vars_with` to give a variable the\n same locality as another variable. You may convert a \"PerReplica\" value to a\n variable's locality by using `tf.distribute.StrategyExtended.reduce_to` or\n `tf.distribute.StrategyExtended.batch_reduce_to`.\n\n * _How to update a distributed variable_\n\n A distributed variable is variables created on multiple devices. As discussed\n in the [glossary](https://www.tensorflow.org/api_docs/python/tf/distribute),\n mirrored variable and SyncOnRead variable are two examples. The standard\n pattern for updating distributed variables is to:\n\n 1. In your function passed to `tf.distribute.Strategy.run`,\n compute a list of (update, variable) pairs. For example, the update might\n be a gradient of the loss with respect to the variable.\n 2. Switch to cross-replica mode by calling\n `tf.distribute.get_replica_context().merge_call()` with the updates and\n variables as arguments.\n 3. Call\n `tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v)`\n (for one variable) or `tf.distribute.StrategyExtended.batch_reduce_to`\n (for a list of variables) to sum the updates.\n 4. Call `tf.distribute.StrategyExtended.update(v)` for each variable to update\n its value.\n\n Steps 2 through 4 are done automatically by class\n `tf.keras.optimizers.Optimizer` if you call its\n `tf.keras.optimizers.Optimizer.apply_gradients` method in a replica context.\n\n In fact, a higher-level solution to update a distributed variable is by\n calling `assign` on the variable as you would do to a regular `tf.Variable`.\n You can call the method in both _replica context_ and _cross-replica context_.\n For a _mirrored variable_, calling `assign` in _replica context_ requires you\n to specify the `aggregation` type in the variable constructor. In that case,\n the context switching and sync described in steps 2 through 4 are handled for\n you. If you call `assign` on _mirrored variable_ in _cross-replica context_,\n you can only assign a single value or assign values from another mirrored\n variable or a mirrored `tf.distribute.DistributedValues`. For a _SyncOnRead\n variable_, in _replica context_, you can simply call `assign` on it and no\n aggregation happens under the hood. In _cross-replica context_, you can only\n assign a single value to a SyncOnRead variable. One example case is restoring\n from a checkpoint: if the `aggregation` type of the variable is\n `tf.VariableAggregation.SUM`, it is assumed that replica values were added\n before checkpointing, so at the time of restoring, the value is divided by\n the number of replicas and then assigned to each replica; if the `aggregation`\n type is `tf.VariableAggregation.MEAN`, the value is assigned to each replica\n directly.\n\n ", "desc": "Additional APIs for algorithms that need to be distribution-aware.", "type": "API"}, {"name": "tf.distribute.TPUStrategy", "docs": "Synchronous training on TPUs and TPU Pods.\n\n To construct a TPUStrategy object, you need to run the\n initialization code as below:\n\n >>> resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n >>> tf.config.experimental_connect_to_cluster(resolver)\n >>> tf.tpu.experimental.initialize_tpu_system(resolver)\n >>> strategy = tf.distribute.TPUStrategy(resolver)\n\n While using distribution strategies, the variables created within the\n strategy's scope will be replicated across all the replicas and can be kept in\n sync using all-reduce algorithms.\n\n To run TF2 programs on TPUs, you can either use `.compile` and\n `.fit` APIs in `tf.keras` with TPUStrategy, or write your own customized\n training loop by calling `strategy.run` directly. Note that\n TPUStrategy doesn't support pure eager execution, so please make sure the\n function passed into `strategy.run` is a `tf.function` or\n `strategy.run` is called inside a `tf.function` if eager\n behavior is enabled. See more details in https://www.tensorflow.org/guide/tpu.\n\n `distribute_datasets_from_function` and\n `experimental_distribute_dataset` APIs can be used to distribute the dataset\n across the TPU workers when writing your own training loop. If you are using\n `fit` and `compile` methods available in `tf.keras.Model`, then Keras will\n handle the distribution for you.\n\n An example of writing customized training loop on TPUs:\n\n >>> with strategy.scope():\n ... model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(2, input_shape=(5,)),\n ... ])\n ... optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n >>> def dataset_fn(ctx):\n ... x = np.random.random((2, 5)).astype(np.float32)\n ... y = np.random.randint(2, size=(2, 1))\n ... dataset = tf.data.Dataset.from_tensor_slices((x, y))\n ... return dataset.repeat().batch(1, drop_remainder=True)\n >>> dist_dataset = strategy.distribute_datasets_from_function(\n ... dataset_fn)\n >>> iterator = iter(dist_dataset)\n\n >>> @tf.function()\n ... def train_step(iterator):\n ...\n ... def step_fn(inputs):\n ... features, labels = inputs\n ... with tf.GradientTape() as tape:\n ... logits = model(features, training=True)\n ... loss = tf.keras.losses.sparse_categorical_crossentropy(\n ... labels, logits)\n ...\n ... grads = tape.gradient(loss, model.trainable_variables)\n ... optimizer.apply_gradients(zip(grads, model.trainable_variables))\n ...\n ... strategy.run(step_fn, args=(next(iterator),))\n\n >>> train_step(iterator)\n\n For the advanced use cases like model parallelism, you can set\n `experimental_device_assignment` argument when creating TPUStrategy to specify\n number of replicas and number of logical devices. Below is an example to\n initialize TPU system with 2 logical devices and 1 replica.\n\n >>> resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n >>> tf.config.experimental_connect_to_cluster(resolver)\n >>> topology = tf.tpu.experimental.initialize_tpu_system(resolver)\n >>> device_assignment = tf.tpu.experimental.DeviceAssignment.build(\n ... topology,\n ... computation_shape=[1, 1, 1, 2],\n ... num_replicas=1)\n >>> strategy = tf.distribute.TPUStrategy(\n ... resolver, experimental_device_assignment=device_assignment)\n\n Then you can run a `tf.add` operation only on logical device 0.\n\n >>> @tf.function()\n ... def step_fn(inputs):\n ... features, _ = inputs\n ... output = tf.add(features, features)\n ...\n ... # Add operation will be executed on logical device 0.\n ... output = strategy.experimental_assign_to_logical_device(output, 0)\n ... return output\n >>> dist_dataset = strategy.distribute_datasets_from_function(\n ... dataset_fn)\n >>> iterator = iter(dist_dataset)\n >>> strategy.run(step_fn, args=(next(iterator),))\n\n `experimental_spmd_xla_partitioning` enables the experimental XLA SPMD feature\n for model parallelism. This flag can reduce the compilation time and HBM\n requirements. When running in this mode, every input tensor must either be\n partitioned (via `strategy.experimental_split_to_logical_devices`) or fully\n replicated (via `strategy.experimental_replicate_to_logical_devices`) to all\n logical devices. And calling `strategy.experimental_assign_to_logical_device`\n will result in a ValueError in this mode.\n ", "desc": "Synchronous training on TPUs and TPU Pods.", "type": "API"}, {"name": "tf.divide", "docs": "Computes Python style division of `x` by `y`.\n\n For example:\n\n >>> x = tf.constant([16, 12, 11])\n >>> y = tf.constant([4, 6, 2])\n >>> tf.divide(x,y)\n \n\n Args:\n x: A `Tensor`\n y: A `Tensor`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with same shape as input\n ", "desc": "Computes Python style division of `x` by `y`.", "type": "API"}, {"name": "tf.DType", "docs": "Represents the type of the elements in a `Tensor`.\n\n `DType`'s are used to specify the output data type for operations which\n require it, or to inspect the data type of existing `Tensor`'s.\n\n Examples:\n\n >>> tf.constant(1, dtype=tf.int64)\n \n >>> tf.constant(1.0).dtype\n tf.float32\n\n See `tf.dtypes` for a complete list of `DType`'s defined.\n ", "desc": "Represents the type of the elements in a `Tensor`.", "type": "API"}, {"name": "tf.dtypes", "docs": "Public API for tf.dtypes namespace.\n", "desc": "Public API for tf.dtypes namespace.", "type": "API"}, {"name": "tf.dtypes.as_dtype", "docs": "Converts the given `type_value` to a `DType`.\n\n Note: `DType` values are interned. When passed a new `DType` object,\n `as_dtype` always returns the interned value.\n\n Args:\n type_value: A value that can be converted to a `tf.DType` object. This may\n currently be a `tf.DType` object, a [`DataType`\n enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),\n a string type name, or a [`numpy.dtype`](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html).\n\n Returns:\n A `DType` corresponding to `type_value`.\n\n Raises:\n TypeError: If `type_value` cannot be converted to a `DType`.\n ", "desc": "Converts the given `type_value` to a `DType`.", "type": "API"}, {"name": "tf.dtypes.cast", "docs": "Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.cast(x, tf.int32)\n \n\n Notice `tf.cast` has an alias `tf.dtypes.cast`:\n\n >>> x = tf.constant([1.8, 2.2], dtype=tf.float32)\n >>> tf.dtypes.cast(x, tf.int32)\n \n\n The operation supports data types (for `x` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`.\n In case of casting from complex types (`complex64`, `complex128`) to real\n types, only the real part of `x` is returned. In case of casting from real\n types to complex types (`complex64`, `complex128`), the imaginary part of the\n returned value is set to `0`. The handling of complex types here matches the\n behavior of numpy.\n\n Note casting nan and inf values to integral types has undefined behavior.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices` of numeric type. It could\n be `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`,\n `int64`, `float16`, `float32`, `float64`, `complex64`, `complex128`,\n `bfloat16`.\n dtype: The destination type. The list of supported dtypes is the same as\n `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` and\n same type as `dtype`.\n\n Raises:\n TypeError: If `x` cannot be cast to the `dtype`.\n ", "desc": "Casts a tensor to a new type.", "type": "API"}, {"name": "tf.dtypes.complex", "docs": "Converts two real numbers to a complex number.\n\n Given a tensor `real` representing the real part of a complex number, and a\n tensor `imag` representing the imaginary part of a complex number, this\n operation returns complex numbers elementwise of the form \\\\(a + bj\\\\), where\n *a* represents the `real` part and *b* represents the `imag` part.\n\n The input tensors `real` and `imag` must have the same shape.\n\n For example:\n\n ```python\n real = tf.constant([2.25, 3.25])\n imag = tf.constant([4.75, 5.75])\n tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]]\n ```\n\n Args:\n real: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n imag: A `Tensor`. Must have the same type as `real`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64` or `complex128`.\n\n Raises:\n TypeError: Real and imag must be correct types\n ", "desc": "Converts two real numbers to a complex number.", "type": "API"}, {"name": "tf.dtypes.DType", "docs": "Represents the type of the elements in a `Tensor`.\n\n `DType`'s are used to specify the output data type for operations which\n require it, or to inspect the data type of existing `Tensor`'s.\n\n Examples:\n\n >>> tf.constant(1, dtype=tf.int64)\n \n >>> tf.constant(1.0).dtype\n tf.float32\n\n See `tf.dtypes` for a complete list of `DType`'s defined.\n ", "desc": "Represents the type of the elements in a `Tensor`.", "type": "API"}, {"name": "tf.dtypes.saturate_cast", "docs": "Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underflow in the cast, this op\n applies the appropriate clamping before the cast.\n\n Args:\n value: A `Tensor`.\n dtype: The desired output `DType`.\n name: A name for the operation (optional).\n\n Returns:\n `value` safely cast to `dtype`.\n ", "desc": "Performs a safe saturating cast of `value` to `dtype`.", "type": "API"}, {"name": "tf.dynamic_partition", "docs": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.\n\n For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`\n becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`\n are placed in `outputs[i]` in lexicographic order of `js`, and the first\n dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.\n In detail,\n\n ```python\n outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]\n\n outputs[i] = pack([data[js, ...] for js if partitions[js] == i])\n ```\n\n `data.shape` must start with `partitions.shape`.\n\n For example:\n\n ```python\n # Scalar partitions.\n partitions = 1\n num_partitions = 2\n data = [10, 20]\n outputs[0] = [] # Empty with shape [0, 2]\n outputs[1] = [[10, 20]]\n\n # Vector partitions.\n partitions = [0, 0, 1, 1, 0]\n num_partitions = 2\n data = [10, 20, 30, 40, 50]\n outputs[0] = [10, 20, 50]\n outputs[1] = [30, 40]\n ```\n\n See `dynamic_stitch` for an example on how to merge partitions back.\n\n
\n \n
\n\n Args:\n data: A `Tensor`.\n partitions: A `Tensor` of type `int32`.\n Any shape. Indices in the range `[0, num_partitions)`.\n num_partitions: An `int` that is `>= 1`.\n The number of partitions to output.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_partitions` `Tensor` objects with the same type as `data`.\n ", "desc": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.", "type": "API"}, {"name": "tf.dynamic_stitch", "docs": "Interleave the values from the `data` tensors into a single tensor.\n\n Builds a merged tensor such that\n\n ```python\n merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]\n ```\n\n For example, if each `indices[m]` is scalar or vector, we have\n\n ```python\n # Scalar indices:\n merged[indices[m], ...] = data[m][...]\n\n # Vector indices:\n merged[indices[m][i], ...] = data[m][i, ...]\n ```\n\n Each `data[i].shape` must start with the corresponding `indices[i].shape`,\n and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we\n must have `data[i].shape = indices[i].shape + constant`. In terms of this\n `constant`, the output shape is\n\n merged.shape = [max(indices)] + constant\n\n Values are merged in order, so if an index appears in both `indices[m][i]` and\n `indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the\n merged result. If you do not need this guarantee, ParallelDynamicStitch might\n perform better on some devices.\n\n For example:\n\n ```python\n indices[0] = 6\n indices[1] = [4, 1]\n indices[2] = [[5, 2], [0, 3]]\n data[0] = [61, 62]\n data[1] = [[41, 42], [11, 12]]\n data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]\n merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],\n [51, 52], [61, 62]]\n ```\n\n This method can be used to merge partitions created by `dynamic_partition`\n as illustrated on the following example:\n\n ```python\n # Apply function (increments x_i) on elements for which a certain condition\n # apply (x_i != -1 in this example).\n x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])\n condition_mask=tf.not_equal(x,tf.constant(-1.))\n partitioned_data = tf.dynamic_partition(\n x, tf.cast(condition_mask, tf.int32) , 2)\n partitioned_data[1] = partitioned_data[1] + 1.0\n condition_indices = tf.dynamic_partition(\n tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)\n x = tf.dynamic_stitch(condition_indices, partitioned_data)\n # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain\n # unchanged.\n ```\n\n
\n \n
\n\n Args:\n indices: A list of at least 1 `Tensor` objects with type `int32`.\n data: A list with the same length as `indices` of `Tensor` objects with the same type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Interleave the values from the `data` tensors into a single tensor.", "type": "API"}, {"name": "tf.edit_distance", "docs": "Computes the Levenshtein distance between sequences.\n\n This operation takes variable-length sequences (`hypothesis` and `truth`),\n each provided as a `SparseTensor`, and computes the Levenshtein distance.\n You can normalize the edit distance by length of `truth` by setting\n `normalize` to true.\n\n For example:\n\n Given the following input,\n * `hypothesis` is a `tf.SparseTensor` of shape `[2, 1, 1]`\n * `truth` is a `tf.SparseTensor` of shape `[2, 2, 2]`\n\n >>> hypothesis = tf.SparseTensor(\n ... [[0, 0, 0],\n ... [1, 0, 0]],\n ... [\"a\", \"b\"],\n ... (2, 1, 1))\n >>> truth = tf.SparseTensor(\n ... [[0, 1, 0],\n ... [1, 0, 0],\n ... [1, 0, 1],\n ... [1, 1, 0]],\n ... [\"a\", \"b\", \"c\", \"a\"],\n ... (2, 2, 2))\n >>> tf.edit_distance(hypothesis, truth, normalize=True)\n \n\n The operation returns a dense Tensor of shape `[2, 2]` with\n edit distances normalized by `truth` lengths.\n\n **Note**: It is possible to calculate edit distance between two\n sparse tensors with variable-length values. However, attempting to create\n them while eager execution is enabled will result in a `ValueError`.\n\n For the following inputs,\n\n ```python\n # 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:\n # (0,0) = [\"a\"]\n # (1,0) = [\"b\"]\n hypothesis = tf.sparse.SparseTensor(\n [[0, 0, 0],\n [1, 0, 0]],\n [\"a\", \"b\"],\n (2, 1, 1))\n\n # 'truth' is a tensor of shape `[2, 2]` with variable-length values:\n # (0,0) = []\n # (0,1) = [\"a\"]\n # (1,0) = [\"b\", \"c\"]\n # (1,1) = [\"a\"]\n truth = tf.sparse.SparseTensor(\n [[0, 1, 0],\n [1, 0, 0],\n [1, 0, 1],\n [1, 1, 0]],\n [\"a\", \"b\", \"c\", \"a\"],\n (2, 2, 2))\n\n normalize = True\n\n # The output would be a dense Tensor of shape `(2,)`, with edit distances\n normalized by 'truth' lengths.\n # output => array([0., 0.5], dtype=float32)\n ```\n\n Args:\n hypothesis: A `SparseTensor` containing hypothesis sequences.\n truth: A `SparseTensor` containing truth sequences.\n normalize: A `bool`. If `True`, normalizes the Levenshtein distance by\n length of `truth.`\n name: A name for the operation (optional).\n\n Returns:\n A dense `Tensor` with rank `R - 1`, where R is the rank of the\n `SparseTensor` inputs `hypothesis` and `truth`.\n\n Raises:\n TypeError: If either `hypothesis` or `truth` are not a `SparseTensor`.\n ", "desc": "Computes the Levenshtein distance between sequences.", "type": "API"}, {"name": "tf.eig", "docs": "Computes the eigen decomposition of a batch of matrices.\n\n The eigenvalues\n and eigenvectors for a non-Hermitian matrix in general are complex. The\n eigenvectors are not guaranteed to be linearly independent.\n\n Computes the eigenvalues and right eigenvectors of the innermost\n N-by-N matrices in `tensor` such that\n `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of\n each inner inner matrix is referenced.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order.\n v: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most\n matrices contain eigenvectors of the corresponding matrices in `tensor`\n ", "desc": "Computes the eigen decomposition of a batch of matrices.", "type": "API"}, {"name": "tf.eigvals", "docs": "Computes the eigenvalues of one or more matrices.\n\n Note: If your program backpropagates through this function, you should replace\n it with a call to tf.linalg.eig (possibly ignoring the second output) to\n avoid computing the eigen decomposition twice. This is because the\n eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See\n _SelfAdjointEigV2Grad in linalg_grad.py.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`\n eigenvalues of `tensor[..., :, :]`.\n ", "desc": "Computes the eigenvalues of one or more matrices.", "type": "API"}, {"name": "tf.einsum", "docs": "Tensor contraction over specified indices and outer product.\n\n Einsum allows defining Tensors by defining their element-wise computation.\n This computation is defined by `equation`, a shorthand form based on Einstein\n summation. As an example, consider multiplying two matrices A and B to form a\n matrix C. The elements of C are given by:\n\n $$ C_{i,k} = \\sum_j A_{i,j} B_{j,k} $$\n\n or\n\n ```\n C[i,k] = sum_j A[i,j] * B[j,k]\n ```\n\n The corresponding einsum `equation` is:\n\n ```\n ij,jk->ik\n ```\n\n In general, to convert the element-wise equation into the `equation` string,\n use the following procedure (intermediate strings for matrix multiplication\n example provided in parentheses):\n\n 1. remove variable names, brackets, and commas, (`ik = sum_j ij * jk`)\n 2. replace \"*\" with \",\", (`ik = sum_j ij , jk`)\n 3. drop summation signs, and (`ik = ij, jk`)\n 4. move the output to the right, while replacing \"=\" with \"->\". (`ij,jk->ik`)\n\n Note: If the output indices are not specified repeated indices are summed.\n So `ij,jk->ik` can be simplified to `ij,jk`.\n\n Many common operations can be expressed in this way. For example:\n\n **Matrix multiplication**\n\n >>> m0 = tf.random.normal(shape=[2, 3])\n >>> m1 = tf.random.normal(shape=[3, 5])\n >>> e = tf.einsum('ij,jk->ik', m0, m1)\n >>> # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n Repeated indices are summed if the output indices are not specified.\n\n >>> e = tf.einsum('ij,jk', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n\n **Dot product**\n\n >>> u = tf.random.normal(shape=[5])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,i->', u, v) # output = sum_i u[i]*v[i]\n >>> print(e.shape)\n ()\n\n **Outer product**\n\n >>> u = tf.random.normal(shape=[3])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]\n >>> print(e.shape)\n (3, 5)\n\n **Transpose**\n\n >>> m = tf.ones(2,3)\n >>> e = tf.einsum('ij->ji', m0) # output[j,i] = m0[i,j]\n >>> print(e.shape)\n (3, 2)\n\n **Diag**\n\n >>> m = tf.reshape(tf.range(9), [3,3])\n >>> diag = tf.einsum('ii->i', m)\n >>> print(diag.shape)\n (3,)\n\n **Trace**\n\n >>> # Repeated indices are summed.\n >>> trace = tf.einsum('ii', m) # output[j,i] = trace(m) = sum_i m[i, i]\n >>> assert trace == sum(diag)\n >>> print(trace.shape)\n ()\n\n **Batch matrix multiplication**\n\n >>> s = tf.random.normal(shape=[7,5,3])\n >>> t = tf.random.normal(shape=[7,3,2])\n >>> e = tf.einsum('bij,bjk->bik', s, t)\n >>> # output[a,i,k] = sum_j s[a,i,j] * t[a, j, k]\n >>> print(e.shape)\n (7, 5, 2)\n\n This method does not support broadcasting on named-axes. All axes with\n matching labels should have the same length. If you have length-1 axes,\n use `tf.squeeze` or `tf.reshape` to eliminate them.\n\n To write code that is agnostic to the number of indices in the input\n use an ellipsis. The ellipsis is a placeholder for \"whatever other indices\n fit here\".\n\n For example, to perform a NumPy-style broadcasting-batch-matrix multiplication\n where the matrix multiply acts on the last two axes of the input, use:\n\n >>> s = tf.random.normal(shape=[11, 7, 5, 3])\n >>> t = tf.random.normal(shape=[11, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Einsum **will** broadcast over axes covered by the ellipsis.\n\n >>> s = tf.random.normal(shape=[11, 1, 5, 3])\n >>> t = tf.random.normal(shape=[1, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Args:\n equation: a `str` describing the contraction, in the same format as\n `numpy.einsum`.\n *inputs: the inputs to contract (each one a `Tensor`), whose shapes should\n be consistent with `equation`.\n **kwargs:\n - optimize: Optimization strategy to use to find contraction path using\n opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or\n 'auto'. (optional, default: 'greedy').\n - name: A name for the operation (optional).\n\n Returns:\n The contracted `Tensor`, with shape determined by `equation`.\n\n Raises:\n ValueError: If\n - the format of `equation` is incorrect,\n - number of inputs or their shapes are inconsistent with `equation`.\n ", "desc": "Tensor contraction over specified indices and outer product.", "type": "API"}, {"name": "tf.ensure_shape", "docs": "Updates the shape of a tensor and checks at runtime that the shape holds.\n\n When executed, this operation asserts that the input tensor `x`'s shape\n is compatible with the `shape` argument.\n See `tf.TensorShape.is_compatible_with` for details.\n\n >>> x = tf.constant([[1, 2, 3],\n ... [4, 5, 6]])\n >>> x = tf.ensure_shape(x, [2, 3])\n\n Use `None` for unknown dimensions:\n\n >>> x = tf.ensure_shape(x, [None, 3])\n >>> x = tf.ensure_shape(x, [2, None])\n\n If the tensor's shape is not compatible with the `shape` argument, an error\n is raised:\n\n >>> x = tf.ensure_shape(x, [5])\n Traceback (most recent call last):\n ...\n tf.errors.InvalidArgumentError: Shape of tensor dummy_input [3] is not\n compatible with expected shape [5]. [Op:EnsureShape]\n\n During graph construction (typically tracing a `tf.function`),\n `tf.ensure_shape` updates the static-shape of the **result** tensor by\n merging the two shapes. See `tf.TensorShape.merge_with` for details.\n\n This is most useful when **you** know a shape that can't be determined\n statically by TensorFlow.\n\n The following trivial `tf.function` prints the input tensor's\n static-shape before and after `ensure_shape` is applied.\n\n >>> @tf.function\n ... def f(tensor):\n ... print(\"Static-shape before:\", tensor.shape)\n ... tensor = tf.ensure_shape(tensor, [None, 3])\n ... print(\"Static-shape after:\", tensor.shape)\n ... return tensor\n\n This lets you see the effect of `tf.ensure_shape` when the function is traced:\n >>> cf = f.get_concrete_function(tf.TensorSpec([None, None]))\n Static-shape before: (None, None)\n Static-shape after: (None, 3)\n\n >>> cf(tf.zeros([3, 3])) # Passes\n >>> cf(tf.constant([1, 2, 3])) # fails\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Shape of tensor x [3] is not compatible with expected shape [3,3].\n\n The above example raises `tf.errors.InvalidArgumentError`, because `x`'s\n shape, `(3,)`, is not compatible with the `shape` argument, `(None, 3)`\n\n Inside a `tf.function` or `v1.Graph` context it checks both the buildtime and\n runtime shapes. This is stricter than `tf.Tensor.set_shape` which only\n checks the buildtime shape.\n\n Note: This differs from `tf.Tensor.set_shape` in that it sets the static shape\n of the resulting tensor and enforces it at runtime, raising an error if the\n tensor's runtime shape is incompatible with the specified shape.\n `tf.Tensor.set_shape` sets the static shape of the tensor without enforcing it\n at runtime, which may result in inconsistencies between the statically-known\n shape of tensors and the runtime value of tensors.\n\n For example, of loading images of a known size:\n\n >>> @tf.function\n ... def decode_image(png):\n ... image = tf.image.decode_png(png, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... image = tf.ensure_shape(image,[28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n When tracing a function, no ops are being executed, shapes may be unknown.\n See the [Concrete Functions Guide](https://www.tensorflow.org/guide/concrete_function)\n for details.\n\n >>> concrete_decode = decode_image.get_concrete_function(\n ... tf.TensorSpec([], dtype=tf.string))\n Initial shape: (None, None, 3)\n Final shape: (28, 28, 3)\n\n >>> image = tf.random.uniform(maxval=255, shape=[28, 28, 3], dtype=tf.int32)\n >>> image = tf.cast(image,tf.uint8)\n >>> png = tf.image.encode_png(image)\n >>> image2 = concrete_decode(png)\n >>> print(image2.shape)\n (28, 28, 3)\n\n >>> image = tf.concat([image,image], axis=0)\n >>> print(image.shape)\n (56, 28, 3)\n >>> png = tf.image.encode_png(image)\n >>> image2 = concrete_decode(png)\n Traceback (most recent call last):\n ...\n tf.errors.InvalidArgumentError: Shape of tensor DecodePng [56,28,3] is not\n compatible with expected shape [28,28,3].\n\n Caution: if you don't use the result of `tf.ensure_shape` the check may not\n run.\n\n >>> @tf.function\n ... def bad_decode_image(png):\n ... image = tf.image.decode_png(png, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... # BAD: forgot to use the returned tensor.\n ... tf.ensure_shape(image,[28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n >>> image = bad_decode_image(png)\n Initial shape: (None, None, 3)\n Final shape: (None, None, 3)\n >>> print(image.shape)\n (56, 28, 3)\n\n Args:\n x: A `Tensor`.\n shape: A `TensorShape` representing the shape of this tensor, a\n `TensorShapeProto`, a list, a tuple, or None.\n name: A name for this operation (optional). Defaults to \"EnsureShape\".\n\n Returns:\n A `Tensor`. Has the same type and contents as `x`.\n\n Raises:\n tf.errors.InvalidArgumentError: If `shape` is incompatible with the shape\n of `x`.\n ", "desc": "Updates the shape of a tensor and checks at runtime that the shape holds.", "type": "API"}, {"name": "tf.equal", "docs": "Returns the truth value of (x == y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise equality comparison, returning a Tensor of\n boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x == y) element-wise.", "type": "API"}, {"name": "tf.errors", "docs": "Exception types for TensorFlow errors.\n", "desc": "Exception types for TensorFlow errors.", "type": "API"}, {"name": "tf.errors.AbortedError", "docs": "The operation was aborted, typically due to a concurrent action.\n\n For example, running a\n `tf.QueueBase.enqueue`\n operation may raise `AbortedError` if a\n `tf.QueueBase.close` operation\n previously ran.\n\n @@__init__\n ", "desc": "The operation was aborted, typically due to a concurrent action.", "type": "API"}, {"name": "tf.errors.AlreadyExistsError", "docs": "Raised when an entity that we attempted to create already exists.\n\n For example, running an operation that saves a file\n (e.g. `tf.train.Saver.save`)\n could potentially raise this exception if an explicit filename for an\n existing file was passed.\n\n @@__init__\n ", "desc": "Raised when an entity that we attempted to create already exists.", "type": "API"}, {"name": "tf.errors.CancelledError", "docs": "Raised when an operation or step is cancelled.\n\n For example, a long-running operation (e.g.\n `tf.QueueBase.enqueue` may be\n cancelled by running another operation (e.g.\n `tf.QueueBase.close`,\n or by `tf.Session.close`.\n A step that is running such a long-running operation will fail by raising\n `CancelledError`.\n\n @@__init__\n ", "desc": "Raised when an operation or step is cancelled.", "type": "API"}, {"name": "tf.errors.DataLossError", "docs": "Raised when unrecoverable data loss or corruption is encountered.\n\n For example, this may be raised by running a\n `tf.WholeFileReader.read`\n operation, if the file is truncated while it is being read.\n\n @@__init__\n ", "desc": "Raised when unrecoverable data loss or corruption is encountered.", "type": "API"}, {"name": "tf.errors.DeadlineExceededError", "docs": "Raised when a deadline expires before an operation could complete.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "Raised when a deadline expires before an operation could complete.", "type": "API"}, {"name": "tf.errors.FailedPreconditionError", "docs": "Operation was rejected because the system is not in a state to execute it.\n\n This exception is most commonly raised when running an operation\n that reads a `tf.Variable`\n before it has been initialized.\n\n @@__init__\n ", "desc": "Operation was rejected because the system is not in a state to execute it.", "type": "API"}, {"name": "tf.errors.InternalError", "docs": "Raised when the system experiences an internal error.\n\n This exception is raised when some invariant expected by the runtime\n has been broken. Catching this exception is not recommended.\n\n @@__init__\n ", "desc": "Raised when the system experiences an internal error.", "type": "API"}, {"name": "tf.errors.InvalidArgumentError", "docs": "Raised when an operation receives an invalid argument.\n\n This error is typically raised when an op receives mismatched arguments.\n\n Example:\n\n >>> tf.reshape([1, 2, 3], (2,))\n Traceback (most recent call last):\n ...\n InvalidArgumentError: ...\n\n @@__init__\n ", "desc": "Raised when an operation receives an invalid argument.", "type": "API"}, {"name": "tf.errors.NotFoundError", "docs": "Raised when a requested entity (e.g., a file or directory) was not found.\n\n For example, running the\n `tf.WholeFileReader.read`\n operation could raise `NotFoundError` if it receives the name of a file that\n does not exist.\n\n @@__init__\n ", "desc": "Raised when a requested entity (e.g., a file or directory) was not found.", "type": "API"}, {"name": "tf.errors.OperatorNotAllowedInGraphError", "docs": "An error is raised for unsupported operator in Graph execution.\n\n For example, using a `tf.Tensor` as a Python `bool` in Graph execution\n is not allowed.\n ", "desc": "An error is raised for unsupported operator in Graph execution.", "type": "API"}, {"name": "tf.errors.OpError", "docs": "The base class for TensorFlow exceptions.\n\n Usually, TensorFlow will raise a more specific subclass of `OpError` from the\n `tf.errors` module.\n ", "desc": "The base class for TensorFlow exceptions.", "type": "API"}, {"name": "tf.errors.OutOfRangeError", "docs": "Raised when an operation iterates past the valid input range.\n\n This exception is raised in \"end-of-file\" conditions, such as when a\n `tf.QueueBase.dequeue`\n operation is blocked on an empty queue, and a\n `tf.QueueBase.close`\n operation executes.\n\n @@__init__\n ", "desc": "Raised when an operation iterates past the valid input range.", "type": "API"}, {"name": "tf.errors.PermissionDeniedError", "docs": "Raised when the caller does not have permission to run an operation.\n\n For example, running the\n `tf.WholeFileReader.read`\n operation could raise `PermissionDeniedError` if it receives the name of a\n file for which the user does not have the read file permission.\n\n @@__init__\n ", "desc": "Raised when the caller does not have permission to run an operation.", "type": "API"}, {"name": "tf.errors.ResourceExhaustedError", "docs": "Some resource has been exhausted.\n\n For example, this error might be raised if a per-user quota is\n exhausted, or perhaps the entire file system is out of space.\n\n @@__init__\n ", "desc": "Some resource has been exhausted.", "type": "API"}, {"name": "tf.errors.UnauthenticatedError", "docs": "The request does not have valid authentication credentials.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "The request does not have valid authentication credentials.", "type": "API"}, {"name": "tf.errors.UnavailableError", "docs": "Raised when the runtime is currently unavailable.\n\n This exception is not currently used.\n\n @@__init__\n ", "desc": "Raised when the runtime is currently unavailable.", "type": "API"}, {"name": "tf.errors.UnimplementedError", "docs": "Raised when an operation has not been implemented.\n\n Some operations may raise this error when passed otherwise-valid\n arguments that it does not currently support. For example, running\n the `tf.nn.max_pool2d` operation\n would raise this error if pooling was requested on the batch dimension,\n because this is not yet supported.\n\n @@__init__\n ", "desc": "Raised when an operation has not been implemented.", "type": "API"}, {"name": "tf.errors.UnknownError", "docs": "Unknown error.\n\n An example of where this error may be returned is if a Status value\n received from another address space belongs to an error-space that\n is not known to this address space. Also, errors raised by APIs that\n do not return enough error information may be converted to this\n error.\n\n @@__init__\n ", "desc": "Unknown error.", "type": "API"}, {"name": "tf.estimator", "docs": "", "desc": "", "type": "API"}, {"name": "tf.estimator.add_metrics", "docs": "Creates a new `tf.estimator.Estimator` which has given metrics.\n\n Example:\n\n ```python\n def my_auc(labels, predictions):\n auc_metric = tf.keras.metrics.AUC(name=\"my_auc\")\n auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'])\n return {'auc': auc_metric}\n\n estimator = tf.estimator.DNNClassifier(...)\n estimator = tf.estimator.add_metrics(estimator, my_auc)\n estimator.train(...)\n estimator.evaluate(...)\n ```\n Example usage of custom metric which uses features:\n\n ```python\n def my_auc(labels, predictions, features):\n auc_metric = tf.keras.metrics.AUC(name=\"my_auc\")\n auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'],\n sample_weight=features['weight'])\n return {'auc': auc_metric}\n\n estimator = tf.estimator.DNNClassifier(...)\n estimator = tf.estimator.add_metrics(estimator, my_auc)\n estimator.train(...)\n estimator.evaluate(...)\n ```\n\n Args:\n estimator: A `tf.estimator.Estimator` object.\n metric_fn: A function which should obey the following signature:\n - Args: can only have following four arguments in any order:\n * predictions: Predictions `Tensor` or dict of `Tensor` created by given\n `estimator`.\n * features: Input `dict` of `Tensor` objects created by `input_fn` which\n is given to `estimator.evaluate` as an argument.\n * labels: Labels `Tensor` or dict of `Tensor` created by `input_fn`\n which is given to `estimator.evaluate` as an argument.\n * config: config attribute of the `estimator`.\n - Returns: Dict of metric results keyed by name. Final metrics are a\n union of this and `estimator's` existing metrics. If there is a name\n conflict between this and `estimator`s existing metrics, this will\n override the existing one. The values of the dict are the results of\n calling a metric function, namely a `(metric_tensor, update_op)` tuple.\n\n Returns:\n A new `tf.estimator.Estimator` which has a union of original metrics with\n given ones.\n ", "desc": "Creates a new `tf.estimator.Estimator` which has given metrics.", "type": "API"}, {"name": "tf.estimator.BaselineClassifier", "docs": "A classifier that can establish a simple baseline.\n\n This classifier ignores feature values and will learn to predict the average\n value of each label. For single-label problems, this will predict the\n probability distribution of the classes as seen in the labels. For multi-label\n problems, this will predict the fraction of examples that are positive for\n each class.\n\n Example:\n\n ```python\n\n # Build BaselineClassifier\n classifier = tf.estimator.BaselineClassifier(n_classes=3)\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n classifier.train(input_fn=input_fn_train)\n\n # Evaluate cross entropy between the test and train labels.\n loss = classifier.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # predict outputs the probability distribution of the classes as seen in\n # training.\n predictions = classifier.predict(new_samples)\n\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A classifier that can establish a simple baseline.", "type": "API"}, {"name": "tf.estimator.BaselineEstimator", "docs": "An estimator that can establish a simple baseline.\n\n The estimator uses a user-specified head.\n\n This estimator ignores feature values and will learn to predict the average\n value of each label. E.g. for single-label classification problems, this will\n predict the probability distribution of the classes as seen in the labels.\n For multi-label classification problems, it will predict the ratio of examples\n that contain each class.\n\n Example:\n\n ```python\n\n # Build baseline multi-label classifier.\n estimator = tf.estimator.BaselineEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3))\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n estimator.train(input_fn=input_fn_train)\n\n # Evaluates cross entropy between the test and train labels.\n loss = estimator.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # For each class, predicts the ratio of training examples that contain the\n # class.\n predictions = estimator.predict(new_samples)\n\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is specified in the `head` constructor (and not None) for\n the head passed to BaselineEstimator's constructor, a feature with\n `key=weight_column` whose value is a `Tensor`.\n ", "desc": "An estimator that can establish a simple baseline.", "type": "API"}, {"name": "tf.estimator.BaselineRegressor", "docs": "A regressor that can establish a simple baseline.\n\n This regressor ignores feature values and will learn to predict the average\n value of each label.\n\n Example:\n\n ```python\n\n # Build BaselineRegressor\n regressor = tf.estimator.BaselineRegressor()\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n\n # Fit model.\n regressor.train(input_fn=input_fn_train)\n\n # Evaluate squared-loss between the test and train targets.\n loss = regressor.evaluate(input_fn=input_fn_eval)[\"loss\"]\n\n # predict outputs the mean value seen during training.\n predictions = regressor.predict(new_samples)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A regressor that can establish a simple baseline.", "type": "API"}, {"name": "tf.estimator.BestExporter", "docs": "This class exports the serving graph and checkpoints of the best models.\n\n This class performs a model export everytime the new model is better than any\n existing model.\n ", "desc": "This class exports the serving graph and checkpoints of the best models.", "type": "API"}, {"name": "tf.estimator.BinaryClassHead", "docs": "Creates a `Head` for single label binary classification.\n\n Uses `sigmoid_cross_entropy_with_logits` loss.\n\n The head expects `logits` with shape `[D0, D1, ... DN, 1]`.\n In many applications, the shape is `[batch_size, 1]`.\n\n `labels` must be a dense `Tensor` with shape matching `logits`, namely\n `[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string\n `Tensor` with values from the vocabulary. If `label_vocabulary` is not given,\n `labels` must be float `Tensor` with values in the interval `[0, 1]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n The loss is the weighted sum over the input dimensions. Namely, if the input\n labels have shape `[batch_size, 1]`, the loss is the weighted sum over\n `batch_size`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns loss\n with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support float `labels` with\n shape `[D0, D1, ... DN, 1]`. Namely, the head applies `label_vocabulary` to\n the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> head = tf.estimator.BinaryClassHead()\n >>> logits = np.array(((45,), (-41,),), dtype=np.float32)\n >>> labels = np.array(((1,), (1,),), dtype=np.int32)\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size\n >>> # = sum(0, 41) / 2 = 41 / 2 = 20.50\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 20.50\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n accuracy : 0.50\n accuracy_baseline : 1.00\n auc : 0.00\n auc_precision_recall : 1.00\n average_loss : 20.50\n label/mean : 1.00\n precision : 1.00\n prediction/mean : 0.50\n recall : 0.50\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[ 45.]\n [-41.]], shape=(2, 1), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.BinaryClassHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.BinaryClassHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n thresholds: Iterable of floats in the range `(0, 1)`. For binary\n classification metrics such as precision and recall, an eval metric is\n generated for each threshold value. This threshold is applied to the\n logistic values to determine the binary classification (i.e., above the\n threshold is `true`, below is `false`.\n label_vocabulary: A list or tuple of strings representing possible label\n values. If it is not given, that means labels are already encoded within\n [0, 1]. If given, labels must be string type and have any value in\n `label_vocabulary`. Note that errors will be raised if `label_vocabulary`\n is not provided but labels are strings.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by `batch size * label_dimension`.\n loss_fn: Optional loss function.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for single label binary classification.", "type": "API"}, {"name": "tf.estimator.CheckpointSaverHook", "docs": "Saves checkpoints every N steps or seconds.", "desc": "Saves checkpoints every N steps or seconds.", "type": "API"}, {"name": "tf.estimator.CheckpointSaverListener", "docs": "Interface for listeners that take action before or after checkpoint save.\n\n `CheckpointSaverListener` triggers only in steps when `CheckpointSaverHook` is\n triggered, and provides callbacks at the following points:\n - before using the session\n - before each call to `Saver.save()`\n - after each call to `Saver.save()`\n - at the end of session\n\n To use a listener, implement a class and pass the listener to a\n `CheckpointSaverHook`, as in this example:\n\n ```python\n class ExampleCheckpointSaverListener(CheckpointSaverListener):\n def begin(self):\n # You can add ops to the graph here.\n print('Starting the session.')\n self.your_tensor = ...\n\n def before_save(self, session, global_step_value):\n print('About to write a checkpoint')\n\n def after_save(self, session, global_step_value):\n print('Done writing checkpoint.')\n if decided_to_stop_training():\n return True\n\n def end(self, session, global_step_value):\n print('Done with the session.')\n\n ...\n listener = ExampleCheckpointSaverListener()\n saver_hook = tf.estimator.CheckpointSaverHook(\n checkpoint_dir, listeners=[listener])\n with\n tf.compat.v1.train.MonitoredTrainingSession(chief_only_hooks=[saver_hook]):\n ...\n ```\n\n A `CheckpointSaverListener` may simply take some action after every\n checkpoint save. It is also possible for the listener to use its own schedule\n to act less frequently, e.g. based on global_step_value. In this case,\n implementors should implement the `end()` method to handle actions related to\n the last checkpoint save. But the listener should not act twice if\n `after_save()` already handled this last checkpoint save.\n\n A `CheckpointSaverListener` can request training to be stopped, by returning\n True in `after_save`. Please note that, in replicated distributed training\n setting, only `chief` should use this behavior. Otherwise each worker will do\n their own evaluation, which may be wasteful of resources.\n ", "desc": "Interface for listeners that take action before or after checkpoint save.", "type": "API"}, {"name": "tf.estimator.classifier_parse_example_spec", "docs": "Generates parsing spec for tf.parse_example to be used with classifiers.\n\n If users keep data in tf.Example format, they need to call tf.parse_example\n with a proper feature spec. There are two main things that this utility helps:\n\n * Users need to combine parsing spec of features with labels and weights\n (if any) since they are all parsed from same tf.Example instance. This\n utility combines these specs.\n * It is difficult to map expected label by a classifier such as\n `DNNClassifier` to corresponding tf.parse_example spec. This utility encodes\n it by getting related information from users (key, dtype).\n\n Example output of parsing spec:\n\n ```python\n # Define features and transformations\n feature_b = tf.feature_column.numeric_column(...)\n feature_c_bucketized = tf.feature_column.bucketized_column(\n tf.feature_column.numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = tf.feature_column.crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]\n parsing_spec = tf.estimator.classifier_parse_example_spec(\n feature_columns, label_key='my-label', label_dtype=tf.string)\n\n # For the above example, classifier_parse_example_spec would return the dict:\n assert parsing_spec == {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n \"my-label\" : parsing_ops.FixedLenFeature([1], dtype=tf.string)\n }\n ```\n\n Example usage with a classifier:\n\n ```python\n feature_columns = # define features via tf.feature_column\n estimator = DNNClassifier(\n n_classes=1000,\n feature_columns=feature_columns,\n weight_column='example-weight',\n label_vocabulary=['photos', 'keep', ...],\n hidden_units=[256, 64, 16])\n # This label configuration tells the classifier the following:\n # * weights are retrieved with key 'example-weight'\n # * label is string and can be one of the following ['photos', 'keep', ...]\n # * integer id for label 'photos' is 0, 'keep' is 1, ...\n\n\n # Input builders\n def input_fn_train(): # Returns a tuple of features and labels.\n features = tf.contrib.learn.read_keyed_batch_features(\n file_pattern=train_files,\n batch_size=batch_size,\n # creates parsing configuration for tf.parse_example\n features=tf.estimator.classifier_parse_example_spec(\n feature_columns,\n label_key='my-label',\n label_dtype=tf.string,\n weight_column='example-weight'),\n reader=tf.RecordIOReader)\n labels = features.pop('my-label')\n return features, labels\n\n estimator.train(input_fn=input_fn_train)\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `FeatureColumn`.\n label_key: A string identifying the label. It means tf.Example stores labels\n with this key.\n label_dtype: A `tf.dtype` identifies the type of labels. By default it is\n `tf.int64`. If user defines a `label_vocabulary`, this should be set as\n `tf.string`. `tf.float32` labels are only supported for binary\n classification.\n label_default: used as label if label_key does not exist in given\n tf.Example. An example usage: let's say `label_key` is 'clicked' and\n tf.Example contains clicked data only for positive examples in following\n format `key:clicked, value:1`. This means that if there is no data with\n key 'clicked' it should count as negative example by setting\n `label_deafault=0`. Type of this value should be compatible with\n `label_dtype`.\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. If it is a string, it is\n used as a key to fetch weight tensor from the `features`. If it is a\n `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then\n weight_column.normalizer_fn is applied on it to get weight tensor.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If label is used in `feature_columns`.\n ValueError: If weight_column is used in `feature_columns`.\n ValueError: If any of the given `feature_columns` is not a `_FeatureColumn`\n instance.\n ValueError: If `weight_column` is not a `NumericColumn` instance.\n ValueError: if label_key is None.\n ", "desc": "Generates parsing spec for tf.parse_example to be used with classifiers.", "type": "API"}, {"name": "tf.estimator.DNNClassifier", "docs": "A classifier for TensorFlow DNN models.\n\n Example:\n\n ```python\n categorical_feature_a = categorical_column_with_hash_bucket(...)\n categorical_feature_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A classifier for TensorFlow DNN models.", "type": "API"}, {"name": "tf.estimator.DNNEstimator", "docs": "An estimator for TensorFlow DNN models with user-specified head.\n\n Example:\n\n ```python\n sparse_feature_a = sparse_column_with_hash_bucket(...)\n sparse_feature_b = sparse_column_with_hash_bucket(...)\n\n sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,\n ...)\n sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,\n ...)\n\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss and predicted output are determined by the specified head.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow DNN models with user-specified head.", "type": "API"}, {"name": "tf.estimator.DNNLinearCombinedClassifier", "docs": "An estimator for TensorFlow Linear and DNN joined classification models.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_id_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedClassifier(\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...),\n # warm-start settings\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined classification models.", "type": "API"}, {"name": "tf.estimator.DNNLinearCombinedEstimator", "docs": "An estimator for TensorFlow Linear and DNN joined models with custom head.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...))\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined models with custom head.", "type": "API"}, {"name": "tf.estimator.DNNLinearCombinedRegressor", "docs": "An estimator for TensorFlow Linear and DNN joined models for regression.\n\n Note: This estimator is also known as wide-n-deep.\n\n Example:\n\n ```python\n numeric_feature = numeric_column(...)\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNLinearCombinedRegressor(\n # wide settings\n linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],\n linear_optimizer=tf.keras.optimizers.Ftrl(...),\n # deep settings\n dnn_feature_columns=[\n categorical_feature_a_emb, categorical_feature_b_emb,\n numeric_feature],\n dnn_hidden_units=[1000, 500, 100],\n dnn_optimizer=tf.keras.optimizers.Adagrad(...),\n # warm-start settings\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # To apply L1 and L2 regularization, you can set dnn_optimizer to:\n tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001,\n l2_regularization_strength=0.001)\n # To apply learning rate decay, you can set dnn_optimizer to a callable:\n lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96)\n # It is the same for linear_optimizer.\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * for each `column` in `dnn_feature_columns` + `linear_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear and DNN joined models for regression.", "type": "API"}, {"name": "tf.estimator.DNNRegressor", "docs": "A regressor for TensorFlow DNN models.\n\n Example:\n\n ```python\n categorical_feature_a = categorical_column_with_hash_bucket(...)\n categorical_feature_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_emb = embedding_column(\n categorical_column=categorical_feature_a, ...)\n categorical_feature_b_emb = embedding_column(\n categorical_column=categorical_feature_b, ...)\n\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256])\n\n # Or estimator using the ProximalAdagradOptimizer optimizer with\n # regularization.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n optimizer=lambda: tf.keras.optimizers.Adam(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.DNNRegressor(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "A regressor for TensorFlow DNN models.", "type": "API"}, {"name": "tf.estimator.Estimator", "docs": "Estimator class to train and evaluate TensorFlow models.\n\n The `Estimator` object wraps a model which is specified by a `model_fn`,\n which, given inputs and a number of other parameters, returns the ops\n necessary to perform training, evaluation, or predictions.\n\n All outputs (checkpoints, event files, etc.) are written to `model_dir`, or a\n subdirectory thereof. If `model_dir` is not set, a temporary directory is\n used.\n\n The `config` argument can be passed `tf.estimator.RunConfig` object containing\n information about the execution environment. It is passed on to the\n `model_fn`, if the `model_fn` has a parameter named \"config\" (and input\n functions in the same manner). If the `config` parameter is not passed, it is\n instantiated by the `Estimator`. Not passing config means that defaults useful\n for local execution are used. `Estimator` makes config available to the model\n (for instance, to allow specialization based on the number of workers\n available), and also uses some of its fields to control internals, especially\n regarding checkpointing.\n\n The `params` argument contains hyperparameters. It is passed to the\n `model_fn`, if the `model_fn` has a parameter named \"params\", and to the input\n functions in the same manner. `Estimator` only passes params along, it does\n not inspect it. The structure of `params` is therefore entirely up to the\n developer.\n\n None of `Estimator`'s methods can be overridden in subclasses (its\n constructor enforces this). Subclasses should use `model_fn` to configure\n the base class, and may add methods implementing specialized functionality.\n\n See [estimators](https://tensorflow.org/guide/estimator) for more\n information.\n\n To warm-start an `Estimator`:\n\n ```python\n estimator = tf.estimator.DNNClassifier(\n feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],\n hidden_units=[1024, 512, 256],\n warm_start_from=\"/path/to/checkpoint/dir\")\n ```\n\n For more details on warm-start configuration, see\n `tf.estimator.WarmStartSettings`.\n\n @compatibility(eager)\n Calling methods of `Estimator` will work while eager execution is enabled.\n However, the `model_fn` and `input_fn` is not executed eagerly, `Estimator`\n will switch to graph mode before calling all user-provided functions (incl.\n hooks), so their code has to be compatible with graph mode execution. Note\n that `input_fn` code using `tf.data` generally works in both graph and eager\n modes.\n @end_compatibility\n ", "desc": "Estimator class to train and evaluate TensorFlow models.", "type": "API"}, {"name": "tf.estimator.EstimatorSpec", "docs": "Ops and objects returned from a `model_fn` and passed to an `Estimator`.\n\n `EstimatorSpec` fully defines the model to be run by an `Estimator`.\n ", "desc": "Ops and objects returned from a `model_fn` and passed to an `Estimator`.", "type": "API"}, {"name": "tf.estimator.EvalSpec", "docs": "Configuration for the \"eval\" part for the `train_and_evaluate` call.\n\n `EvalSpec` combines details of evaluation of the trained model as well as its\n export. Evaluation consists of computing metrics to judge the performance of\n the trained model. Export writes out the trained model on to external\n storage.\n ", "desc": "Configuration for the \"eval\" part for the `train_and_evaluate` call.", "type": "API"}, {"name": "tf.estimator.experimental", "docs": "Public API for tf.estimator.experimental namespace.\n", "desc": "Public API for tf.estimator.experimental namespace.", "type": "API"}, {"name": "tf.estimator.experimental.build_raw_supervised_input_receiver_fn", "docs": "Build a supervised_input_receiver_fn for raw features and labels.\n\n This function wraps tensor placeholders in a supervised_receiver_fn\n with the expectation that the features and labels appear precisely as\n the model_fn expects them. Features and labels can therefore be dicts of\n tensors, or raw tensors.\n\n Args:\n features: a dict of string to `Tensor` or `Tensor`.\n labels: a dict of string to `Tensor` or `Tensor`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A supervised_input_receiver_fn.\n\n Raises:\n ValueError: if features and labels have overlapping keys.\n ", "desc": "Build a supervised_input_receiver_fn for raw features and labels.", "type": "API"}, {"name": "tf.estimator.experimental.call_logit_fn", "docs": "Calls logit_fn (experimental).\n\n THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs\n for logit and model composition.\n\n A utility function that calls the provided logit_fn with the relevant subset\n of provided arguments. Similar to tf.estimator._call_model_fn().\n\n Args:\n logit_fn: A logit_fn as defined above.\n features: The features dict.\n mode: TRAIN / EVAL / PREDICT ModeKeys.\n params: The hyperparameter dict.\n config: The configuration object.\n\n Returns:\n A logit Tensor, the output of logit_fn.\n\n Raises:\n ValueError: if logit_fn does not return a Tensor or a dictionary mapping\n strings to Tensors.\n ", "desc": "Calls logit_fn (experimental).", "type": "API"}, {"name": "tf.estimator.experimental.InMemoryEvaluatorHook", "docs": "Hook to run evaluation in training without a checkpoint.\n\n Example:\n\n ```python\n def train_input_fn():\n ...\n return train_dataset\n\n def eval_input_fn():\n ...\n return eval_dataset\n\n estimator = tf.estimator.DNNClassifier(...)\n\n evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(\n estimator, eval_input_fn)\n estimator.train(train_input_fn, hooks=[evaluator])\n ```\n\n Current limitations of this approach are:\n\n * It doesn't support multi-node distributed mode.\n * It doesn't support saveable objects other than variables (such as boosted\n tree support)\n * It doesn't support custom saver logic (such as ExponentialMovingAverage\n support)\n\n ", "desc": "Hook to run evaluation in training without a checkpoint.", "type": "API"}, {"name": "tf.estimator.experimental.LinearSDCA", "docs": "Stochastic Dual Coordinate Ascent helper for linear estimators.\n\n Objects of this class are intended to be provided as the optimizer argument\n (though LinearSDCA objects do not implement the `tf.train.Optimizer`\n interface)\n when creating `tf.estimator.LinearClassifier` or\n `tf.estimator.LinearRegressor`.\n\n SDCA can only be used with `LinearClassifier` and `LinearRegressor` under the\n following conditions:\n\n - Feature columns are of type V2.\n - Multivalent categorical columns are not normalized. In other words the\n `sparse_combiner` argument in the estimator constructor should be \"sum\".\n - For classification: binary label.\n - For regression: one-dimensional label.\n\n Example usage:\n\n ```python\n real_feature_column = numeric_column(...)\n sparse_feature_column = categorical_column_with_hash_bucket(...)\n linear_sdca = tf.estimator.experimental.LinearSDCA(\n example_id_column='example_id',\n num_loss_partitions=1,\n num_table_shards=1,\n symmetric_l2_regularization=2.0)\n classifier = tf.estimator.LinearClassifier(\n feature_columns=[real_feature_column, sparse_feature_column],\n weight_column=...,\n optimizer=linear_sdca)\n classifier.train(input_fn_train, steps=50)\n classifier.evaluate(input_fn=input_fn_eval)\n ```\n\n Here the expectation is that the `input_fn_*` functions passed to train and\n evaluate return a pair (dict, label_tensor) where dict has `example_id_column`\n as `key` whose value is a `Tensor` of shape [batch_size] and dtype string.\n num_loss_partitions defines sigma' in eq (11) of [3]. Convergence of (global)\n loss is guaranteed if `num_loss_partitions` is larger or equal to the product\n `(#concurrent train ops/per worker) x (#workers)`. Larger values for\n `num_loss_partitions` lead to slower convergence. The recommended value for\n `num_loss_partitions` in `tf.estimator` (where currently there is one process\n per worker) is the number of workers running the train steps. It defaults to 1\n (single machine).\n `num_table_shards` defines the number of shards for the internal state\n table, typically set to match the number of parameter servers for large\n data sets.\n\n The SDCA algorithm was originally introduced in [1] and it was followed by\n the L1 proximal step [2], a distributed version [3] and adaptive sampling [4].\n [1] www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf\n [2] https://arxiv.org/pdf/1309.2375.pdf\n [3] https://arxiv.org/pdf/1502.03508.pdf\n [4] https://arxiv.org/pdf/1502.08053.pdf\n Details specific to this implementation are provided in:\n https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear_optimizer/doc/sdca.ipynb\n ", "desc": "Stochastic Dual Coordinate Ascent helper for linear estimators.", "type": "API"}, {"name": "tf.estimator.experimental.make_early_stopping_hook", "docs": "Creates early-stopping hook.\n\n Returns a `SessionRunHook` that stops training when `should_stop_fn` returns\n `True`.\n\n Usage example:\n\n ```python\n estimator = ...\n hook = early_stopping.make_early_stopping_hook(\n estimator, should_stop_fn=make_stop_fn(...))\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n should_stop_fn: `callable`, function that takes no arguments and returns a\n `bool`. If the function returns `True`, stopping will be initiated by the\n chief.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n A `SessionRunHook` that periodically executes `should_stop_fn` and initiates\n early stopping if the function returns `True`.\n\n Raises:\n TypeError: If `estimator` is not of type `tf.estimator.Estimator`.\n ValueError: If both `run_every_secs` and `run_every_steps` are set.\n ", "desc": "Creates early-stopping hook.", "type": "API"}, {"name": "tf.estimator.experimental.make_stop_at_checkpoint_step_hook", "docs": "Creates a proper StopAtCheckpointStepHook based on chief status.", "desc": "Creates a proper StopAtCheckpointStepHook based on chief status.", "type": "API"}, {"name": "tf.estimator.experimental.RNNClassifier", "docs": "A classifier for TensorFlow RNN models.\n\n Trains a recurrent neural network model to classify instances into one of\n multiple classes.\n\n Example:\n\n ```python\n token_sequence = sequence_categorical_column_with_hash_bucket(...)\n token_emb = embedding_column(categorical_column=token_sequence, ...)\n\n estimator = RNNClassifier(\n sequence_feature_columns=[token_emb],\n units=[32, 16], cell_type='lstm')\n\n # Input builders\n def input_fn_train: # returns x, y\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n\n def input_fn_eval: # returns x, y\n pass\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n def input_fn_predict: # returns x, None\n pass\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n * for each `column` in `sequence_feature_columns`:\n - a feature with `key=column.name` whose `value` is a `SparseTensor`.\n * for each `column` in `context_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators are not compatible with eager execution.\n @end_compatibility\n ", "desc": "A classifier for TensorFlow RNN models.", "type": "API"}, {"name": "tf.estimator.experimental.RNNEstimator", "docs": "An Estimator for TensorFlow RNN models with user-specified head.\n\n Example:\n\n ```python\n token_sequence = sequence_categorical_column_with_hash_bucket(...)\n token_emb = embedding_column(categorical_column=token_sequence, ...)\n\n estimator = RNNEstimator(\n head=tf.estimator.RegressionHead(),\n sequence_feature_columns=[token_emb],\n units=[32, 16], cell_type='lstm')\n\n # Or with custom RNN cell:\n def rnn_cell_fn(_):\n cells = [ tf.keras.layers.LSTMCell(size) for size in [32, 16] ]\n return tf.keras.layers.StackedRNNCells(cells)\n\n estimator = RNNEstimator(\n head=tf.estimator.RegressionHead(),\n sequence_feature_columns=[token_emb],\n rnn_cell_fn=rnn_cell_fn)\n\n # Input builders\n def input_fn_train: # returns x, y\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n\n def input_fn_eval: # returns x, y\n pass\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n def input_fn_predict: # returns x, None\n pass\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if the head's `weight_column` is not `None`, a feature with\n `key=weight_column` whose value is a `Tensor`.\n * for each `column` in `sequence_feature_columns`:\n - a feature with `key=column.name` whose `value` is a `SparseTensor`.\n * for each `column` in `context_feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss and predicted output are determined by the specified head.\n\n @compatibility(eager)\n Estimators are not compatible with eager execution.\n @end_compatibility\n ", "desc": "An Estimator for TensorFlow RNN models with user-specified head.", "type": "API"}, {"name": "tf.estimator.experimental.stop_if_higher_hook", "docs": "Creates hook to stop if the given metric is higher than the threshold.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if accuracy becomes higher than 0.9.\n hook = early_stopping.stop_if_higher_hook(estimator, \"accuracy\", 0.9)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n threshold: Numeric threshold for the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric is higher than specified threshold and initiates\n early stopping if true.\n ", "desc": "Creates hook to stop if the given metric is higher than the threshold.", "type": "API"}, {"name": "tf.estimator.experimental.stop_if_lower_hook", "docs": "Creates hook to stop if the given metric is lower than the threshold.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if loss becomes lower than 100.\n hook = early_stopping.stop_if_lower_hook(estimator, \"loss\", 100)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n threshold: Numeric threshold for the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric is lower than specified threshold and initiates\n early stopping if true.\n ", "desc": "Creates hook to stop if the given metric is lower than the threshold.", "type": "API"}, {"name": "tf.estimator.experimental.stop_if_no_decrease_hook", "docs": "Creates hook to stop if metric does not decrease within given max steps.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if loss does not decrease in over 100000 steps.\n hook = early_stopping.stop_if_no_decrease_hook(estimator, \"loss\", 100000)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n max_steps_without_decrease: `int`, maximum number of training steps with no\n decrease in the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric shows no decrease over given maximum number of\n training steps, and initiates early stopping if true.\n ", "desc": "Creates hook to stop if metric does not decrease within given max steps.", "type": "API"}, {"name": "tf.estimator.experimental.stop_if_no_increase_hook", "docs": "Creates hook to stop if metric does not increase within given max steps.\n\n Usage example:\n\n ```python\n estimator = ...\n # Hook to stop training if accuracy does not increase in over 100000 steps.\n hook = early_stopping.stop_if_no_increase_hook(estimator, \"accuracy\", 100000)\n train_spec = tf.estimator.TrainSpec(..., hooks=[hook])\n tf.estimator.train_and_evaluate(estimator, train_spec, ...)\n ```\n\n Caveat: Current implementation supports early-stopping both training and\n evaluation in local mode. In distributed mode, training can be stopped but\n evaluation (where it's a separate job) will indefinitely wait for new model\n checkpoints to evaluate, so you will need other means to detect and stop it.\n Early-stopping evaluation in distributed mode requires changes in\n `train_and_evaluate` API and will be addressed in a future revision.\n\n Args:\n estimator: A `tf.estimator.Estimator` instance.\n metric_name: `str`, metric to track. \"loss\", \"accuracy\", etc.\n max_steps_without_increase: `int`, maximum number of training steps with no\n increase in the given metric.\n eval_dir: If set, directory containing summary files with eval metrics. By\n default, `estimator.eval_dir()` will be used.\n min_steps: `int`, stop is never requested if global step is less than this\n value. Defaults to 0.\n run_every_secs: If specified, calls `should_stop_fn` at an interval of\n `run_every_secs` seconds. Defaults to 60 seconds. Either this or\n `run_every_steps` must be set.\n run_every_steps: If specified, calls `should_stop_fn` every\n `run_every_steps` steps. Either this or `run_every_secs` must be set.\n\n Returns:\n An early-stopping hook of type `SessionRunHook` that periodically checks\n if the given metric shows no increase over given maximum number of\n training steps, and initiates early stopping if true.\n ", "desc": "Creates hook to stop if metric does not increase within given max steps.", "type": "API"}, {"name": "tf.estimator.export", "docs": "All public utility methods for exporting Estimator to SavedModel.\n\nThis file includes functions and constants from core (model_utils) and export.py\n\n", "desc": "All public utility methods for exporting Estimator to SavedModel.", "type": "API"}, {"name": "tf.estimator.export.build_parsing_serving_input_receiver_fn", "docs": "Build a serving_input_receiver_fn expecting fed tf.Examples.\n\n Creates a serving_input_receiver_fn that expects a serialized tf.Example fed\n into a string placeholder. The function parses the tf.Example according to\n the provided feature_spec, and returns all parsed Tensors as features.\n\n Args:\n feature_spec: a dict of string to `VarLenFeature`/`FixedLenFeature`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A serving_input_receiver_fn suitable for use in serving.\n ", "desc": "Build a serving_input_receiver_fn expecting fed tf.Examples.", "type": "API"}, {"name": "tf.estimator.export.build_raw_serving_input_receiver_fn", "docs": "Build a serving_input_receiver_fn expecting feature Tensors.\n\n Creates an serving_input_receiver_fn that expects all features to be fed\n directly.\n\n Args:\n features: a dict of string to `Tensor`.\n default_batch_size: the number of query examples expected per batch. Leave\n unset for variable batch size (recommended).\n\n Returns:\n A serving_input_receiver_fn.\n ", "desc": "Build a serving_input_receiver_fn expecting feature Tensors.", "type": "API"}, {"name": "tf.estimator.export.ClassificationOutput", "docs": "Represents the output of a classification head.\n\n Either classes or scores or both must be set.\n\n The classes `Tensor` must provide string labels, not integer class IDs.\n\n If only classes is set, it is interpreted as providing top-k results in\n descending order.\n\n If only scores is set, it is interpreted as providing a score for every class\n in order of class ID.\n\n If both classes and scores are set, they are interpreted as zipped, so each\n score corresponds to the class at the same index. Clients should not depend\n on the order of the entries.\n ", "desc": "Represents the output of a classification head.", "type": "API"}, {"name": "tf.estimator.export.EvalOutput", "docs": "Represents the output of a supervised eval process.\n\n This class generates the appropriate signature def for exporting\n eval output by type-checking and wrapping loss, predictions, and metrics\n values.\n ", "desc": "Represents the output of a supervised eval process.", "type": "API"}, {"name": "tf.estimator.export.ExportOutput", "docs": "Represents an output of a model that can be served.\n\n These typically correspond to model heads.\n ", "desc": "Represents an output of a model that can be served.", "type": "API"}, {"name": "tf.estimator.export.PredictOutput", "docs": "Represents the output of a generic prediction head.\n\n A generic prediction need not be either a classification or a regression.\n\n Named outputs must be provided as a dict from string to `Tensor`,\n ", "desc": "Represents the output of a generic prediction head.", "type": "API"}, {"name": "tf.estimator.export.RegressionOutput", "docs": "Represents the output of a regression head.", "desc": "Represents the output of a regression head.", "type": "API"}, {"name": "tf.estimator.export.ServingInputReceiver", "docs": "A return type for a serving_input_receiver_fn.\n\n Attributes:\n features: A `Tensor`, `SparseTensor`, or dict of string or int to `Tensor`\n or `SparseTensor`, specifying the features to be passed to the model.\n Note: if `features` passed is not a dict, it will be wrapped in a dict\n with a single entry, using 'feature' as the key. Consequently, the\n model\n must accept a feature dict of the form {'feature': tensor}. You may use\n `TensorServingInputReceiver` if you want the tensor to be passed as is.\n receiver_tensors: A `Tensor`, `SparseTensor`, or dict of string to `Tensor`\n or `SparseTensor`, specifying input nodes where this receiver expects to\n be fed by default. Typically, this is a single placeholder expecting\n serialized `tf.Example` protos.\n receiver_tensors_alternatives: a dict of string to additional groups of\n receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict\n of string to `Tensor` or`SparseTensor`. These named receiver tensor\n alternatives generate additional serving signatures, which may be used to\n feed inputs at different points within the input receiver subgraph. A\n typical usage is to allow feeding raw feature `Tensor`s *downstream* of\n the tf.parse_example() op. Defaults to None.\n ", "desc": "A return type for a serving_input_receiver_fn.", "type": "API"}, {"name": "tf.estimator.export.TensorServingInputReceiver", "docs": "A return type for a serving_input_receiver_fn.\n\n This is for use with models that expect a single `Tensor` or `SparseTensor`\n as an input feature, as opposed to a dict of features.\n\n The normal `ServingInputReceiver` always returns a feature dict, even if it\n contains only one entry, and so can be used only with models that accept such\n a dict. For models that accept only a single raw feature, the\n `serving_input_receiver_fn` provided to `Estimator.export_saved_model()`\n should return this `TensorServingInputReceiver` instead. See:\n https://github.com/tensorflow/tensorflow/issues/11674\n\n Note that the receiver_tensors and receiver_tensor_alternatives arguments\n will be automatically converted to the dict representation in either case,\n because the SavedModel format requires each input `Tensor` to have a name\n (provided by the dict key).\n\n Attributes:\n features: A single `Tensor` or `SparseTensor`, representing the feature to\n be passed to the model.\n receiver_tensors: A `Tensor`, `SparseTensor`, or dict of string to `Tensor`\n or `SparseTensor`, specifying input nodes where this receiver expects to\n be fed by default. Typically, this is a single placeholder expecting\n serialized `tf.Example` protos.\n receiver_tensors_alternatives: a dict of string to additional groups of\n receiver tensors, each of which may be a `Tensor`, `SparseTensor`, or dict\n of string to `Tensor` or`SparseTensor`. These named receiver tensor\n alternatives generate additional serving signatures, which may be used to\n feed inputs at different points within the input receiver subgraph. A\n typical usage is to allow feeding raw feature `Tensor`s *downstream* of\n the tf.parse_example() op. Defaults to None.\n ", "desc": "A return type for a serving_input_receiver_fn.", "type": "API"}, {"name": "tf.estimator.Exporter", "docs": "A class representing a type of model export.", "desc": "A class representing a type of model export.", "type": "API"}, {"name": "tf.estimator.FeedFnHook", "docs": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "desc": "Runs `feed_fn` and sets the `feed_dict` accordingly.", "type": "API"}, {"name": "tf.estimator.FinalExporter", "docs": "This class exports the serving graph and checkpoints at the end.\n\n This class performs a single export at the end of training.\n ", "desc": "This class exports the serving graph and checkpoints at the end.", "type": "API"}, {"name": "tf.estimator.FinalOpsHook", "docs": "A hook which evaluates `Tensors` at the end of a session.", "desc": "A hook which evaluates `Tensors` at the end of a session.", "type": "API"}, {"name": "tf.estimator.GlobalStepWaiterHook", "docs": "Delays execution until global step reaches `wait_until_step`.\n\n This hook delays execution until global step reaches to `wait_until_step`. It\n is used to gradually start workers in distributed settings. One example usage\n would be setting `wait_until_step=int(K*log(task_id+1))` assuming that\n task_id=0 is the chief.\n ", "desc": "Delays execution until global step reaches `wait_until_step`.", "type": "API"}, {"name": "tf.estimator.Head", "docs": "Interface for the head/top of a model.\n\n Head sits on top of the model network and handles computing the outputs of\n the network. Given logits (or output of a hidden layer), a Head knows how to\n compute predictions, loss, train_op, metrics and export outputs. It is meant\n to:\n\n 1. Simplify writing model_fn and to make model_fn more configurable for\n Estimator.\n 2. Simpilfy creating loss and metrics for the train and test loop in Eager\n execution.\n 3. Support wide range of machine learning models. Since most heads can work\n with logits, they can support DNN, RNN, Wide, Wide&Deep,\n Global objectives, Gradient boosted trees and many other types\n of machine learning models.\n\n Common usage:\n Here is simplified model_fn to build a DNN regression model.\n ```python\n def _my_dnn_model_fn(features, labels, mode, params, config=None):\n # Optionally your callers can pass head to model_fn as a param.\n head = tf.estimator.RegressionHead(...)\n\n feature_columns = tf.feature_column.numeric_column(...)\n feature_layer = tf.keras.layers.DenseFeatures(feature_columns)\n inputs = feature_layer(features)\n\n # Compute logits with tf.keras.layers API\n hidden_layer0 = tf.keras.layers.Dense(\n units=1000, activation=\"relu\")(inputs)\n hidden_layer1 = tf.keras.layers.Dense(\n units=500, activation=\"relu\")(hidden_layer0)\n logits = tf.keras.layers.Dense(\n units=head.logits_dimension, activation=None)(hidden_layer1)\n\n # Or use Keras model for logits computation\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(units=1000, activation=\"relu\"))\n model.add(tf.keras.layers.Dense(units=500, activation=\"relu\"))\n model.add(tf.keras.layers.Dense(\n units=head.logits_dimension, activation=None))\n logits = model(inputs)\n\n return head.create_estimator_spec(\n features=features,\n labels=labels,\n mode=mode,\n logits=logits,\n optimizer=optimizer)\n ```\n ", "desc": "Interface for the head/top of a model.", "type": "API"}, {"name": "tf.estimator.LatestExporter", "docs": "This class regularly exports the serving graph and checkpoints.\n\n In addition to exporting, this class also garbage collects stale exports.\n ", "desc": "This class regularly exports the serving graph and checkpoints.", "type": "API"}, {"name": "tf.estimator.LinearClassifier", "docs": "Linear classifier model.\n\n Train a linear model to classify instances into one of multiple possible\n classes. When number of possible classes is 2, this is binary classification.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.exponential_decay(\n learning_rate=0.1,\n global_step=tf.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.LinearClassifier(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `SparseColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedSparseColumn`, two features: the first with\n `key` the id column name, the second with `key` the weight column name.\n Both features' `value` must be a `SparseTensor`.\n - if `column` is a `RealValuedColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using softmax cross entropy.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "Linear classifier model.", "type": "API"}, {"name": "tf.estimator.LinearEstimator", "docs": "An estimator for TensorFlow linear models with user-specified head.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearEstimator(\n head=tf.estimator.MultiLabelHead(n_classes=3),\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train, steps=100)\n metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a `KeyError`:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `CategoricalColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedCategoricalColumn`, two features: the first\n with `key` the id column name, the second with `key` the weight column\n name. Both features' `value` must be a `SparseTensor`.\n - if `column` is a `DenseColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss and predicted output are determined by the specified head.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow linear models with user-specified head.", "type": "API"}, {"name": "tf.estimator.LinearRegressor", "docs": "An estimator for TensorFlow Linear regression problems.\n\n Train a linear regression model to predict label value given observation of\n feature values.\n\n Example:\n\n ```python\n categorical_column_a = categorical_column_with_hash_bucket(...)\n categorical_column_b = categorical_column_with_hash_bucket(...)\n\n categorical_feature_a_x_categorical_feature_b = crossed_column(...)\n\n # Estimator using the default optimizer.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b])\n\n # Or estimator using the FTRL optimizer with regularization.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=tf.keras.optimizers.Ftrl(\n learning_rate=0.1,\n l1_regularization_strength=0.001\n ))\n\n # Or estimator using an optimizer with a learning rate decay.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n optimizer=lambda: tf.keras.optimizers.Ftrl(\n learning_rate=tf.compat.v1.train.exponential_decay(\n learning_rate=0.1,\n global_step=tf.compat.v1.train.get_global_step(),\n decay_steps=10000,\n decay_rate=0.96))\n\n # Or estimator with warm-starting from a previous checkpoint.\n estimator = tf.estimator.LinearRegressor(\n feature_columns=[categorical_column_a,\n categorical_feature_a_x_categorical_feature_b],\n warm_start_from=\"/path/to/checkpoint/dir\")\n\n\n # Input builders\n def input_fn_train:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_eval:\n # Returns tf.data.Dataset of (x, y) tuple where y represents label's class\n # index.\n pass\n def input_fn_predict:\n # Returns tf.data.Dataset of (x, None) tuple.\n pass\n estimator.train(input_fn=input_fn_train)\n metrics = estimator.evaluate(input_fn=input_fn_eval)\n predictions = estimator.predict(input_fn=input_fn_predict)\n ```\n\n Input of `train` and `evaluate` should have following features,\n otherwise there will be a KeyError:\n\n * if `weight_column` is not `None`, a feature with `key=weight_column` whose\n value is a `Tensor`.\n * for each `column` in `feature_columns`:\n - if `column` is a `SparseColumn`, a feature with `key=column.name`\n whose `value` is a `SparseTensor`.\n - if `column` is a `WeightedSparseColumn`, two features: the first with\n `key` the id column name, the second with `key` the weight column name.\n Both features' `value` must be a `SparseTensor`.\n - if `column` is a `RealValuedColumn`, a feature with `key=column.name`\n whose `value` is a `Tensor`.\n\n Loss is calculated by using mean squared error.\n\n @compatibility(eager)\n Estimators can be used while eager execution is enabled. Note that `input_fn`\n and all hooks are executed inside a graph context, so they have to be written\n to be compatible with graph mode. Note that `input_fn` code using `tf.data`\n generally works in both graph and eager modes.\n @end_compatibility\n ", "desc": "An estimator for TensorFlow Linear regression problems.", "type": "API"}, {"name": "tf.estimator.LoggingTensorHook", "docs": "Prints the given tensors every N local steps, every N seconds, or at end.\n\n The tensors will be printed to the log, with `INFO` severity. If you are not\n seeing the logs, you might want to add the following line after your imports:\n\n ```python\n tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)\n ```\n\n Note that if `at_end` is True, `tensors` should not include any tensor\n whose evaluation produces a side effect such as consuming additional inputs.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n\n ", "desc": "Prints the given tensors every N local steps, every N seconds, or at end.", "type": "API"}, {"name": "tf.estimator.LogisticRegressionHead", "docs": "Creates a `Head` for logistic regression.\n\n Uses `sigmoid_cross_entropy_with_logits` loss, which is the same as\n `BinaryClassHead`. The differences compared to `BinaryClassHead` are:\n\n * Does not support `label_vocabulary`. Instead, labels must be float in the\n range [0, 1].\n * Does not calculate some metrics that do not make sense, such as AUC.\n * In `PREDICT` mode, only returns logits and predictions\n (`=tf.sigmoid(logits)`), whereas `BinaryClassHead` also returns\n probabilities, classes, and class_ids.\n * Export output defaults to `RegressionOutput`, whereas `BinaryClassHead`\n defaults to `PredictOutput`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, 1]`.\n In many applications, the shape is `[batch_size, 1]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.\n\n This is implemented as a generalized linear model, see\n https://en.wikipedia.org/wiki/Generalized_linear_model.\n\n The head can be used with a canned estimator. Example:\n\n ```python\n my_head = tf.estimator.LogisticRegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.LogisticRegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch\n size * label_dimension`.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for logistic regression.", "type": "API"}, {"name": "tf.estimator.ModeKeys", "docs": "Standard names for Estimator model modes.\n\n The following standard keys are defined:\n\n * `TRAIN`: training/fitting mode.\n * `EVAL`: testing/evaluation mode.\n * `PREDICT`: predication/inference mode.\n ", "desc": "Standard names for Estimator model modes.", "type": "API"}, {"name": "tf.estimator.MultiClassHead", "docs": "Creates a `Head` for multi class classification.\n\n Uses `sparse_softmax_cross_entropy` loss.\n\n The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`.\n In many applications, the shape is `[batch_size, n_classes]`.\n\n `labels` must be a dense `Tensor` with shape matching `logits`, namely\n `[D0, D1, ... DN, 1]`. If `label_vocabulary` given, `labels` must be a string\n `Tensor` with values from the vocabulary. If `label_vocabulary` is not given,\n `labels` must be an integer `Tensor` with values specifying the class index.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n The loss is the weighted sum over the input dimensions. Namely, if the input\n labels have shape `[batch_size, 1]`, the loss is the weighted sum over\n `batch_size`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns\n unreduced loss with shape `[D0, D1, ... DN, 1]`. `loss_fn` must support\n integer `labels` with shape `[D0, D1, ... DN, 1]`. Namely, the head applies\n `label_vocabulary` to the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> n_classes = 3\n >>> head = tf.estimator.MultiClassHead(n_classes)\n >>> logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)\n >>> labels = np.array(((1,), (1,)), dtype=np.int64)\n >>> features = {'x': np.array(((42,),), dtype=np.int32)}\n >>> # expected_loss = sum(cross_entropy(labels, logits)) / batch_size\n >>> # = sum(10, 0) / 2 = 5.\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 5.00\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n accuracy : 0.50\n average_loss : 5.00\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[10. 0. 0.]\n [ 0. 10. 0.]], shape=(2, 3), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.MultiClassHead(n_classes=3)\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.MultiClassHead(n_classes=3)\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n n_classes: Number of classes, must be greater than 2 (for 2 classes, use\n `BinaryClassHead`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_vocabulary: A list or tuple of strings representing possible label\n values. If it is not given, that means labels are already encoded as an\n integer within [0, n_classes). If given, labels must be of string type and\n have any value in `label_vocabulary`. Note that errors will be raised if\n `label_vocabulary` is not provided but labels are strings. If both\n `n_classes` and `label_vocabulary` are provided, `label_vocabulary` should\n contain exactly `n_classes` items.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by `batch size * label_dimension`.\n loss_fn: Optional loss function.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for multi class classification.", "type": "API"}, {"name": "tf.estimator.MultiHead", "docs": "Creates a `Head` for multi-objective learning.\n\n This class merges the output of multiple `Head` objects. Specifically:\n\n * For training, sums losses of each head, calls `train_op_fn` with this\n final loss.\n * For eval, merges metrics by adding `head.name` suffix to the keys in eval\n metrics, such as `precision/head1.name`, `precision/head2.name`.\n * For prediction, merges predictions and updates keys in prediction dict to a\n 2-tuple, `(head.name, prediction_key)`. Merges `export_outputs` such that\n by default the first head is served.\n\n Usage:\n\n >>> head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1')\n >>> head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2')\n >>> multi_head = tf.estimator.MultiHead([head1, head2])\n >>> logits = {\n ... 'head1': np.array([[-10., 10.], [-15., 10.]], dtype=np.float32),\n ... 'head2': np.array([[20., -20., 20.], [-30., 20., -20.]],\n ... dtype=np.float32),}\n >>> labels = {\n ... 'head1': np.array([[1, 0], [1, 1]], dtype=np.int64),\n ... 'head2': np.array([[0, 1, 0], [1, 1, 0]], dtype=np.int64),}\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # For large logits, sigmoid cross entropy loss is approximated as:\n >>> # loss = labels * (logits < 0) * (-logits) +\n >>> # (1 - labels) * (logits > 0) * logits =>\n >>> # head1: expected_unweighted_loss = [[10., 10.], [15., 0.]]\n >>> # loss1 = ((10 + 10) / 2 + (15 + 0) / 2) / 2 = 8.75\n >>> # head2: expected_unweighted_loss = [[20., 20., 20.], [30., 0., 0]]\n >>> # loss2 = ((20 + 20 + 20) / 3 + (30 + 0 + 0) / 3) / 2 = 15.00\n >>> # loss = loss1 + loss2 = 8.75 + 15.00 = 23.75\n >>> loss = multi_head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 23.75\n >>> eval_metrics = multi_head.metrics()\n >>> updated_metrics = multi_head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n auc/head1 : 0.17\n auc/head2 : 0.33\n auc_precision_recall/head1 : 0.60\n auc_precision_recall/head2 : 0.40\n average_loss/head1 : 8.75\n average_loss/head2 : 15.00\n loss/head1 : 8.75\n loss/head2 : 15.00\n >>> preds = multi_head.predictions(logits)\n >>> print(preds[('head1', 'logits')])\n tf.Tensor(\n [[-10. 10.]\n [-15. 10.]], shape=(2, 2), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n # In `input_fn`, specify labels as a dict keyed by head name:\n def input_fn():\n features = ...\n labels1 = ...\n labels2 = ...\n return features, {'head1.name': labels1, 'head2.name': labels2}\n\n # In `model_fn`, specify logits as a dict keyed by head name:\n def model_fn(features, labels, mode):\n # Create simple heads and specify head name.\n head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')\n head2 = tf.estimator.BinaryClassHead(name='head2')\n # Create MultiHead from two simple heads.\n head = tf.estimator.MultiHead([head1, head2])\n # Create logits for each head, and combine them into a dict.\n logits1, logits2 = logit_fn()\n logits = {'head1.name': logits1, 'head2.name': logits2}\n # Return the merged EstimatorSpec\n return head.create_estimator_spec(..., logits=logits, ...)\n\n # Create an estimator with this model_fn.\n estimator = tf.estimator.Estimator(model_fn=model_fn)\n estimator.train(input_fn=input_fn)\n ```\n\n Also supports `logits` as a `Tensor` of shape\n `[D0, D1, ... DN, logits_dimension]`. It will split the `Tensor` along the\n last dimension and distribute it appropriately among the heads. E.g.:\n\n ```python\n # Input logits.\n logits = np.array([[-1., 1., 2., -2., 2.], [-1.5, 1., -3., 2., -2.]],\n dtype=np.float32)\n # Suppose head1 and head2 have the following logits dimension.\n head1.logits_dimension = 2\n head2.logits_dimension = 3\n # After splitting, the result will be:\n logits_dict = {'head1_name': [[-1., 1.], [-1.5, 1.]],\n 'head2_name': [[2., -2., 2.], [-3., 2., -2.]]}\n ```\n\n Usage:\n\n ```python\n def model_fn(features, labels, mode):\n # Create simple heads and specify head name.\n head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')\n head2 = tf.estimator.BinaryClassHead(name='head2')\n # Create multi-head from two simple heads.\n head = tf.estimator.MultiHead([head1, head2])\n # Create logits for the multihead. The result of logits is a `Tensor`.\n logits = logit_fn(logits_dimension=head.logits_dimension)\n # Return the merged EstimatorSpec\n return head.create_estimator_spec(..., logits=logits, ...)\n ```\n\n Args:\n heads: List or tuple of `Head` instances. All heads must have `name`\n specified. The first head in the list is the default used at serving time.\n head_weights: Optional list of weights, same length as `heads`. Used when\n merging losses to calculate the weighted sum of losses from each head. If\n `None`, all losses are weighted equally.\n ", "desc": "Creates a `Head` for multi-objective learning.", "type": "API"}, {"name": "tf.estimator.MultiLabelHead", "docs": "Creates a `Head` for multi-label classification.\n\n Multi-label classification handles the case where each example may have zero\n or more associated labels, from a discrete set. This is distinct from\n `MultiClassHead` which has exactly one label per example.\n\n Uses `sigmoid_cross_entropy` loss average over classes and weighted sum over\n the batch. Namely, if the input logits have shape `[batch_size, n_classes]`,\n the loss is the average over `n_classes` and the weighted sum over\n `batch_size`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, n_classes]`. In many\n applications, the shape is `[batch_size, n_classes]`.\n\n Labels can be:\n\n * A multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`\n * An integer `SparseTensor` of class indices. The `dense_shape` must be\n `[D0, D1, ... DN, ?]` and the values within `[0, n_classes)`.\n * If `label_vocabulary` is given, a string `SparseTensor`. The `dense_shape`\n must be `[D0, D1, ... DN, ?]` and the values within `label_vocabulary` or a\n multi-hot tensor of shape `[D0, D1, ... DN, n_classes]`.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, or `[D0, D1, ... DN, 1]`.\n\n Also supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features)` as arguments and returns unreduced loss with\n shape `[D0, D1, ... DN, 1]`. `loss_fn` must support indicator `labels` with\n shape `[D0, D1, ... DN, n_classes]`. Namely, the head applies\n `label_vocabulary` to the input labels before passing them to `loss_fn`.\n\n Usage:\n\n >>> n_classes = 2\n >>> head = tf.estimator.MultiLabelHead(n_classes)\n >>> logits = np.array([[-1., 1.], [-1.5, 1.5]], dtype=np.float32)\n >>> labels = np.array([[1, 0], [1, 1]], dtype=np.int64)\n >>> features = {'x': np.array([[41], [42]], dtype=np.int32)}\n >>> # expected_loss = sum(_sigmoid_cross_entropy(labels, logits)) / batch_size\n >>> # = sum(1.31326169, 0.9514133) / 2 = 1.13\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 1.13\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n auc : 0.33\n auc_precision_recall : 0.77\n average_loss : 1.13\n >>> preds = head.predictions(logits)\n >>> print(preds['logits'])\n tf.Tensor(\n [[-1. 1. ]\n [-1.5 1.5]], shape=(2, 2), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.MultiLabelHead(n_classes=3)\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.MultiLabelHead(n_classes=3)\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n n_classes: Number of classes, must be greater than 1 (for 1 class, use\n `BinaryClassHead`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. Per-class weighting is not\n supported.\n thresholds: Iterable of floats in the range `(0, 1)`. Accuracy, precision\n and recall metrics are evaluated for each threshold value. The threshold\n is applied to the predicted probabilities, i.e. above the threshold is\n `true`, below is `false`.\n label_vocabulary: A list of strings represents possible label values. If it\n is not given, that means labels are already encoded as integer within [0,\n n_classes) or multi-hot Tensor. If given, labels must be SparseTensor\n `string` type and have any value in `label_vocabulary`. Also there will be\n errors if vocabulary is not provided and labels are string.\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch. Defaults to `SUM_OVER_BATCH_SIZE`, namely\n weighted sum of losses divided by batch size.\n loss_fn: Optional loss function.\n classes_for_class_based_metrics: List of integer class IDs or string class\n names for which per-class metrics are evaluated. If integers, all must be\n in the range `[0, n_classes - 1]`. If strings, all must be in\n `label_vocabulary`.\n name: Name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for multi-label classification.", "type": "API"}, {"name": "tf.estimator.NanLossDuringTrainingError", "docs": "", "desc": "", "type": "API"}, {"name": "tf.estimator.NanTensorHook", "docs": "Monitors the loss tensor and stops training if loss is NaN.\n\n Can either fail with exception or just stop training.\n ", "desc": "Monitors the loss tensor and stops training if loss is NaN.", "type": "API"}, {"name": "tf.estimator.PoissonRegressionHead", "docs": "Creates a `Head` for poisson regression using `tf.nn.log_poisson_loss`.\n\n The loss is the weighted sum over all input dimensions. Namely, if the input\n labels have shape `[batch_size, label_dimension]`, the loss is the weighted\n sum over both `batch_size` and `label_dimension`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`.\n In many applications, the shape is `[batch_size, label_dimension]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape\n `[D0, D1, ... DN]` is also supported.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or\n `[D0, D1, ... DN, label_dimension]`.\n\n This is implemented as a generalized linear model, see\n https://en.wikipedia.org/wiki/Generalized_linear_model.\n\n The head can be used with a canned estimator. Example:\n\n ```python\n my_head = tf.estimator.PoissonRegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.PoissonRegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_dimension: Number of regression labels per example. This is the size\n of the last dimension of the labels `Tensor` (typically, this has shape\n `[batch_size, label_dimension]`).\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by `batch\n size * label_dimension`.\n compute_full_loss: Whether to include the constant `log(z!)` term in\n computing the poisson loss. See `tf.nn.log_poisson_loss` for the full\n documentation.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for poisson regression using `tf.nn.log_poisson_loss`.", "type": "API"}, {"name": "tf.estimator.ProfilerHook", "docs": "Captures CPU/GPU profiling information every N steps or seconds.\n\n This produces files called \"timeline-.json\", which are in Chrome\n Trace format.\n\n For more information see:\n https://github.com/catapult-project/catapult/blob/master/tracing/README.md\n ", "desc": "Captures CPU/GPU profiling information every N steps or seconds.", "type": "API"}, {"name": "tf.estimator.RegressionHead", "docs": "Creates a `Head` for regression using the `mean_squared_error` loss.\n\n The loss is the weighted sum over all input dimensions. Namely, if the input\n labels have shape `[batch_size, label_dimension]`, the loss is the weighted\n sum over both `batch_size` and `label_dimension`.\n\n The head expects `logits` with shape `[D0, D1, ... DN, label_dimension]`.\n In many applications, the shape is `[batch_size, label_dimension]`.\n\n The `labels` shape must match `logits`, namely\n `[D0, D1, ... DN, label_dimension]`. If `label_dimension=1`, shape\n `[D0, D1, ... DN]` is also supported.\n\n If `weight_column` is specified, weights must be of shape\n `[D0, D1, ... DN]`, `[D0, D1, ... DN, 1]` or\n `[D0, D1, ... DN, label_dimension]`.\n\n Supports custom `loss_fn`. `loss_fn` takes `(labels, logits)` or\n `(labels, logits, features, loss_reduction)` as arguments and returns\n unreduced loss with shape `[D0, D1, ... DN, label_dimension]`.\n\n Also supports custom `inverse_link_fn`, also known as 'mean function'.\n `inverse_link_fn` is only used in `PREDICT` mode. It takes `logits` as\n argument and returns predicted values. This function is the inverse of the\n link function defined in\n https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function\n Namely, for poisson regression, set `inverse_link_fn=tf.exp`.\n\n Usage:\n\n >>> head = tf.estimator.RegressionHead()\n >>> logits = np.array(((45,), (41,),), dtype=np.float32)\n >>> labels = np.array(((43,), (44,),), dtype=np.int32)\n >>> features = {'x': np.array(((42,),), dtype=np.float32)}\n >>> # expected_loss = weighted_loss / batch_size\n >>> # = (43-45)^2 + (44-41)^2 / 2 = 6.50\n >>> loss = head.loss(labels, logits, features=features)\n >>> print('{:.2f}'.format(loss.numpy()))\n 6.50\n >>> eval_metrics = head.metrics()\n >>> updated_metrics = head.update_metrics(\n ... eval_metrics, features, logits, labels)\n >>> for k in sorted(updated_metrics):\n ... print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))\n average_loss : 6.50\n label/mean : 43.50\n prediction/mean : 43.00\n >>> preds = head.predictions(logits)\n >>> print(preds['predictions'])\n tf.Tensor(\n [[45.]\n [41.]], shape=(2, 1), dtype=float32)\n\n Usage with a canned estimator:\n\n ```python\n my_head = tf.estimator.RegressionHead()\n my_estimator = tf.estimator.DNNEstimator(\n head=my_head,\n hidden_units=...,\n feature_columns=...)\n ```\n\n It can also be used with a custom `model_fn`. Example:\n\n ```python\n def _my_model_fn(features, labels, mode):\n my_head = tf.estimator.RegressionHead()\n logits = tf.keras.Model(...)(features)\n\n return my_head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=tf.keras.optimizers.Adagrad(lr=0.1),\n logits=logits)\n\n my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)\n ```\n\n Args:\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example.\n label_dimension: Number of regression labels per example. This is the size\n of the last dimension of the labels `Tensor` (typically, this has shape\n `[batch_size, label_dimension]`).\n loss_reduction: One of `tf.losses.Reduction` except `NONE`. Decides how to\n reduce training loss over batch and label dimension. Defaults to\n `SUM_OVER_BATCH_SIZE`, namely weighted sum of losses divided by\n `batch_size * label_dimension`.\n loss_fn: Optional loss function. Defaults to `mean_squared_error`.\n inverse_link_fn: Optional inverse link function, also known as 'mean\n function'. Defaults to identity.\n name: name of the head. If provided, summary and metrics keys will be\n suffixed by `\"/\" + name`. Also used as `name_scope` when creating ops.\n ", "desc": "Creates a `Head` for regression using the `mean_squared_error` loss.", "type": "API"}, {"name": "tf.estimator.regressor_parse_example_spec", "docs": "Generates parsing spec for tf.parse_example to be used with regressors.\n\n If users keep data in tf.Example format, they need to call tf.parse_example\n with a proper feature spec. There are two main things that this utility helps:\n\n * Users need to combine parsing spec of features with labels and weights\n (if any) since they are all parsed from same tf.Example instance. This\n utility combines these specs.\n * It is difficult to map expected label by a regressor such as `DNNRegressor`\n to corresponding tf.parse_example spec. This utility encodes it by getting\n related information from users (key, dtype).\n\n Example output of parsing spec:\n\n ```python\n # Define features and transformations\n feature_b = tf.feature_column.numeric_column(...)\n feature_c_bucketized = tf.feature_column.bucketized_column(\n tf.feature_column.numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = tf.feature_column.crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]\n parsing_spec = tf.estimator.regressor_parse_example_spec(\n feature_columns, label_key='my-label')\n\n # For the above example, regressor_parse_example_spec would return the dict:\n assert parsing_spec == {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n \"my-label\" : parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n }\n ```\n\n Example usage with a regressor:\n\n ```python\n feature_columns = # define features via tf.feature_column\n estimator = DNNRegressor(\n hidden_units=[256, 64, 16],\n feature_columns=feature_columns,\n weight_column='example-weight',\n label_dimension=3)\n # This label configuration tells the regressor the following:\n # * weights are retrieved with key 'example-weight'\n # * label is a 3 dimension tensor with float32 dtype.\n\n\n # Input builders\n def input_fn_train(): # Returns a tuple of features and labels.\n features = tf.contrib.learn.read_keyed_batch_features(\n file_pattern=train_files,\n batch_size=batch_size,\n # creates parsing configuration for tf.parse_example\n features=tf.estimator.classifier_parse_example_spec(\n feature_columns,\n label_key='my-label',\n label_dimension=3,\n weight_column='example-weight'),\n reader=tf.RecordIOReader)\n labels = features.pop('my-label')\n return features, labels\n\n estimator.train(input_fn=input_fn_train)\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `_FeatureColumn`.\n label_key: A string identifying the label. It means tf.Example stores labels\n with this key.\n label_dtype: A `tf.dtype` identifies the type of labels. By default it is\n `tf.float32`.\n label_default: used as label if label_key does not exist in given\n tf.Example. By default default_value is none, which means\n `tf.parse_example` will error out if there is any missing label.\n label_dimension: Number of regression targets per example. This is the size\n of the last dimension of the labels and logits `Tensor` objects\n (typically, these have shape `[batch_size, label_dimension]`).\n weight_column: A string or a `NumericColumn` created by\n `tf.feature_column.numeric_column` defining feature column representing\n weights. It is used to down weight or boost examples during training. It\n will be multiplied by the loss of the example. If it is a string, it is\n used as a key to fetch weight tensor from the `features`. If it is a\n `NumericColumn`, raw tensor is fetched by key `weight_column.key`, then\n weight_column.normalizer_fn is applied on it to get weight tensor.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If label is used in `feature_columns`.\n ValueError: If weight_column is used in `feature_columns`.\n ValueError: If any of the given `feature_columns` is not a `_FeatureColumn`\n instance.\n ValueError: If `weight_column` is not a `NumericColumn` instance.\n ValueError: if label_key is None.\n ", "desc": "Generates parsing spec for tf.parse_example to be used with regressors.", "type": "API"}, {"name": "tf.estimator.RunConfig", "docs": "This class specifies the configurations for an `Estimator` run.", "desc": "This class specifies the configurations for an `Estimator` run.", "type": "API"}, {"name": "tf.estimator.SecondOrStepTimer", "docs": "Timer that triggers at most once every N seconds or once every N steps.\n\n This symbol is also exported to v2 in tf.estimator namespace. See\n https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/hooks/basic_session_run_hooks.py\n ", "desc": "Timer that triggers at most once every N seconds or once every N steps.", "type": "API"}, {"name": "tf.estimator.SessionRunArgs", "docs": "Represents arguments to be added to a `Session.run()` call.\n\n Args:\n fetches: Exactly like the 'fetches' argument to Session.Run().\n Can be a single tensor or op, a list of 'fetches' or a dictionary\n of fetches. For example:\n fetches = global_step_tensor\n fetches = [train_op, summary_op, global_step_tensor]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n Note that this can recurse as expected:\n fetches = {'step': global_step_tensor,\n 'ops': [train_op, check_nan_op]}\n feed_dict: Exactly like the `feed_dict` argument to `Session.Run()`\n options: Exactly like the `options` argument to `Session.run()`, i.e., a\n config_pb2.RunOptions proto.\n ", "desc": "Represents arguments to be added to a `Session.run()` call.", "type": "API"}, {"name": "tf.estimator.SessionRunContext", "docs": "Provides information about the `session.run()` call being made.\n\n Provides information about original request to `Session.Run()` function.\n SessionRunHook objects can stop the loop by calling `request_stop()` of\n `run_context`. In the future we may use this object to add more information\n about run without changing the Hook API.\n ", "desc": "Provides information about the `session.run()` call being made.", "type": "API"}, {"name": "tf.estimator.SessionRunHook", "docs": "Hook to extend calls to MonitoredSession.run().", "desc": "Hook to extend calls to MonitoredSession.run().", "type": "API"}, {"name": "tf.estimator.SessionRunValues", "docs": "Contains the results of `Session.run()`.\n\n In the future we may use this object to add more information about result of\n run without changing the Hook API.\n\n Args:\n results: The return values from `Session.run()` corresponding to the fetches\n attribute returned in the RunArgs. Note that this has the same shape as\n the RunArgs fetches. For example:\n fetches = global_step_tensor\n => results = nparray(int)\n fetches = [train_op, summary_op, global_step_tensor]\n => results = [None, nparray(string), nparray(int)]\n fetches = {'step': global_step_tensor, 'summ': summary_op}\n => results = {'step': nparray(int), 'summ': nparray(string)}\n options: `RunOptions` from the `Session.run()` call.\n run_metadata: `RunMetadata` from the `Session.run()` call.\n ", "desc": "Contains the results of `Session.run()`.", "type": "API"}, {"name": "tf.estimator.StepCounterHook", "docs": "Hook that counts steps per second.", "desc": "Hook that counts steps per second.", "type": "API"}, {"name": "tf.estimator.StopAtStepHook", "docs": "Hook that requests stop at a specified step.\n\n @compatibility(TF2)\n Please check this [notebook][notebook] on how to migrate the API to TF2.\n\n [notebook]:https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/logging_stop_hook.ipynb\n\n @end_compatibility\n ", "desc": "Hook that requests stop at a specified step.", "type": "API"}, {"name": "tf.estimator.SummarySaverHook", "docs": "Saves summaries every N steps.", "desc": "Saves summaries every N steps.", "type": "API"}, {"name": "tf.estimator.train_and_evaluate", "docs": "Train and evaluate the `estimator`.\n\n This utility function trains, evaluates, and (optionally) exports the model by\n using the given `estimator`. All training related specification is held in\n `train_spec`, including training `input_fn` and training max steps, etc. All\n evaluation and export related specification is held in `eval_spec`, including\n evaluation `input_fn`, steps, etc.\n\n This utility function provides consistent behavior for both local\n (non-distributed) and distributed configurations. The default distribution\n configuration is parameter server-based between-graph replication. For other\n types of distribution configurations such as all-reduce training, please use\n [DistributionStrategies](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/distribute).\n\n Overfitting: In order to avoid overfitting, it is recommended to set up the\n training `input_fn` to shuffle the training data properly.\n\n Stop condition: In order to support both distributed and non-distributed\n configuration reliably, the only supported stop condition for model\n training is `train_spec.max_steps`. If `train_spec.max_steps` is `None`, the\n model is trained forever. *Use with care* if model stop condition is\n different. For example, assume that the model is expected to be trained with\n one epoch of training data, and the training `input_fn` is configured to throw\n `OutOfRangeError` after going through one epoch, which stops the\n `Estimator.train`. For a three-training-worker distributed configuration, each\n training worker is likely to go through the whole epoch independently. So, the\n model will be trained with three epochs of training data instead of one epoch.\n\n Example of local (non-distributed) training:\n\n ```python\n # Set up feature columns.\n categorial_feature_a = categorial_column_with_hash_bucket(...)\n categorial_feature_a_emb = embedding_column(\n categorical_column=categorial_feature_a, ...)\n ... # other feature columns\n\n estimator = DNNClassifier(\n feature_columns=[categorial_feature_a_emb, ...],\n hidden_units=[1024, 512, 256])\n\n # Or set up the model directory\n # estimator = DNNClassifier(\n # config=tf.estimator.RunConfig(\n # model_dir='/my_model', save_summary_steps=100),\n # feature_columns=[categorial_feature_a_emb, ...],\n # hidden_units=[1024, 512, 256])\n\n # Input pipeline for train and evaluate.\n def train_input_fn(): # returns x, y\n # please shuffle the data.\n pass\n def eval_input_fn(): # returns x, y\n pass\n\n train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)\n eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)\n\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n ```\n Note that in current implementation `estimator.evaluate` will be called\n multiple times. This means that evaluation graph (including eval_input_fn)\n will be re-created for each `evaluate` call. `estimator.train` will be called\n only once.\n\n Example of distributed training:\n\n Regarding the example of distributed training, the code above can be used\n without a change (Please do make sure that the `RunConfig.model_dir` for all\n workers is set to the same directory, i.e., a shared file system all workers\n can read and write). The only extra work to do is setting the environment\n variable `TF_CONFIG` properly for each worker correspondingly.\n\n Also see\n [Distributed TensorFlow](https://www.tensorflow.org/deploy/distributed).\n\n Setting environment variable depends on the platform. For example, on Linux,\n it can be done as follows (`$` is the shell prompt):\n\n ```\n $ TF_CONFIG='' python train_model.py\n ```\n\n For the content in `TF_CONFIG`, assume that the training cluster spec looks\n like:\n\n ```\n cluster = {\"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]}\n ```\n\n Example of `TF_CONFIG` for chief training worker (must have one and only one):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"chief\", \"index\": 0}\n }'\n ```\n Note that the chief worker also does the model training job, similar to other\n non-chief training workers (see next paragraph). In addition to the model\n training, it manages some extra work, e.g., checkpoint saving and restoring,\n writing summaries, etc.\n\n Example of `TF_CONFIG` for non-chief training worker (optional, could be\n multiple):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"worker\", \"index\": 0}\n }'\n ```\n where the `task.index` should be set as 0, 1, 2, in this example, respectively\n for non-chief training workers.\n\n Example of `TF_CONFIG` for parameter server, aka ps (could be multiple):\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"ps\", \"index\": 0}\n }'\n ```\n where the `task.index` should be set as 0 and 1, in this example, respectively\n for parameter servers.\n\n Example of `TF_CONFIG` for evaluator task. Evaluator is a special task that is\n not part of the training cluster. There could be only one. It is used for\n model evaluation.\n\n ```\n # This should be a JSON string, which is set as environment variable. Usually\n # the cluster manager handles that.\n TF_CONFIG='{\n \"cluster\": {\n \"chief\": [\"host0:2222\"],\n \"worker\": [\"host1:2222\", \"host2:2222\", \"host3:2222\"],\n \"ps\": [\"host4:2222\", \"host5:2222\"]\n },\n \"task\": {\"type\": \"evaluator\", \"index\": 0}\n }'\n ```\n\n When `distribute` or `experimental_distribute.train_distribute` and\n `experimental_distribute.remote_cluster` is set, this method will start a\n client running on the current host which connects to the `remote_cluster` for\n training and evaluation.\n\n Args:\n estimator: An `Estimator` instance to train and evaluate.\n train_spec: A `TrainSpec` instance to specify the training specification.\n eval_spec: A `EvalSpec` instance to specify the evaluation and export\n specification.\n\n Returns:\n A tuple of the result of the `evaluate` call to the `Estimator` and the\n export results using the specified `Exporter`s.\n Currently, the return value is undefined for distributed training mode.\n\n Raises:\n ValueError: if environment variable `TF_CONFIG` is incorrectly set.\n ", "desc": "Train and evaluate the `estimator`.", "type": "API"}, {"name": "tf.estimator.TrainSpec", "docs": "Configuration for the \"train\" part for the `train_and_evaluate` call.\n\n `TrainSpec` determines the input data for the training, as well as the\n duration. Optional hooks run at various stages of training.\n\n Usage:\n\n >>> train_spec = tf.estimator.TrainSpec(\n ... input_fn=lambda: 1,\n ... max_steps=100,\n ... hooks=[_StopAtSecsHook(stop_after_secs=10)],\n ... saving_listeners=[_NewCheckpointListenerForEvaluate(None, 20, None)])\n >>> train_spec.saving_listeners[0]._eval_throttle_secs\n 20\n >>> train_spec.hooks[0]._stop_after_secs\n 10\n >>> train_spec.max_steps\n 100\n ", "desc": "Configuration for the \"train\" part for the `train_and_evaluate` call.", "type": "API"}, {"name": "tf.estimator.VocabInfo", "docs": "Vocabulary information for warm-starting.\n\n See `tf.estimator.WarmStartSettings` for examples of using\n VocabInfo to warm-start.\n\n Args:\n new_vocab: [Required] A path to the new vocabulary file (used with the model\n to be trained).\n new_vocab_size: [Required] An integer indicating how many entries of the new\n vocabulary will used in training.\n num_oov_buckets: [Required] An integer indicating how many OOV buckets are\n associated with the vocabulary.\n old_vocab: [Required] A path to the old vocabulary file (used with the\n checkpoint to be warm-started from).\n old_vocab_size: [Optional] An integer indicating how many entries of the old\n vocabulary were used in the creation of the checkpoint. If not provided,\n the entire old vocabulary will be used.\n backup_initializer: [Optional] A variable initializer used for variables\n corresponding to new vocabulary entries and OOV. If not provided, these\n entries will be zero-initialized.\n axis: [Optional] Denotes what axis the vocabulary corresponds to. The\n default, 0, corresponds to the most common use case (embeddings or\n linear weights for binary classification / regression). An axis of 1\n could be used for warm-starting output layers with class vocabularies.\n\n Returns:\n A `VocabInfo` which represents the vocabulary information for warm-starting.\n\n Raises:\n ValueError: `axis` is neither 0 or 1.\n\n Example Usage:\n```python\n embeddings_vocab_info = tf.VocabInfo(\n new_vocab='embeddings_vocab',\n new_vocab_size=100,\n num_oov_buckets=1,\n old_vocab='pretrained_embeddings_vocab',\n old_vocab_size=10000,\n backup_initializer=tf.compat.v1.truncated_normal_initializer(\n mean=0.0, stddev=(1 / math.sqrt(embedding_dim))),\n axis=0)\n\n softmax_output_layer_kernel_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.glorot_uniform_initializer(),\n axis=1)\n\n softmax_output_layer_bias_vocab_info = tf.VocabInfo(\n new_vocab='class_vocab',\n new_vocab_size=5,\n num_oov_buckets=0, # No OOV for classes.\n old_vocab='old_class_vocab',\n old_vocab_size=8,\n backup_initializer=tf.compat.v1.zeros_initializer(),\n axis=0)\n\n #Currently, only axis=0 and axis=1 are supported.\n ```\n ", "desc": "Vocabulary information for warm-starting.", "type": "API"}, {"name": "tf.estimator.WarmStartSettings", "docs": "Settings for warm-starting in `tf.estimator.Estimators`.\n\n Example Use with canned `tf.estimator.DNNEstimator`:\n\n ```\n emb_vocab_file = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_vocabulary_file(\n \"sc_vocab_file\", \"new_vocab.txt\", vocab_size=100),\n dimension=8)\n emb_vocab_list = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_vocabulary_list(\n \"sc_vocab_list\", vocabulary_list=[\"a\", \"b\"]),\n dimension=8)\n estimator = tf.estimator.DNNClassifier(\n hidden_units=[128, 64], feature_columns=[emb_vocab_file, emb_vocab_list],\n warm_start_from=ws)\n ```\n\n where `ws` could be defined as:\n\n Warm-start all weights in the model (input layer and hidden weights).\n Either the directory or a specific checkpoint can be provided (in the case\n of the former, the latest checkpoint will be used):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\")\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp/model-1000\")\n ```\n\n Warm-start only the embeddings (input layer):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=\".*input_layer.*\")\n ```\n\n Warm-start all weights but the embedding parameters corresponding to\n `sc_vocab_file` have a different vocab from the one used in the current\n model:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\"\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start only `sc_vocab_file` embeddings (and no other variables), which\n have a different vocab from the one used in the current model:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\"\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=None,\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start all weights but the parameters corresponding to `sc_vocab_file`\n have a different vocab from the one used in current checkpoint, and only\n 100 of those entries were used:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\",\n old_vocab_size=100\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n })\n ```\n\n Warm-start all weights but the parameters corresponding to `sc_vocab_file`\n have a different vocab from the one used in current checkpoint and the\n parameters corresponding to `sc_vocab_list` have a different name from the\n current checkpoint:\n\n ```\n vocab_info = tf.estimator.VocabInfo(\n new_vocab=sc_vocab_file.vocabulary_file,\n new_vocab_size=sc_vocab_file.vocabulary_size,\n num_oov_buckets=sc_vocab_file.num_oov_buckets,\n old_vocab=\"old_vocab.txt\",\n old_vocab_size=100\n )\n ws = WarmStartSettings(\n ckpt_to_initialize_from=\"/tmp\",\n var_name_to_vocab_info={\n \"input_layer/sc_vocab_file_embedding/embedding_weights\": vocab_info\n },\n var_name_to_prev_var_name={\n \"input_layer/sc_vocab_list_embedding/embedding_weights\":\n \"old_tensor_name\"\n })\n ```\n\n Warm-start all TRAINABLE variables:\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=\".*\")\n ```\n\n Warm-start all variables (including non-TRAINABLE):\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=[\".*\"])\n ```\n\n Warm-start non-TRAINABLE variables \"v1\", \"v1/Momentum\", and \"v2\" but not\n \"v2/momentum\":\n\n ```\n ws = WarmStartSettings(ckpt_to_initialize_from=\"/tmp\",\n vars_to_warm_start=[\"v1\", \"v2[^/]\"])\n ```\n\n Attributes:\n ckpt_to_initialize_from: [Required] A string specifying the directory with\n checkpoint file(s) or path to checkpoint from which to warm-start the\n model parameters.\n vars_to_warm_start: [Optional] One of the following:\n\n * A regular expression (string) that captures which variables to\n warm-start (see tf.compat.v1.get_collection). This expression will only\n consider variables in the TRAINABLE_VARIABLES collection -- if you need\n to warm-start non_TRAINABLE vars (such as optimizer accumulators or\n batch norm statistics), please use the below option.\n * A list of strings, each a regex scope provided to\n tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see\n tf.compat.v1.get_collection). For backwards compatibility reasons, this\n is separate from the single-string argument type.\n * A list of Variables to warm-start. If you do not have access to the\n `Variable` objects at the call site, please use the above option.\n * `None`, in which case only TRAINABLE variables specified in\n `var_name_to_vocab_info` will be warm-started.\n\n Defaults to `'.*'`, which warm-starts all variables in the\n TRAINABLE_VARIABLES collection. Note that this excludes variables such as\n accumulators and moving statistics from batch norm.\n var_name_to_vocab_info: [Optional] Dict of variable names (strings) to\n `tf.estimator.VocabInfo`. The variable names should be \"full\" variables,\n not the names of the partitions. If not explicitly provided, the variable\n is assumed to have no (changes to) vocabulary.\n var_name_to_prev_var_name: [Optional] Dict of variable names (strings) to\n name of the previously-trained variable in `ckpt_to_initialize_from`. If\n not explicitly provided, the name of the variable is assumed to be same\n between previous checkpoint and current model. Note that this has no\n effect on the set of variables that is warm-started, and only controls\n name mapping (use `vars_to_warm_start` for controlling what variables to\n warm-start).\n ", "desc": "Settings for warm-starting in `tf.estimator.Estimators`.", "type": "API"}, {"name": "tf.executing_eagerly", "docs": "Checks whether the current thread has eager execution enabled.\n\n Eager execution is enabled by default and this API returns `True`\n in most of cases. However, this API might return `False` in the following use\n cases.\n\n * Executing inside `tf.function`, unless under `tf.init_scope` or\n `tf.config.run_functions_eagerly(True)` is previously called.\n * Executing inside a transformation function for `tf.dataset`.\n * `tf.compat.v1.disable_eager_execution()` is called.\n\n General case:\n\n >>> print(tf.executing_eagerly())\n True\n\n Inside `tf.function`:\n\n >>> @tf.function\n ... def fn():\n ... with tf.init_scope():\n ... print(tf.executing_eagerly())\n ... print(tf.executing_eagerly())\n >>> fn()\n True\n False\n\n Inside `tf.function` after `tf.config.run_functions_eagerly(True)` is called:\n\n >>> tf.config.run_functions_eagerly(True)\n >>> @tf.function\n ... def fn():\n ... with tf.init_scope():\n ... print(tf.executing_eagerly())\n ... print(tf.executing_eagerly())\n >>> fn()\n True\n True\n >>> tf.config.run_functions_eagerly(False)\n\n Inside a transformation function for `tf.dataset`:\n\n >>> def data_fn(x):\n ... print(tf.executing_eagerly())\n ... return x\n >>> dataset = tf.data.Dataset.range(100)\n >>> dataset = dataset.map(data_fn)\n False\n\n Returns:\n `True` if the current thread has eager execution enabled.\n ", "desc": "Checks whether the current thread has eager execution enabled.", "type": "API"}, {"name": "tf.exp", "docs": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).\n\n This function computes the exponential of the input tensor element-wise.\n i.e. `math.exp(x)` or \\\\(e^x\\\\), where `x` is the input tensor.\n \\\\(e\\\\) denotes Euler's number and is approximately equal to 2.718281.\n Output is positive for any real input.\n\n >>> x = tf.constant(2.0)\n >>> tf.math.exp(x)\n \n\n >>> x = tf.constant([2.0, 8.0])\n >>> tf.math.exp(x)\n \n\n For complex numbers, the exponential value is calculated as\n $$\n e^{x+iy} = {e^x} {e^{iy}} = {e^x} ({\\cos (y) + i \\sin (y)})\n $$\n\n For `1+1j` the value would be computed as:\n $$\n e^1 (\\cos (1) + i \\sin (1)) = 2.7182817 \\times (0.5403023+0.84147096j)\n $$\n\n >>> x = tf.constant(1 + 1j)\n >>> tf.math.exp(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.exp\n @end_compatibility\n ", "desc": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).", "type": "API"}, {"name": "tf.expand_dims", "docs": "Returns a tensor with a length 1 axis inserted at index `axis`.\n\n Given a tensor `input`, this operation inserts a dimension of length 1 at the\n dimension index `axis` of `input`'s shape. The dimension index follows Python\n indexing rules: It's zero-based, a negative index it is counted backward\n from the end.\n\n This operation is useful to:\n\n * Add an outer \"batch\" dimension to a single element.\n * Align axes for broadcasting.\n * To add an inner vector length axis to a tensor of scalars.\n\n For example:\n\n If you have a single image of shape `[height, width, channels]`:\n\n >>> image = tf.zeros([10,10,3])\n\n You can add an outer `batch` axis by passing `axis=0`:\n\n >>> tf.expand_dims(image, axis=0).shape.as_list()\n [1, 10, 10, 3]\n\n The new axis location matches Python `list.insert(axis, 1)`:\n\n >>> tf.expand_dims(image, axis=1).shape.as_list()\n [10, 1, 10, 3]\n\n Following standard Python indexing rules, a negative `axis` counts from the\n end so `axis=-1` adds an inner most dimension:\n\n >>> tf.expand_dims(image, -1).shape.as_list()\n [10, 10, 3, 1]\n\n This operation requires that `axis` is a valid index for `input.shape`,\n following Python indexing rules:\n\n ```\n -1-tf.rank(input) <= axis <= tf.rank(input)\n ```\n\n This operation is related to:\n\n * `tf.squeeze`, which removes dimensions of size 1.\n * `tf.reshape`, which provides more flexible reshaping capability.\n * `tf.sparse.expand_dims`, which provides this functionality for\n `tf.SparseTensor`\n\n Args:\n input: A `Tensor`.\n axis: Integer specifying the dimension index at which to expand the\n shape of `input`. Given an input of D dimensions, `axis` must be in range\n `[-(D+1), D]` (inclusive).\n name: Optional string. The name of the output `Tensor`.\n\n Returns:\n A tensor with the same data as `input`, with an additional dimension\n inserted at the index specified by `axis`.\n\n Raises:\n TypeError: If `axis` is not specified.\n InvalidArgumentError: If `axis` is out of range `[-(D+1), D]`.\n ", "desc": "Returns a tensor with a length 1 axis inserted at index `axis`.", "type": "API"}, {"name": "tf.experimental", "docs": "Public API for tf.experimental namespace.\n", "desc": "Public API for tf.experimental namespace.", "type": "API"}, {"name": "tf.experimental.async_clear_error", "docs": "Clear pending operations and error statuses in async execution.\n\n In async execution mode, an error in op/function execution can lead to errors\n in subsequent ops/functions that are scheduled but not yet executed. Calling\n this method clears all pending operations and reset the async execution state.\n\n Example:\n\n ```\n while True:\n try:\n # Step function updates the metric `loss` internally\n train_step_fn()\n except tf.errors.OutOfRangeError:\n tf.experimental.async_clear_error()\n break\n logging.info('loss = %s', loss.numpy())\n ```\n ", "desc": "Clear pending operations and error statuses in async execution.", "type": "API"}, {"name": "tf.experimental.async_scope", "docs": "Context manager for grouping async operations.\n\n Ops/function calls inside the scope can return before finishing the actual\n execution. When exiting the async scope, a synchronization barrier will be\n automatically added to ensure the completion of all async op and function\n execution, potentially raising exceptions if async execution results in\n an error state.\n\n Users may write the following code to asynchronously invoke `train_step_fn`\n and log the `loss` metric for every `num_steps` steps in a training loop.\n `train_step_fn` internally consumes data using `iterator.get_next()`, and may\n throw OutOfRangeError when running out of data. In the case:\n\n ```\n try:\n with tf.experimental.async_scope():\n for _ in range(num_steps):\n # Step function updates the metric `loss` internally\n train_step_fn()\n except tf.errors.OutOfRangeError:\n tf.experimental.async_clear_error()\n logging.info('loss = %s', loss.numpy())\n ```\n\n Yields:\n Context manager for grouping async operations.\n ", "desc": "Context manager for grouping async operations.", "type": "API"}, {"name": "tf.experimental.dlpack", "docs": "Public API for tf.experimental.dlpack namespace.\n", "desc": "Public API for tf.experimental.dlpack namespace.", "type": "API"}, {"name": "tf.experimental.dlpack.from_dlpack", "docs": "Returns the Tensorflow eager tensor.\n\n The returned tensor uses the memory shared by dlpack capsules from other\n framework.\n\n ```python\n a = tf.experimental.dlpack.from_dlpack(dlcapsule)\n # `a` uses the memory shared by dlpack\n ```\n\n Args:\n dlcapsule: A PyCapsule named as dltensor\n\n Returns:\n A Tensorflow eager tensor\n ", "desc": "Returns the Tensorflow eager tensor.", "type": "API"}, {"name": "tf.experimental.dlpack.to_dlpack", "docs": "Returns the dlpack capsule representing the tensor.\n\n This operation ensures the underlying data memory is ready when returns.\n\n ```python\n a = tf.tensor([1, 10])\n dlcapsule = tf.experimental.dlpack.to_dlpack(a)\n # dlcapsule represents the dlpack data structure\n ```\n\n Args:\n tf_tensor: Tensorflow eager tensor, to be converted to dlpack capsule.\n\n Returns:\n A PyCapsule named as dltensor, which shares the underlying memory to other\n framework. This PyCapsule can be consumed only once.\n ", "desc": "Returns the dlpack capsule representing the tensor.", "type": "API"}, {"name": "tf.experimental.function_executor_type", "docs": "Context manager for setting the executor of eager defined functions.\n\n Eager defined functions are functions decorated by tf.contrib.eager.defun.\n\n Args:\n executor_type: a string for the name of the executor to be used to execute\n functions defined by tf.contrib.eager.defun.\n\n Yields:\n Context manager for setting the executor of eager defined functions.\n ", "desc": "Context manager for setting the executor of eager defined functions.", "type": "API"}, {"name": "tf.experimental.numpy", "docs": "# tf.experimental.numpy: NumPy API on TensorFlow.\n\nThis module provides a subset of NumPy API, built on top of TensorFlow\noperations. APIs are based on and have been tested with NumPy 1.16 version.\n\nThe set of supported APIs may be expanded over time. Also future releases may\nchange the baseline version of NumPy API being supported. A list of some\nsystematic differences with NumPy is listed later in the \"Differences with\nNumPy\" section.\n\n## Getting Started\n\nPlease also see [TensorFlow NumPy Guide](\nhttps://www.tensorflow.org/guide/tf_numpy).\n\nIn the code snippets below, we will assume that `tf.experimental.numpy` is\nimported as `tnp` and NumPy is imported as `np`\n\n```python\nprint(tnp.ones([2,1]) + np.ones([1, 2]))\n```\n\n## Types\n\nThe module provides an `ndarray` class which wraps an immutable `tf.Tensor`.\nAdditional functions are provided which accept array-like objects. Here\narray-like objects include `ndarrays` as defined by this module, as well as\n`tf.Tensor`, in addition to types accepted by NumPy.\n\nA subset of NumPy dtypes are supported. Type promotion follows NumPy\nsemantics.\n\n```python\nprint(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))\n```\n\n## Array Interface\n\nThe `ndarray` class implements the `__array__` interface. This should allow\nthese objects to be passed into contexts that expect a NumPy or array-like\nobject (e.g. matplotlib).\n\n```python\nnp.sum(tnp.ones([1, 2]) + np.ones([2, 1]))\n```\n\n\n## TF Interoperability\n\nThe TF-NumPy API calls can be interleaved with TensorFlow calls\nwithout incurring Tensor data copies. This is true even if the `ndarray` or\n`tf.Tensor` is placed on a non-CPU device.\n\nIn general, the expected behavior should be on par with that of code involving\n`tf.Tensor` and running stateless TensorFlow functions on them.\n\n```python\ntnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))\n```\n\nNote that the `__array_priority__` is currently chosen to be lower than\n`tf.Tensor`. Hence the `+` operator above returns a `tf.Tensor`.\n\nAdditional examples of interoperability include:\n\n* using `with tf.GradientTape()` scope to compute gradients through the\n TF-NumPy API calls.\n* using `tf.distribution.Strategy` scope for distributed execution\n* using `tf.vectorized_map()` for speeding up code using auto-vectorization\n\n\n\n## Device Support\n\nGiven that `ndarray` and functions wrap TensorFlow constructs, the code will\nhave GPU and TPU support on par with TensorFlow. Device placement can be\ncontrolled by using `with tf.device` scopes. Note that these devices could\nbe local or remote.\n\n```python\nwith tf.device(\"GPU:0\"):\n x = tnp.ones([1, 2])\nprint(tf.convert_to_tensor(x).device)\n```\n\n## Graph and Eager Modes\n\nEager mode execution should typically match NumPy semantics of executing\nop-by-op. However the same code can be executed in graph mode, by putting it\ninside a `tf.function`. The function body can contain NumPy code, and the inputs\ncan be `ndarray` as well.\n\n```python\n@tf.function\ndef f(x, y):\n return tnp.sum(x + y)\n\nf(tnp.ones([1, 2]), tf.ones([2, 1]))\n```\nPython control flow based on `ndarray` values will be translated by\n[autograph](https://www.tensorflow.org/code/tensorflow/python/autograph/g3doc/reference/index.md)\ninto `tf.cond` and `tf.while_loop` constructs. The code can be XLA compiled\nfor further optimizations.\n\nHowever, note that graph mode execution can change behavior of certain\noperations since symbolic execution may not have information that is computed\nduring runtime. Some differences are:\n\n* Shapes can be incomplete or unknown in graph mode. This means that\n `ndarray.shape`, `ndarray.size` and `ndarray.ndim` can return `ndarray`\n objects instead of returning integer (or tuple of integer) values.\n* `__len__`, `__iter__` and `__index__` properties of `ndarray`\n may similarly not be supported in graph mode. Code using these\n may need to change to explicit shape operations or control flow\n constructs.\n* Also note the [autograph limitations](\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md).\n\n\n## Mutation and Variables\n\n`ndarrays` currently wrap immutable `tf.Tensor`. Hence mutation\noperations like slice assigns are not supported. This may change in the future.\nNote however that one can directly construct a `tf.Variable` and use that with\nthe TF-NumPy APIs.\n\n```python\ntf_var = tf.Variable(2.0)\ntf_var.assign_add(tnp.square(tf_var))\n```\n\n## Differences with NumPy\n\nHere is a non-exhaustive list of differences:\n\n* Not all dtypes are currently supported. e.g. `np.float96`, `np.float128`.\n `np.object_`, `np.str_`, `np.recarray` types are not supported.\n* `ndarray` storage is in C order only. Fortran order, views, `stride_tricks`\n are not supported.\n* Only a subset of functions and modules are supported. This set will be\n expanded over time. For supported functions, some arguments or argument\n values may not be supported. These differences are generally provided in the\n function comments. Full `ufunc` support is also not provided.\n* Buffer mutation is currently not supported. `ndarrays` wrap immutable\n tensors. This means that output buffer arguments (e.g. `out` in ufuncs) are\n not supported.\n* NumPy C API is not supported. NumPy's Cython and Swig integration are not\n supported.\n\n", "desc": "# tf.experimental.numpy: NumPy API on TensorFlow.", "type": "API"}, {"name": "tf.experimental.numpy.abs", "docs": "TensorFlow variant of NumPy's `abs`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.abs`](https://numpy.org/doc/1.16/reference/generated/numpy.absolute.html).", "desc": "TensorFlow variant of NumPy's `abs`.", "type": "API"}, {"name": "tf.experimental.numpy.absolute", "docs": "TensorFlow variant of NumPy's `absolute`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.absolute`](https://numpy.org/doc/1.16/reference/generated/numpy.absolute.html).", "desc": "TensorFlow variant of NumPy's `absolute`.", "type": "API"}, {"name": "tf.experimental.numpy.add", "docs": "TensorFlow variant of NumPy's `add`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.add`](https://numpy.org/doc/1.16/reference/generated/numpy.add.html).", "desc": "TensorFlow variant of NumPy's `add`.", "type": "API"}, {"name": "tf.experimental.numpy.all", "docs": "TensorFlow variant of NumPy's `all`.\n\nUnsupported arguments: `out`, `where`.\n\nSee the NumPy documentation for [`numpy.all`](https://numpy.org/doc/1.16/reference/generated/numpy.all.html).", "desc": "TensorFlow variant of NumPy's `all`.", "type": "API"}, {"name": "tf.experimental.numpy.allclose", "docs": "TensorFlow variant of NumPy's `allclose`.\n\nSee the NumPy documentation for [`numpy.allclose`](https://numpy.org/doc/1.16/reference/generated/numpy.allclose.html).", "desc": "TensorFlow variant of NumPy's `allclose`.", "type": "API"}, {"name": "tf.experimental.numpy.amax", "docs": "TensorFlow variant of NumPy's `amax`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.amax`](https://numpy.org/doc/1.16/reference/generated/numpy.amax.html).", "desc": "TensorFlow variant of NumPy's `amax`.", "type": "API"}, {"name": "tf.experimental.numpy.amin", "docs": "TensorFlow variant of NumPy's `amin`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.amin`](https://numpy.org/doc/1.16/reference/generated/numpy.amin.html).", "desc": "TensorFlow variant of NumPy's `amin`.", "type": "API"}, {"name": "tf.experimental.numpy.angle", "docs": "TensorFlow variant of NumPy's `angle`.\n\nSee the NumPy documentation for [`numpy.angle`](https://numpy.org/doc/1.16/reference/generated/numpy.angle.html).", "desc": "TensorFlow variant of NumPy's `angle`.", "type": "API"}, {"name": "tf.experimental.numpy.any", "docs": "TensorFlow variant of NumPy's `any`.\n\nUnsupported arguments: `out`, `where`.\n\nSee the NumPy documentation for [`numpy.any`](https://numpy.org/doc/1.16/reference/generated/numpy.any.html).", "desc": "TensorFlow variant of NumPy's `any`.", "type": "API"}, {"name": "tf.experimental.numpy.append", "docs": "TensorFlow variant of NumPy's `append`.\n\nSee the NumPy documentation for [`numpy.append`](https://numpy.org/doc/1.16/reference/generated/numpy.append.html).", "desc": "TensorFlow variant of NumPy's `append`.", "type": "API"}, {"name": "tf.experimental.numpy.arange", "docs": "TensorFlow variant of NumPy's `arange`.\n\nReturns `step`-separated values in the range [start, stop).\n\n Args:\n start: Start of the interval. Included in the range.\n stop: End of the interval. If not specified, `start` is treated as 0 and\n `start` value is used as `stop`. If specified, it is not included in the\n range if `step` is integer. When `step` is floating point, it may or may\n not be included.\n step: The difference between 2 consecutive values in the output range. It is\n recommended to use `linspace` instead of using non-integer values for\n `step`.\n dtype: Optional. Type of the resulting ndarray. Could be a python type, a\n NumPy type or a TensorFlow `DType`. If not provided, the largest type of\n `start`, `stop`, `step` is used.\n\n Raises:\n ValueError: If step is zero.\n \n\nSee the NumPy documentation for [`numpy.arange`](https://numpy.org/doc/1.16/reference/generated/numpy.arange.html).", "desc": "TensorFlow variant of NumPy's `arange`.", "type": "API"}, {"name": "tf.experimental.numpy.arccos", "docs": "TensorFlow variant of NumPy's `arccos`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arccos`](https://numpy.org/doc/1.16/reference/generated/numpy.arccos.html).", "desc": "TensorFlow variant of NumPy's `arccos`.", "type": "API"}, {"name": "tf.experimental.numpy.arccosh", "docs": "TensorFlow variant of NumPy's `arccosh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arccosh`](https://numpy.org/doc/1.16/reference/generated/numpy.arccosh.html).", "desc": "TensorFlow variant of NumPy's `arccosh`.", "type": "API"}, {"name": "tf.experimental.numpy.arcsin", "docs": "TensorFlow variant of NumPy's `arcsin`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arcsin`](https://numpy.org/doc/1.16/reference/generated/numpy.arcsin.html).", "desc": "TensorFlow variant of NumPy's `arcsin`.", "type": "API"}, {"name": "tf.experimental.numpy.arcsinh", "docs": "TensorFlow variant of NumPy's `arcsinh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arcsinh`](https://numpy.org/doc/1.16/reference/generated/numpy.arcsinh.html).", "desc": "TensorFlow variant of NumPy's `arcsinh`.", "type": "API"}, {"name": "tf.experimental.numpy.arctan", "docs": "TensorFlow variant of NumPy's `arctan`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arctan`](https://numpy.org/doc/1.16/reference/generated/numpy.arctan.html).", "desc": "TensorFlow variant of NumPy's `arctan`.", "type": "API"}, {"name": "tf.experimental.numpy.arctan2", "docs": "TensorFlow variant of NumPy's `arctan2`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arctan2`](https://numpy.org/doc/1.16/reference/generated/numpy.arctan2.html).", "desc": "TensorFlow variant of NumPy's `arctan2`.", "type": "API"}, {"name": "tf.experimental.numpy.arctanh", "docs": "TensorFlow variant of NumPy's `arctanh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.arctanh`](https://numpy.org/doc/1.16/reference/generated/numpy.arctanh.html).", "desc": "TensorFlow variant of NumPy's `arctanh`.", "type": "API"}, {"name": "tf.experimental.numpy.argmax", "docs": "TensorFlow variant of NumPy's `argmax`.\n\nUnsupported arguments: `out`, `keepdims`.\n\nSee the NumPy documentation for [`numpy.argmax`](https://numpy.org/doc/1.16/reference/generated/numpy.argmax.html).", "desc": "TensorFlow variant of NumPy's `argmax`.", "type": "API"}, {"name": "tf.experimental.numpy.argmin", "docs": "TensorFlow variant of NumPy's `argmin`.\n\nUnsupported arguments: `out`, `keepdims`.\n\nSee the NumPy documentation for [`numpy.argmin`](https://numpy.org/doc/1.16/reference/generated/numpy.argmin.html).", "desc": "TensorFlow variant of NumPy's `argmin`.", "type": "API"}, {"name": "tf.experimental.numpy.argsort", "docs": "TensorFlow variant of NumPy's `argsort`.\n\nSee the NumPy documentation for [`numpy.argsort`](https://numpy.org/doc/1.16/reference/generated/numpy.argsort.html).", "desc": "TensorFlow variant of NumPy's `argsort`.", "type": "API"}, {"name": "tf.experimental.numpy.around", "docs": "TensorFlow variant of NumPy's `around`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.around`](https://numpy.org/doc/1.16/reference/generated/numpy.around.html).", "desc": "TensorFlow variant of NumPy's `around`.", "type": "API"}, {"name": "tf.experimental.numpy.array", "docs": "TensorFlow variant of NumPy's `array`.\n\nSince Tensors are immutable, a copy is made only if val is placed on a\n\n different device than the current one. Even if `copy` is False, a new Tensor\n may need to be built to satisfy `dtype` and `ndim`. This is used only if `val`\n is an ndarray or a Tensor.\n \n\nSee the NumPy documentation for [`numpy.array`](https://numpy.org/doc/1.16/reference/generated/numpy.array.html).", "desc": "TensorFlow variant of NumPy's `array`.", "type": "API"}, {"name": "tf.experimental.numpy.array_equal", "docs": "TensorFlow variant of NumPy's `array_equal`.\n\nUnsupported arguments: `equal_nan`.\n\nSee the NumPy documentation for [`numpy.array_equal`](https://numpy.org/doc/1.16/reference/generated/numpy.array_equal.html).", "desc": "TensorFlow variant of NumPy's `array_equal`.", "type": "API"}, {"name": "tf.experimental.numpy.asanyarray", "docs": "TensorFlow variant of NumPy's `asanyarray`.\n\nSee the NumPy documentation for [`numpy.asanyarray`](https://numpy.org/doc/1.16/reference/generated/numpy.asanyarray.html).", "desc": "TensorFlow variant of NumPy's `asanyarray`.", "type": "API"}, {"name": "tf.experimental.numpy.asarray", "docs": "TensorFlow variant of NumPy's `asarray`.\n\nSee the NumPy documentation for [`numpy.asarray`](https://numpy.org/doc/1.16/reference/generated/numpy.asarray.html).", "desc": "TensorFlow variant of NumPy's `asarray`.", "type": "API"}, {"name": "tf.experimental.numpy.ascontiguousarray", "docs": "TensorFlow variant of NumPy's `ascontiguousarray`.\n\nSee the NumPy documentation for [`numpy.ascontiguousarray`](https://numpy.org/doc/1.16/reference/generated/numpy.ascontiguousarray.html).", "desc": "TensorFlow variant of NumPy's `ascontiguousarray`.", "type": "API"}, {"name": "tf.experimental.numpy.atleast_1d", "docs": "TensorFlow variant of NumPy's `atleast_1d`.\n\nSee the NumPy documentation for [`numpy.atleast_1d`](https://numpy.org/doc/1.16/reference/generated/numpy.atleast_1d.html).", "desc": "TensorFlow variant of NumPy's `atleast_1d`.", "type": "API"}, {"name": "tf.experimental.numpy.atleast_2d", "docs": "TensorFlow variant of NumPy's `atleast_2d`.\n\nSee the NumPy documentation for [`numpy.atleast_2d`](https://numpy.org/doc/1.16/reference/generated/numpy.atleast_2d.html).", "desc": "TensorFlow variant of NumPy's `atleast_2d`.", "type": "API"}, {"name": "tf.experimental.numpy.atleast_3d", "docs": "TensorFlow variant of NumPy's `atleast_3d`.\n\nSee the NumPy documentation for [`numpy.atleast_3d`](https://numpy.org/doc/1.16/reference/generated/numpy.atleast_3d.html).", "desc": "TensorFlow variant of NumPy's `atleast_3d`.", "type": "API"}, {"name": "tf.experimental.numpy.average", "docs": "TensorFlow variant of NumPy's `average`.\n\nSee the NumPy documentation for [`numpy.average`](https://numpy.org/doc/1.16/reference/generated/numpy.average.html).", "desc": "TensorFlow variant of NumPy's `average`.", "type": "API"}, {"name": "tf.experimental.numpy.bitwise_and", "docs": "TensorFlow variant of NumPy's `bitwise_and`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.bitwise_and`](https://numpy.org/doc/1.16/reference/generated/numpy.bitwise_and.html).", "desc": "TensorFlow variant of NumPy's `bitwise_and`.", "type": "API"}, {"name": "tf.experimental.numpy.bitwise_not", "docs": "TensorFlow variant of NumPy's `bitwise_not`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.bitwise_not`](https://numpy.org/doc/1.16/reference/generated/numpy.invert.html).", "desc": "TensorFlow variant of NumPy's `bitwise_not`.", "type": "API"}, {"name": "tf.experimental.numpy.bitwise_or", "docs": "TensorFlow variant of NumPy's `bitwise_or`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.bitwise_or`](https://numpy.org/doc/1.16/reference/generated/numpy.bitwise_or.html).", "desc": "TensorFlow variant of NumPy's `bitwise_or`.", "type": "API"}, {"name": "tf.experimental.numpy.bitwise_xor", "docs": "TensorFlow variant of NumPy's `bitwise_xor`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.bitwise_xor`](https://numpy.org/doc/1.16/reference/generated/numpy.bitwise_xor.html).", "desc": "TensorFlow variant of NumPy's `bitwise_xor`.", "type": "API"}, {"name": "tf.experimental.numpy.bool_", "docs": "Boolean type (True or False), stored as a byte.\n\n .. warning::\n\n The :class:`bool_` type is not a subclass of the :class:`int_` type\n (the :class:`bool_` is not even a number type). This is different\n than Python's default implementation of :class:`bool` as a\n sub-class of :class:`int`.\n\n :Character code: ``'?'``\n :Alias: `numpy.bool8`", "desc": "Boolean type (True or False), stored as a byte.", "type": "API"}, {"name": "tf.experimental.numpy.broadcast_arrays", "docs": "TensorFlow variant of NumPy's `broadcast_arrays`.\n\nUnsupported arguments: `subok`.\n\nSee the NumPy documentation for [`numpy.broadcast_arrays`](https://numpy.org/doc/1.16/reference/generated/numpy.broadcast_arrays.html).", "desc": "TensorFlow variant of NumPy's `broadcast_arrays`.", "type": "API"}, {"name": "tf.experimental.numpy.broadcast_to", "docs": "TensorFlow variant of NumPy's `broadcast_to`.\n\nUnsupported arguments: `subok`.\n\nSee the NumPy documentation for [`numpy.broadcast_to`](https://numpy.org/doc/1.16/reference/generated/numpy.broadcast_to.html).", "desc": "TensorFlow variant of NumPy's `broadcast_to`.", "type": "API"}, {"name": "tf.experimental.numpy.cbrt", "docs": "TensorFlow variant of NumPy's `cbrt`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.cbrt`](https://numpy.org/doc/1.16/reference/generated/numpy.cbrt.html).", "desc": "TensorFlow variant of NumPy's `cbrt`.", "type": "API"}, {"name": "tf.experimental.numpy.ceil", "docs": "TensorFlow variant of NumPy's `ceil`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.ceil`](https://numpy.org/doc/1.16/reference/generated/numpy.ceil.html).", "desc": "TensorFlow variant of NumPy's `ceil`.", "type": "API"}, {"name": "tf.experimental.numpy.clip", "docs": "TensorFlow variant of NumPy's `clip`.\n\nUnsupported arguments: `out`, `kwargs`.\n\nSee the NumPy documentation for [`numpy.clip`](https://numpy.org/doc/1.16/reference/generated/numpy.clip.html).", "desc": "TensorFlow variant of NumPy's `clip`.", "type": "API"}, {"name": "tf.experimental.numpy.complex_", "docs": "Complex number type composed of two double-precision floating-point\n numbers, compatible with Python `complex`.\n\n :Character code: ``'D'``\n :Canonical name: `numpy.cdouble`\n :Alias: `numpy.cfloat`\n :Alias: `numpy.complex_`\n :Alias on this platform (Windows AMD64): `numpy.complex128`: Complex number type composed of 2 64-bit-precision floating-point numbers.", "desc": "Complex number type composed of two double-precision floating-point", "type": "API"}, {"name": "tf.experimental.numpy.complex128", "docs": "Complex number type composed of two double-precision floating-point\n numbers, compatible with Python `complex`.\n\n :Character code: ``'D'``\n :Canonical name: `numpy.cdouble`\n :Alias: `numpy.cfloat`\n :Alias: `numpy.complex_`\n :Alias on this platform (Windows AMD64): `numpy.complex128`: Complex number type composed of 2 64-bit-precision floating-point numbers.", "desc": "Complex number type composed of two double-precision floating-point", "type": "API"}, {"name": "tf.experimental.numpy.complex64", "docs": "Complex number type composed of two single-precision floating-point\n numbers.\n\n :Character code: ``'F'``\n :Canonical name: `numpy.csingle`\n :Alias: `numpy.singlecomplex`\n :Alias on this platform (Windows AMD64): `numpy.complex64`: Complex number type composed of 2 32-bit-precision floating-point numbers.", "desc": "Complex number type composed of two single-precision floating-point", "type": "API"}, {"name": "tf.experimental.numpy.compress", "docs": "TensorFlow variant of NumPy's `compress`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.compress`](https://numpy.org/doc/1.16/reference/generated/numpy.compress.html).", "desc": "TensorFlow variant of NumPy's `compress`.", "type": "API"}, {"name": "tf.experimental.numpy.concatenate", "docs": "TensorFlow variant of NumPy's `concatenate`.\n\nSee the NumPy documentation for [`numpy.concatenate`](https://numpy.org/doc/1.16/reference/generated/numpy.concatenate.html).", "desc": "TensorFlow variant of NumPy's `concatenate`.", "type": "API"}, {"name": "tf.experimental.numpy.conj", "docs": "TensorFlow variant of NumPy's `conj`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.conj`](https://numpy.org/doc/1.16/reference/generated/numpy.conj.html).", "desc": "TensorFlow variant of NumPy's `conj`.", "type": "API"}, {"name": "tf.experimental.numpy.conjugate", "docs": "TensorFlow variant of NumPy's `conjugate`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.conjugate`](https://numpy.org/doc/1.16/reference/generated/numpy.conj.html).", "desc": "TensorFlow variant of NumPy's `conjugate`.", "type": "API"}, {"name": "tf.experimental.numpy.copy", "docs": "TensorFlow variant of NumPy's `copy`.\n\nUnsupported arguments: `order`, `subok`.\n\nSee the NumPy documentation for [`numpy.copy`](https://numpy.org/doc/1.16/reference/generated/numpy.copy.html).", "desc": "TensorFlow variant of NumPy's `copy`.", "type": "API"}, {"name": "tf.experimental.numpy.cos", "docs": "TensorFlow variant of NumPy's `cos`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.cos`](https://numpy.org/doc/1.16/reference/generated/numpy.cos.html).", "desc": "TensorFlow variant of NumPy's `cos`.", "type": "API"}, {"name": "tf.experimental.numpy.cosh", "docs": "TensorFlow variant of NumPy's `cosh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.cosh`](https://numpy.org/doc/1.16/reference/generated/numpy.cosh.html).", "desc": "TensorFlow variant of NumPy's `cosh`.", "type": "API"}, {"name": "tf.experimental.numpy.count_nonzero", "docs": "TensorFlow variant of NumPy's `count_nonzero`.\n\nUnsupported arguments: `keepdims`.\n\nSee the NumPy documentation for [`numpy.count_nonzero`](https://numpy.org/doc/1.16/reference/generated/numpy.count_nonzero.html).", "desc": "TensorFlow variant of NumPy's `count_nonzero`.", "type": "API"}, {"name": "tf.experimental.numpy.cross", "docs": "TensorFlow variant of NumPy's `cross`.\n\nSee the NumPy documentation for [`numpy.cross`](https://numpy.org/doc/1.16/reference/generated/numpy.cross.html).", "desc": "TensorFlow variant of NumPy's `cross`.", "type": "API"}, {"name": "tf.experimental.numpy.cumprod", "docs": "TensorFlow variant of NumPy's `cumprod`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.cumprod`](https://numpy.org/doc/1.16/reference/generated/numpy.cumprod.html).", "desc": "TensorFlow variant of NumPy's `cumprod`.", "type": "API"}, {"name": "tf.experimental.numpy.cumsum", "docs": "TensorFlow variant of NumPy's `cumsum`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.cumsum`](https://numpy.org/doc/1.16/reference/generated/numpy.cumsum.html).", "desc": "TensorFlow variant of NumPy's `cumsum`.", "type": "API"}, {"name": "tf.experimental.numpy.deg2rad", "docs": "TensorFlow variant of NumPy's `deg2rad`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.deg2rad`](https://numpy.org/doc/1.16/reference/generated/numpy.deg2rad.html).", "desc": "TensorFlow variant of NumPy's `deg2rad`.", "type": "API"}, {"name": "tf.experimental.numpy.diag", "docs": "TensorFlow variant of NumPy's `diag`.\n\nRaises an error if input is not 1- or 2-d.\n\nSee the NumPy documentation for [`numpy.diag`](https://numpy.org/doc/1.16/reference/generated/numpy.diag.html).", "desc": "TensorFlow variant of NumPy's `diag`.", "type": "API"}, {"name": "tf.experimental.numpy.diag_indices", "docs": "TensorFlow variant of NumPy's `diag_indices`.\n\nSee the NumPy documentation for [`numpy.diag_indices`](https://numpy.org/doc/1.16/reference/generated/numpy.diag_indices.html).", "desc": "TensorFlow variant of NumPy's `diag_indices`.", "type": "API"}, {"name": "tf.experimental.numpy.diagflat", "docs": "TensorFlow variant of NumPy's `diagflat`.\n\nSee the NumPy documentation for [`numpy.diagflat`](https://numpy.org/doc/1.16/reference/generated/numpy.diagflat.html).", "desc": "TensorFlow variant of NumPy's `diagflat`.", "type": "API"}, {"name": "tf.experimental.numpy.diagonal", "docs": "TensorFlow variant of NumPy's `diagonal`.\n\nSee the NumPy documentation for [`numpy.diagonal`](https://numpy.org/doc/1.16/reference/generated/numpy.diagonal.html).", "desc": "TensorFlow variant of NumPy's `diagonal`.", "type": "API"}, {"name": "tf.experimental.numpy.diff", "docs": "TensorFlow variant of NumPy's `diff`.\n\nUnsupported arguments: `prepend`, `append`.\n\nSee the NumPy documentation for [`numpy.diff`](https://numpy.org/doc/1.16/reference/generated/numpy.diff.html).", "desc": "TensorFlow variant of NumPy's `diff`.", "type": "API"}, {"name": "tf.experimental.numpy.divide", "docs": "TensorFlow variant of NumPy's `divide`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.divide`](https://numpy.org/doc/1.16/reference/generated/numpy.divide.html).", "desc": "TensorFlow variant of NumPy's `divide`.", "type": "API"}, {"name": "tf.experimental.numpy.divmod", "docs": "TensorFlow variant of NumPy's `divmod`.\n\nUnsupported arguments: `out1`, `out2`, `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.divmod`](https://numpy.org/doc/1.16/reference/generated/numpy.divmod.html).", "desc": "TensorFlow variant of NumPy's `divmod`.", "type": "API"}, {"name": "tf.experimental.numpy.dot", "docs": "TensorFlow variant of NumPy's `dot`.\n\nSee the NumPy documentation for [`numpy.dot`](https://numpy.org/doc/1.16/reference/generated/numpy.dot.html).", "desc": "TensorFlow variant of NumPy's `dot`.", "type": "API"}, {"name": "tf.experimental.numpy.dsplit", "docs": "TensorFlow variant of NumPy's `dsplit`.\n\nSee the NumPy documentation for [`numpy.dsplit`](https://numpy.org/doc/1.16/reference/generated/numpy.dsplit.html).", "desc": "TensorFlow variant of NumPy's `dsplit`.", "type": "API"}, {"name": "tf.experimental.numpy.dstack", "docs": "TensorFlow variant of NumPy's `dstack`.\n\nSee the NumPy documentation for [`numpy.dstack`](https://numpy.org/doc/1.16/reference/generated/numpy.dstack.html).", "desc": "TensorFlow variant of NumPy's `dstack`.", "type": "API"}, {"name": "tf.experimental.numpy.einsum", "docs": "TensorFlow variant of NumPy's `einsum`.\n\nSee the NumPy documentation for [`numpy.einsum`](https://numpy.org/doc/1.16/reference/generated/numpy.einsum.html).", "desc": "TensorFlow variant of NumPy's `einsum`.", "type": "API"}, {"name": "tf.experimental.numpy.empty", "docs": "TensorFlow variant of NumPy's `empty`.\n\nSee the NumPy documentation for [`numpy.empty`](https://numpy.org/doc/1.16/reference/generated/numpy.empty.html).", "desc": "TensorFlow variant of NumPy's `empty`.", "type": "API"}, {"name": "tf.experimental.numpy.empty_like", "docs": "TensorFlow variant of NumPy's `empty_like`.\n\nSee the NumPy documentation for [`numpy.empty_like`](https://numpy.org/doc/1.16/reference/generated/numpy.empty_like.html).", "desc": "TensorFlow variant of NumPy's `empty_like`.", "type": "API"}, {"name": "tf.experimental.numpy.equal", "docs": "TensorFlow variant of NumPy's `equal`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.equal`](https://numpy.org/doc/1.16/reference/generated/numpy.equal.html).", "desc": "TensorFlow variant of NumPy's `equal`.", "type": "API"}, {"name": "tf.experimental.numpy.exp", "docs": "TensorFlow variant of NumPy's `exp`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.exp`](https://numpy.org/doc/1.16/reference/generated/numpy.exp.html).", "desc": "TensorFlow variant of NumPy's `exp`.", "type": "API"}, {"name": "tf.experimental.numpy.exp2", "docs": "TensorFlow variant of NumPy's `exp2`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.exp2`](https://numpy.org/doc/1.16/reference/generated/numpy.exp2.html).", "desc": "TensorFlow variant of NumPy's `exp2`.", "type": "API"}, {"name": "tf.experimental.numpy.expand_dims", "docs": "TensorFlow variant of NumPy's `expand_dims`.\n\nSee the NumPy documentation for [`numpy.expand_dims`](https://numpy.org/doc/1.16/reference/generated/numpy.expand_dims.html).", "desc": "TensorFlow variant of NumPy's `expand_dims`.", "type": "API"}, {"name": "tf.experimental.numpy.experimental_enable_numpy_behavior", "docs": "Enable NumPy behavior on Tensors.\n\n Enabling NumPy behavior has three effects:\n * It adds to `tf.Tensor` some common NumPy methods such as `T`,\n `reshape` and `ravel`.\n * It changes dtype promotion in `tf.Tensor` operators to be\n compatible with NumPy. For example,\n `tf.ones([], tf.int32) + tf.ones([], tf.float32)` used to throw a\n \"dtype incompatible\" error, but after this it will return a\n float64 tensor (obeying NumPy's promotion rules).\n * It enhances `tf.Tensor`'s indexing capability to be on par with\n [NumPy's](https://numpy.org/doc/stable/reference/arrays.indexing.html).\n\n Args:\n prefer_float32: Controls whether dtype inference will use float32\n for Python floats, or float64 (the default and the\n NumPy-compatible behavior).\n ", "desc": "Enable NumPy behavior on Tensors.", "type": "API"}, {"name": "tf.experimental.numpy.expm1", "docs": "TensorFlow variant of NumPy's `expm1`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.expm1`](https://numpy.org/doc/1.16/reference/generated/numpy.expm1.html).", "desc": "TensorFlow variant of NumPy's `expm1`.", "type": "API"}, {"name": "tf.experimental.numpy.eye", "docs": "TensorFlow variant of NumPy's `eye`.\n\nUnsupported arguments: `order`, `like`.\n\nSee the NumPy documentation for [`numpy.eye`](https://numpy.org/doc/1.16/reference/generated/numpy.eye.html).", "desc": "TensorFlow variant of NumPy's `eye`.", "type": "API"}, {"name": "tf.experimental.numpy.fabs", "docs": "TensorFlow variant of NumPy's `fabs`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.fabs`](https://numpy.org/doc/1.16/reference/generated/numpy.fabs.html).", "desc": "TensorFlow variant of NumPy's `fabs`.", "type": "API"}, {"name": "tf.experimental.numpy.finfo", "docs": "TensorFlow variant of NumPy's `finfo`.\n\nNote that currently it just forwards to the numpy namesake, while\n tensorflow and numpy dtypes may have different properties.\n\nSee the NumPy documentation for [`numpy.finfo`](https://numpy.org/doc/1.16/reference/generated/numpy.finfo.html).", "desc": "TensorFlow variant of NumPy's `finfo`.", "type": "API"}, {"name": "tf.experimental.numpy.fix", "docs": "TensorFlow variant of NumPy's `fix`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.fix`](https://numpy.org/doc/1.16/reference/generated/numpy.fix.html).", "desc": "TensorFlow variant of NumPy's `fix`.", "type": "API"}, {"name": "tf.experimental.numpy.flip", "docs": "TensorFlow variant of NumPy's `flip`.\n\nSee the NumPy documentation for [`numpy.flip`](https://numpy.org/doc/1.16/reference/generated/numpy.flip.html).", "desc": "TensorFlow variant of NumPy's `flip`.", "type": "API"}, {"name": "tf.experimental.numpy.fliplr", "docs": "TensorFlow variant of NumPy's `fliplr`.\n\nSee the NumPy documentation for [`numpy.fliplr`](https://numpy.org/doc/1.16/reference/generated/numpy.fliplr.html).", "desc": "TensorFlow variant of NumPy's `fliplr`.", "type": "API"}, {"name": "tf.experimental.numpy.flipud", "docs": "TensorFlow variant of NumPy's `flipud`.\n\nSee the NumPy documentation for [`numpy.flipud`](https://numpy.org/doc/1.16/reference/generated/numpy.flipud.html).", "desc": "TensorFlow variant of NumPy's `flipud`.", "type": "API"}, {"name": "tf.experimental.numpy.float_", "docs": "Double-precision floating-point number type, compatible with Python `float`\n and C ``double``.\n\n :Character code: ``'d'``\n :Canonical name: `numpy.double`\n :Alias: `numpy.float_`\n :Alias on this platform (Windows AMD64): `numpy.float64`: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa.", "desc": "Double-precision floating-point number type, compatible with Python `float`", "type": "API"}, {"name": "tf.experimental.numpy.float_power", "docs": "TensorFlow variant of NumPy's `float_power`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.float_power`](https://numpy.org/doc/1.16/reference/generated/numpy.float_power.html).", "desc": "TensorFlow variant of NumPy's `float_power`.", "type": "API"}, {"name": "tf.experimental.numpy.float16", "docs": "Half-precision floating-point number type.\n\n :Character code: ``'e'``\n :Canonical name: `numpy.half`\n :Alias on this platform (Windows AMD64): `numpy.float16`: 16-bit-precision floating-point number type: sign bit, 5 bits exponent, 10 bits mantissa.", "desc": "Half-precision floating-point number type.", "type": "API"}, {"name": "tf.experimental.numpy.float32", "docs": "Single-precision floating-point number type, compatible with C ``float``.\n\n :Character code: ``'f'``\n :Canonical name: `numpy.single`\n :Alias on this platform (Windows AMD64): `numpy.float32`: 32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa.", "desc": "Single-precision floating-point number type, compatible with C ``float``.", "type": "API"}, {"name": "tf.experimental.numpy.float64", "docs": "Double-precision floating-point number type, compatible with Python `float`\n and C ``double``.\n\n :Character code: ``'d'``\n :Canonical name: `numpy.double`\n :Alias: `numpy.float_`\n :Alias on this platform (Windows AMD64): `numpy.float64`: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa.", "desc": "Double-precision floating-point number type, compatible with Python `float`", "type": "API"}, {"name": "tf.experimental.numpy.floor", "docs": "TensorFlow variant of NumPy's `floor`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.floor`](https://numpy.org/doc/1.16/reference/generated/numpy.floor.html).", "desc": "TensorFlow variant of NumPy's `floor`.", "type": "API"}, {"name": "tf.experimental.numpy.floor_divide", "docs": "TensorFlow variant of NumPy's `floor_divide`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.floor_divide`](https://numpy.org/doc/1.16/reference/generated/numpy.floor_divide.html).", "desc": "TensorFlow variant of NumPy's `floor_divide`.", "type": "API"}, {"name": "tf.experimental.numpy.full", "docs": "TensorFlow variant of NumPy's `full`.\n\nUnsupported arguments: `order`, `like`.\n\nSee the NumPy documentation for [`numpy.full`](https://numpy.org/doc/1.16/reference/generated/numpy.full.html).", "desc": "TensorFlow variant of NumPy's `full`.", "type": "API"}, {"name": "tf.experimental.numpy.full_like", "docs": "TensorFlow variant of NumPy's `full_like`.\n\norder, subok and shape arguments mustn't be changed.\n\nSee the NumPy documentation for [`numpy.full_like`](https://numpy.org/doc/1.16/reference/generated/numpy.full_like.html).", "desc": "TensorFlow variant of NumPy's `full_like`.", "type": "API"}, {"name": "tf.experimental.numpy.gcd", "docs": "TensorFlow variant of NumPy's `gcd`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.gcd`](https://numpy.org/doc/1.16/reference/generated/numpy.gcd.html).", "desc": "TensorFlow variant of NumPy's `gcd`.", "type": "API"}, {"name": "tf.experimental.numpy.geomspace", "docs": "TensorFlow variant of NumPy's `geomspace`.\n\nSee the NumPy documentation for [`numpy.geomspace`](https://numpy.org/doc/1.16/reference/generated/numpy.geomspace.html).", "desc": "TensorFlow variant of NumPy's `geomspace`.", "type": "API"}, {"name": "tf.experimental.numpy.greater", "docs": "TensorFlow variant of NumPy's `greater`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.greater`](https://numpy.org/doc/1.16/reference/generated/numpy.greater.html).", "desc": "TensorFlow variant of NumPy's `greater`.", "type": "API"}, {"name": "tf.experimental.numpy.greater_equal", "docs": "TensorFlow variant of NumPy's `greater_equal`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.greater_equal`](https://numpy.org/doc/1.16/reference/generated/numpy.greater_equal.html).", "desc": "TensorFlow variant of NumPy's `greater_equal`.", "type": "API"}, {"name": "tf.experimental.numpy.heaviside", "docs": "TensorFlow variant of NumPy's `heaviside`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.heaviside`](https://numpy.org/doc/1.16/reference/generated/numpy.heaviside.html).", "desc": "TensorFlow variant of NumPy's `heaviside`.", "type": "API"}, {"name": "tf.experimental.numpy.hsplit", "docs": "TensorFlow variant of NumPy's `hsplit`.\n\nSee the NumPy documentation for [`numpy.hsplit`](https://numpy.org/doc/1.16/reference/generated/numpy.hsplit.html).", "desc": "TensorFlow variant of NumPy's `hsplit`.", "type": "API"}, {"name": "tf.experimental.numpy.hstack", "docs": "TensorFlow variant of NumPy's `hstack`.\n\nSee the NumPy documentation for [`numpy.hstack`](https://numpy.org/doc/1.16/reference/generated/numpy.hstack.html).", "desc": "TensorFlow variant of NumPy's `hstack`.", "type": "API"}, {"name": "tf.experimental.numpy.hypot", "docs": "TensorFlow variant of NumPy's `hypot`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.hypot`](https://numpy.org/doc/1.16/reference/generated/numpy.hypot.html).", "desc": "TensorFlow variant of NumPy's `hypot`.", "type": "API"}, {"name": "tf.experimental.numpy.identity", "docs": "TensorFlow variant of NumPy's `identity`.\n\nUnsupported arguments: `like`.\n\nSee the NumPy documentation for [`numpy.identity`](https://numpy.org/doc/1.16/reference/generated/numpy.identity.html).", "desc": "TensorFlow variant of NumPy's `identity`.", "type": "API"}, {"name": "tf.experimental.numpy.iinfo", "docs": "\n iinfo(type)\n\n Machine limits for integer types.\n\n Attributes\n ----------\n bits : int\n The number of bits occupied by the type.\n min : int\n The smallest integer expressible by the type.\n max : int\n The largest integer expressible by the type.\n\n Parameters\n ----------\n int_type : integer type, dtype, or instance\n The kind of integer data type to get information about.\n\n See Also\n --------\n finfo : The equivalent for floating point data types.\n\n Examples\n --------\n With types:\n\n >>> ii16 = np.iinfo(np.int16)\n >>> ii16.min\n -32768\n >>> ii16.max\n 32767\n >>> ii32 = np.iinfo(np.int32)\n >>> ii32.min\n -2147483648\n >>> ii32.max\n 2147483647\n\n With instances:\n\n >>> ii32 = np.iinfo(np.int32(10))\n >>> ii32.min\n -2147483648\n >>> ii32.max\n 2147483647\n\n ", "desc": "", "type": "API"}, {"name": "tf.experimental.numpy.imag", "docs": "TensorFlow variant of NumPy's `imag`.\n\nSee the NumPy documentation for [`numpy.imag`](https://numpy.org/doc/1.16/reference/generated/numpy.imag.html).", "desc": "TensorFlow variant of NumPy's `imag`.", "type": "API"}, {"name": "tf.experimental.numpy.inexact", "docs": "Abstract base class of all numeric scalar types with a (potentially)\n inexact representation of the values in its range, such as\n floating-point numbers.", "desc": "Abstract base class of all numeric scalar types with a (potentially)", "type": "API"}, {"name": "tf.experimental.numpy.inner", "docs": "TensorFlow variant of NumPy's `inner`.\n\nSee the NumPy documentation for [`numpy.inner`](https://numpy.org/doc/1.16/reference/generated/numpy.inner.html).", "desc": "TensorFlow variant of NumPy's `inner`.", "type": "API"}, {"name": "tf.experimental.numpy.int_", "docs": "Signed integer type, compatible with Python `int` and C ``long``.\n\n :Character code: ``'l'``\n :Canonical name: `numpy.int_`\n :Alias on this platform (Windows AMD64): `numpy.int32`: 32-bit signed integer (``-2_147_483_648`` to ``2_147_483_647``).", "desc": "Signed integer type, compatible with Python `int` and C ``long``.", "type": "API"}, {"name": "tf.experimental.numpy.int16", "docs": "Signed integer type, compatible with C ``short``.\n\n :Character code: ``'h'``\n :Canonical name: `numpy.short`\n :Alias on this platform (Windows AMD64): `numpy.int16`: 16-bit signed integer (``-32_768`` to ``32_767``).", "desc": "Signed integer type, compatible with C ``short``.", "type": "API"}, {"name": "tf.experimental.numpy.int32", "docs": "Signed integer type, compatible with Python `int` and C ``long``.\n\n :Character code: ``'l'``\n :Canonical name: `numpy.int_`\n :Alias on this platform (Windows AMD64): `numpy.int32`: 32-bit signed integer (``-2_147_483_648`` to ``2_147_483_647``).", "desc": "Signed integer type, compatible with Python `int` and C ``long``.", "type": "API"}, {"name": "tf.experimental.numpy.int64", "docs": "Signed integer type, compatible with C ``long long``.\n\n :Character code: ``'q'``\n :Canonical name: `numpy.longlong`\n :Alias on this platform (Windows AMD64): `numpy.int64`: 64-bit signed integer (``-9_223_372_036_854_775_808`` to ``9_223_372_036_854_775_807``).\n :Alias on this platform (Windows AMD64): `numpy.intp`: Signed integer large enough to fit pointer, compatible with C ``intptr_t``.", "desc": "Signed integer type, compatible with C ``long long``.", "type": "API"}, {"name": "tf.experimental.numpy.int8", "docs": "Signed integer type, compatible with C ``char``.\n\n :Character code: ``'b'``\n :Canonical name: `numpy.byte`\n :Alias on this platform (Windows AMD64): `numpy.int8`: 8-bit signed integer (``-128`` to ``127``).", "desc": "Signed integer type, compatible with C ``char``.", "type": "API"}, {"name": "tf.experimental.numpy.isclose", "docs": "TensorFlow variant of NumPy's `isclose`.\n\nSee the NumPy documentation for [`numpy.isclose`](https://numpy.org/doc/1.16/reference/generated/numpy.isclose.html).", "desc": "TensorFlow variant of NumPy's `isclose`.", "type": "API"}, {"name": "tf.experimental.numpy.iscomplex", "docs": "TensorFlow variant of NumPy's `iscomplex`.\n\nSee the NumPy documentation for [`numpy.iscomplex`](https://numpy.org/doc/1.16/reference/generated/numpy.iscomplex.html).", "desc": "TensorFlow variant of NumPy's `iscomplex`.", "type": "API"}, {"name": "tf.experimental.numpy.iscomplexobj", "docs": "TensorFlow variant of NumPy's `iscomplexobj`.\n\nSee the NumPy documentation for [`numpy.iscomplexobj`](https://numpy.org/doc/1.16/reference/generated/numpy.iscomplexobj.html).", "desc": "TensorFlow variant of NumPy's `iscomplexobj`.", "type": "API"}, {"name": "tf.experimental.numpy.isfinite", "docs": "TensorFlow variant of NumPy's `isfinite`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.isfinite`](https://numpy.org/doc/1.16/reference/generated/numpy.isfinite.html).", "desc": "TensorFlow variant of NumPy's `isfinite`.", "type": "API"}, {"name": "tf.experimental.numpy.isinf", "docs": "TensorFlow variant of NumPy's `isinf`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.isinf`](https://numpy.org/doc/1.16/reference/generated/numpy.isinf.html).", "desc": "TensorFlow variant of NumPy's `isinf`.", "type": "API"}, {"name": "tf.experimental.numpy.isnan", "docs": "TensorFlow variant of NumPy's `isnan`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.isnan`](https://numpy.org/doc/1.16/reference/generated/numpy.isnan.html).", "desc": "TensorFlow variant of NumPy's `isnan`.", "type": "API"}, {"name": "tf.experimental.numpy.isneginf", "docs": "TensorFlow variant of NumPy's `isneginf`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.isneginf`](https://numpy.org/doc/1.16/reference/generated/numpy.isneginf.html).", "desc": "TensorFlow variant of NumPy's `isneginf`.", "type": "API"}, {"name": "tf.experimental.numpy.isposinf", "docs": "TensorFlow variant of NumPy's `isposinf`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.isposinf`](https://numpy.org/doc/1.16/reference/generated/numpy.isposinf.html).", "desc": "TensorFlow variant of NumPy's `isposinf`.", "type": "API"}, {"name": "tf.experimental.numpy.isreal", "docs": "TensorFlow variant of NumPy's `isreal`.\n\nSee the NumPy documentation for [`numpy.isreal`](https://numpy.org/doc/1.16/reference/generated/numpy.isreal.html).", "desc": "TensorFlow variant of NumPy's `isreal`.", "type": "API"}, {"name": "tf.experimental.numpy.isrealobj", "docs": "TensorFlow variant of NumPy's `isrealobj`.\n\nSee the NumPy documentation for [`numpy.isrealobj`](https://numpy.org/doc/1.16/reference/generated/numpy.isrealobj.html).", "desc": "TensorFlow variant of NumPy's `isrealobj`.", "type": "API"}, {"name": "tf.experimental.numpy.isscalar", "docs": "TensorFlow variant of NumPy's `isscalar`.\n\nUnsupported arguments: `element`.\n\nSee the NumPy documentation for [`numpy.isscalar`](https://numpy.org/doc/1.16/reference/generated/numpy.isscalar.html).", "desc": "TensorFlow variant of NumPy's `isscalar`.", "type": "API"}, {"name": "tf.experimental.numpy.issubdtype", "docs": "\n Returns True if first argument is a typecode lower/equal in type hierarchy.\n\n This is like the builtin :func:`issubclass`, but for `dtype`\\ s.\n\n Parameters\n ----------\n arg1, arg2 : dtype_like\n `dtype` or object coercible to one\n\n Returns\n -------\n out : bool\n\n See Also\n --------\n :ref:`arrays.scalars` : Overview of the numpy type hierarchy.\n issubsctype, issubclass_\n\n Examples\n --------\n `issubdtype` can be used to check the type of arrays:\n\n >>> ints = np.array([1, 2, 3], dtype=np.int32)\n >>> np.issubdtype(ints.dtype, np.integer)\n True\n >>> np.issubdtype(ints.dtype, np.floating)\n False\n\n >>> floats = np.array([1, 2, 3], dtype=np.float32)\n >>> np.issubdtype(floats.dtype, np.integer)\n False\n >>> np.issubdtype(floats.dtype, np.floating)\n True\n\n Similar types of different sizes are not subdtypes of each other:\n\n >>> np.issubdtype(np.float64, np.float32)\n False\n >>> np.issubdtype(np.float32, np.float64)\n False\n\n but both are subtypes of `floating`:\n\n >>> np.issubdtype(np.float64, np.floating)\n True\n >>> np.issubdtype(np.float32, np.floating)\n True\n\n For convenience, dtype-like objects are allowed too:\n\n >>> np.issubdtype('S1', np.string_)\n True\n >>> np.issubdtype('i4', np.signedinteger)\n True\n\n ", "desc": "", "type": "API"}, {"name": "tf.experimental.numpy.ix_", "docs": "TensorFlow variant of NumPy's `ix_`.\n\nSee the NumPy documentation for [`numpy.ix_`](https://numpy.org/doc/1.16/reference/generated/numpy.ix_.html).", "desc": "TensorFlow variant of NumPy's `ix_`.", "type": "API"}, {"name": "tf.experimental.numpy.kron", "docs": "TensorFlow variant of NumPy's `kron`.\n\nSee the NumPy documentation for [`numpy.kron`](https://numpy.org/doc/1.16/reference/generated/numpy.kron.html).", "desc": "TensorFlow variant of NumPy's `kron`.", "type": "API"}, {"name": "tf.experimental.numpy.lcm", "docs": "TensorFlow variant of NumPy's `lcm`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.lcm`](https://numpy.org/doc/1.16/reference/generated/numpy.lcm.html).", "desc": "TensorFlow variant of NumPy's `lcm`.", "type": "API"}, {"name": "tf.experimental.numpy.less", "docs": "TensorFlow variant of NumPy's `less`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.less`](https://numpy.org/doc/1.16/reference/generated/numpy.less.html).", "desc": "TensorFlow variant of NumPy's `less`.", "type": "API"}, {"name": "tf.experimental.numpy.less_equal", "docs": "TensorFlow variant of NumPy's `less_equal`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.less_equal`](https://numpy.org/doc/1.16/reference/generated/numpy.less_equal.html).", "desc": "TensorFlow variant of NumPy's `less_equal`.", "type": "API"}, {"name": "tf.experimental.numpy.linspace", "docs": "TensorFlow variant of NumPy's `linspace`.\n\nSee the NumPy documentation for [`numpy.linspace`](https://numpy.org/doc/1.16/reference/generated/numpy.linspace.html).", "desc": "TensorFlow variant of NumPy's `linspace`.", "type": "API"}, {"name": "tf.experimental.numpy.log", "docs": "TensorFlow variant of NumPy's `log`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.log`](https://numpy.org/doc/1.16/reference/generated/numpy.log.html).", "desc": "TensorFlow variant of NumPy's `log`.", "type": "API"}, {"name": "tf.experimental.numpy.log10", "docs": "TensorFlow variant of NumPy's `log10`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.log10`](https://numpy.org/doc/1.16/reference/generated/numpy.log10.html).", "desc": "TensorFlow variant of NumPy's `log10`.", "type": "API"}, {"name": "tf.experimental.numpy.log1p", "docs": "TensorFlow variant of NumPy's `log1p`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.log1p`](https://numpy.org/doc/1.16/reference/generated/numpy.log1p.html).", "desc": "TensorFlow variant of NumPy's `log1p`.", "type": "API"}, {"name": "tf.experimental.numpy.log2", "docs": "TensorFlow variant of NumPy's `log2`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.log2`](https://numpy.org/doc/1.16/reference/generated/numpy.log2.html).", "desc": "TensorFlow variant of NumPy's `log2`.", "type": "API"}, {"name": "tf.experimental.numpy.logaddexp", "docs": "TensorFlow variant of NumPy's `logaddexp`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logaddexp`](https://numpy.org/doc/1.16/reference/generated/numpy.logaddexp.html).", "desc": "TensorFlow variant of NumPy's `logaddexp`.", "type": "API"}, {"name": "tf.experimental.numpy.logaddexp2", "docs": "TensorFlow variant of NumPy's `logaddexp2`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logaddexp2`](https://numpy.org/doc/1.16/reference/generated/numpy.logaddexp2.html).", "desc": "TensorFlow variant of NumPy's `logaddexp2`.", "type": "API"}, {"name": "tf.experimental.numpy.logical_and", "docs": "TensorFlow variant of NumPy's `logical_and`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logical_and`](https://numpy.org/doc/1.16/reference/generated/numpy.logical_and.html).", "desc": "TensorFlow variant of NumPy's `logical_and`.", "type": "API"}, {"name": "tf.experimental.numpy.logical_not", "docs": "TensorFlow variant of NumPy's `logical_not`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logical_not`](https://numpy.org/doc/1.16/reference/generated/numpy.logical_not.html).", "desc": "TensorFlow variant of NumPy's `logical_not`.", "type": "API"}, {"name": "tf.experimental.numpy.logical_or", "docs": "TensorFlow variant of NumPy's `logical_or`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logical_or`](https://numpy.org/doc/1.16/reference/generated/numpy.logical_or.html).", "desc": "TensorFlow variant of NumPy's `logical_or`.", "type": "API"}, {"name": "tf.experimental.numpy.logical_xor", "docs": "TensorFlow variant of NumPy's `logical_xor`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.logical_xor`](https://numpy.org/doc/1.16/reference/generated/numpy.logical_xor.html).", "desc": "TensorFlow variant of NumPy's `logical_xor`.", "type": "API"}, {"name": "tf.experimental.numpy.logspace", "docs": "TensorFlow variant of NumPy's `logspace`.\n\nSee the NumPy documentation for [`numpy.logspace`](https://numpy.org/doc/1.16/reference/generated/numpy.logspace.html).", "desc": "TensorFlow variant of NumPy's `logspace`.", "type": "API"}, {"name": "tf.experimental.numpy.matmul", "docs": "TensorFlow variant of NumPy's `matmul`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.matmul`](https://numpy.org/doc/1.16/reference/generated/numpy.matmul.html).", "desc": "TensorFlow variant of NumPy's `matmul`.", "type": "API"}, {"name": "tf.experimental.numpy.max", "docs": "TensorFlow variant of NumPy's `max`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.max`](https://numpy.org/doc/1.16/reference/generated/numpy.amax.html).", "desc": "TensorFlow variant of NumPy's `max`.", "type": "API"}, {"name": "tf.experimental.numpy.maximum", "docs": "TensorFlow variant of NumPy's `maximum`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.maximum`](https://numpy.org/doc/1.16/reference/generated/numpy.maximum.html).", "desc": "TensorFlow variant of NumPy's `maximum`.", "type": "API"}, {"name": "tf.experimental.numpy.mean", "docs": "TensorFlow variant of NumPy's `mean`.\n\nUnsupported arguments: `out`, `where`.\n\nSee the NumPy documentation for [`numpy.mean`](https://numpy.org/doc/1.16/reference/generated/numpy.mean.html).", "desc": "TensorFlow variant of NumPy's `mean`.", "type": "API"}, {"name": "tf.experimental.numpy.meshgrid", "docs": "TensorFlow variant of NumPy's `meshgrid`.\n\nUnsupported arguments: `copy`, `sparse`, `indexing`.\n\nThis currently requires copy=True and sparse=False.\n\nSee the NumPy documentation for [`numpy.meshgrid`](https://numpy.org/doc/1.16/reference/generated/numpy.meshgrid.html).", "desc": "TensorFlow variant of NumPy's `meshgrid`.", "type": "API"}, {"name": "tf.experimental.numpy.min", "docs": "TensorFlow variant of NumPy's `min`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.min`](https://numpy.org/doc/1.16/reference/generated/numpy.amin.html).", "desc": "TensorFlow variant of NumPy's `min`.", "type": "API"}, {"name": "tf.experimental.numpy.minimum", "docs": "TensorFlow variant of NumPy's `minimum`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.minimum`](https://numpy.org/doc/1.16/reference/generated/numpy.minimum.html).", "desc": "TensorFlow variant of NumPy's `minimum`.", "type": "API"}, {"name": "tf.experimental.numpy.mod", "docs": "TensorFlow variant of NumPy's `mod`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.mod`](https://numpy.org/doc/1.16/reference/generated/numpy.mod.html).", "desc": "TensorFlow variant of NumPy's `mod`.", "type": "API"}, {"name": "tf.experimental.numpy.moveaxis", "docs": "TensorFlow variant of NumPy's `moveaxis`.\n\nRaises ValueError if source, destination not in (-ndim(a), ndim(a)).\n\nSee the NumPy documentation for [`numpy.moveaxis`](https://numpy.org/doc/1.16/reference/generated/numpy.moveaxis.html).", "desc": "TensorFlow variant of NumPy's `moveaxis`.", "type": "API"}, {"name": "tf.experimental.numpy.multiply", "docs": "TensorFlow variant of NumPy's `multiply`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.multiply`](https://numpy.org/doc/1.16/reference/generated/numpy.multiply.html).", "desc": "TensorFlow variant of NumPy's `multiply`.", "type": "API"}, {"name": "tf.experimental.numpy.nanmean", "docs": "TensorFlow variant of NumPy's `nanmean`.\n\nUnsupported arguments: `out`, `where`.\n\nSee the NumPy documentation for [`numpy.nanmean`](https://numpy.org/doc/1.16/reference/generated/numpy.nanmean.html).", "desc": "TensorFlow variant of NumPy's `nanmean`.", "type": "API"}, {"name": "tf.experimental.numpy.nanprod", "docs": "TensorFlow variant of NumPy's `nanprod`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.nanprod`](https://numpy.org/doc/1.16/reference/generated/numpy.nanprod.html).", "desc": "TensorFlow variant of NumPy's `nanprod`.", "type": "API"}, {"name": "tf.experimental.numpy.nansum", "docs": "TensorFlow variant of NumPy's `nansum`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.nansum`](https://numpy.org/doc/1.16/reference/generated/numpy.nansum.html).", "desc": "TensorFlow variant of NumPy's `nansum`.", "type": "API"}, {"name": "tf.experimental.numpy.ndarray", "docs": "A `tf.Tensor` represents a multidimensional array of elements.\n\n All elements are of a single known data type.\n\n When writing a TensorFlow program, the main object that is\n manipulated and passed around is the `tf.Tensor`.\n\n A `tf.Tensor` has the following properties:\n\n * a single data type (float32, int32, or string, for example)\n * a shape\n\n TensorFlow supports eager execution and graph execution. In eager\n execution, operations are evaluated immediately. In graph\n execution, a computational graph is constructed for later\n evaluation.\n\n TensorFlow defaults to eager execution. In the example below, the\n matrix multiplication results are calculated immediately.\n\n >>> # Compute some values using a Tensor\n >>> c = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n >>> d = tf.constant([[1.0, 1.0], [0.0, 1.0]])\n >>> e = tf.matmul(c, d)\n >>> print(e)\n tf.Tensor(\n [[1. 3.]\n [3. 7.]], shape=(2, 2), dtype=float32)\n\n Note that during eager execution, you may discover your `Tensors` are actually\n of type `EagerTensor`. This is an internal detail, but it does give you\n access to a useful function, `numpy`:\n\n >>> type(e)\n \n >>> print(e.numpy())\n [[1. 3.]\n [3. 7.]]\n\n In TensorFlow, `tf.function`s are a common way to define graph execution.\n\n A Tensor's shape (that is, the rank of the Tensor and the size of\n each dimension) may not always be fully known. In `tf.function`\n definitions, the shape may only be partially known.\n\n Most operations produce tensors of fully-known shapes if the shapes of their\n inputs are also fully known, but in some cases it's only possible to find the\n shape of a tensor at execution time.\n\n A number of specialized tensors are available: see `tf.Variable`,\n `tf.constant`, `tf.placeholder`, `tf.sparse.SparseTensor`, and\n `tf.RaggedTensor`.\n\n Caution: when constructing a tensor from a numpy array or pandas dataframe\n the underlying buffer may be re-used:\n\n ```python\n a = np.array([1, 2, 3])\n b = tf.constant(a)\n a[0] = 4\n print(b) # tf.Tensor([4 2 3], shape=(3,), dtype=int64)\n ```\n\n Note: this is an implementation detail that is subject to change and users\n should not rely on this behaviour.\n\n For more on Tensors, see the [guide](https://tensorflow.org/guide/tensor).\n\n ", "desc": "A `tf.Tensor` represents a multidimensional array of elements.", "type": "API"}, {"name": "tf.experimental.numpy.ndim", "docs": "TensorFlow variant of NumPy's `ndim`.\n\n", "desc": "TensorFlow variant of NumPy's `ndim`.", "type": "API"}, {"name": "tf.experimental.numpy.negative", "docs": "TensorFlow variant of NumPy's `negative`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.negative`](https://numpy.org/doc/1.16/reference/generated/numpy.negative.html).", "desc": "TensorFlow variant of NumPy's `negative`.", "type": "API"}, {"name": "tf.experimental.numpy.nextafter", "docs": "TensorFlow variant of NumPy's `nextafter`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.nextafter`](https://numpy.org/doc/1.16/reference/generated/numpy.nextafter.html).", "desc": "TensorFlow variant of NumPy's `nextafter`.", "type": "API"}, {"name": "tf.experimental.numpy.nonzero", "docs": "TensorFlow variant of NumPy's `nonzero`.\n\nSee the NumPy documentation for [`numpy.nonzero`](https://numpy.org/doc/1.16/reference/generated/numpy.nonzero.html).", "desc": "TensorFlow variant of NumPy's `nonzero`.", "type": "API"}, {"name": "tf.experimental.numpy.not_equal", "docs": "TensorFlow variant of NumPy's `not_equal`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.not_equal`](https://numpy.org/doc/1.16/reference/generated/numpy.not_equal.html).", "desc": "TensorFlow variant of NumPy's `not_equal`.", "type": "API"}, {"name": "tf.experimental.numpy.object_", "docs": "Any Python object.\n\n :Character code: ``'O'``", "desc": "Any Python object.", "type": "API"}, {"name": "tf.experimental.numpy.ones", "docs": "TensorFlow variant of NumPy's `ones`.\n\nUnsupported arguments: `order`, `like`.\n\nSee the NumPy documentation for [`numpy.ones`](https://numpy.org/doc/1.16/reference/generated/numpy.ones.html).", "desc": "TensorFlow variant of NumPy's `ones`.", "type": "API"}, {"name": "tf.experimental.numpy.ones_like", "docs": "TensorFlow variant of NumPy's `ones_like`.\n\nUnsupported arguments: `order`, `subok`, `shape`.\n\nSee the NumPy documentation for [`numpy.ones_like`](https://numpy.org/doc/1.16/reference/generated/numpy.ones_like.html).", "desc": "TensorFlow variant of NumPy's `ones_like`.", "type": "API"}, {"name": "tf.experimental.numpy.outer", "docs": "TensorFlow variant of NumPy's `outer`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.outer`](https://numpy.org/doc/1.16/reference/generated/numpy.outer.html).", "desc": "TensorFlow variant of NumPy's `outer`.", "type": "API"}, {"name": "tf.experimental.numpy.pad", "docs": "TensorFlow variant of NumPy's `pad`.\n\nOnly supports modes 'constant', 'reflect' and 'symmetric' currently.\n\nSee the NumPy documentation for [`numpy.pad`](https://numpy.org/doc/1.16/reference/generated/numpy.pad.html).", "desc": "TensorFlow variant of NumPy's `pad`.", "type": "API"}, {"name": "tf.experimental.numpy.polyval", "docs": "TensorFlow variant of NumPy's `polyval`.\n\nSee the NumPy documentation for [`numpy.polyval`](https://numpy.org/doc/1.16/reference/generated/numpy.polyval.html).", "desc": "TensorFlow variant of NumPy's `polyval`.", "type": "API"}, {"name": "tf.experimental.numpy.positive", "docs": "TensorFlow variant of NumPy's `positive`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.positive`](https://numpy.org/doc/1.16/reference/generated/numpy.positive.html).", "desc": "TensorFlow variant of NumPy's `positive`.", "type": "API"}, {"name": "tf.experimental.numpy.power", "docs": "TensorFlow variant of NumPy's `power`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.power`](https://numpy.org/doc/1.16/reference/generated/numpy.power.html).", "desc": "TensorFlow variant of NumPy's `power`.", "type": "API"}, {"name": "tf.experimental.numpy.prod", "docs": "TensorFlow variant of NumPy's `prod`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.prod`](https://numpy.org/doc/1.16/reference/generated/numpy.prod.html).", "desc": "TensorFlow variant of NumPy's `prod`.", "type": "API"}, {"name": "tf.experimental.numpy.promote_types", "docs": "TensorFlow variant of NumPy's `promote_types`.\n\nSee the NumPy documentation for [`numpy.promote_types`](https://numpy.org/doc/1.16/reference/generated/numpy.promote_types.html).", "desc": "TensorFlow variant of NumPy's `promote_types`.", "type": "API"}, {"name": "tf.experimental.numpy.ptp", "docs": "TensorFlow variant of NumPy's `ptp`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.ptp`](https://numpy.org/doc/1.16/reference/generated/numpy.ptp.html).", "desc": "TensorFlow variant of NumPy's `ptp`.", "type": "API"}, {"name": "tf.experimental.numpy.rad2deg", "docs": "TensorFlow variant of NumPy's `rad2deg`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.rad2deg`](https://numpy.org/doc/1.16/reference/generated/numpy.rad2deg.html).", "desc": "TensorFlow variant of NumPy's `rad2deg`.", "type": "API"}, {"name": "tf.experimental.numpy.random", "docs": "Public API for tf.experimental.numpy.random namespace.\n", "desc": "Public API for tf.experimental.numpy.random namespace.", "type": "API"}, {"name": "tf.experimental.numpy.random.poisson", "docs": "TensorFlow variant of NumPy's `random.poisson`.\n\nSee the NumPy documentation for [`numpy.random.poisson`](https://numpy.org/doc/1.16/reference/generated/numpy.random.poisson.html).", "desc": "TensorFlow variant of NumPy's `random.poisson`.", "type": "API"}, {"name": "tf.experimental.numpy.random.rand", "docs": "TensorFlow variant of NumPy's `random.rand`.\n\nSee the NumPy documentation for [`numpy.random.rand`](https://numpy.org/doc/1.16/reference/generated/numpy.random.rand.html).", "desc": "TensorFlow variant of NumPy's `random.rand`.", "type": "API"}, {"name": "tf.experimental.numpy.random.randint", "docs": "TensorFlow variant of NumPy's `random.randint`.\n\nSee the NumPy documentation for [`numpy.random.randint`](https://numpy.org/doc/1.16/reference/generated/numpy.random.randint.html).", "desc": "TensorFlow variant of NumPy's `random.randint`.", "type": "API"}, {"name": "tf.experimental.numpy.random.randn", "docs": "TensorFlow variant of NumPy's `random.randn`.\n\nReturns samples from a normal distribution.\n\n Uses `tf.random_normal`.\n\n Args:\n *args: The shape of the output array.\n\n Returns:\n An ndarray with shape `args` and dtype `float64`.\n \n\nSee the NumPy documentation for [`numpy.random.randn`](https://numpy.org/doc/1.16/reference/generated/numpy.random.randn.html).", "desc": "TensorFlow variant of NumPy's `random.randn`.", "type": "API"}, {"name": "tf.experimental.numpy.random.random", "docs": "TensorFlow variant of NumPy's `random.random`.\n\nSee the NumPy documentation for [`numpy.random.random`](https://numpy.org/doc/1.16/reference/generated/numpy.random.random.html).", "desc": "TensorFlow variant of NumPy's `random.random`.", "type": "API"}, {"name": "tf.experimental.numpy.random.seed", "docs": "TensorFlow variant of NumPy's `random.seed`.\n\nSets the seed for the random number generator.\n\n Uses `tf.set_random_seed`.\n\n Args:\n s: an integer.\n \n\nSee the NumPy documentation for [`numpy.random.seed`](https://numpy.org/doc/1.16/reference/generated/numpy.random.seed.html).", "desc": "TensorFlow variant of NumPy's `random.seed`.", "type": "API"}, {"name": "tf.experimental.numpy.random.standard_normal", "docs": "TensorFlow variant of NumPy's `random.standard_normal`.\n\nSee the NumPy documentation for [`numpy.random.standard_normal`](https://numpy.org/doc/1.16/reference/generated/numpy.random.standard_normal.html).", "desc": "TensorFlow variant of NumPy's `random.standard_normal`.", "type": "API"}, {"name": "tf.experimental.numpy.random.uniform", "docs": "TensorFlow variant of NumPy's `random.uniform`.\n\nSee the NumPy documentation for [`numpy.random.uniform`](https://numpy.org/doc/1.16/reference/generated/numpy.random.uniform.html).", "desc": "TensorFlow variant of NumPy's `random.uniform`.", "type": "API"}, {"name": "tf.experimental.numpy.ravel", "docs": "TensorFlow variant of NumPy's `ravel`.\n\nUnsupported arguments: `order`.\n\nSee the NumPy documentation for [`numpy.ravel`](https://numpy.org/doc/1.16/reference/generated/numpy.ravel.html).", "desc": "TensorFlow variant of NumPy's `ravel`.", "type": "API"}, {"name": "tf.experimental.numpy.real", "docs": "TensorFlow variant of NumPy's `real`.\n\nSee the NumPy documentation for [`numpy.real`](https://numpy.org/doc/1.16/reference/generated/numpy.real.html).", "desc": "TensorFlow variant of NumPy's `real`.", "type": "API"}, {"name": "tf.experimental.numpy.reciprocal", "docs": "TensorFlow variant of NumPy's `reciprocal`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.reciprocal`](https://numpy.org/doc/1.16/reference/generated/numpy.reciprocal.html).", "desc": "TensorFlow variant of NumPy's `reciprocal`.", "type": "API"}, {"name": "tf.experimental.numpy.remainder", "docs": "TensorFlow variant of NumPy's `remainder`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.remainder`](https://numpy.org/doc/1.16/reference/generated/numpy.remainder.html).", "desc": "TensorFlow variant of NumPy's `remainder`.", "type": "API"}, {"name": "tf.experimental.numpy.repeat", "docs": "TensorFlow variant of NumPy's `repeat`.\n\nSee the NumPy documentation for [`numpy.repeat`](https://numpy.org/doc/1.16/reference/generated/numpy.repeat.html).", "desc": "TensorFlow variant of NumPy's `repeat`.", "type": "API"}, {"name": "tf.experimental.numpy.reshape", "docs": "TensorFlow variant of NumPy's `reshape`.\n\norder argument can only b 'C' or 'F'.\n\nSee the NumPy documentation for [`numpy.reshape`](https://numpy.org/doc/1.16/reference/generated/numpy.reshape.html).", "desc": "TensorFlow variant of NumPy's `reshape`.", "type": "API"}, {"name": "tf.experimental.numpy.result_type", "docs": "TensorFlow variant of NumPy's `result_type`.\n\nSee the NumPy documentation for [`numpy.result_type`](https://numpy.org/doc/1.16/reference/generated/numpy.result_type.html).", "desc": "TensorFlow variant of NumPy's `result_type`.", "type": "API"}, {"name": "tf.experimental.numpy.roll", "docs": "TensorFlow variant of NumPy's `roll`.\n\nSee the NumPy documentation for [`numpy.roll`](https://numpy.org/doc/1.16/reference/generated/numpy.roll.html).", "desc": "TensorFlow variant of NumPy's `roll`.", "type": "API"}, {"name": "tf.experimental.numpy.rot90", "docs": "TensorFlow variant of NumPy's `rot90`.\n\nSee the NumPy documentation for [`numpy.rot90`](https://numpy.org/doc/1.16/reference/generated/numpy.rot90.html).", "desc": "TensorFlow variant of NumPy's `rot90`.", "type": "API"}, {"name": "tf.experimental.numpy.round", "docs": "TensorFlow variant of NumPy's `round`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.round`](https://numpy.org/doc/1.16/reference/generated/numpy.around.html).", "desc": "TensorFlow variant of NumPy's `round`.", "type": "API"}, {"name": "tf.experimental.numpy.select", "docs": "TensorFlow variant of NumPy's `select`.\n\nSee the NumPy documentation for [`numpy.select`](https://numpy.org/doc/1.16/reference/generated/numpy.select.html).", "desc": "TensorFlow variant of NumPy's `select`.", "type": "API"}, {"name": "tf.experimental.numpy.shape", "docs": "TensorFlow variant of NumPy's `shape`.\n\nSee the NumPy documentation for [`numpy.shape`](https://numpy.org/doc/1.18/reference/generated/numpy.shape.html).", "desc": "TensorFlow variant of NumPy's `shape`.", "type": "API"}, {"name": "tf.experimental.numpy.sign", "docs": "TensorFlow variant of NumPy's `sign`.\n\nSee the NumPy documentation for [`numpy.sign`](https://numpy.org/doc/1.16/reference/generated/numpy.sign.html).", "desc": "TensorFlow variant of NumPy's `sign`.", "type": "API"}, {"name": "tf.experimental.numpy.signbit", "docs": "TensorFlow variant of NumPy's `signbit`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.signbit`](https://numpy.org/doc/1.16/reference/generated/numpy.signbit.html).", "desc": "TensorFlow variant of NumPy's `signbit`.", "type": "API"}, {"name": "tf.experimental.numpy.sin", "docs": "TensorFlow variant of NumPy's `sin`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.sin`](https://numpy.org/doc/1.16/reference/generated/numpy.sin.html).", "desc": "TensorFlow variant of NumPy's `sin`.", "type": "API"}, {"name": "tf.experimental.numpy.sinc", "docs": "TensorFlow variant of NumPy's `sinc`.\n\nSee the NumPy documentation for [`numpy.sinc`](https://numpy.org/doc/1.16/reference/generated/numpy.sinc.html).", "desc": "TensorFlow variant of NumPy's `sinc`.", "type": "API"}, {"name": "tf.experimental.numpy.sinh", "docs": "TensorFlow variant of NumPy's `sinh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.sinh`](https://numpy.org/doc/1.16/reference/generated/numpy.sinh.html).", "desc": "TensorFlow variant of NumPy's `sinh`.", "type": "API"}, {"name": "tf.experimental.numpy.size", "docs": "TensorFlow variant of NumPy's `size`.\n\nUnsupported arguments: `a`.\n\nSee the NumPy documentation for [`numpy.size`](https://numpy.org/doc/1.16/reference/generated/numpy.size.html).", "desc": "TensorFlow variant of NumPy's `size`.", "type": "API"}, {"name": "tf.experimental.numpy.sort", "docs": "TensorFlow variant of NumPy's `sort`.\n\nSee the NumPy documentation for [`numpy.sort`](https://numpy.org/doc/1.16/reference/generated/numpy.sort.html).", "desc": "TensorFlow variant of NumPy's `sort`.", "type": "API"}, {"name": "tf.experimental.numpy.split", "docs": "TensorFlow variant of NumPy's `split`.\n\nSee the NumPy documentation for [`numpy.split`](https://numpy.org/doc/1.16/reference/generated/numpy.split.html).", "desc": "TensorFlow variant of NumPy's `split`.", "type": "API"}, {"name": "tf.experimental.numpy.sqrt", "docs": "TensorFlow variant of NumPy's `sqrt`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.sqrt`](https://numpy.org/doc/1.16/reference/generated/numpy.sqrt.html).", "desc": "TensorFlow variant of NumPy's `sqrt`.", "type": "API"}, {"name": "tf.experimental.numpy.square", "docs": "TensorFlow variant of NumPy's `square`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.square`](https://numpy.org/doc/1.16/reference/generated/numpy.square.html).", "desc": "TensorFlow variant of NumPy's `square`.", "type": "API"}, {"name": "tf.experimental.numpy.squeeze", "docs": "TensorFlow variant of NumPy's `squeeze`.\n\nSee the NumPy documentation for [`numpy.squeeze`](https://numpy.org/doc/1.16/reference/generated/numpy.squeeze.html).", "desc": "TensorFlow variant of NumPy's `squeeze`.", "type": "API"}, {"name": "tf.experimental.numpy.stack", "docs": "TensorFlow variant of NumPy's `stack`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.stack`](https://numpy.org/doc/1.16/reference/generated/numpy.stack.html).", "desc": "TensorFlow variant of NumPy's `stack`.", "type": "API"}, {"name": "tf.experimental.numpy.std", "docs": "TensorFlow variant of NumPy's `std`.\n\nUnsupported arguments: `dtype`, `out`, `ddof`, `where`.\n\nSee the NumPy documentation for [`numpy.std`](https://numpy.org/doc/1.16/reference/generated/numpy.std.html).", "desc": "TensorFlow variant of NumPy's `std`.", "type": "API"}, {"name": "tf.experimental.numpy.string_", "docs": "A byte string.\n\n When used in arrays, this type strips trailing null bytes.\n\n :Character code: ``'S'``\n :Alias: `numpy.string_`", "desc": "A byte string.", "type": "API"}, {"name": "tf.experimental.numpy.subtract", "docs": "TensorFlow variant of NumPy's `subtract`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.subtract`](https://numpy.org/doc/1.16/reference/generated/numpy.subtract.html).", "desc": "TensorFlow variant of NumPy's `subtract`.", "type": "API"}, {"name": "tf.experimental.numpy.sum", "docs": "TensorFlow variant of NumPy's `sum`.\n\nUnsupported arguments: `out`, `initial`, `where`.\n\nSee the NumPy documentation for [`numpy.sum`](https://numpy.org/doc/1.16/reference/generated/numpy.sum.html).", "desc": "TensorFlow variant of NumPy's `sum`.", "type": "API"}, {"name": "tf.experimental.numpy.swapaxes", "docs": "TensorFlow variant of NumPy's `swapaxes`.\n\nSee the NumPy documentation for [`numpy.swapaxes`](https://numpy.org/doc/1.16/reference/generated/numpy.swapaxes.html).", "desc": "TensorFlow variant of NumPy's `swapaxes`.", "type": "API"}, {"name": "tf.experimental.numpy.take", "docs": "TensorFlow variant of NumPy's `take`.\n\nout argument is not supported, and default mode is clip.\n\nSee the NumPy documentation for [`numpy.take`](https://numpy.org/doc/1.16/reference/generated/numpy.take.html).", "desc": "TensorFlow variant of NumPy's `take`.", "type": "API"}, {"name": "tf.experimental.numpy.take_along_axis", "docs": "TensorFlow variant of NumPy's `take_along_axis`.\n\nSee the NumPy documentation for [`numpy.take_along_axis`](https://numpy.org/doc/1.16/reference/generated/numpy.take_along_axis.html).", "desc": "TensorFlow variant of NumPy's `take_along_axis`.", "type": "API"}, {"name": "tf.experimental.numpy.tan", "docs": "TensorFlow variant of NumPy's `tan`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.tan`](https://numpy.org/doc/1.16/reference/generated/numpy.tan.html).", "desc": "TensorFlow variant of NumPy's `tan`.", "type": "API"}, {"name": "tf.experimental.numpy.tanh", "docs": "TensorFlow variant of NumPy's `tanh`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.tanh`](https://numpy.org/doc/1.16/reference/generated/numpy.tanh.html).", "desc": "TensorFlow variant of NumPy's `tanh`.", "type": "API"}, {"name": "tf.experimental.numpy.tensordot", "docs": "TensorFlow variant of NumPy's `tensordot`.\n\nSee the NumPy documentation for [`numpy.tensordot`](https://numpy.org/doc/1.16/reference/generated/numpy.tensordot.html).", "desc": "TensorFlow variant of NumPy's `tensordot`.", "type": "API"}, {"name": "tf.experimental.numpy.tile", "docs": "TensorFlow variant of NumPy's `tile`.\n\nSee the NumPy documentation for [`numpy.tile`](https://numpy.org/doc/1.16/reference/generated/numpy.tile.html).", "desc": "TensorFlow variant of NumPy's `tile`.", "type": "API"}, {"name": "tf.experimental.numpy.trace", "docs": "TensorFlow variant of NumPy's `trace`.\n\nUnsupported arguments: `out`.\n\nSee the NumPy documentation for [`numpy.trace`](https://numpy.org/doc/1.16/reference/generated/numpy.trace.html).", "desc": "TensorFlow variant of NumPy's `trace`.", "type": "API"}, {"name": "tf.experimental.numpy.transpose", "docs": "TensorFlow variant of NumPy's `transpose`.\n\nSee the NumPy documentation for [`numpy.transpose`](https://numpy.org/doc/1.16/reference/generated/numpy.transpose.html).", "desc": "TensorFlow variant of NumPy's `transpose`.", "type": "API"}, {"name": "tf.experimental.numpy.tri", "docs": "TensorFlow variant of NumPy's `tri`.\n\nUnsupported arguments: `like`.\n\nSee the NumPy documentation for [`numpy.tri`](https://numpy.org/doc/1.16/reference/generated/numpy.tri.html).", "desc": "TensorFlow variant of NumPy's `tri`.", "type": "API"}, {"name": "tf.experimental.numpy.tril", "docs": "TensorFlow variant of NumPy's `tril`.\n\nSee the NumPy documentation for [`numpy.tril`](https://numpy.org/doc/1.16/reference/generated/numpy.tril.html).", "desc": "TensorFlow variant of NumPy's `tril`.", "type": "API"}, {"name": "tf.experimental.numpy.triu", "docs": "TensorFlow variant of NumPy's `triu`.\n\nSee the NumPy documentation for [`numpy.triu`](https://numpy.org/doc/1.16/reference/generated/numpy.triu.html).", "desc": "TensorFlow variant of NumPy's `triu`.", "type": "API"}, {"name": "tf.experimental.numpy.true_divide", "docs": "TensorFlow variant of NumPy's `true_divide`.\n\nUnsupported arguments: `out`, `where`, `casting`, `order`, `dtype`, `subok`, `signature`, `extobj`.\n\nSee the NumPy documentation for [`numpy.true_divide`](https://numpy.org/doc/1.16/reference/generated/numpy.true_divide.html).", "desc": "TensorFlow variant of NumPy's `true_divide`.", "type": "API"}, {"name": "tf.experimental.numpy.uint16", "docs": "Unsigned integer type, compatible with C ``unsigned short``.\n\n :Character code: ``'H'``\n :Canonical name: `numpy.ushort`\n :Alias on this platform (Windows AMD64): `numpy.uint16`: 16-bit unsigned integer (``0`` to ``65_535``).", "desc": "Unsigned integer type, compatible with C ``unsigned short``.", "type": "API"}, {"name": "tf.experimental.numpy.uint32", "docs": "Unsigned integer type, compatible with C ``unsigned long``.\n\n :Character code: ``'L'``\n :Canonical name: `numpy.uint`\n :Alias on this platform (Windows AMD64): `numpy.uint32`: 32-bit unsigned integer (``0`` to ``4_294_967_295``).", "desc": "Unsigned integer type, compatible with C ``unsigned long``.", "type": "API"}, {"name": "tf.experimental.numpy.uint64", "docs": "Signed integer type, compatible with C ``unsigned long long``.\n\n :Character code: ``'Q'``\n :Canonical name: `numpy.ulonglong`\n :Alias on this platform (Windows AMD64): `numpy.uint64`: 64-bit unsigned integer (``0`` to ``18_446_744_073_709_551_615``).\n :Alias on this platform (Windows AMD64): `numpy.uintp`: Unsigned integer large enough to fit pointer, compatible with C ``uintptr_t``.", "desc": "Signed integer type, compatible with C ``unsigned long long``.", "type": "API"}, {"name": "tf.experimental.numpy.uint8", "docs": "Unsigned integer type, compatible with C ``unsigned char``.\n\n :Character code: ``'B'``\n :Canonical name: `numpy.ubyte`\n :Alias on this platform (Windows AMD64): `numpy.uint8`: 8-bit unsigned integer (``0`` to ``255``).", "desc": "Unsigned integer type, compatible with C ``unsigned char``.", "type": "API"}, {"name": "tf.experimental.numpy.unicode_", "docs": "A unicode string.\n\n When used in arrays, this type strips trailing null codepoints.\n\n Unlike the builtin `str`, this supports the :ref:`python:bufferobjects`, exposing its\n contents as UCS4:\n\n >>> m = memoryview(np.str_(\"abc\"))\n >>> m.format\n '3w'\n >>> m.tobytes()\n b'a\\x00\\x00\\x00b\\x00\\x00\\x00c\\x00\\x00\\x00'\n\n :Character code: ``'U'``\n :Alias: `numpy.unicode_`", "desc": "A unicode string.", "type": "API"}, {"name": "tf.experimental.numpy.vander", "docs": "TensorFlow variant of NumPy's `vander`.\n\nSee the NumPy documentation for [`numpy.vander`](https://numpy.org/doc/1.16/reference/generated/numpy.vander.html).", "desc": "TensorFlow variant of NumPy's `vander`.", "type": "API"}, {"name": "tf.experimental.numpy.var", "docs": "TensorFlow variant of NumPy's `var`.\n\nUnsupported arguments: `where`.\n\nSee the NumPy documentation for [`numpy.var`](https://numpy.org/doc/1.16/reference/generated/numpy.var.html).", "desc": "TensorFlow variant of NumPy's `var`.", "type": "API"}, {"name": "tf.experimental.numpy.vdot", "docs": "TensorFlow variant of NumPy's `vdot`.\n\nSee the NumPy documentation for [`numpy.vdot`](https://numpy.org/doc/1.16/reference/generated/numpy.vdot.html).", "desc": "TensorFlow variant of NumPy's `vdot`.", "type": "API"}, {"name": "tf.experimental.numpy.vsplit", "docs": "TensorFlow variant of NumPy's `vsplit`.\n\nSee the NumPy documentation for [`numpy.vsplit`](https://numpy.org/doc/1.16/reference/generated/numpy.vsplit.html).", "desc": "TensorFlow variant of NumPy's `vsplit`.", "type": "API"}, {"name": "tf.experimental.numpy.vstack", "docs": "TensorFlow variant of NumPy's `vstack`.\n\nSee the NumPy documentation for [`numpy.vstack`](https://numpy.org/doc/1.16/reference/generated/numpy.vstack.html).", "desc": "TensorFlow variant of NumPy's `vstack`.", "type": "API"}, {"name": "tf.experimental.numpy.where", "docs": "TensorFlow variant of NumPy's `where`.\n\nRaises ValueError if exactly one of x or y is not None.\n\nSee the NumPy documentation for [`numpy.where`](https://numpy.org/doc/1.16/reference/generated/numpy.where.html).", "desc": "TensorFlow variant of NumPy's `where`.", "type": "API"}, {"name": "tf.experimental.numpy.zeros", "docs": "TensorFlow variant of NumPy's `zeros`.\n\nSee the NumPy documentation for [`numpy.zeros`](https://numpy.org/doc/1.16/reference/generated/numpy.zeros.html).", "desc": "TensorFlow variant of NumPy's `zeros`.", "type": "API"}, {"name": "tf.experimental.numpy.zeros_like", "docs": "TensorFlow variant of NumPy's `zeros_like`.\n\nUnsupported arguments: `order`, `subok`, `shape`.\n\nSee the NumPy documentation for [`numpy.zeros_like`](https://numpy.org/doc/1.16/reference/generated/numpy.zeros_like.html).", "desc": "TensorFlow variant of NumPy's `zeros_like`.", "type": "API"}, {"name": "tf.experimental.Optional", "docs": "Represents a value that may or may not be present.\n\n A `tf.experimental.Optional` can represent the result of an operation that may\n fail as a value, rather than raising an exception and halting execution. For\n example, `tf.data.Iterator.get_next_as_optional()` returns a\n `tf.experimental.Optional` that either contains the next element of an\n iterator if one exists, or an \"empty\" value that indicates the end of the\n sequence has been reached.\n\n `tf.experimental.Optional` can only be used with values that are convertible\n to `tf.Tensor` or `tf.CompositeTensor`.\n\n One can create a `tf.experimental.Optional` from a value using the\n `from_value()` method:\n\n >>> optional = tf.experimental.Optional.from_value(42)\n >>> print(optional.has_value())\n tf.Tensor(True, shape=(), dtype=bool)\n >>> print(optional.get_value())\n tf.Tensor(42, shape=(), dtype=int32)\n\n or without a value using the `empty()` method:\n\n >>> optional = tf.experimental.Optional.empty(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))\n >>> print(optional.has_value())\n tf.Tensor(False, shape=(), dtype=bool)\n ", "desc": "Represents a value that may or may not be present.", "type": "API"}, {"name": "tf.experimental.register_filesystem_plugin", "docs": "Loads a TensorFlow FileSystem plugin.\n\n Args:\n plugin_location: Path to the plugin. Relative or absolute filesystem plugin\n path to a dynamic library file.\n\n Returns:\n None\n\n Raises:\n OSError: When the file to be loaded is not found.\n RuntimeError: when unable to load the library.\n ", "desc": "Loads a TensorFlow FileSystem plugin.", "type": "API"}, {"name": "tf.experimental.tensorrt", "docs": "Public API for tf.experimental.tensorrt namespace.\n", "desc": "Public API for tf.experimental.tensorrt namespace.", "type": "API"}, {"name": "tf.experimental.tensorrt.ConversionParams", "docs": "Parameters that are used for TF-TRT conversion.\n\n Fields:\n max_workspace_size_bytes: the maximum GPU temporary memory that the TRT\n engine can use at execution time. This corresponds to the\n 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().\n precision_mode: one of the strings in\n TrtPrecisionMode.supported_precision_modes().\n minimum_segment_size: the minimum number of nodes required for a subgraph\n to be replaced by TRTEngineOp.\n maximum_cached_engines: max number of cached TRT engines for dynamic TRT\n ops. Created TRT engines for a dynamic dimension are cached. If the\n number of cached engines is already at max but none of them supports the\n input shapes, the TRTEngineOp will fall back to run the original TF\n subgraph that corresponds to the TRTEngineOp.\n use_calibration: this argument is ignored if precision_mode is not INT8.\n If set to True, a calibration graph will be created to calibrate the\n missing ranges. The calibration graph must be converted to an inference\n graph by running calibration with calibrate(). If set to False,\n quantization nodes will be expected for every tensor in the graph\n (excluding those which will be fused). If a range is missing, an error\n will occur. Please note that accuracy may be negatively affected if\n there is a mismatch between which tensors TRT quantizes and which\n tensors were trained with fake quantization.\n allow_build_at_runtime: whether to allow building TensorRT engines during\n runtime if no prebuilt TensorRT engine can be found that can handle the\n given inputs during runtime, then a new TensorRT engine is built at\n runtime if allow_build_at_runtime=True, and otherwise native TF is used.\n ", "desc": "Parameters that are used for TF-TRT conversion.", "type": "API"}, {"name": "tf.experimental.tensorrt.Converter", "docs": "An offline converter for TF-TRT transformation for TF 2.0 SavedModels.\n\n Windows support is provided experimentally. No guarantee is made regarding\n functionality or engineering support. Use at your own risk.\n\n There are several ways to run the conversion:\n\n 1. FP32/FP16 precision\n\n ```python\n params = tf.experimental.tensorrt.ConversionParams(\n precision_mode='FP16')\n converter = tf.experimental.tensorrt.Converter(\n input_saved_model_dir=\"my_dir\", conversion_params=params)\n converter.convert()\n converter.save(output_saved_model_dir)\n ```\n\n In this case, no TRT engines will be built or saved in the converted\n SavedModel. But if input data is available during conversion, we can still\n build and save the TRT engines to reduce the cost during inference (see\n option 2 below).\n\n 2. FP32/FP16 precision with pre-built engines\n\n ```python\n params = tf.experimental.tensorrt.ConversionParams(\n precision_mode='FP16',\n # Set this to a large enough number so it can cache all the engines.\n maximum_cached_engines=16)\n converter = tf.experimental.tensorrt.Converter(\n input_saved_model_dir=\"my_dir\", conversion_params=params)\n converter.convert()\n\n # Define a generator function that yields input data, and use it to execute\n # the graph to build TRT engines.\n def my_input_fn():\n for _ in range(num_runs):\n inp1, inp2 = ...\n yield inp1, inp2\n\n converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines\n converter.save(output_saved_model_dir) # Generated engines will be saved.\n ```\n\n In this way, one engine will be built/saved for each unique input shapes of\n the TRTEngineOp. This is good for applications that cannot afford building\n engines during inference but have access to input data that is similar to\n the one used in production (for example, that has the same input shapes).\n Also, the generated TRT engines is platform dependent, so we need to run\n `build()` in an environment that is similar to production (e.g. with\n same type of GPU).\n\n 3. INT8 precision and calibration with pre-built engines\n\n ```python\n params = tf.experimental.tensorrt.ConversionParams(\n precision_mode='INT8',\n # Currently only one INT8 engine is supported in this mode.\n maximum_cached_engines=1,\n use_calibration=True)\n converter = tf.experimental.tensorrt.Converter(\n input_saved_model_dir=\"my_dir\", conversion_params=params)\n\n # Define a generator function that yields input data, and run INT8\n # calibration with the data. All input data should have the same shape.\n # At the end of convert(), the calibration stats (e.g. range information)\n # will be saved and can be used to generate more TRT engines with different\n # shapes. Also, one TRT engine will be generated (with the same shape as\n # the calibration data) for save later.\n def my_calibration_input_fn():\n for _ in range(num_runs):\n inp1, inp2 = ...\n yield inp1, inp2\n\n converter.convert(calibration_input_fn=my_calibration_input_fn)\n\n # (Optional) Generate more TRT engines offline (same as the previous\n # option), to avoid the cost of generating them during inference.\n def my_input_fn():\n for _ in range(num_runs):\n inp1, inp2 = ...\n yield inp1, inp2\n converter.build(input_fn=my_input_fn)\n\n # Save the TRT engine and the engines.\n converter.save(output_saved_model_dir)\n ```\n 4. To use dynamic shape, we need to call the build method with an input\n function to generate profiles. This step is similar to the INT8 calibration\n step described above. The converter also needs to be created with\n use_dynamic_shape=True and one of the following profile_strategies for\n creating profiles based on the inputs produced by the input function:\n * `Range`: create one profile that works for inputs with dimension values\n in the range of [min_dims, max_dims] where min_dims and max_dims are\n derived from the provided inputs.\n * `Optimal`: create one profile for each input. The profile only works for\n inputs with the same dimensions as the input it is created for. The GPU\n engine will be run with optimal performance with such inputs.\n * `Range+Optimal`: create the profiles for both `Range` and `Optimal`.\n * `ImplicitBatchModeCompatible`: create the profiles that will produce the\n same GPU engines as the implicit_batch_mode would produce.\n ", "desc": "An offline converter for TF-TRT transformation for TF 2.0 SavedModels.", "type": "API"}, {"name": "tf.extract_volume_patches", "docs": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 5`.\n The size of the sliding window for each dimension of `input`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D of length 5. How far the centers of two consecutive patches are in\n `input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n\n The size-related attributes are specified as follows:\n\n ```python\n ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]\n strides = [1, stride_planes, strides_rows, strides_cols, 1]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.", "type": "API"}, {"name": "tf.eye", "docs": "Construct an identity matrix, or a batch of matrices.\n\n See also `tf.ones`, `tf.zeros`, `tf.fill`, `tf.one_hot`.\n\n ```python\n # Construct one identity matrix.\n tf.eye(2)\n ==> [[1., 0.],\n [0., 1.]]\n\n # Construct a batch of 3 identity matrices, each 2 x 2.\n # batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.\n batch_identity = tf.eye(2, batch_shape=[3])\n\n # Construct one 2 x 3 \"identity\" matrix\n tf.eye(2, num_columns=3)\n ==> [[ 1., 0., 0.],\n [ 0., 1., 0.]]\n ```\n\n Args:\n num_rows: Non-negative `int32` scalar `Tensor` giving the number of rows\n in each batch matrix.\n num_columns: Optional non-negative `int32` scalar `Tensor` giving the number\n of columns in each batch matrix. Defaults to `num_rows`.\n batch_shape: A list or tuple of Python integers or a 1-D `int32` `Tensor`.\n If provided, the returned `Tensor` will have leading batch dimensions of\n this shape.\n dtype: The type of an element in the resulting `Tensor`\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `Tensor` of shape `batch_shape + [num_rows, num_columns]`\n ", "desc": "Construct an identity matrix, or a batch of matrices.", "type": "API"}, {"name": "tf.feature_column", "docs": "Public API for tf.feature_column namespace.\n", "desc": "Public API for tf.feature_column namespace.", "type": "API"}, {"name": "tf.feature_column.bucketized_column", "docs": "Represents discretized dense input bucketed by `boundaries`.\n\n Buckets include the left boundary, and exclude the right boundary. Namely,\n `boundaries=[0., 1., 2.]` generates buckets `(-inf, 0.)`, `[0., 1.)`,\n `[1., 2.)`, and `[2., +inf)`.\n\n For example, if the inputs are\n\n ```python\n boundaries = [0, 10, 100]\n input tensor = [[-5, 10000]\n [150, 10]\n [5, 100]]\n ```\n\n then the output will be\n\n ```python\n output = [[0, 3]\n [3, 2]\n [1, 3]]\n ```\n\n Example:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n bucketized_price = tf.feature_column.bucketized_column(\n price, boundaries=[...])\n columns = [bucketized_price, ...]\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)\n ```\n\n A `bucketized_column` can also be crossed with another categorical column\n using `crossed_column`:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n # bucketized_column converts numerical feature to a categorical one.\n bucketized_price = tf.feature_column.bucketized_column(\n price, boundaries=[...])\n # 'keywords' is a string feature.\n price_x_keywords = tf.feature_column.crossed_column(\n [bucketized_price, 'keywords'], 50K)\n columns = [price_x_keywords, ...]\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)\n linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor)\n ```\n\n Args:\n source_column: A one-dimensional dense column which is generated with\n `numeric_column`.\n boundaries: A sorted list or tuple of floats specifying the boundaries.\n\n Returns:\n A `BucketizedColumn`.\n\n Raises:\n ValueError: If `source_column` is not a numeric column, or if it is not\n one-dimensional.\n ValueError: If `boundaries` is not a sorted list or tuple.\n ", "desc": "Represents discretized dense input bucketed by `boundaries`.", "type": "API"}, {"name": "tf.feature_column.categorical_column_with_hash_bucket", "docs": "Represents sparse feature where ids are set by hashing.\n\n Use this when your sparse features are in string or integer format, and you\n want to distribute your inputs into a finite number of buckets by hashing.\n output_id = Hash(input_feature_string) % bucket_size for string type input.\n For int type input, the value is converted to its string representation first\n and then hashed by the same formula.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example:\n\n ```python\n import tensorflow as tf\n keywords = tf.feature_column.categorical_column_with_hash_bucket(\"keywords\",\n 10000)\n columns = [keywords]\n features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',\n 'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',\n 'LSTM', 'Keras', 'RNN']])}\n linear_prediction, _, _ = tf.compat.v1.feature_column.linear_model(features,\n columns)\n\n # or\n import tensorflow as tf\n keywords = tf.feature_column.categorical_column_with_hash_bucket(\"keywords\",\n 10000)\n keywords_embedded = tf.feature_column.embedding_column(keywords, 16)\n columns = [keywords_embedded]\n features = {'keywords': tf.constant([['Tensorflow', 'Keras', 'RNN', 'LSTM',\n 'CNN'], ['LSTM', 'CNN', 'Tensorflow', 'Keras', 'RNN'], ['CNN', 'Tensorflow',\n 'LSTM', 'Keras', 'RNN']])}\n input_layer = tf.keras.layers.DenseFeatures(columns)\n dense_tensor = input_layer(features)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n hash_bucket_size: An int > 1. The number of buckets.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `HashedCategoricalColumn`.\n\n Raises:\n ValueError: `hash_bucket_size` is not greater than 1.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "Represents sparse feature where ids are set by hashing.", "type": "API"}, {"name": "tf.feature_column.categorical_column_with_identity", "docs": "A `CategoricalColumn` that returns identity values.\n\n Use this when your inputs are integers in the range `[0, num_buckets)`, and\n you want to use the input value itself as the categorical ID. Values outside\n this range will result in `default_value` if specified, otherwise it will\n fail.\n\n Typically, this is used for contiguous ranges of integer indexes, but\n it doesn't have to be. This might be inefficient, however, if many of IDs\n are unused. Consider `categorical_column_with_hash_bucket` in that case.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n In the following examples, each input in the range `[0, 1000000)` is assigned\n the same value. All other inputs are assigned `default_value` 0. Note that a\n literal 0 in inputs will result in the same default ID.\n\n Linear model:\n\n ```python\n import tensorflow as tf\n video_id = tf.feature_column.categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [video_id]\n features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],\n [33,78, 2, 73, 1]])}\n linear_prediction = tf.compat.v1.feature_column.linear_model(features,\n columns)\n ```\n\n Embedding for a DNN model:\n\n ```python\n import tensorflow as tf\n video_id = tf.feature_column.categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [tf.feature_column.embedding_column(video_id, 9)]\n features = {'video_id': tf.sparse.from_dense([[2, 85, 0, 0, 0],\n [33,78, 2, 73, 1]])}\n input_layer = tf.keras.layers.DenseFeatures(columns)\n dense_tensor = input_layer(features)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n num_buckets: Range of inputs and outputs is `[0, num_buckets)`.\n default_value: If set, values outside of range `[0, num_buckets)` will\n be replaced with this value. If not set, values >= num_buckets will\n cause a failure while values < 0 will be dropped.\n\n Returns:\n A `CategoricalColumn` that returns identity values.\n\n Raises:\n ValueError: if `num_buckets` is less than one.\n ValueError: if `default_value` is not in range `[0, num_buckets)`.\n ", "desc": "A `CategoricalColumn` that returns identity values.", "type": "API"}, {"name": "tf.feature_column.categorical_column_with_vocabulary_file", "docs": "A `CategoricalColumn` with a vocabulary file.\n\n Use this when your inputs are in string or integer format, and you have a\n vocabulary file that maps each value to an integer ID. By default,\n out-of-vocabulary values are ignored. Use either (but not both) of\n `num_oov_buckets` and `default_value` to specify how to include\n out-of-vocabulary values.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example with `num_oov_buckets`:\n File `'/us/states.txt'` contains 50 lines, each with a 2-character U.S. state\n abbreviation. All inputs with values in that file are assigned an ID 0-49,\n corresponding to its line number. All other values are hashed and assigned an\n ID 50-54.\n\n ```python\n states = categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,\n num_oov_buckets=5)\n columns = [states, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n Example with `default_value`:\n File `'/us/states.txt'` contains 51 lines - the first line is `'XX'`, and the\n other 50 each have a 2-character U.S. state abbreviation. Both a literal\n `'XX'` in input, and other values missing from the file, will be assigned\n ID 0. All others are assigned the corresponding line number 1-50.\n\n ```python\n states = categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='/us/states.txt', vocabulary_size=51,\n default_value=0)\n columns = [states, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n And to make an embedding with either:\n\n ```python\n columns = [embedding_column(states, 3),...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n vocabulary_file: The vocabulary file name.\n vocabulary_size: Number of the elements in the vocabulary. This must be no\n greater than length of `vocabulary_file`, if less than length, later\n values are ignored. If None, it is set to the length of `vocabulary_file`.\n dtype: The type of features. Only string and integer types are supported.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of\n the input value. A positive `num_oov_buckets` can not be specified with\n `default_value`.\n file_format: The format of the vocabulary file. The format is 'text' by\n default unless `vocabulary_file` is a string which ends in 'tfrecord.gz'.\n Accepted alternative value for `file_format` is 'tfrecord_gzip'.\n\n Returns:\n A `CategoricalColumn` with a vocabulary file.\n\n Raises:\n ValueError: `vocabulary_file` is missing or cannot be opened.\n ValueError: `vocabulary_size` is missing or < 1.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A `CategoricalColumn` with a vocabulary file.", "type": "API"}, {"name": "tf.feature_column.categorical_column_with_vocabulary_list", "docs": "A `CategoricalColumn` with in-memory vocabulary.\n\n Use this when your inputs are in string or integer format, and you have an\n in-memory vocabulary mapping each value to an integer ID. By default,\n out-of-vocabulary values are ignored. Use either (but not both) of\n `num_oov_buckets` and `default_value` to specify how to include\n out-of-vocabulary values.\n\n For input dictionary `features`, `features[key]` is either `Tensor` or\n `SparseTensor`. If `Tensor`, missing values can be represented by `-1` for int\n and `''` for string, which will be dropped by this feature column.\n\n Example with `num_oov_buckets`:\n In the following example, each input in `vocabulary_list` is assigned an ID\n 0-3 corresponding to its index (e.g., input 'B' produces output 2). All other\n inputs are hashed and assigned an ID 4-5.\n\n ```python\n colors = categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),\n num_oov_buckets=2)\n columns = [colors, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n Example with `default_value`:\n In the following example, each input in `vocabulary_list` is assigned an ID\n 0-4 corresponding to its index (e.g., input 'B' produces output 3). All other\n inputs are assigned `default_value` 0.\n\n\n ```python\n colors = categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0)\n columns = [colors, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n And to make an embedding with either:\n\n ```python\n columns = [embedding_column(colors, 3),...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n ```\n\n Args:\n key: A unique string identifying the input feature. It is used as the column\n name and the dictionary key for feature parsing configs, feature `Tensor`\n objects, and feature columns.\n vocabulary_list: An ordered iterable defining the vocabulary. Each feature\n is mapped to the index of its value (if present) in `vocabulary_list`.\n Must be castable to `dtype`.\n dtype: The type of features. Only string and integer types are supported. If\n `None`, it will be inferred from `vocabulary_list`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a\n hash of the input value. A positive `num_oov_buckets` can not be specified\n with `default_value`.\n\n Returns:\n A `CategoricalColumn` with in-memory vocabulary.\n\n Raises:\n ValueError: if `vocabulary_list` is empty, or contains duplicate keys.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: if `dtype` is not integer or string.\n ", "desc": "A `CategoricalColumn` with in-memory vocabulary.", "type": "API"}, {"name": "tf.feature_column.crossed_column", "docs": "Returns a column for performing crosses of categorical features.\n\n Crossed features will be hashed according to `hash_bucket_size`. Conceptually,\n the transformation can be thought of as:\n Hash(cartesian product of features) % `hash_bucket_size`\n\n For example, if the input features are:\n\n * SparseTensor referred by first key:\n\n ```python\n shape = [2, 2]\n {\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n }\n ```\n\n * SparseTensor referred by second key:\n\n ```python\n shape = [2, 1]\n {\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n }\n ```\n\n then crossed feature will look like:\n\n ```python\n shape = [2, 2]\n {\n [0, 0]: Hash64(\"d\", Hash64(\"a\")) % hash_bucket_size\n [1, 0]: Hash64(\"e\", Hash64(\"b\")) % hash_bucket_size\n [1, 1]: Hash64(\"e\", Hash64(\"c\")) % hash_bucket_size\n }\n ```\n\n Here is an example to create a linear model with crosses of string features:\n\n ```python\n keywords_x_doc_terms = crossed_column(['keywords', 'doc_terms'], 50K)\n columns = [keywords_x_doc_terms, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n You could also use vocabulary lookup before crossing:\n\n ```python\n keywords = categorical_column_with_vocabulary_file(\n 'keywords', '/path/to/vocabulary/file', vocabulary_size=1K)\n keywords_x_doc_terms = crossed_column([keywords, 'doc_terms'], 50K)\n columns = [keywords_x_doc_terms, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n If an input feature is of numeric type, you can use\n `categorical_column_with_identity`, or `bucketized_column`, as in the example:\n\n ```python\n # vertical_id is an integer categorical feature.\n vertical_id = categorical_column_with_identity('vertical_id', 10K)\n price = numeric_column('price')\n # bucketized_column converts numerical feature to a categorical one.\n bucketized_price = bucketized_column(price, boundaries=[...])\n vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K)\n columns = [vertical_id_x_price, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction = linear_model(features, columns)\n ```\n\n To use crossed column in DNN model, you need to add it in an embedding column\n as in this example:\n\n ```python\n vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50K)\n vertical_id_x_price_embedded = embedding_column(vertical_id_x_price, 10)\n dense_tensor = input_layer(features, [vertical_id_x_price_embedded, ...])\n ```\n\n Args:\n keys: An iterable identifying the features to be crossed. Each element can\n be either:\n * string: Will use the corresponding feature which must be of string type.\n * `CategoricalColumn`: Will use the transformed tensor produced by this\n column. Does not support hashed categorical column.\n hash_bucket_size: An int > 1. The number of buckets.\n hash_key: Specify the hash_key that will be used by the `FingerprintCat64`\n function to combine the crosses fingerprints on SparseCrossOp (optional).\n\n Returns:\n A `CrossedColumn`.\n\n Raises:\n ValueError: If `len(keys) < 2`.\n ValueError: If any of the keys is neither a string nor `CategoricalColumn`.\n ValueError: If any of the keys is `HashedCategoricalColumn`.\n ValueError: If `hash_bucket_size < 1`.\n ", "desc": "Returns a column for performing crosses of categorical features.", "type": "API"}, {"name": "tf.feature_column.embedding_column", "docs": "`DenseColumn` that converts from sparse, categorical input.\n\n Use this when your inputs are sparse, but you want to convert them to a dense\n representation (e.g., to feed to a DNN).\n\n Inputs must be a `CategoricalColumn` created by any of the\n `categorical_column_*` function. Here is an example of using\n `embedding_column` with `DNNClassifier`:\n\n ```python\n video_id = categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [embedding_column(video_id, 9),...]\n\n estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)\n\n label_column = ...\n def input_fn():\n features = tf.io.parse_example(\n ..., features=make_parse_example_spec(columns + [label_column]))\n labels = features.pop(label_column.name)\n return features, labels\n\n estimator.train(input_fn=input_fn, steps=100)\n ```\n\n Here is an example using `embedding_column` with model_fn:\n\n ```python\n def model_fn(features, ...):\n video_id = categorical_column_with_identity(\n key='video_id', num_buckets=1000000, default_value=0)\n columns = [embedding_column(video_id, 9),...]\n dense_tensor = input_layer(features, columns)\n # Form DNN layers, calculate loss, and return EstimatorSpec.\n ...\n ```\n\n Args:\n categorical_column: A `CategoricalColumn` created by a\n `categorical_column_with_*` function. This column produces the sparse IDs\n that are inputs to the embedding lookup.\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries in\n a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with\n 'mean' the default. 'sqrtn' often achieves good accuracy, in particular\n with bag-of-words columns. Each of this can be thought as example level\n normalizations on the column. For more information, see\n `tf.embedding_lookup_sparse`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `truncated_normal_initializer` with mean `0.0` and\n standard deviation `1/sqrt(dimension)`.\n ckpt_to_load_from: String representing checkpoint name/pattern from which to\n restore column weights. Required if `tensor_name_in_ckpt` is not `None`.\n tensor_name_in_ckpt: Name of the `Tensor` in `ckpt_to_load_from` from which\n to restore the column weights. Required if `ckpt_to_load_from` is not\n `None`.\n max_norm: If not `None`, embedding values are l2-normalized to this value.\n trainable: Whether or not the embedding is trainable. Default is True.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n `DenseColumn` that converts from sparse input.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt`\n is specified.\n ValueError: if `initializer` is specified and is not callable.\n RuntimeError: If eager execution is enabled.\n ", "desc": "`DenseColumn` that converts from sparse, categorical input.", "type": "API"}, {"name": "tf.feature_column.indicator_column", "docs": "Represents multi-hot representation of given categorical column.\n\n - For DNN model, `indicator_column` can be used to wrap any\n `categorical_column_*` (e.g., to feed to DNN). Consider to Use\n `embedding_column` if the number of buckets/unique(values) are large.\n\n - For Wide (aka linear) model, `indicator_column` is the internal\n representation for categorical column when passing categorical column\n directly (as any element in feature_columns) to `linear_model`. See\n `linear_model` for details.\n\n ```python\n name = indicator_column(categorical_column_with_vocabulary_list(\n 'name', ['bob', 'george', 'wanda']))\n columns = [name, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n dense_tensor = input_layer(features, columns)\n\n dense_tensor == [[1, 0, 0]] # If \"name\" bytes_list is [\"bob\"]\n dense_tensor == [[1, 0, 1]] # If \"name\" bytes_list is [\"bob\", \"wanda\"]\n dense_tensor == [[2, 0, 0]] # If \"name\" bytes_list is [\"bob\", \"bob\"]\n ```\n\n Args:\n categorical_column: A `CategoricalColumn` which is created by\n `categorical_column_with_*` or `crossed_column` functions.\n\n Returns:\n An `IndicatorColumn`.\n\n Raises:\n ValueError: If `categorical_column` is not CategoricalColumn type.\n ", "desc": "Represents multi-hot representation of given categorical column.", "type": "API"}, {"name": "tf.feature_column.make_parse_example_spec", "docs": "Creates parsing spec dictionary from input feature_columns.\n\n The returned dictionary can be used as arg 'features' in\n `tf.io.parse_example`.\n\n Typical usage example:\n\n ```python\n # Define features and transformations\n feature_a = tf.feature_column.categorical_column_with_vocabulary_file(...)\n feature_b = tf.feature_column.numeric_column(...)\n feature_c_bucketized = tf.feature_column.bucketized_column(\n tf.feature_column.numeric_column(\"feature_c\"), ...)\n feature_a_x_feature_c = tf.feature_column.crossed_column(\n columns=[\"feature_a\", feature_c_bucketized], ...)\n\n feature_columns = set(\n [feature_b, feature_c_bucketized, feature_a_x_feature_c])\n features = tf.io.parse_example(\n serialized=serialized_examples,\n features=tf.feature_column.make_parse_example_spec(feature_columns))\n ```\n\n For the above example, make_parse_example_spec would return the dict:\n\n ```python\n {\n \"feature_a\": parsing_ops.VarLenFeature(tf.string),\n \"feature_b\": parsing_ops.FixedLenFeature([1], dtype=tf.float32),\n \"feature_c\": parsing_ops.FixedLenFeature([1], dtype=tf.float32)\n }\n ```\n\n Args:\n feature_columns: An iterable containing all feature columns. All items\n should be instances of classes derived from `FeatureColumn`.\n\n Returns:\n A dict mapping each feature key to a `FixedLenFeature` or `VarLenFeature`\n value.\n\n Raises:\n ValueError: If any of the given `feature_columns` is not a `FeatureColumn`\n instance.\n ", "desc": "Creates parsing spec dictionary from input feature_columns.", "type": "API"}, {"name": "tf.feature_column.numeric_column", "docs": "Represents real valued or numerical features.\n\n Example:\n\n Assume we have data with two features `a` and `b`.\n\n >>> data = {'a': [15, 9, 17, 19, 21, 18, 25, 30],\n ... 'b': [5.0, 6.4, 10.5, 13.6, 15.7, 19.9, 20.3 , 0.0]}\n\n Let us represent the features `a` and `b` as numerical features.\n\n >>> a = tf.feature_column.numeric_column('a')\n >>> b = tf.feature_column.numeric_column('b')\n\n Feature column describe a set of transformations to the inputs.\n\n For example, to \"bucketize\" feature `a`, wrap the `a` column in a\n `feature_column.bucketized_column`.\n Providing `5` bucket boundaries, the bucketized_column api\n will bucket this feature in total of `6` buckets.\n\n >>> a_buckets = tf.feature_column.bucketized_column(a,\n ... boundaries=[10, 15, 20, 25, 30])\n\n Create a `DenseFeatures` layer which will apply the transformations\n described by the set of `tf.feature_column` objects:\n\n >>> feature_layer = tf.keras.layers.DenseFeatures([a_buckets, b])\n >>> print(feature_layer(data))\n tf.Tensor(\n [[ 0. 0. 1. 0. 0. 0. 5. ]\n [ 1. 0. 0. 0. 0. 0. 6.4]\n [ 0. 0. 1. 0. 0. 0. 10.5]\n [ 0. 0. 1. 0. 0. 0. 13.6]\n [ 0. 0. 0. 1. 0. 0. 15.7]\n [ 0. 0. 1. 0. 0. 0. 19.9]\n [ 0. 0. 0. 0. 1. 0. 20.3]\n [ 0. 0. 0. 0. 0. 1. 0. ]], shape=(8, 7), dtype=float32)\n\n Args:\n key: A unique string identifying the input feature. It is used as the\n column name and the dictionary key for feature parsing configs, feature\n `Tensor` objects, and feature columns.\n shape: An iterable of integers specifies the shape of the `Tensor`. An\n integer can be given which means a single dimension `Tensor` with given\n width. The `Tensor` representing the column will have the shape of\n [batch_size] + `shape`.\n default_value: A single value compatible with `dtype` or an iterable of\n values compatible with `dtype` which the column takes on during\n `tf.Example` parsing if data is missing. A default value of `None` will\n cause `tf.io.parse_example` to fail if an example does not contain this\n column. If a single value is provided, the same value will be applied as\n the default value for every item. If an iterable of values is provided,\n the shape of the `default_value` should be equal to the given `shape`.\n dtype: defines the type of values. Default value is `tf.float32`. Must be a\n non-quantized, real integer or floating point type.\n normalizer_fn: If not `None`, a function that can be used to normalize the\n value of the tensor after `default_value` is applied for parsing.\n Normalizer function takes the input `Tensor` as its argument, and returns\n the output `Tensor`. (e.g. lambda x: (x - 3.0) / 4.2). Please note that\n even though the most common use case of this function is normalization, it\n can be used for any kind of Tensorflow transformations.\n\n Returns:\n A `NumericColumn`.\n\n Raises:\n TypeError: if any dimension in shape is not an int\n ValueError: if any dimension in shape is not a positive integer\n TypeError: if `default_value` is an iterable but not compatible with `shape`\n TypeError: if `default_value` is not compatible with `dtype`.\n ValueError: if `dtype` is not convertible to `tf.float32`.\n ", "desc": "Represents real valued or numerical features.", "type": "API"}, {"name": "tf.feature_column.sequence_categorical_column_with_hash_bucket", "docs": "A sequence of categorical terms where ids are set by hashing.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n tokens = sequence_categorical_column_with_hash_bucket(\n 'tokens', hash_bucket_size=1000)\n tokens_embedding = embedding_column(tokens, dimension=10)\n columns = [tokens_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n hash_bucket_size: An int > 1. The number of buckets.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: `hash_bucket_size` is not greater than 1.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A sequence of categorical terms where ids are set by hashing.", "type": "API"}, {"name": "tf.feature_column.sequence_categorical_column_with_identity", "docs": "Returns a feature column that represents sequences of integers.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n watches = sequence_categorical_column_with_identity(\n 'watches', num_buckets=1000)\n watches_embedding = embedding_column(watches, dimension=10)\n columns = [watches_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n num_buckets: Range of inputs. Namely, inputs are expected to be in the\n range `[0, num_buckets)`.\n default_value: If `None`, this column's graph operations will fail for\n out-of-range inputs. Otherwise, this value must be in the range\n `[0, num_buckets)`, and will replace out-of-range inputs.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: if `num_buckets` is less than one.\n ValueError: if `default_value` is not in range `[0, num_buckets)`.\n ", "desc": "Returns a feature column that represents sequences of integers.", "type": "API"}, {"name": "tf.feature_column.sequence_categorical_column_with_vocabulary_file", "docs": "A sequence of categorical terms where ids use a vocabulary file.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n states = sequence_categorical_column_with_vocabulary_file(\n key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,\n num_oov_buckets=5)\n states_embedding = embedding_column(states, dimension=10)\n columns = [states_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n vocabulary_file: The vocabulary file name.\n vocabulary_size: Number of the elements in the vocabulary. This must be no\n greater than length of `vocabulary_file`, if less than length, later\n values are ignored. If None, it is set to the length of `vocabulary_file`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[vocabulary_size, vocabulary_size+num_oov_buckets)` based on a hash of\n the input value. A positive `num_oov_buckets` can not be specified with\n `default_value`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n dtype: The type of features. Only string and integer types are supported.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: `vocabulary_file` is missing or cannot be opened.\n ValueError: `vocabulary_size` is missing or < 1.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: `dtype` is neither string nor integer.\n ", "desc": "A sequence of categorical terms where ids use a vocabulary file.", "type": "API"}, {"name": "tf.feature_column.sequence_categorical_column_with_vocabulary_list", "docs": "A sequence of categorical terms where ids use an in-memory list.\n\n Pass this to `embedding_column` or `indicator_column` to convert sequence\n categorical data into dense representation for input to sequence NN, such as\n RNN.\n\n Example:\n\n ```python\n colors = sequence_categorical_column_with_vocabulary_list(\n key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),\n num_oov_buckets=2)\n colors_embedding = embedding_column(colors, dimension=3)\n columns = [colors_embedding]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input feature.\n vocabulary_list: An ordered iterable defining the vocabulary. Each feature\n is mapped to the index of its value (if present) in `vocabulary_list`.\n Must be castable to `dtype`.\n dtype: The type of features. Only string and integer types are supported.\n If `None`, it will be inferred from `vocabulary_list`.\n default_value: The integer ID value to return for out-of-vocabulary feature\n values, defaults to `-1`. This can not be specified with a positive\n `num_oov_buckets`.\n num_oov_buckets: Non-negative integer, the number of out-of-vocabulary\n buckets. All out-of-vocabulary inputs will be assigned IDs in the range\n `[len(vocabulary_list), len(vocabulary_list)+num_oov_buckets)` based on a\n hash of the input value. A positive `num_oov_buckets` can not be specified\n with `default_value`.\n\n Returns:\n A `SequenceCategoricalColumn`.\n\n Raises:\n ValueError: if `vocabulary_list` is empty, or contains duplicate keys.\n ValueError: `num_oov_buckets` is a negative integer.\n ValueError: `num_oov_buckets` and `default_value` are both specified.\n ValueError: if `dtype` is not integer or string.\n ", "desc": "A sequence of categorical terms where ids use an in-memory list.", "type": "API"}, {"name": "tf.feature_column.sequence_numeric_column", "docs": "Returns a feature column that represents sequences of numeric data.\n\n Example:\n\n ```python\n temperature = sequence_numeric_column('temperature')\n columns = [temperature]\n\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n sequence_feature_layer = SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_feature_layer(features)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n\n Args:\n key: A unique string identifying the input features.\n shape: The shape of the input data per sequence id. E.g. if `shape=(2,)`,\n each example must contain `2 * sequence_length` values.\n default_value: A single value compatible with `dtype` that is used for\n padding the sparse data into a dense `Tensor`.\n dtype: The type of values.\n normalizer_fn: If not `None`, a function that can be used to normalize the\n value of the tensor after `default_value` is applied for parsing.\n Normalizer function takes the input `Tensor` as its argument, and returns\n the output `Tensor`. (e.g. lambda x: (x - 3.0) / 4.2). Please note that\n even though the most common use case of this function is normalization, it\n can be used for any kind of Tensorflow transformations.\n\n Returns:\n A `SequenceNumericColumn`.\n\n Raises:\n TypeError: if any dimension in shape is not an int.\n ValueError: if any dimension in shape is not a positive integer.\n ValueError: if `dtype` is not convertible to `tf.float32`.\n ", "desc": "Returns a feature column that represents sequences of numeric data.", "type": "API"}, {"name": "tf.feature_column.shared_embeddings", "docs": "List of dense columns that convert from sparse, categorical input.\n\n This is similar to `embedding_column`, except that it produces a list of\n embedding columns that share the same embedding weights.\n\n Use this when your inputs are sparse and of the same type (e.g. watched and\n impression video IDs that share the same vocabulary), and you want to convert\n them to a dense representation (e.g., to feed to a DNN).\n\n Inputs must be a list of categorical columns created by any of the\n `categorical_column_*` function. They must all be of the same type and have\n the same arguments except `key`. E.g. they can be\n categorical_column_with_vocabulary_file with the same vocabulary_file. Some or\n all columns could also be weighted_categorical_column.\n\n Here is an example embedding of two features for a DNNClassifier model:\n\n ```python\n watched_video_id = categorical_column_with_vocabulary_file(\n 'watched_video_id', video_vocabulary_file, video_vocabulary_size)\n impression_video_id = categorical_column_with_vocabulary_file(\n 'impression_video_id', video_vocabulary_file, video_vocabulary_size)\n columns = shared_embedding_columns(\n [watched_video_id, impression_video_id], dimension=10)\n\n estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)\n\n label_column = ...\n def input_fn():\n features = tf.io.parse_example(\n ..., features=make_parse_example_spec(columns + [label_column]))\n labels = features.pop(label_column.name)\n return features, labels\n\n estimator.train(input_fn=input_fn, steps=100)\n ```\n\n Here is an example using `shared_embedding_columns` with model_fn:\n\n ```python\n def model_fn(features, ...):\n watched_video_id = categorical_column_with_vocabulary_file(\n 'watched_video_id', video_vocabulary_file, video_vocabulary_size)\n impression_video_id = categorical_column_with_vocabulary_file(\n 'impression_video_id', video_vocabulary_file, video_vocabulary_size)\n columns = shared_embedding_columns(\n [watched_video_id, impression_video_id], dimension=10)\n dense_tensor = input_layer(features, columns)\n # Form DNN layers, calculate loss, and return EstimatorSpec.\n ...\n ```\n\n Args:\n categorical_columns: List of categorical columns created by a\n `categorical_column_with_*` function. These columns produce the sparse IDs\n that are inputs to the embedding lookup. All columns must be of the same\n type and have the same arguments except `key`. E.g. they can be\n categorical_column_with_vocabulary_file with the same vocabulary_file.\n Some or all columns could also be weighted_categorical_column.\n dimension: An integer specifying dimension of the embedding, must be > 0.\n combiner: A string specifying how to reduce if there are multiple entries\n in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with\n 'mean' the default. 'sqrtn' often achieves good accuracy, in particular\n with bag-of-words columns. Each of this can be thought as example level\n normalizations on the column. For more information, see\n `tf.embedding_lookup_sparse`.\n initializer: A variable initializer function to be used in embedding\n variable initialization. If not specified, defaults to\n `truncated_normal_initializer` with mean `0.0` and standard\n deviation `1/sqrt(dimension)`.\n shared_embedding_collection_name: Optional collective name of these columns.\n If not given, a reasonable name will be chosen based on the names of\n `categorical_columns`.\n ckpt_to_load_from: String representing checkpoint name/pattern from which to\n restore column weights. Required if `tensor_name_in_ckpt` is not `None`.\n tensor_name_in_ckpt: Name of the `Tensor` in `ckpt_to_load_from` from\n which to restore the column weights. Required if `ckpt_to_load_from` is\n not `None`.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is\n larger than this value, before combining.\n trainable: Whether or not the embedding is trainable. Default is True.\n use_safe_embedding_lookup: If true, uses safe_embedding_lookup_sparse\n instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures\n there are no empty rows and all weights and ids are positive at the\n expense of extra compute cost. This only applies to rank 2 (NxM) shaped\n input tensors. Defaults to true, consider turning off if the above checks\n are not needed. Note that having empty rows will not trigger any error\n though the output result might be 0 or omitted.\n\n Returns:\n A list of dense columns that converts from sparse input. The order of\n results follows the ordering of `categorical_columns`.\n\n Raises:\n ValueError: if `dimension` not > 0.\n ValueError: if any of the given `categorical_columns` is of different type\n or has different arguments than the others.\n ValueError: if exactly one of `ckpt_to_load_from` and `tensor_name_in_ckpt`\n is specified.\n ValueError: if `initializer` is specified and is not callable.\n RuntimeError: if eager execution is enabled.\n ", "desc": "List of dense columns that convert from sparse, categorical input.", "type": "API"}, {"name": "tf.feature_column.weighted_categorical_column", "docs": "Applies weight values to a `CategoricalColumn`.\n\n Use this when each of your sparse inputs has both an ID and a value. For\n example, if you're representing text documents as a collection of word\n frequencies, you can provide 2 parallel sparse input features ('terms' and\n 'frequencies' below).\n\n Example:\n\n Input `tf.Example` objects:\n\n ```proto\n [\n features {\n feature {\n key: \"terms\"\n value {bytes_list {value: \"very\" value: \"model\"}}\n }\n feature {\n key: \"frequencies\"\n value {float_list {value: 0.3 value: 0.1}}\n }\n },\n features {\n feature {\n key: \"terms\"\n value {bytes_list {value: \"when\" value: \"course\" value: \"human\"}}\n }\n feature {\n key: \"frequencies\"\n value {float_list {value: 0.4 value: 0.1 value: 0.2}}\n }\n }\n ]\n ```\n\n ```python\n categorical_column = categorical_column_with_hash_bucket(\n column_name='terms', hash_bucket_size=1000)\n weighted_column = weighted_categorical_column(\n categorical_column=categorical_column, weight_feature_key='frequencies')\n columns = [weighted_column, ...]\n features = tf.io.parse_example(..., features=make_parse_example_spec(columns))\n linear_prediction, _, _ = linear_model(features, columns)\n ```\n\n This assumes the input dictionary contains a `SparseTensor` for key\n 'terms', and a `SparseTensor` for key 'frequencies'. These 2 tensors must have\n the same indices and dense shape.\n\n Args:\n categorical_column: A `CategoricalColumn` created by\n `categorical_column_with_*` functions.\n weight_feature_key: String key for weight values.\n dtype: Type of weights, such as `tf.float32`. Only float and integer weights\n are supported.\n\n Returns:\n A `CategoricalColumn` composed of two sparse features: one represents id,\n the other represents weight (value) of the id feature in that example.\n\n Raises:\n ValueError: if `dtype` is not convertible to float.\n ", "desc": "Applies weight values to a `CategoricalColumn`.", "type": "API"}, {"name": "tf.fill", "docs": "Creates a tensor filled with a scalar value.\n\n See also `tf.ones`, `tf.zeros`, `tf.one_hot`, `tf.eye`.\n\n This operation creates a tensor of shape `dims` and fills it with `value`.\n\n For example:\n\n >>> tf.fill([2, 3], 9)\n \n\n `tf.fill` evaluates at graph runtime and supports dynamic shapes based on\n other runtime `tf.Tensors`, unlike `tf.constant(value, shape=dims)`, which\n embeds the value as a `Const` node.\n\n Args:\n dims: A 1-D sequence of non-negative numbers. Represents the shape of the\n output `tf.Tensor`. Entries should be of type: `int32`, `int64`.\n value: A value to fill the returned `tf.Tensor`.\n name: Optional string. The name of the output `tf.Tensor`.\n\n Returns:\n A `tf.Tensor` with shape `dims` and the same dtype as `value`.\n\n Raises:\n InvalidArgumentError: `dims` contains negative entries.\n NotFoundError: `dims` contains non-integer entries.\n\n @compatibility(numpy)\n Similar to `np.full`. In `numpy`, more parameters are supported. Passing a\n number argument as the shape (`np.full(5, value)`) is valid in `numpy` for\n specifying a 1-D shaped result, while TensorFlow does not support this syntax.\n @end_compatibility\n ", "desc": "Creates a tensor filled with a scalar value.", "type": "API"}, {"name": "tf.fingerprint", "docs": "Generates fingerprint values.\n\n Generates fingerprint values of `data`.\n\n Fingerprint op considers the first dimension of `data` as the batch dimension,\n and `output[i]` contains the fingerprint value generated from contents in\n `data[i, ...]` for all `i`.\n\n Fingerprint op writes fingerprint values as byte arrays. For example, the\n default method `farmhash64` generates a 64-bit fingerprint value at a time.\n This 8-byte value is written out as an `tf.uint8` array of size 8, in\n little-endian order.\n\n For example, suppose that `data` has data type `tf.int32` and shape (2, 3, 4),\n and that the fingerprint method is `farmhash64`. In this case, the output\n shape is (2, 8), where 2 is the batch dimension size of `data`, and 8 is the\n size of each fingerprint value in bytes. `output[0, :]` is generated from\n 12 integers in `data[0, :, :]` and similarly `output[1, :]` is generated from\n other 12 integers in `data[1, :, :]`.\n\n Note that this op fingerprints the raw underlying buffer, and it does not\n fingerprint Tensor's metadata such as data type and/or shape. For example, the\n fingerprint values are invariant under reshapes and bitcasts as long as the\n batch dimension remain the same:\n\n ```python\n tf.fingerprint(data) == tf.fingerprint(tf.reshape(data, ...))\n tf.fingerprint(data) == tf.fingerprint(tf.bitcast(data, ...))\n ```\n\n For string data, one should expect `tf.fingerprint(data) !=\n tf.fingerprint(tf.string.reduce_join(data))` in general.\n\n Args:\n data: A `Tensor`. Must have rank 1 or higher.\n method: A `Tensor` of type `tf.string`. Fingerprint method used by this op.\n Currently available method is `farmhash64`.\n name: A name for the operation (optional).\n\n Returns:\n A two-dimensional `Tensor` of type `tf.uint8`. The first dimension equals to\n `data`'s first dimension, and the second dimension size depends on the\n fingerprint algorithm.\n ", "desc": "Generates fingerprint values.", "type": "API"}, {"name": "tf.floor", "docs": "Returns element-wise largest integer not greater than x.\n\n Both input range is `(-inf, inf)` and the\n output range consists of all integer values.\n\n For example:\n\n >>> x = tf.constant([1.3324, -1.5, 5.555, -2.532, 0.99, float(\"inf\")])\n >>> tf.floor(x).numpy()\n array([ 1., -2., 5., -3., 0., inf], dtype=float32)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Returns element-wise largest integer not greater than x.", "type": "API"}, {"name": "tf.foldl", "docs": "foldl on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version.\nInstructions for updating:\nback_prop=False is deprecated. Consider using tf.stop_gradient instead.\nInstead of:\nresults = tf.foldl(fn, elems, back_prop=False)\nUse:\nresults = tf.nest.map_structure(tf.stop_gradient, tf.foldl(fn, elems))\n\nThis foldl operator repeatedly applies the callable `fn` to a sequence\nof elements from first to last. The elements are made of the tensors\nunpacked from `elems` on dimension 0. The callable fn takes two tensors as\narguments. The first argument is the accumulated value computed from the\npreceding invocation of fn, and the second is the value at the current\nposition of `elems`. If `initializer` is None, `elems` must contain at least\none element, and its first element is used as the initializer.\n\nSuppose that `elems` is unpacked into `values`, a list of tensors. The shape\nof the result tensor is fn(initializer, values[0]).shape`.\n\nThis method also allows multi-arity `elems` and output of `fn`. If `elems`\nis a (possibly nested) list or tuple of tensors, then each of these tensors\nmust have a matching first (unpack) dimension. The signature of `fn` may\nmatch the structure of `elems`. That is, if `elems` is\n`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:\n`fn = lambda (t1, [t2, t3, [t4, t5]]):`.\n\nArgs:\n fn: The callable to be performed.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n as the initial value for the accumulator.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) Deprecated. False disables support for back\n propagation. Prefer using `tf.stop_gradient` instead.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n name: (optional) Name prefix for the returned tensors.\n\nReturns:\n A tensor or (possibly nested) sequence of tensors, resulting from applying\n `fn` consecutively to the list of tensors unpacked from `elems`, from first\n to last.\n\nRaises:\n TypeError: if `fn` is not callable.\n\nExample:\n ```python\n elems = tf.constant([1, 2, 3, 4, 5, 6])\n sum = foldl(lambda a, x: a + x, elems)\n # sum == 21\n ```", "desc": "foldl on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)", "type": "API"}, {"name": "tf.foldr", "docs": "foldr on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version.\nInstructions for updating:\nback_prop=False is deprecated. Consider using tf.stop_gradient instead.\nInstead of:\nresults = tf.foldr(fn, elems, back_prop=False)\nUse:\nresults = tf.nest.map_structure(tf.stop_gradient, tf.foldr(fn, elems))\n\nThis foldr operator repeatedly applies the callable `fn` to a sequence\nof elements from last to first. The elements are made of the tensors\nunpacked from `elems`. The callable fn takes two tensors as arguments.\nThe first argument is the accumulated value computed from the preceding\ninvocation of fn, and the second is the value at the current position of\n`elems`. If `initializer` is None, `elems` must contain at least one element,\nand its first element is used as the initializer.\n\nSuppose that `elems` is unpacked into `values`, a list of tensors. The shape\nof the result tensor is `fn(initializer, values[0]).shape`.\n\nThis method also allows multi-arity `elems` and output of `fn`. If `elems`\nis a (possibly nested) list or tuple of tensors, then each of these tensors\nmust have a matching first (unpack) dimension. The signature of `fn` may\nmatch the structure of `elems`. That is, if `elems` is\n`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:\n`fn = lambda (t1, [t2, t3, [t4, t5]]):`.\n\nArgs:\n fn: The callable to be performed.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n as the initial value for the accumulator.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) Deprecated. False disables support for back\n propagation. Prefer using `tf.stop_gradient` instead.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n name: (optional) Name prefix for the returned tensors.\n\nReturns:\n A tensor or (possibly nested) sequence of tensors, resulting from applying\n `fn` consecutively to the list of tensors unpacked from `elems`, from last\n to first.\n\nRaises:\n TypeError: if `fn` is not callable.\n\nExample:\n ```python\n elems = [1, 2, 3, 4, 5, 6]\n sum = foldr(lambda a, x: a + x, elems)\n # sum == 21\n ```", "desc": "foldr on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)", "type": "API"}, {"name": "tf.function", "docs": "Compiles a function into a callable TensorFlow graph. (deprecated arguments) (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(experimental_compile)`. They will be removed in a future version.\nInstructions for updating:\nexperimental_compile is deprecated, use jit_compile instead\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(experimental_relax_shapes)`. They will be removed in a future version.\nInstructions for updating:\nexperimental_relax_shapes is deprecated, use reduce_retracing instead\n\n`tf.function` constructs a `tf.types.experimental.GenericFunction` that\nexecutes a TensorFlow graph (`tf.Graph`) created by trace-compiling the\nTensorFlow operations in `func`. More information on the topic can be found\nin [Introduction to Graphs and tf.function]\n(https://www.tensorflow.org/guide/intro_to_graphs).\n\nSee [Better Performance with tf.function]\n(https://www.tensorflow.org/guide/function) for tips on performance and\nknown limitations.\n\nExample usage:\n\n>>> @tf.function\n... def f(x, y):\n... return x ** 2 + y\n>>> x = tf.constant([2, 3])\n>>> y = tf.constant([3, -2])\n>>> f(x, y)\n\n\nThe trace-compilation allows non-TensorFlow operations to execute, but under\nspecial conditions. In general, only TensorFlow operations are guaranteed to\nrun and create fresh results whenever the `GenericFunction` is called.\n\n## Features\n\n`func` may use data-dependent Python control flow statements, including `if`,\n`for`, `while` `break`, `continue` and `return`:\n\n>>> @tf.function\n... def f(x):\n... if tf.reduce_sum(x) > 0:\n... return x * x\n... else:\n... return -x // 2\n>>> f(tf.constant(-2))\n\n\n`func`'s closure may include `tf.Tensor` and `tf.Variable` objects:\n\n>>> @tf.function\n... def f():\n... return x ** 2 + y\n>>> x = tf.constant([-2, -3])\n>>> y = tf.Variable([3, -2])\n>>> f()\n\n\n`func` may also use ops with side effects, such as `tf.print`, `tf.Variable`\nand others:\n\n>>> v = tf.Variable(1)\n>>> @tf.function\n... def f(x):\n... for i in tf.range(x):\n... v.assign_add(i)\n>>> f(3)\n>>> v\n\n\nImportant: Any Python side-effects (appending to a list, printing with\n`print`, etc) will only happen once, when `func` is traced. To have\nside-effects executed into your `tf.function` they need to be written\nas TF ops:\n\n>>> l = []\n>>> @tf.function\n... def f(x):\n... for i in x:\n... l.append(i + 1) # Caution! Will only happen once when tracing\n>>> f(tf.constant([1, 2, 3]))\n>>> l\n[]\n\nInstead, use TensorFlow collections like `tf.TensorArray`:\n\n>>> @tf.function\n... def f(x):\n... ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)\n... for i in range(len(x)):\n... ta = ta.write(i, x[i] + 1)\n... return ta.stack()\n>>> f(tf.constant([1, 2, 3]))\n\n\n## `tf.function` creates polymorphic callables\n\nInternally, `tf.types.experimental.GenericFunction` may contain multiple\n`tf.types.experimental.ConcreteFunction`s, each specialized to arguments with\ndifferent data types or shapes, since TensorFlow can perform more\noptimizations on graphs of specific shapes, dtypes and values of constant\narguments. `tf.function` treats any pure Python values as opaque objects (best\nthought of as compile-time constants), and builds a separate `tf.Graph` for\neach set of Python arguments that it encounters.\nFor more information, see the\n[tf.function guide](https://www.tensorflow.org/guide/function#rules_of_tracing)\n\nExecuting a `GenericFunction` will select and execute the appropriate\n`ConcreteFunction` based on the argument types and values.\n\nTo obtain an individual `ConcreteFunction`, use the\n`GenericFunction.get_concrete_function` method. It can be called with the\nsame arguments as `func` and returns a\n`tf.types.experimental.ConcreteFunction`. `ConcreteFunction`s are backed by a\nsingle `tf.Graph`:\n\n>>> @tf.function\n... def f(x):\n... return x + 1\n>>> isinstance(f.get_concrete_function(1).graph, tf.Graph)\nTrue\n\n`ConcreteFunction`s can be executed just like `GenericFunction`s, but their\ninput is resticted to the types to which they're specialized.\n\n## Retracing\n\n`ConcreteFunctions` are built (traced) on the fly, as the `GenericFunction` is\ncalled with new TensorFlow types or shapes, or with new Python values as\narguments. When `GenericFunction` builds a new trace, it is said that `func`\nis retraced. Retracing is a frequent performance concern for `tf.function` as\nit can be considerably slower than executing a graph that's already been\ntraced. It is ideal to minimize the amount of retracing in your code.\n\nCaution: Passing python scalars or lists as arguments to `tf.function` will\nusually retrace. To avoid this, pass numeric arguments as Tensors whenever\npossible:\n\n>>> @tf.function\n... def f(x):\n... return tf.abs(x)\n>>> f1 = f.get_concrete_function(1)\n>>> f2 = f.get_concrete_function(2) # Slow - compiles new graph\n>>> f1 is f2\nFalse\n>>> f1 = f.get_concrete_function(tf.constant(1))\n>>> f2 = f.get_concrete_function(tf.constant(2)) # Fast - reuses f1\n>>> f1 is f2\nTrue\n\nPython numerical arguments should only be used when they take few distinct\nvalues, such as hyperparameters like the number of layers in a neural network.\n\n## Input signatures\n\nFor Tensor arguments, `GenericFunction`creates a new `ConcreteFunction` for\nevery unique set of input shapes and datatypes. The example below creates two\nseparate `ConcreteFunction`s, each specialized to a different shape:\n\n>>> @tf.function\n... def f(x):\n... return x + 1\n>>> vector = tf.constant([1.0, 1.0])\n>>> matrix = tf.constant([[3.0]])\n>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)\nFalse\n\nAn \"input signature\" can be optionally provided to `tf.function` to control\nthis process. The input signature specifies the shape and type of each\nTensor argument to the function using a `tf.TensorSpec` object. More general\nshapes can be used. This ensures only one `ConcreteFunction` is created, and\nrestricts the `GenericFunction` to the specified shapes and types. It is\nan effective way to limit retracing when Tensors have dynamic shapes.\n\n>>> @tf.function(\n... input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])\n... def f(x):\n... return x + 1\n>>> vector = tf.constant([1.0, 1.0])\n>>> matrix = tf.constant([[3.0]])\n>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)\nTrue\n\n## Variables may only be created once\n\n`tf.function` only allows creating new `tf.Variable` objects when it is called\nfor the first time:\n\n>>> class MyModule(tf.Module):\n... def __init__(self):\n... self.v = None\n...\n... @tf.function\n... def __call__(self, x):\n... if self.v is None:\n... self.v = tf.Variable(tf.ones_like(x))\n... return self.v * x\n\nIn general, it is recommended to create `tf.Variable`s outside of\n`tf.function`.\nIn simple cases, persisting state across `tf.function` boundaries may be\nimplemented using a pure functional style in which state is represented by\n`tf.Tensor`s passed as arguments and returned as return values.\n\nContrast the two styles below:\n\n>>> state = tf.Variable(1)\n>>> @tf.function\n... def f(x):\n... state.assign_add(x)\n>>> f(tf.constant(2)) # Non-pure functional style\n>>> state\n\n\n>>> state = tf.constant(1)\n>>> @tf.function\n... def f(state, x):\n... state += x\n... return state\n>>> state = f(state, tf.constant(2)) # Pure functional style\n>>> state\n\n\n## Python operations execute only once per trace\n\n`func` may contain TensorFlow operations mixed with pure Python operations.\nHowever, when the function is executed, only the TensorFlow operations will\nrun. The Python operations run only once, at trace time. If TensorFlow\noperations depend on results from Pyhton operations, those results will be\nfrozen into the graph.\n\n>>> @tf.function\n... def f(a, b):\n... print('this runs at trace time; a is', a, 'and b is', b)\n... return b\n>>> f(1, tf.constant(1))\nthis runs at trace time; a is 1 and b is Tensor(\"...\", shape=(), dtype=int32)\n\n\n>>> f(1, tf.constant(2))\n\n\n>>> f(2, tf.constant(1))\nthis runs at trace time; a is 2 and b is Tensor(\"...\", shape=(), dtype=int32)\n\n\n>>> f(2, tf.constant(2))\n\n\n## Using type annotations to improve performance\n\n'experimental_follow_type_hints` can be used along with type annotations to\nreduce retracing by automatically casting any Python values to `tf.Tensor`\n(something that is not done by default, unless you use input signatures).\n\n>>> @tf.function(experimental_follow_type_hints=True)\n... def f_with_hints(x: tf.Tensor):\n... print('Tracing')\n... return x\n>>> @tf.function(experimental_follow_type_hints=False)\n... def f_no_hints(x: tf.Tensor):\n... print('Tracing')\n... return x\n>>> f_no_hints(1)\nTracing\n\n>>> f_no_hints(2)\nTracing\n\n>>> f_with_hints(1)\nTracing\n\n>>> f_with_hints(2)\n\n\nArgs:\n func: the function to be compiled. If `func` is None, `tf.function` returns\n a decorator that can be invoked with a single argument - `func`. In other\n words, `tf.function(input_signature=...)(func)` is equivalent to\n `tf.function(func, input_signature=...)`. The former can be used as\n decorator.\n input_signature: A possibly nested sequence of `tf.TensorSpec` objects\n specifying the shapes and dtypes of the Tensors that will be supplied to\n this function. If `None`, a separate function is instantiated for each\n inferred input signature. If input_signature is specified, every input to\n `func` must be a `Tensor`, and `func` cannot accept `**kwargs`.\n autograph: Whether autograph should be applied on `func` before tracing a\n graph. Data-dependent Python control flow statements require\n `autograph=True`. For more information, see the\n [tf.function and AutoGraph guide](\n https://www.tensorflow.org/guide/function#autograph_transformations).\n jit_compile: If `True`, compiles the function using\n [XLA](https://tensorflow.org/xla). XLA performs compiler optimizations,\n such as fusion, and attempts to emit more efficient code. This may\n drastically improve the performance. If set to `True`,\n the whole function needs to be compilable by XLA, or an\n `errors.InvalidArgumentError` is thrown.\n If `None` (default), compiles the function with XLA when running on TPU\n and goes through the regular function execution path when running on\n other devices.\n If `False`, executes the function without XLA compilation. Set this value\n to `False` when directly running a multi-device function on TPUs (e.g. two\n TPU cores, one TPU core and its host CPU).\n Not all functions are compilable, see a list of\n [sharp corners](https://tensorflow.org/xla/known_issues).\n reduce_retracing: When True, `tf.function` attempts to reduce the\n amount of retracing, for example by using more generic shapes. This\n can be controlled for user objects by customizing their associated\n `tf.types.experimental.TraceType`.\n experimental_implements: If provided, contains a name of a \"known\" function\n this implements. For example \"mycompany.my_recurrent_cell\".\n This is stored as an attribute in inference function,\n which can then be detected when processing serialized function.\n See [standardizing composite ops](https://github.com/tensorflow/community/blob/master/rfcs/20190610-standardizing-composite_ops.md) # pylint: disable=line-too-long\n for details. For an example of utilizing this attribute see this\n [example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc)\n The code above automatically detects and substitutes function that\n implements \"embedded_matmul\" and allows TFLite to substitute its own\n implementations. For instance, a tensorflow user can use this\n attribute to mark that their function also implements\n `embedded_matmul` (perhaps more efficiently!)\n by specifying it using this parameter:\n `@tf.function(experimental_implements=\"embedded_matmul\")`\n This can either be specified as just the string name of the function or\n a NameAttrList corresponding to a list of key-value attributes associated\n with the function name. The name of the function will be in the 'name'\n field of the NameAttrList. To define a formal TF op for this function\n implements, try the experimental [composite TF](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tfr)\n project.\n experimental_autograph_options: Optional tuple of\n `tf.autograph.experimental.Feature` values.\n experimental_relax_shapes: Deprecated. Use `reduce_retracing`\n instead.\n experimental_compile: Deprecated alias to 'jit_compile'.\n experimental_follow_type_hints: When True, the function may use type\n annotations from `func` to optimize the tracing performance. For example,\n arguments annotated with `tf.Tensor` will automatically be converted\n to a Tensor.\n\nReturns:\n If `func` is not None, returns a `tf.types.experimental.GenericFunction`.\n If `func` is None, returns a decorator that, when invoked with a single\n `func` argument, returns a `tf.types.experimental.GenericFunction`.\n\nRaises:\n `ValueError` when attempting to use `jit_compile=True`, but XLA support is\n not available.", "desc": "Compiles a function into a callable TensorFlow graph. (deprecated arguments) (deprecated arguments)", "type": "API"}, {"name": "tf.gather", "docs": "Gather slices from params axis `axis` according to indices. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(validate_indices)`. They will be removed in a future version.\nInstructions for updating:\nThe `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.\n\nGather slices from `params` axis `axis` according to `indices`. `indices`\nmust be an integer tensor of any dimension (often 1-D).\n\n`Tensor.__getitem__` works for scalars, `tf.newaxis`, and\n[python slices](https://numpy.org/doc/stable/reference/arrays.indexing.html#basic-slicing-and-indexing)\n\n`tf.gather` extends indexing to handle tensors of indices.\n\nIn the simplest case it's identical to scalar indexing:\n\n>>> params = tf.constant(['p0', 'p1', 'p2', 'p3', 'p4', 'p5'])\n>>> params[3].numpy()\nb'p3'\n>>> tf.gather(params, 3).numpy()\nb'p3'\n\nThe most common case is to pass a single axis tensor of indices (this\ncan't be expressed as a python slice because the indices are not sequential):\n\n>>> indices = [2, 0, 2, 5]\n>>> tf.gather(params, indices).numpy()\narray([b'p2', b'p0', b'p2', b'p5'], dtype=object)\n\n
\n\n
\n\nThe indices can have any shape. When the `params` has 1 axis, the\noutput shape is equal to the input shape:\n\n>>> tf.gather(params, [[2, 0], [2, 5]]).numpy()\narray([[b'p2', b'p0'],\n [b'p2', b'p5']], dtype=object)\n\nThe `params` may also have any shape. `gather` can select slices\nacross any axis depending on the `axis` argument (which defaults to 0).\nBelow it is used to gather first rows, then columns from a matrix:\n\n>>> params = tf.constant([[0, 1.0, 2.0],\n... [10.0, 11.0, 12.0],\n... [20.0, 21.0, 22.0],\n... [30.0, 31.0, 32.0]])\n>>> tf.gather(params, indices=[3,1]).numpy()\narray([[30., 31., 32.],\n [10., 11., 12.]], dtype=float32)\n>>> tf.gather(params, indices=[2,1], axis=1).numpy()\narray([[ 2., 1.],\n [12., 11.],\n [22., 21.],\n [32., 31.]], dtype=float32)\n\nMore generally: The output shape has the same shape as the input, with the\nindexed-axis replaced by the shape of the indices.\n\n>>> def result_shape(p_shape, i_shape, axis=0):\n... return p_shape[:axis] + i_shape + p_shape[axis+1:]\n>>>\n>>> result_shape([1, 2, 3], [], axis=1)\n[1, 3]\n>>> result_shape([1, 2, 3], [7], axis=1)\n[1, 7, 3]\n>>> result_shape([1, 2, 3], [7, 5], axis=1)\n[1, 7, 5, 3]\n\nHere are some examples:\n\n>>> params.shape.as_list()\n[4, 3]\n>>> indices = tf.constant([[0, 2]])\n>>> tf.gather(params, indices=indices, axis=0).shape.as_list()\n[1, 2, 3]\n>>> tf.gather(params, indices=indices, axis=1).shape.as_list()\n[4, 1, 2]\n\n>>> params = tf.random.normal(shape=(5, 6, 7, 8))\n>>> indices = tf.random.uniform(shape=(10, 11), maxval=7, dtype=tf.int32)\n>>> result = tf.gather(params, indices, axis=2)\n>>> result.shape.as_list()\n[5, 6, 10, 11, 8]\n\nThis is because each index takes a slice from `params`, and\nplaces it at the corresponding location in the output. For the above example\n\n>>> # For any location in indices\n>>> a, b = 0, 1\n>>> tf.reduce_all(\n... # the corresponding slice of the result\n... result[:, :, a, b, :] ==\n... # is equal to the slice of `params` along `axis` at the index.\n... params[:, :, indices[a, b], :]\n... ).numpy()\nTrue\n\n### Batching:\n\nThe `batch_dims` argument lets you gather different items from each element\nof a batch.\n\nUsing `batch_dims=1` is equivalent to having an outer loop over the first\naxis of `params` and `indices`:\n\n>>> params = tf.constant([\n... [0, 0, 1, 0, 2],\n... [3, 0, 0, 0, 4],\n... [0, 5, 0, 6, 0]])\n>>> indices = tf.constant([\n... [2, 4],\n... [0, 4],\n... [1, 3]])\n\n>>> tf.gather(params, indices, axis=1, batch_dims=1).numpy()\narray([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n\nThis is equivalent to:\n\n>>> def manually_batched_gather(params, indices, axis):\n... batch_dims=1\n... result = []\n... for p,i in zip(params, indices):\n... r = tf.gather(p, i, axis=axis-batch_dims)\n... result.append(r)\n... return tf.stack(result)\n>>> manually_batched_gather(params, indices, axis=1).numpy()\narray([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n\nHigher values of `batch_dims` are equivalent to multiple nested loops over\nthe outer axes of `params` and `indices`. So the overall shape function is\n\n>>> def batched_result_shape(p_shape, i_shape, axis=0, batch_dims=0):\n... return p_shape[:axis] + i_shape[batch_dims:] + p_shape[axis+1:]\n>>>\n>>> batched_result_shape(\n... p_shape=params.shape.as_list(),\n... i_shape=indices.shape.as_list(),\n... axis=1,\n... batch_dims=1)\n[3, 2]\n\n>>> tf.gather(params, indices, axis=1, batch_dims=1).shape.as_list()\n[3, 2]\n\nThis comes up naturally if you need to use the indices of an operation like\n`tf.argsort`, or `tf.math.top_k` where the last dimension of the indices\nindexes into the last dimension of input, at the corresponding location.\nIn this case you can use `tf.gather(values, indices, batch_dims=-1)`.\n\nSee also:\n\n* `tf.Tensor.__getitem__`: The direct tensor index operation (`t[]`), handles\n scalars and python-slices `tensor[..., 7, 1:-1]`\n* `tf.scatter`: A collection of operations similar to `__setitem__`\n (`t[i] = x`)\n* `tf.gather_nd`: An operation similar to `tf.gather` but gathers across\n multiple axis at once (it can gather elements of a matrix instead of rows\n or columns)\n* `tf.boolean_mask`, `tf.where`: Binary indexing.\n* `tf.slice` and `tf.strided_slice`: For lower level access to the\n implementation of `__getitem__`'s python-slice handling (`t[1:-1:2]`)\n\nArgs:\n params: The `Tensor` from which to gather values. Must be at least rank\n `axis + 1`.\n indices: The index `Tensor`. Must be one of the following types: `int32`,\n `int64`. The values must be in range `[0, params.shape[axis])`.\n validate_indices: Deprecated, does nothing. Indices are always validated on\n CPU, never validated on GPU.\n\n Caution: On CPU, if an out of bound index is found, an error is raised.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`. The\n `axis` in `params` to gather `indices` from. Must be greater than or equal\n to `batch_dims`. Defaults to the first non-batch dimension. Supports\n negative indexes.\n batch_dims: An `integer`. The number of batch dimensions. Must be less\n than or equal to `rank(indices)`.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor`. Has the same type as `params`.", "desc": "Gather slices from params axis `axis` according to indices. (deprecated arguments)", "type": "API"}, {"name": "tf.gather_nd", "docs": "Gather slices from `params` into a Tensor with shape specified by `indices`.\n\n `indices` is a `Tensor` of indices into `params`. The index vectors are\n arranged along the last axis of `indices`.\n\n This is similar to `tf.gather`, in which `indices` defines slices into the\n first dimension of `params`. In `tf.gather_nd`, `indices` defines slices into the\n first `N` dimensions of `params`, where `N = indices.shape[-1]`.\n\n Caution: On CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n\n ## Gathering scalars\n\n In the simplest case the vectors in `indices` index the full rank of `params`:\n\n >>> tf.gather_nd(\n ... indices=[[0, 0],\n ... [1, 1]],\n ... params = [['a', 'b'],\n ... ['c', 'd']]).numpy()\n array([b'a', b'd'], dtype=object)\n\n In this case the result has 1-axis fewer than `indices`, and each index vector\n is replaced by the scalar indexed from `params`.\n\n In this case the shape relationship is:\n\n ```\n index_depth = indices.shape[-1]\n assert index_depth == params.shape.rank\n result_shape = indices.shape[:-1]\n ```\n\n If `indices` has a rank of `K`, it is helpful to think `indices` as a\n (K-1)-dimensional tensor of indices into `params`.\n\n ## Gathering slices\n\n If the index vectors do not index the full rank of `params` then each location\n in the result contains a slice of params. This example collects rows from a\n matrix:\n\n >>> tf.gather_nd(\n ... indices = [[1],\n ... [0]],\n ... params = [['a', 'b', 'c'],\n ... ['d', 'e', 'f']]).numpy()\n array([[b'd', b'e', b'f'],\n [b'a', b'b', b'c']], dtype=object)\n\n Here `indices` contains `[2]` index vectors, each with a length of `1`.\n The index vectors each refer to rows of the `params` matrix. Each\n row has a shape of `[3]` so the output shape is `[2, 3]`.\n\n In this case, the relationship between the shapes is:\n\n ```\n index_depth = indices.shape[-1]\n outer_shape = indices.shape[:-1]\n assert index_depth <= params.shape.rank\n inner_shape = params.shape[index_depth:]\n output_shape = outer_shape + inner_shape\n ```\n\n It is helpful to think of the results in this case as tensors-of-tensors.\n The shape of the outer tensor is set by the leading dimensions of `indices`.\n While the shape of the inner tensors is the shape of a single slice.\n\n ## Batches\n\n Additionally both `params` and `indices` can have `M` leading batch\n dimensions that exactly match. In this case `batch_dims` must be set to `M`.\n\n For example, to collect one row from each of a batch of matrices you could\n set the leading elements of the index vectors to be their location in the\n batch:\n\n >>> tf.gather_nd(\n ... indices = [[0, 1],\n ... [1, 0],\n ... [2, 4],\n ... [3, 2],\n ... [4, 1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n The `batch_dims` argument lets you omit those leading location dimensions\n from the index:\n\n >>> tf.gather_nd(\n ... batch_dims=1,\n ... indices = [[1],\n ... [0],\n ... [4],\n ... [2],\n ... [1]],\n ... params=tf.zeros([5, 7, 3])).shape.as_list()\n [5, 3]\n\n This is equivalent to caling a separate `gather_nd` for each location in the\n batch dimensions.\n\n\n >>> params=tf.zeros([5, 7, 3])\n >>> indices=tf.zeros([5, 1])\n >>> batch_dims = 1\n >>>\n >>> index_depth = indices.shape[-1]\n >>> batch_shape = indices.shape[:batch_dims]\n >>> assert params.shape[:batch_dims] == batch_shape\n >>> outer_shape = indices.shape[batch_dims:-1]\n >>> assert index_depth <= params.shape.rank\n >>> inner_shape = params.shape[batch_dims + index_depth:]\n >>> output_shape = batch_shape + outer_shape + inner_shape\n >>> output_shape.as_list()\n [5, 3]\n\n ### More examples\n\n Indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'a1', b'b1'],\n [b'c1', b'd1']]], dtype=object)\n\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 1], [1, 0]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[0, 0, 1], [1, 0, 1]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([b'b0', b'b1'], dtype=object)\n\n The examples below are for the case when only indices have leading extra\n dimensions. If both 'params' and 'indices' have leading batch dimensions, use\n the 'batch_dims' parameter to run gather_nd in batch mode.\n\n Batched indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0]], [[0, 1]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[b'a'],\n [b'b']], dtype=object)\n\n\n\n Batched slice indexing into a matrix:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [['a', 'b'], ['c', 'd']]).numpy()\n array([[[b'c', b'd']],\n [[b'a', b'b']]], dtype=object)\n\n\n Batched indexing into a 3-tensor:\n\n >>> tf.gather_nd(\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[[b'a1', b'b1'],\n [b'c1', b'd1']]],\n [[[b'a0', b'b0'],\n [b'c0', b'd0']]]], dtype=object)\n\n\n >>> tf.gather_nd(\n ... indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0'],\n [b'a1', b'b1']],\n [[b'a0', b'b0'],\n [b'c1', b'd1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'b0', b'b1'],\n [b'd0', b'c1']], dtype=object)\n\n\n Examples with batched 'params' and 'indices':\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[1],\n ... [0]],\n ... params = [[['a0', 'b0'],\n ... ['c0', 'd0']],\n ... [['a1', 'b1'],\n ... ['c1', 'd1']]]).numpy()\n array([[b'c0', b'd0'],\n [b'a1', b'b1']], dtype=object)\n\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1]], [[0]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[[b'c0', b'd0']],\n [[b'a1', b'b1']]], dtype=object)\n\n >>> tf.gather_nd(\n ... batch_dims = 1,\n ... indices = [[[1, 0]], [[0, 1]]],\n ... params = [[['a0', 'b0'], ['c0', 'd0']],\n ... [['a1', 'b1'], ['c1', 'd1']]]).numpy()\n array([[b'c0'],\n [b'b1']], dtype=object)\n\n\n See also `tf.gather`.\n\n Args:\n params: A `Tensor`. The tensor from which to gather values.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n name: A name for the operation (optional).\n batch_dims: An integer or a scalar 'Tensor'. The number of batch dimensions.\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` into a Tensor with shape specified by `indices`.", "type": "API"}, {"name": "tf.get_logger", "docs": "Return TF logger instance.", "desc": "Return TF logger instance.", "type": "API"}, {"name": "tf.get_static_value", "docs": "Returns the constant value of the given tensor, if efficiently calculable.\n\n This function attempts to partially evaluate the given tensor, and\n returns its value as a numpy ndarray if this succeeds.\n\n Example usage:\n\n >>> a = tf.constant(10)\n >>> tf.get_static_value(a)\n 10\n >>> b = tf.constant(20)\n >>> tf.get_static_value(tf.add(a, b))\n 30\n\n >>> # `tf.Variable` is not supported.\n >>> c = tf.Variable(30)\n >>> print(tf.get_static_value(c))\n None\n\n Using `partial` option is most relevant when calling `get_static_value` inside\n a `tf.function`. Setting it to `True` will return the results but for the\n values that cannot be evaluated will be `None`. For example:\n\n ```python\n class Foo(object):\n def __init__(self):\n self.a = tf.Variable(1)\n self.b = tf.constant(2)\n\n @tf.function\n def bar(self, partial):\n packed = tf.raw_ops.Pack(values=[self.a, self.b])\n static_val = tf.get_static_value(packed, partial=partial)\n tf.print(static_val)\n\n f = Foo()\n f.bar(partial=True) # `array([None, array(2, dtype=int32)], dtype=object)`\n f.bar(partial=False) # `None`\n ```\n\n Compatibility(V1): If `constant_value(tensor)` returns a non-`None` result, it\n will no longer be possible to feed a different value for `tensor`. This allows\n the result of this function to influence the graph that is constructed, and\n permits static shape optimizations.\n\n Args:\n tensor: The Tensor to be evaluated.\n partial: If True, the returned numpy array is allowed to have partially\n evaluated values. Values that can't be evaluated will be None.\n\n Returns:\n A numpy ndarray containing the constant value of the given `tensor`,\n or None if it cannot be calculated.\n\n Raises:\n TypeError: if tensor is not an ops.Tensor.\n ", "desc": "Returns the constant value of the given tensor, if efficiently calculable.", "type": "API"}, {"name": "tf.grad_pass_through", "docs": "Creates a grad-pass-through op with the forward behavior provided in f.\n\n Use this function to wrap any op, maintaining its behavior in the forward\n pass, but replacing the original op in the backward graph with an identity.\n For example:\n\n ```python\n x = tf.Variable(1.0, name=\"x\")\n z = tf.Variable(3.0, name=\"z\")\n\n with tf.GradientTape() as tape:\n # y will evaluate to 9.0\n y = tf.grad_pass_through(x.assign)(z**2)\n # grads will evaluate to 6.0\n grads = tape.gradient(y, z)\n ```\n\n Another example is a 'differentiable' moving average approximation, where\n gradients are allowed to flow into the last value fed to the moving average,\n but the moving average is still used for the forward pass:\n\n ```python\n x = ... # Some scalar value\n # A moving average object, we don't need to know how this is implemented\n moving_average = MovingAverage()\n with backprop.GradientTape() as tape:\n # mavg_x will evaluate to the current running average value\n mavg_x = tf.grad_pass_through(moving_average)(x)\n grads = tape.gradient(mavg_x, x) # grads will evaluate to 1.0\n ```\n\n Args:\n f: function `f(*x)` that returns a `Tensor` or nested structure of `Tensor`\n outputs.\n\n Returns:\n A function `h(x)` which returns the same values as `f(x)` and whose\n gradients are the same as those of an identity function.\n ", "desc": "Creates a grad-pass-through op with the forward behavior provided in f.", "type": "API"}, {"name": "tf.gradients", "docs": "Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`.\n\n `tf.gradients` is only valid in a graph context. In particular,\n it is valid in the context of a `tf.function` wrapper, where code\n is executing as a graph.\n\n `ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`\n is a list of `Tensor`, holding the gradients received by the\n `ys`. The list must be the same length as `ys`.\n\n `gradients()` adds ops to the graph to output the derivatives of `ys` with\n respect to `xs`. It returns a list of `Tensor` of length `len(xs)` where\n each tensor is the `sum(dy/dx)` for y in `ys` and for x in `xs`.\n\n `grad_ys` is a list of tensors of the same length as `ys` that holds\n the initial gradients for each y in `ys`. When `grad_ys` is None,\n we fill in a tensor of '1's of the shape of y for each y in `ys`. A\n user can provide their own initial `grad_ys` to compute the\n derivatives using a different initial gradient for each y (e.g., if\n one wanted to weight the gradient differently for each value in\n each y).\n\n `stop_gradients` is a `Tensor` or a list of tensors to be considered constant\n with respect to all `xs`. These tensors will not be backpropagated through,\n as though they had been explicitly disconnected using `stop_gradient`. Among\n other things, this allows computation of partial derivatives as opposed to\n total derivatives. For example:\n\n >>> @tf.function\n ... def example():\n ... a = tf.constant(0.)\n ... b = 2 * a\n ... return tf.gradients(a + b, [a, b], stop_gradients=[a, b])\n >>> example()\n [,\n ]\n\n Here the partial derivatives `g` evaluate to `[1.0, 1.0]`, compared to the\n total derivatives `tf.gradients(a + b, [a, b])`, which take into account the\n influence of `a` on `b` and evaluate to `[3.0, 1.0]`. Note that the above is\n equivalent to:\n\n >>> @tf.function\n ... def example():\n ... a = tf.stop_gradient(tf.constant(0.))\n ... b = tf.stop_gradient(2 * a)\n ... return tf.gradients(a + b, [a, b])\n >>> example()\n [,\n ]\n\n `stop_gradients` provides a way of stopping gradient after the graph has\n already been constructed, as compared to `tf.stop_gradient` which is used\n during graph construction. When the two approaches are combined,\n backpropagation stops at both `tf.stop_gradient` nodes and nodes in\n `stop_gradients`, whichever is encountered first.\n\n All integer tensors are considered constant with respect to all `xs`, as if\n they were included in `stop_gradients`.\n\n `unconnected_gradients` determines the value returned for each x in xs if it\n is unconnected in the graph to ys. By default this is None to safeguard\n against errors. Mathematically these gradients are zero which can be requested\n using the `'zero'` option. `tf.UnconnectedGradients` provides the\n following options and behaviors:\n\n >>> @tf.function\n ... def example(use_zero):\n ... a = tf.ones([1, 2])\n ... b = tf.ones([3, 1])\n ... if use_zero:\n ... return tf.gradients([b], [a], unconnected_gradients='zero')\n ... else:\n ... return tf.gradients([b], [a], unconnected_gradients='none')\n >>> example(False)\n [None]\n >>> example(True)\n []\n\n Let us take one practical example which comes during the back propogation\n phase. This function is used to evaluate the derivatives of the cost function\n with respect to Weights `Ws` and Biases `bs`. Below sample implementation\n provides the exaplantion of what it is actually used for :\n\n >>> @tf.function\n ... def example():\n ... Ws = tf.constant(0.)\n ... bs = 2 * Ws\n ... cost = Ws + bs # This is just an example. Please ignore the formulas.\n ... g = tf.gradients(cost, [Ws, bs])\n ... dCost_dW, dCost_db = g\n ... return dCost_dW, dCost_db\n >>> example()\n (,\n )\n\n Args:\n ys: A `Tensor` or list of tensors to be differentiated.\n xs: A `Tensor` or list of tensors to be used for differentiation.\n grad_ys: Optional. A `Tensor` or list of tensors the same size as\n `ys` and holding the gradients computed for each y in `ys`.\n name: Optional name to use for grouping all the gradient ops together.\n defaults to 'gradients'.\n gate_gradients: If True, add a tuple around the gradients returned\n for an operations. This avoids some race conditions.\n aggregation_method: Specifies the method used to combine gradient terms.\n Accepted values are constants defined in the class `AggregationMethod`.\n stop_gradients: Optional. A `Tensor` or list of tensors not to differentiate\n through.\n unconnected_gradients: Optional. Specifies the gradient value returned when\n the given input tensors are unconnected. Accepted values are constants\n defined in the class `tf.UnconnectedGradients` and the default value is\n `none`.\n\n Returns:\n A list of `Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`\n for y in `ys` and for x in `xs`.\n\n Raises:\n LookupError: if one of the operations between `x` and `y` does not\n have a registered gradient function.\n ValueError: if the arguments are invalid.\n RuntimeError: if called in Eager mode.\n\n ", "desc": "Constructs symbolic derivatives of sum of `ys` w.r.t. x in `xs`.", "type": "API"}, {"name": "tf.GradientTape", "docs": "Record operations for automatic differentiation.\n\n Operations are recorded if they are executed within this context manager and\n at least one of their inputs is being \"watched\".\n\n Trainable variables (created by `tf.Variable` or `tf.compat.v1.get_variable`,\n where `trainable=True` is default in both cases) are automatically watched.\n Tensors can be manually watched by invoking the `watch` method on this context\n manager.\n\n For example, consider the function `y = x * x`. The gradient at `x = 3.0` can\n be computed as:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = x * x\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n GradientTapes can be nested to compute higher-order derivatives. For example,\n\n >>> x = tf.constant(5.0)\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... with tf.GradientTape() as gg:\n ... gg.watch(x)\n ... y = x * x\n ... dy_dx = gg.gradient(y, x) # dy_dx = 2 * x\n >>> d2y_dx2 = g.gradient(dy_dx, x) # d2y_dx2 = 2\n >>> print(dy_dx)\n tf.Tensor(10.0, shape=(), dtype=float32)\n >>> print(d2y_dx2)\n tf.Tensor(2.0, shape=(), dtype=float32)\n\n By default, the resources held by a GradientTape are released as soon as\n GradientTape.gradient() method is called. To compute multiple gradients over\n the same computation, create a persistent gradient tape. This allows multiple\n calls to the gradient() method as resources are released when the tape object\n is garbage collected. For example:\n\n >>> x = tf.constant(3.0)\n >>> with tf.GradientTape(persistent=True) as g:\n ... g.watch(x)\n ... y = x * x\n ... z = y * y\n >>> dz_dx = g.gradient(z, x) # (4*x^3 at x = 3)\n >>> print(dz_dx)\n tf.Tensor(108.0, shape=(), dtype=float32)\n >>> dy_dx = g.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(6.0, shape=(), dtype=float32)\n\n By default GradientTape will automatically watch any trainable variables that\n are accessed inside the context. If you want fine grained control over which\n variables are watched you can disable automatic tracking by passing\n `watch_accessed_variables=False` to the tape constructor:\n\n >>> x = tf.Variable(2.0)\n >>> w = tf.Variable(5.0)\n >>> with tf.GradientTape(\n ... watch_accessed_variables=False, persistent=True) as tape:\n ... tape.watch(x)\n ... y = x ** 2 # Gradients will be available for `x`.\n ... z = w ** 3 # No gradients will be available as `w` isn't being watched.\n >>> dy_dx = tape.gradient(y, x)\n >>> print(dy_dx)\n tf.Tensor(4.0, shape=(), dtype=float32)\n >>> # No gradients will be available as `w` isn't being watched.\n >>> dz_dw = tape.gradient(z, w)\n >>> print(dz_dw)\n None\n\n Note that when using models you should ensure that your variables exist when\n using `watch_accessed_variables=False`. Otherwise it's quite easy to make your\n first iteration not have any gradients:\n\n ```python\n a = tf.keras.layers.Dense(32)\n b = tf.keras.layers.Dense(32)\n\n with tf.GradientTape(watch_accessed_variables=False) as tape:\n tape.watch(a.variables) # Since `a.build` has not been called at this point\n # `a.variables` will return an empty list and the\n # tape will not be watching anything.\n result = b(a(inputs))\n tape.gradient(result, a.variables) # The result of this computation will be\n # a list of `None`s since a's variables\n # are not being watched.\n ```\n\n Note that only tensors with real or complex dtypes are differentiable.\n ", "desc": "Record operations for automatic differentiation.", "type": "API"}, {"name": "tf.Graph", "docs": "A TensorFlow computation, represented as a dataflow graph.\n\n Graphs are used by `tf.function`s to represent the function's computations.\n Each graph contains a set of `tf.Operation` objects, which represent units of\n computation; and `tf.Tensor` objects, which represent the units of data that\n flow between operations.\n\n ### Using graphs directly (deprecated)\n\n A `tf.Graph` can be constructed and used directly without a `tf.function`, as\n was required in TensorFlow 1, but this is deprecated and it is recommended to\n use a `tf.function` instead. If a graph is directly used, other deprecated\n TensorFlow 1 classes are also required to execute the graph, such as a\n `tf.compat.v1.Session`.\n\n A default graph can be registered with the `tf.Graph.as_default` context\n manager. Then, operations will be added to the graph instead of being executed\n eagerly. For example:\n\n ```python\n g = tf.Graph()\n with g.as_default():\n # Define operations and tensors in `g`.\n c = tf.constant(30.0)\n assert c.graph is g\n ```\n\n `tf.compat.v1.get_default_graph()` can be used to obtain the default graph.\n\n Important note: This class *is not* thread-safe for graph construction. All\n operations should be created from a single thread, or external\n synchronization must be provided. Unless otherwise specified, all methods\n are not thread-safe.\n\n A `Graph` instance supports an arbitrary number of \"collections\"\n that are identified by name. For convenience when building a large\n graph, collections can store groups of related objects: for\n example, the `tf.Variable` uses a collection (named\n `tf.GraphKeys.GLOBAL_VARIABLES`) for\n all variables that are created during the construction of a graph. The caller\n may define additional collections by specifying a new name.\n ", "desc": "A TensorFlow computation, represented as a dataflow graph.", "type": "API"}, {"name": "tf.graph_util", "docs": "Helpers to manipulate a tensor graph in python.\n\n", "desc": "Helpers to manipulate a tensor graph in python.", "type": "API"}, {"name": "tf.graph_util.import_graph_def", "docs": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(op_dict)`. They will be removed in a future version.\nInstructions for updating:\nPlease file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.\n\nThis function provides a way to import a serialized TensorFlow\n[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)\nprotocol buffer, and extract individual objects in the `GraphDef` as\n`tf.Tensor` and `tf.Operation` objects. Once extracted,\nthese objects are placed into the current default `Graph`. See\n`tf.Graph.as_graph_def` for a way to create a `GraphDef`\nproto.\n\nArgs:\n graph_def: A `GraphDef` proto containing operations to be imported into\n the default graph.\n input_map: A dictionary mapping input names (as strings) in `graph_def`\n to `Tensor` objects. The values of the named input tensors in the\n imported graph will be re-mapped to the respective `Tensor` values.\n return_elements: A list of strings containing operation names in\n `graph_def` that will be returned as `Operation` objects; and/or\n tensor names in `graph_def` that will be returned as `Tensor` objects.\n name: (Optional.) A prefix that will be prepended to the names in\n `graph_def`. Note that this does not apply to imported function names.\n Defaults to `\"import\"`.\n op_dict: (Optional.) Deprecated, do not use.\n producer_op_list: (Optional.) An `OpList` proto with the (possibly stripped)\n list of `OpDef`s used by the producer of the graph. If provided,\n unrecognized attrs for ops in `graph_def` that have their default value\n according to `producer_op_list` will be removed. This will allow some more\n `GraphDef`s produced by later binaries to be accepted by earlier binaries.\n\nReturns:\n A list of `Operation` and/or `Tensor` objects from the imported graph,\n corresponding to the names in `return_elements`,\n and None if `returns_elements` is None.\n\nRaises:\n TypeError: If `graph_def` is not a `GraphDef` proto,\n `input_map` is not a dictionary mapping strings to `Tensor` objects,\n or `return_elements` is not a list of strings.\n ValueError: If `input_map`, or `return_elements` contains names that\n do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.\n it refers to an unknown tensor).", "desc": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)", "type": "API"}, {"name": "tf.greater", "docs": "Returns the truth value of (x > y) element-wise.\n\n *NOTE*: `math.greater` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 2, 5])\n tf.math.greater(x, y) ==> [False, True, True]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.greater(x, y) ==> [False, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x > y) element-wise.", "type": "API"}, {"name": "tf.greater_equal", "docs": "Returns the truth value of (x >= y) element-wise.\n\n *NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5, 2, 5, 10])\n tf.math.greater_equal(x, y) ==> [True, True, True, False]\n\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5])\n tf.math.greater_equal(x, y) ==> [True, False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x >= y) element-wise.", "type": "API"}, {"name": "tf.group", "docs": "Create an op that groups multiple operations.\n\n When this op finishes, all ops in `inputs` have finished. This op has no\n output.\n\n Note: *In TensorFlow 2 with eager and/or Autograph, you should not require\n this method, as ops execute in the expected order thanks to automatic control\n dependencies.* Only use `tf.group` when working with v1\n `tf.Graph` code.\n\n When operating in a v1-style graph context, ops are not executed in the same\n order as specified in the code; TensorFlow will attempt to execute ops in\n parallel or in an order convenient to the result it is computing. `tf.group`\n allows you to request that one or more results finish before execution\n continues.\n\n `tf.group` creates a single op (of type `NoOp`), and then adds appropriate\n control dependencies. Thus, `c = tf.group(a, b)` will compute the same graph\n as this:\n\n with tf.control_dependencies([a, b]):\n c = tf.no_op()\n\n See also `tf.tuple` and\n `tf.control_dependencies`.\n\n Args:\n *inputs: Zero or more tensors to group.\n name: A name for this operation (optional).\n\n Returns:\n An Operation that executes all its inputs.\n\n Raises:\n ValueError: If an unknown keyword argument is provided.\n ", "desc": "Create an op that groups multiple operations.", "type": "API"}, {"name": "tf.guarantee_const", "docs": "Promise to the TF runtime that the input tensor is a constant. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nNot for public use.\n\nThe runtime is then free to make optimizations based on this.\n\nReturns the input tensor without modification.\n\nArgs:\n input: A `Tensor`.\n name: A name for this operation.\n\nReturns:\n A `Tensor`. Has the same dtype as `input`.", "desc": "Promise to the TF runtime that the input tensor is a constant. (deprecated)", "type": "API"}, {"name": "tf.hessians", "docs": "Constructs the Hessian of sum of `ys` with respect to `x` in `xs`.\n\n `hessians()` adds ops to the graph to output the Hessian matrix of `ys`\n with respect to `xs`. It returns a list of `Tensor` of length `len(xs)`\n where each tensor is the Hessian of `sum(ys)`.\n\n The Hessian is a matrix of second-order partial derivatives of a scalar\n tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).\n\n Args:\n ys: A `Tensor` or list of tensors to be differentiated.\n xs: A `Tensor` or list of tensors to be used for differentiation.\n gate_gradients: See `gradients()` documentation for details.\n aggregation_method: See `gradients()` documentation for details.\n name: Optional name to use for grouping all the gradient ops together.\n defaults to 'hessians'.\n\n Returns:\n A list of Hessian matrices of `sum(ys)` for each `x` in `xs`.\n\n Raises:\n LookupError: if one of the operations between `xs` and `ys` does not\n have a registered gradient function.\n ", "desc": "Constructs the Hessian of sum of `ys` with respect to `x` in `xs`.", "type": "API"}, {"name": "tf.histogram_fixed_width", "docs": "Return histogram of values.\n\n Given the tensor `values`, this operation returns a rank 1 histogram counting\n the number of entries in `values` that fell into every bin. The bins are\n equal width and determined by the arguments `value_range` and `nbins`.\n\n Args:\n values: Numeric `Tensor`.\n value_range: Shape [2] `Tensor` of same `dtype` as `values`.\n values <= value_range[0] will be mapped to hist[0],\n values >= value_range[1] will be mapped to hist[-1].\n nbins: Scalar `int32 Tensor`. Number of histogram bins.\n dtype: dtype for returned histogram.\n name: A name for this operation (defaults to 'histogram_fixed_width').\n\n Returns:\n A 1-D `Tensor` holding histogram of values.\n\n Raises:\n TypeError: If any unsupported dtype is provided.\n tf.errors.InvalidArgumentError: If value_range does not\n satisfy value_range[0] < value_range[1].\n\n Examples:\n\n >>> # Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)\n ...\n >>> nbins = 5\n >>> value_range = [0.0, 5.0]\n >>> new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]\n >>> hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)\n >>> hist.numpy()\n array([2, 1, 1, 0, 2], dtype=int32)\n ", "desc": "Return histogram of values.", "type": "API"}, {"name": "tf.histogram_fixed_width_bins", "docs": "Bins the given values for use in a histogram.\n\n Given the tensor `values`, this operation returns a rank 1 `Tensor`\n representing the indices of a histogram into which each element\n of `values` would be binned. The bins are equal width and\n determined by the arguments `value_range` and `nbins`.\n\n Args:\n values: Numeric `Tensor`.\n value_range: Shape [2] `Tensor` of same `dtype` as `values`.\n values <= value_range[0] will be mapped to hist[0],\n values >= value_range[1] will be mapped to hist[-1].\n nbins: Scalar `int32 Tensor`. Number of histogram bins.\n dtype: dtype for returned histogram.\n name: A name for this operation (defaults to 'histogram_fixed_width').\n\n Returns:\n A `Tensor` holding the indices of the binned values whose shape matches\n `values`.\n\n Raises:\n TypeError: If any unsupported dtype is provided.\n tf.errors.InvalidArgumentError: If value_range does not\n satisfy value_range[0] < value_range[1].\n\n Examples:\n\n >>> # Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)\n ...\n >>> nbins = 5\n >>> value_range = [0.0, 5.0]\n >>> new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]\n >>> indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5)\n >>> indices.numpy()\n array([0, 0, 1, 2, 4, 4], dtype=int32)\n ", "desc": "Bins the given values for use in a histogram.", "type": "API"}, {"name": "tf.identity", "docs": "Return a Tensor with the same shape and contents as input.\n\n The return value is not the same Tensor as the original, but contains the same\n values. This operation is fast when used on the same device.\n\n For example:\n\n >>> a = tf.constant([0.78])\n >>> a_identity = tf.identity(a)\n >>> a.numpy()\n array([0.78], dtype=float32)\n >>> a_identity.numpy()\n array([0.78], dtype=float32)\n\n Calling `tf.identity` on a variable will make a Tensor that represents the\n value of that variable at the time it is called. This is equivalent to calling\n `.read_value()`.\n\n >>> a = tf.Variable(5)\n >>> a_identity = tf.identity(a)\n >>> a.assign_add(1)\n \n >>> a.numpy()\n 6\n >>> a_identity.numpy()\n 5\n\n Args:\n input: A `Tensor`, a `Variable`, a `CompositeTensor` or anything that can be\n converted to a tensor using `tf.convert_to_tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or CompositeTensor. Has the same type and contents as `input`.\n ", "desc": "Return a Tensor with the same shape and contents as input.", "type": "API"}, {"name": "tf.identity_n", "docs": "Returns a list of tensors with the same shapes and contents as the input\n\n tensors.\n\n This op can be used to override the gradient for complicated functions. For\n example, suppose y = f(x) and we wish to apply a custom function g for backprop\n such that dx = g(dy). In Python,\n\n ```python\n with tf.get_default_graph().gradient_override_map(\n {'IdentityN': 'OverrideGradientWithG'}):\n y, _ = identity_n([f(x), x])\n\n @tf.RegisterGradient('OverrideGradientWithG')\n def ApplyG(op, dy, _):\n return [None, g(dy)] # Do not backprop to f(x).\n ```\n\n Args:\n input: A list of `Tensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "Returns a list of tensors with the same shapes and contents as the input", "type": "API"}, {"name": "tf.image", "docs": "Image ops.\n\nThe `tf.image` module contains various functions for image\nprocessing and decoding-encoding Ops.\n\nMany of the encoding/decoding functions are also available in the\ncore `tf.io` module.\n\n## Image processing\n\n### Resizing\n\nThe resizing Ops accept input images as tensors of several types. They always\noutput resized images as float32 tensors.\n\nThe convenience function `tf.image.resize` supports both 4-D\nand 3-D tensors as input and output. 4-D tensors are for batches of images,\n3-D tensors for individual images.\n\nResized images will be distorted if their original aspect ratio is not the\nsame as size. To avoid distortions see tf.image.resize_with_pad.\n\n* `tf.image.resize`\n* `tf.image.resize_with_pad`\n* `tf.image.resize_with_crop_or_pad`\n\nThe Class `tf.image.ResizeMethod` provides various resize methods like\n`bilinear`, `nearest_neighbor`.\n\n### Converting Between Colorspaces\n\nImage ops work either on individual images or on batches of images, depending on\nthe shape of their input Tensor.\n\nIf 3-D, the shape is `[height, width, channels]`, and the Tensor represents one\nimage. If 4-D, the shape is `[batch_size, height, width, channels]`, and the\nTensor represents `batch_size` images.\n\nCurrently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are\ngrayscale, images with 3 channels are encoded as either RGB or HSV. Images\nwith 2 or 4 channels include an alpha channel, which has to be stripped from the\nimage before passing the image to most image processing functions (and can be\nre-attached later).\n\nInternally, images are either stored in as one `float32` per channel per pixel\n(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel\nper pixel (values are assumed to lie in `[0,255]`).\n\nTensorFlow can convert between images in RGB or HSV or YIQ.\n\n* `tf.image.rgb_to_grayscale`, `tf.image.grayscale_to_rgb`\n* `tf.image.rgb_to_hsv`, `tf.image.hsv_to_rgb`\n* `tf.image.rgb_to_yiq`, `tf.image.yiq_to_rgb`\n* `tf.image.rgb_to_yuv`, `tf.image.yuv_to_rgb`\n* `tf.image.image_gradients`\n* `tf.image.convert_image_dtype`\n\n### Image Adjustments\n\nTensorFlow provides functions to adjust images in various ways: brightness,\ncontrast, hue, and saturation. Each adjustment can be done with predefined\nparameters or with random parameters picked from predefined intervals. Random\nadjustments are often useful to expand a training set and reduce overfitting.\n\nIf several adjustments are chained it is advisable to minimize the number of\nredundant conversions by first converting the images to the most natural data\ntype and representation.\n\n* `tf.image.adjust_brightness`\n* `tf.image.adjust_contrast`\n* `tf.image.adjust_gamma`\n* `tf.image.adjust_hue`\n* `tf.image.adjust_jpeg_quality`\n* `tf.image.adjust_saturation`\n* `tf.image.random_brightness`\n* `tf.image.random_contrast`\n* `tf.image.random_hue`\n* `tf.image.random_saturation`\n* `tf.image.per_image_standardization`\n\n### Working with Bounding Boxes\n\n* `tf.image.draw_bounding_boxes`\n* `tf.image.combined_non_max_suppression`\n* `tf.image.generate_bounding_box_proposals`\n* `tf.image.non_max_suppression`\n* `tf.image.non_max_suppression_overlaps`\n* `tf.image.non_max_suppression_padded`\n* `tf.image.non_max_suppression_with_scores`\n* `tf.image.pad_to_bounding_box`\n* `tf.image.sample_distorted_bounding_box`\n\n### Cropping\n\n* `tf.image.central_crop`\n* `tf.image.crop_and_resize`\n* `tf.image.crop_to_bounding_box`\n* `tf.io.decode_and_crop_jpeg`\n* `tf.image.extract_glimpse`\n* `tf.image.random_crop`\n* `tf.image.resize_with_crop_or_pad`\n\n### Flipping, Rotating and Transposing\n\n* `tf.image.flip_left_right`\n* `tf.image.flip_up_down`\n* `tf.image.random_flip_left_right`\n* `tf.image.random_flip_up_down`\n* `tf.image.rot90`\n* `tf.image.transpose`\n\n## Image decoding and encoding\n\nTensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded\nimages are represented by scalar string Tensors, decoded images by 3-D uint8\ntensors of shape `[height, width, channels]`. (PNG also supports uint16.)\n\nNote: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`\n\nThe encode and decode Ops apply to one image at a time. Their input and output\nare all of variable size. If you need fixed size images, pass the output of\nthe decode Ops to one of the cropping and resizing Ops.\n\n* `tf.io.decode_bmp`\n* `tf.io.decode_gif`\n* `tf.io.decode_image`\n* `tf.io.decode_jpeg`\n* `tf.io.decode_and_crop_jpeg`\n* `tf.io.decode_png`\n* `tf.io.encode_jpeg`\n* `tf.io.encode_png`\n\n\n", "desc": "Image ops.", "type": "API"}, {"name": "tf.image.adjust_brightness", "docs": "Adjust the brightness of RGB or Grayscale images.\n\n This is a convenience method that converts RGB images to float\n representation, adjusts their brightness, and then converts them back to the\n original data type. If several adjustments are chained, it is advisable to\n minimize the number of redundant conversions.\n\n The value `delta` is added to all components of the tensor `image`. `image` is\n converted to `float` and scaled appropriately if it is in fixed-point\n representation, and `delta` is converted to the same data type. For regular\n images, `delta` should be in the range `(-1,1)`, as it is added to the image\n in floating point representation, where pixel values are in the `[0,1)` range.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_brightness(x, delta=0.1)\n \n\n Args:\n image: RGB image or images to adjust.\n delta: A scalar. Amount to add to the pixel values.\n\n Returns:\n A brightness-adjusted tensor of the same shape and type as `image`.\n ", "desc": "Adjust the brightness of RGB or Grayscale images.", "type": "API"}, {"name": "tf.image.adjust_contrast", "docs": "Adjust contrast of RGB or grayscale images.\n\n This is a convenience method that converts RGB images to float\n representation, adjusts their contrast, and then converts them back to the\n original data type. If several adjustments are chained, it is advisable to\n minimize the number of redundant conversions.\n\n `images` is a tensor of at least 3 dimensions. The last 3 dimensions are\n interpreted as `[height, width, channels]`. The other dimensions only\n represent a collection of images, such as `[batch, height, width, channels].`\n\n Contrast is adjusted independently for each channel of each image.\n\n For each channel, this Op computes the mean of the image pixels in the\n channel and then adjusts each component `x` of each pixel to\n `(x - mean) * contrast_factor + mean`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_contrast(x, 2.)\n \n\n Args:\n images: Images to adjust. At least 3-D.\n contrast_factor: A float multiplier for adjusting contrast.\n\n Returns:\n The contrast-adjusted image or images.\n ", "desc": "Adjust contrast of RGB or grayscale images.", "type": "API"}, {"name": "tf.image.adjust_gamma", "docs": "Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction).\n\n on the input image.\n\n Also known as Power Law Transform. This function converts the\n input images at first to float representation, then transforms them\n pixelwise according to the equation `Out = gain * In**gamma`,\n and then converts the back to the original data type.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_gamma(x, 0.2)\n \n\n Args:\n image : RGB image or images to adjust.\n gamma : A scalar or tensor. Non-negative real number.\n gain : A scalar or tensor. The constant multiplier.\n\n Returns:\n A Tensor. A Gamma-adjusted tensor of the same shape and type as `image`.\n\n Raises:\n ValueError: If gamma is negative.\n Notes:\n For gamma greater than 1, the histogram will shift towards left and\n the output image will be darker than the input image.\n For gamma less than 1, the histogram will shift towards right and\n the output image will be brighter than the input image.\n References:\n [Wikipedia](http://en.wikipedia.org/wiki/Gamma_correction)\n ", "desc": "Performs [Gamma Correction](http://en.wikipedia.org/wiki/Gamma_correction).", "type": "API"}, {"name": "tf.image.adjust_hue", "docs": "Adjust hue of RGB images.\n\n This is a convenience method that converts an RGB image to float\n representation, converts it to HSV, adds an offset to the\n hue channel, converts back to RGB and then back to the original\n data type. If several adjustments are chained it is advisable to minimize\n the number of redundant conversions.\n\n `image` is an RGB image. The image hue is adjusted by converting the\n image(s) to HSV and rotating the hue channel (H) by\n `delta`. The image is then converted back to RGB.\n\n `delta` must be in the interval `[-1, 1]`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_hue(x, 0.2)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n delta: float. How much to add to the hue channel.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: image must have at least 3 dimensions.\n InvalidArgumentError: The size of the last dimension must be 3.\n ValueError: if `delta` is not in the interval of `[-1, 1]`.\n\n Usage Example:\n\n >>> image = [[[1, 2, 3], [4, 5, 6]],\n ... [[7, 8, 9], [10, 11, 12]],\n ... [[13, 14, 15], [16, 17, 18]]]\n >>> image = tf.constant(image)\n >>> tf.image.adjust_hue(image, 0.2)\n \n ", "desc": "Adjust hue of RGB images.", "type": "API"}, {"name": "tf.image.adjust_jpeg_quality", "docs": "Adjust jpeg encoding quality of an image.\n\n This is a convenience method that converts an image to uint8 representation,\n encodes it to jpeg with `jpeg_quality`, decodes it, and then converts back\n to the original data type.\n\n `jpeg_quality` must be in the interval `[0, 100]`.\n\n Usage Examples:\n\n >>> x = [[[0.01, 0.02, 0.03],\n ... [0.04, 0.05, 0.06]],\n ... [[0.07, 0.08, 0.09],\n ... [0.10, 0.11, 0.12]]]\n >>> x_jpeg = tf.image.adjust_jpeg_quality(x, 75)\n >>> x_jpeg.numpy()\n array([[[0.00392157, 0.01960784, 0.03137255],\n [0.02745098, 0.04313726, 0.05490196]],\n [[0.05882353, 0.07450981, 0.08627451],\n [0.08235294, 0.09803922, 0.10980393]]], dtype=float32)\n\n Note that floating point values are expected to have values in the range\n [0,1) and values outside this range are clipped.\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_jpeg_quality(x, 75)\n \n\n Note that `jpeg_quality` 100 is still lossy compresson.\n\n >>> x = tf.constant([[[1, 2, 3],\n ... [4, 5, 6]],\n ... [[7, 8, 9],\n ... [10, 11, 12]]], dtype=tf.uint8)\n >>> tf.image.adjust_jpeg_quality(x, 100)\n \n\n Args:\n image: 3D image. The size of the last dimension must be None, 1 or 3.\n jpeg_quality: Python int or Tensor of type int32. jpeg encoding quality.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image, same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: quality must be in [0,100]\n InvalidArgumentError: image must have 1 or 3 channels\n ", "desc": "Adjust jpeg encoding quality of an image.", "type": "API"}, {"name": "tf.image.adjust_saturation", "docs": "Adjust saturation of RGB images.\n\n This is a convenience method that converts RGB images to float\n representation, converts them to HSV, adds an offset to the\n saturation channel, converts back to RGB and then back to the original\n data type. If several adjustments are chained it is advisable to minimize\n the number of redundant conversions.\n\n `image` is an RGB image or images. The image saturation is adjusted by\n converting the images to HSV and multiplying the saturation (S) channel by\n `saturation_factor` and clipping. The images are then converted back to RGB.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.adjust_saturation(x, 0.5)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n saturation_factor: float. Factor to multiply the saturation by.\n name: A name for this operation (optional).\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n InvalidArgumentError: input must have 3 channels\n ", "desc": "Adjust saturation of RGB images.", "type": "API"}, {"name": "tf.image.central_crop", "docs": "Crop the central region of the image(s).\n\n Remove the outer parts of an image but retain the central region of the image\n along each dimension. If we specify central_fraction = 0.5, this function\n returns the region marked with \"X\" in the below diagram.\n\n --------\n | |\n | XXXX |\n | XXXX |\n | | where \"X\" is the central 50% of the image.\n --------\n\n This function works on either a single image (`image` is a 3-D Tensor), or a\n batch of images (`image` is a 4-D Tensor).\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0],\n ... [7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]],\n ... [[13.0, 14.0, 15.0],\n ... [16.0, 17.0, 18.0],\n ... [19.0, 20.0, 21.0],\n ... [22.0, 23.0, 24.0]],\n ... [[25.0, 26.0, 27.0],\n ... [28.0, 29.0, 30.0],\n ... [31.0, 32.0, 33.0],\n ... [34.0, 35.0, 36.0]],\n ... [[37.0, 38.0, 39.0],\n ... [40.0, 41.0, 42.0],\n ... [43.0, 44.0, 45.0],\n ... [46.0, 47.0, 48.0]]]\n >>> tf.image.central_crop(x, 0.5)\n \n\n Args:\n image: Either a 3-D float Tensor of shape [height, width, depth], or a 4-D\n Tensor of shape [batch_size, height, width, depth].\n central_fraction: float (0, 1], fraction of size to crop\n\n Raises:\n ValueError: if central_crop_fraction is not within (0, 1].\n\n Returns:\n 3-D / 4-D float Tensor, as per the input.\n ", "desc": "Crop the central region of the image(s).", "type": "API"}, {"name": "tf.image.combined_non_max_suppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n This operation performs non_max_suppression on the inputs per batch, across\n all classes.\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Also note that\n this algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is the final boxes, scores and classes tensor\n returned after performing non_max_suppression.\n\n Args:\n boxes: A 4-D float `Tensor` of shape `[batch_size, num_boxes, q, 4]`. If `q`\n is 1 then same boxes are used for all classes otherwise, if `q` is equal\n to number of classes, class-specific boxes are used.\n scores: A 3-D float `Tensor` of shape `[batch_size, num_boxes, num_classes]`\n representing a single score corresponding to each box (each row of boxes).\n max_output_size_per_class: A scalar integer `Tensor` representing the\n maximum number of boxes to be selected by non-max suppression per class\n max_total_size: A int32 scalar representing maximum number of boxes retained\n over all classes. Note that setting this value to a large number may\n result in OOM error depending on the system workload.\n iou_threshold: A float representing the threshold for deciding whether boxes\n overlap too much with respect to IOU.\n score_threshold: A float representing the threshold for deciding when to\n remove boxes based on score.\n pad_per_class: If false, the output nmsed boxes, scores and classes are\n padded/clipped to `max_total_size`. If true, the output nmsed boxes,\n scores and classes are padded to be of length\n `max_size_per_class`*`num_classes`, unless it exceeds `max_total_size` in\n which case it is clipped to `max_total_size`. Defaults to false.\n clip_boxes: If true, the coordinates of output nmsed boxes will be clipped\n to [0, 1]. If false, output the box coordinates as it is. Defaults to\n true.\n name: A name for the operation (optional).\n\n Returns:\n 'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor\n containing the non-max suppressed boxes.\n 'nmsed_scores': A [batch_size, max_detections] float32 tensor containing\n the scores for the boxes.\n 'nmsed_classes': A [batch_size, max_detections] float32 tensor\n containing the class for boxes.\n 'valid_detections': A [batch_size] int32 tensor indicating the number of\n valid detections per batch item. Only the top valid_detections[i] entries\n in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the\n entries are zero paddings.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.image.convert_image_dtype", "docs": "Convert `image` to `dtype`, scaling its values if needed.\n\n The operation supports data types (for `image` and `dtype`) of\n `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`,\n `float16`, `float32`, `float64`, `bfloat16`.\n\n Images that are represented using floating point values are expected to have\n values in the range [0,1). Image data stored in integer data types are\n expected to have values in the range `[0,MAX]`, where `MAX` is the largest\n positive representable number for the data type.\n\n This op converts between data types, scaling the values appropriately before\n casting.\n\n Usage Example:\n\n >>> x = [[[1, 2, 3], [4, 5, 6]],\n ... [[7, 8, 9], [10, 11, 12]]]\n >>> x_int8 = tf.convert_to_tensor(x, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(x_int8, dtype=tf.float16, saturate=False)\n \n\n Converting integer types to floating point types returns normalized floating\n point values in the range [0, 1); the values are normalized by the `MAX` value\n of the input dtype. Consider the following two examples:\n\n >>> a = [[[1], [2]], [[3], [4]]]\n >>> a_int8 = tf.convert_to_tensor(a, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(a_int8, dtype=tf.float32)\n \n\n >>> a_int32 = tf.convert_to_tensor(a, dtype=tf.int32)\n >>> tf.image.convert_image_dtype(a_int32, dtype=tf.float32)\n \n\n Despite having identical values of `a` and output dtype of `float32`, the\n outputs differ due to the different input dtypes (`int8` vs. `int32`). This\n is, again, because the values are normalized by the `MAX` value of the input\n dtype.\n\n Note that converting floating point values to integer type may lose precision.\n In the example below, an image tensor `b` of dtype `float32` is converted to\n `int8` and back to `float32`. The final output, however, is different from\n the original input `b` due to precision loss.\n\n >>> b = [[[0.12], [0.34]], [[0.56], [0.78]]]\n >>> b_float32 = tf.convert_to_tensor(b, dtype=tf.float32)\n >>> b_int8 = tf.image.convert_image_dtype(b_float32, dtype=tf.int8)\n >>> tf.image.convert_image_dtype(b_int8, dtype=tf.float32)\n \n\n Scaling up from an integer type (input dtype) to another integer type (output\n dtype) will not map input dtype's `MAX` to output dtype's `MAX` but converting\n back and forth should result in no change. For example, as shown below, the\n `MAX` value of int8 (=127) is not mapped to the `MAX` value of int16 (=32,767)\n but, when scaled back, we get the same, original values of `c`.\n\n >>> c = [[[1], [2]], [[127], [127]]]\n >>> c_int8 = tf.convert_to_tensor(c, dtype=tf.int8)\n >>> c_int16 = tf.image.convert_image_dtype(c_int8, dtype=tf.int16)\n >>> print(c_int16)\n tf.Tensor(\n [[[ 256]\n [ 512]]\n [[32512]\n [32512]]], shape=(2, 2, 1), dtype=int16)\n >>> c_int8_back = tf.image.convert_image_dtype(c_int16, dtype=tf.int8)\n >>> print(c_int8_back)\n tf.Tensor(\n [[[ 1]\n [ 2]]\n [[127]\n [127]]], shape=(2, 2, 1), dtype=int8)\n\n Scaling down from an integer type to another integer type can be a lossy\n conversion. Notice in the example below that converting `int16` to `uint8` and\n back to `int16` has lost precision.\n\n >>> d = [[[1000], [2000]], [[3000], [4000]]]\n >>> d_int16 = tf.convert_to_tensor(d, dtype=tf.int16)\n >>> d_uint8 = tf.image.convert_image_dtype(d_int16, dtype=tf.uint8)\n >>> d_int16_back = tf.image.convert_image_dtype(d_uint8, dtype=tf.int16)\n >>> print(d_int16_back)\n tf.Tensor(\n [[[ 896]\n [1920]]\n [[2944]\n [3968]]], shape=(2, 2, 1), dtype=int16)\n\n Note that converting from floating point inputs to integer types may lead to\n over/underflow problems. Set saturate to `True` to avoid such problem in\n problematic conversions. If enabled, saturation will clip the output into the\n allowed range before performing a potentially dangerous cast (and only before\n performing such a cast, i.e., when casting from a floating point to an integer\n type, and when casting from a signed to an unsigned type; `saturate` has no\n effect on casts between floats, or on casts that increase the type's range).\n\n Args:\n image: An image.\n dtype: A `DType` to convert `image` to.\n saturate: If `True`, clip the input before casting (if necessary).\n name: A name for this operation (optional).\n\n Returns:\n `image`, converted to `dtype`.\n\n Raises:\n AttributeError: Raises an attribute error when dtype is neither\n float nor integer\n ", "desc": "Convert `image` to `dtype`, scaling its values if needed.", "type": "API"}, {"name": "tf.image.crop_and_resize", "docs": "Extracts crops from the input image tensor and resizes them.\n\n Extracts crops from the input image tensor and resizes them using bilinear\n sampling or nearest neighbor sampling (possibly with aspect ratio change) to a\n common output size specified by `crop_size`. This is more general than the\n `crop_to_bounding_box` op which extracts a fixed size slice from the input\n image and does not allow resizing or aspect ratio change.\n\n Returns a tensor with `crops` from the input `image` at positions defined at\n the bounding box locations in `boxes`. The cropped boxes are all resized (with\n bilinear or nearest neighbor interpolation) to a fixed\n `size = [crop_height, crop_width]`. The result is a 4-D tensor\n `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned.\n In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical\n results to using `tf.compat.v1.image.resize_bilinear()` or\n `tf.compat.v1.image.resize_nearest_neighbor()`(depends on the `method`\n argument) with\n `align_corners=True`.\n\n Args:\n image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`.\n Both `image_height` and `image_width` need to be positive.\n boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor\n specifies the coordinates of a box in the `box_ind[i]` image and is\n specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized\n coordinate value of `y` is mapped to the image coordinate at `y *\n (image_height - 1)`, so as the `[0, 1]` interval of normalized image\n height is mapped to `[0, image_height - 1]` in image height coordinates.\n We do allow `y1` > `y2`, in which case the sampled crop is an up-down\n flipped version of the original image. The width dimension is treated\n similarly. Normalized coordinates outside the `[0, 1]` range are allowed,\n in which case we use `extrapolation_value` to extrapolate the input image\n values.\n box_indices: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0,\n batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box\n refers to.\n crop_size: A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`.\n All cropped image patches are resized to this size. The aspect ratio of\n the image content is not preserved. Both `crop_height` and `crop_width`\n need to be positive.\n method: An optional string specifying the sampling method for resizing. It\n can be either `\"bilinear\"` or `\"nearest\"` and default to `\"bilinear\"`.\n Currently two sampling methods are supported: Bilinear and Nearest\n Neighbor.\n extrapolation_value: An optional `float`. Defaults to `0.0`. Value used for\n extrapolation, when applicable.\n name: A name for the operation (optional).\n\n Returns:\n A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.\n\n Example:\n\n ```python\n import tensorflow as tf\n BATCH_SIZE = 1\n NUM_BOXES = 5\n IMAGE_HEIGHT = 256\n IMAGE_WIDTH = 256\n CHANNELS = 3\n CROP_SIZE = (24, 24)\n\n image = tf.random.normal(shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH,\n CHANNELS) )\n boxes = tf.random.uniform(shape=(NUM_BOXES, 4))\n box_indices = tf.random.uniform(shape=(NUM_BOXES,), minval=0,\n maxval=BATCH_SIZE, dtype=tf.int32)\n output = tf.image.crop_and_resize(image, boxes, box_indices, CROP_SIZE)\n output.shape #=> (5, 24, 24, 3)\n ```\n ", "desc": "Extracts crops from the input image tensor and resizes them.", "type": "API"}, {"name": "tf.image.crop_to_bounding_box", "docs": "Crops an `image` to a specified bounding box.\n\n This op cuts a rectangular bounding box out of `image`. The top-left corner\n of the bounding box is at `offset_height, offset_width` in `image`, and the\n lower-right corner is at\n `offset_height + target_height, offset_width + target_width`.\n\n Example Usage:\n\n >>> image = tf.constant(np.arange(1, 28, dtype=np.float32), shape=[3, 3, 3])\n >>> image[:,:,0] # print the first channel of the 3-D tensor\n \n >>> cropped_image = tf.image.crop_to_bounding_box(image, 0, 0, 2, 2)\n >>> cropped_image[:,:,0] # print the first channel of the cropped 3-D tensor\n \n\n Args:\n image: 4-D `Tensor` of shape `[batch, height, width, channels]` or 3-D\n `Tensor` of shape `[height, width, channels]`.\n offset_height: Vertical coordinate of the top-left corner of the bounding\n box in `image`.\n offset_width: Horizontal coordinate of the top-left corner of the bounding\n box in `image`.\n target_height: Height of the bounding box.\n target_width: Width of the bounding box.\n\n Returns:\n If `image` was 4-D, a 4-D `Tensor` of shape\n `[batch, target_height, target_width, channels]`.\n If `image` was 3-D, a 3-D `Tensor` of shape\n `[target_height, target_width, channels]`.\n It has the same dtype with `image`.\n\n Raises:\n ValueError: `image` is not a 3-D or 4-D `Tensor`.\n ValueError: `offset_width < 0` or `offset_height < 0`.\n ValueError: `target_width <= 0` or `target_width <= 0`.\n ValueError: `width < offset_width + target_width` or\n `height < offset_height + target_height`.\n ", "desc": "Crops an `image` to a specified bounding box.", "type": "API"}, {"name": "tf.image.decode_and_crop_jpeg", "docs": "Decode and Crop a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n It is equivalent to a combination of decode and crop, but much faster by only\n decoding partial jpeg image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n crop_window: A `Tensor` of type `int32`.\n 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode and Crop a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.image.decode_bmp", "docs": "Decode the first frame of a BMP-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the BMP-encoded image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The BMP-encoded image.\n channels: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the first frame of a BMP-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.image.decode_gif", "docs": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.\n\n GIF images with frame or transparency compression are not supported.\n On Linux and MacOS systems, convert animated GIFs from compressed to\n uncompressed by running:\n\n convert $src.gif -coalesce $dst.gif\n\n This op also supports decoding JPEGs and PNGs, though it is cleaner to use\n `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The GIF-encoded image.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.image.decode_image", "docs": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.\n\n Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the\n appropriate operation to convert the input bytes `string` into a `Tensor`\n of type `dtype`.\n\n Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as\n opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D\n arrays `[height, width, num_channels]`. Make sure to take this into account\n when constructing your graph if you are intermixing GIF files with BMP, JPEG,\n and/or PNG files. Alternately, set the `expand_animations` argument of this\n function to `False`, in which case the op will return 3-dimensional tensors\n and will truncate animated GIF files to the first frame.\n\n NOTE: If the first frame of an animated GIF does not occupy the entire\n canvas (maximum frame width x maximum frame height), then it fills the\n unoccupied areas (in the first frame) with zeros (black). For frames after the\n first frame that does not occupy the entire canvas, it uses the previous\n frame to fill the unoccupied areas.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The encoded image bytes.\n channels: An optional `int`. Defaults to `0`. Number of color channels for\n the decoded image.\n dtype: The desired DType of the returned `Tensor`.\n name: A name for the operation (optional)\n expand_animations: An optional `bool`. Defaults to `True`. Controls the\n shape of the returned op's output. If `True`, the returned op will produce\n a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all GIFs,\n whether animated or not. If, `False`, the returned op will produce a 3-D\n tensor for all file types and will truncate animated GIFs to the first\n frame.\n\n Returns:\n `Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on\n the file type and the value of the `expand_animations` parameter.\n\n Raises:\n ValueError: On incorrect number of channels.\n ", "desc": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.", "type": "API"}, {"name": "tf.image.decode_jpeg", "docs": "Decode a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n This op also supports decoding PNGs and non-animated GIFs since the interface is\n the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.image.decode_png", "docs": "Decode a PNG-encoded image to a uint8 or uint16 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the PNG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n If needed, the PNG-encoded image is transformed to match the requested number\n of color channels.\n\n This op also supports decoding JPEGs and non-animated GIFs since the interface\n is the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The PNG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Decode a PNG-encoded image to a uint8 or uint16 tensor.", "type": "API"}, {"name": "tf.image.draw_bounding_boxes", "docs": "Draw bounding boxes on a batch of images.\n\n Outputs a copy of `images` but draws on top of the pixels zero or more\n bounding boxes specified by the locations in `boxes`. The coordinates of the\n each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`.\n The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width\n and the height of the underlying image.\n\n For example, if an image is 100 x 200 pixels (height x width) and the bounding\n box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of\n the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).\n\n Parts of the bounding box may fall outside the image.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `float32`, `half`.\n 4-D with shape `[batch, height, width, depth]`. A batch of images.\n boxes: A `Tensor` of type `float32`. 3-D with shape `[batch,\n num_bounding_boxes, 4]` containing bounding boxes.\n colors: A `Tensor` of type `float32`. 2-D. A list of RGBA colors to cycle\n through for the boxes.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n\n Usage Example:\n\n >>> # create an empty image\n >>> img = tf.zeros([1, 3, 3, 3])\n >>> # draw a box around the image\n >>> box = np.array([0, 0, 1, 1])\n >>> boxes = box.reshape([1, 1, 4])\n >>> # alternate between red and blue\n >>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])\n >>> tf.image.draw_bounding_boxes(img, boxes, colors)\n \n ", "desc": "Draw bounding boxes on a batch of images.", "type": "API"}, {"name": "tf.image.encode_jpeg", "docs": "JPEG-encode an image.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n\n The attr `format` can be used to override the color format of the encoded\n output. Values can be:\n\n * `''`: Use a default format based on the number of channels in the image.\n * `grayscale`: Output a grayscale JPEG image. The `channels` dimension\n of `image` must be 1.\n * `rgb`: Output an RGB JPEG image. The `channels` dimension\n of `image` must be 3.\n\n If `format` is not specified or is the empty string, a default format is picked\n in function of the number of channels in `image`:\n\n * 1: Output a grayscale image.\n * 3: Output an RGB image.\n\n Args:\n image: A `Tensor` of type `uint8`.\n 3-D with shape `[height, width, channels]`.\n format: An optional `string` from: `\"\", \"grayscale\", \"rgb\"`. Defaults to `\"\"`.\n Per pixel image format.\n quality: An optional `int`. Defaults to `95`.\n Quality of the compression from 0 to 100 (higher is better and slower).\n progressive: An optional `bool`. Defaults to `False`.\n If True, create a JPEG that loads progressively (coarse to fine).\n optimize_size: An optional `bool`. Defaults to `False`.\n If True, spend CPU/RAM to reduce size with no quality change.\n chroma_downsampling: An optional `bool`. Defaults to `True`.\n See http://en.wikipedia.org/wiki/Chroma_subsampling.\n density_unit: An optional `string` from: `\"in\", \"cm\"`. Defaults to `\"in\"`.\n Unit used to specify `x_density` and `y_density`:\n pixels per inch (`'in'`) or centimeter (`'cm'`).\n x_density: An optional `int`. Defaults to `300`.\n Horizontal pixels per density unit.\n y_density: An optional `int`. Defaults to `300`.\n Vertical pixels per density unit.\n xmp_metadata: An optional `string`. Defaults to `\"\"`.\n If not empty, embed this XMP metadata in the image header.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG-encode an image.", "type": "API"}, {"name": "tf.image.encode_png", "docs": "PNG-encode an image.\n\n `image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`\n where `channels` is:\n\n * 1: for grayscale.\n * 2: for grayscale + alpha.\n * 3: for RGB.\n * 4: for RGBA.\n\n The ZLIB compression level, `compression`, can be -1 for the PNG-encoder\n default or a value from 0 to 9. 9 is the highest compression level,\n generating the smallest output, but is slower.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.\n 3-D with shape `[height, width, channels]`.\n compression: An optional `int`. Defaults to `-1`. Compression level.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "PNG-encode an image.", "type": "API"}, {"name": "tf.image.extract_glimpse", "docs": "Extracts a glimpse from the input tensor.\n\n Returns a set of windows called glimpses extracted at location\n `offsets` from the input tensor. If the windows only partially\n overlaps the inputs, the non-overlapping areas will be filled with\n random noise.\n\n The result is a 4-D tensor of shape `[batch_size, glimpse_height,\n glimpse_width, channels]`. The channels and batch dimensions are the\n same as that of the input tensor. The height and width of the output\n windows are specified in the `size` parameter.\n\n The argument `normalized` and `centered` controls how the windows are built:\n\n * If the coordinates are normalized but not centered, 0.0 and 1.0\n correspond to the minimum and maximum of each height and width\n dimension.\n * If the coordinates are both normalized and centered, they range from\n -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper\n left corner, the lower right corner is located at (1.0, 1.0) and the\n center is at (0, 0).\n * If the coordinates are not normalized they are interpreted as\n numbers of pixels.\n\n Usage Example:\n\n >>> x = [[[[0.0],\n ... [1.0],\n ... [2.0]],\n ... [[3.0],\n ... [4.0],\n ... [5.0]],\n ... [[6.0],\n ... [7.0],\n ... [8.0]]]]\n >>> tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]],\n ... centered=False, normalized=False)\n \n\n Args:\n input: A `Tensor` of type `float32`. A 4-D float tensor of shape\n `[batch_size, height, width, channels]`.\n size: A `Tensor` of type `int32`. A 1-D tensor of 2 elements containing the\n size of the glimpses to extract. The glimpse height must be specified\n first, following by the glimpse width.\n offsets: A `Tensor` of type `float32`. A 2-D integer tensor of shape\n `[batch_size, 2]` containing the y, x locations of the center of each\n window.\n centered: An optional `bool`. Defaults to `True`. indicates if the offset\n coordinates are centered relative to the image, in which case the (0, 0)\n offset is relative to the center of the input images. If false, the (0,0)\n offset corresponds to the upper left corner of the input images.\n normalized: An optional `bool`. Defaults to `True`. indicates if the offset\n coordinates are normalized.\n noise: An optional `string`. Defaults to `uniform`. indicates if the noise\n should be `uniform` (uniform distribution), `gaussian` (gaussian\n distribution), or `zero` (zero padding).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts a glimpse from the input tensor.", "type": "API"}, {"name": "tf.image.extract_jpeg_shape", "docs": "Extract the shape information of a JPEG-encoded image.\n\n This op only parses the image header, so it is much faster than DecodeJpeg.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n (Optional) The output type of the operation (int32 or int64).\n Defaults to int32.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Extract the shape information of a JPEG-encoded image.", "type": "API"}, {"name": "tf.image.extract_patches", "docs": "Extract `patches` from `images`.\n\n This op collects patches from the input image, as if applying a\n convolution. All extracted patches are stacked in the depth (last) dimension\n of the output.\n\n Specifically, the op extracts patches of shape `sizes` which are `strides`\n apart in the input image. The output is subsampled using the `rates` argument,\n in the same manner as \"atrous\" or \"dilated\" convolutions.\n\n The result is a 4D tensor which is indexed by batch, row, and column.\n `output[i, x, y]` contains a flattened patch of size `sizes[1], sizes[2]`\n which is taken from the input starting at\n `images[i, x*strides[1], y*strides[2]]`.\n\n Each output patch can be reshaped to `sizes[1], sizes[2], depth`, where\n `depth` is `images.shape[3]`.\n\n The output elements are taken from the input at intervals given by the `rate`\n argument, as in dilated convolutions.\n\n The `padding` argument has no effect on the size of each patch, it determines\n how many patches are extracted. If `VALID`, only patches which are fully\n contained in the input image are included. If `SAME`, all patches whose\n starting point is inside the input are included, and areas outside the input\n default to zero.\n\n Example:\n\n ```\n n = 10\n # images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100\n images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]\n\n # We generate two outputs as follows:\n # 1. 3x3 patches with stride length 5\n # 2. Same as above, but the rate is increased to 2\n tf.image.extract_patches(images=images,\n sizes=[1, 3, 3, 1],\n strides=[1, 5, 5, 1],\n rates=[1, 1, 1, 1],\n padding='VALID')\n\n # Yields:\n [[[[ 1 2 3 11 12 13 21 22 23]\n [ 6 7 8 16 17 18 26 27 28]]\n [[51 52 53 61 62 63 71 72 73]\n [56 57 58 66 67 68 76 77 78]]]]\n ```\n\n If we mark the pixels in the input image which are taken for the output with\n `*`, we see the pattern:\n\n ```\n * * * 4 5 * * * 9 10\n * * * 14 15 * * * 19 20\n * * * 24 25 * * * 29 30\n 31 32 33 34 35 36 37 38 39 40\n 41 42 43 44 45 46 47 48 49 50\n * * * 54 55 * * * 59 60\n * * * 64 65 * * * 69 70\n * * * 74 75 * * * 79 80\n 81 82 83 84 85 86 87 88 89 90\n 91 92 93 94 95 96 97 98 99 100\n ```\n\n ```\n tf.image.extract_patches(images=images,\n sizes=[1, 3, 3, 1],\n strides=[1, 5, 5, 1],\n rates=[1, 2, 2, 1],\n padding='VALID')\n\n # Yields:\n [[[[ 1 3 5 21 23 25 41 43 45]\n [ 6 8 10 26 28 30 46 48 50]]\n\n [[ 51 53 55 71 73 75 91 93 95]\n [ 56 58 60 76 78 80 96 98 100]]]]\n ```\n\n We can again draw the effect, this time using the symbols `*`, `x`, `+` and\n `o` to distinguish the patches:\n\n ```\n * 2 * 4 * x 7 x 9 x\n 11 12 13 14 15 16 17 18 19 20\n * 22 * 24 * x 27 x 29 x\n 31 32 33 34 35 36 37 38 39 40\n * 42 * 44 * x 47 x 49 x\n + 52 + 54 + o 57 o 59 o\n 61 62 63 64 65 66 67 68 69 70\n + 72 + 74 + o 77 o 79 o\n 81 82 83 84 85 86 87 88 89 90\n + 92 + 94 + o 97 o 99 o\n ```\n\n Args:\n images: A 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.\n sizes: The size of the extracted patches. Must be\n `[1, size_rows, size_cols, 1]`.\n strides: A 1-D Tensor of length 4. How far the centers of two consecutive\n patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.\n rates: A 1-D Tensor of length 4. Must be: `[1, rate_rows, rate_cols, 1]`.\n This is the input stride, specifying how far two consecutive patch samples\n are in the input. Equivalent to extracting patches with `patch_sizes_eff =\n patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling\n them spatially by a factor of `rates`. This is equivalent to `rate` in\n dilated (a.k.a. Atrous) convolutions.\n padding: The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A 4-D Tensor of the same type as the input.\n ", "desc": "Extract `patches` from `images`.", "type": "API"}, {"name": "tf.image.flip_left_right", "docs": "Flip an image horizontally (left to right).\n\n Outputs the contents of `image` flipped along the width dimension.\n\n See also `tf.reverse`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.flip_left_right(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n\n Returns:\n A tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Flip an image horizontally (left to right).", "type": "API"}, {"name": "tf.image.flip_up_down", "docs": "Flip an image vertically (upside down).\n\n Outputs the contents of `image` flipped along the height dimension.\n\n See also `reverse()`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.flip_up_down(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n\n Returns:\n A `Tensor` of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Flip an image vertically (upside down).", "type": "API"}, {"name": "tf.image.generate_bounding_box_proposals", "docs": "Generate bounding box proposals from encoded bounding boxes.\n\n Args:\n scores: A 4-D float `Tensor` of shape\n `[num_images, height, width, num_achors]` containing scores of\n the boxes for given anchors, can be unsorted.\n bbox_deltas: A 4-D float `Tensor` of shape\n `[num_images, height, width, 4 x num_anchors]` encoding boxes\n with respect to each anchor. Coordinates are given\n in the form `[dy, dx, dh, dw]`.\n image_info: A 2-D float `Tensor` of shape `[num_images, 5]`\n containing image information Height, Width, Scale.\n anchors: A 2-D float `Tensor` of shape `[num_anchors, 4]`\n describing the anchor boxes.\n Boxes are formatted in the form `[y1, x1, y2, x2]`.\n nms_threshold: A scalar float `Tensor` for non-maximal-suppression\n threshold. Defaults to 0.7.\n pre_nms_topn: A scalar int `Tensor` for the number of\n top scoring boxes to be used as input. Defaults to 6000.\n min_size: A scalar float `Tensor`. Any box that has a smaller size\n than min_size will be discarded. Defaults to 16.\n post_nms_topn: An integer. Maximum number of rois in the output.\n name: A name for this operation (optional).\n\n Returns:\n rois: Region of interest boxes sorted by their scores.\n roi_probabilities: scores of the ROI boxes in the ROIs' `Tensor`.\n ", "desc": "Generate bounding box proposals from encoded bounding boxes.", "type": "API"}, {"name": "tf.image.grayscale_to_rgb", "docs": "Converts one or more images from Grayscale to RGB.\n\n Outputs a tensor of the same `DType` and rank as `images`. The size of the\n last dimension of the output is 3, containing the RGB value of the pixels.\n The input images' last dimension must be size 1.\n\n >>> original = tf.constant([[[1.0], [2.0], [3.0]]])\n >>> converted = tf.image.grayscale_to_rgb(original)\n >>> print(converted.numpy())\n [[[1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 3.]]]\n\n Args:\n images: The Grayscale tensor to convert. The last dimension must be size 1.\n name: A name for the operation (optional).\n\n Returns:\n The converted grayscale image(s).\n ", "desc": "Converts one or more images from Grayscale to RGB.", "type": "API"}, {"name": "tf.image.hsv_to_rgb", "docs": "Convert one or more images from HSV to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n See `rgb_to_hsv` for a description of the HSV encoding.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. HSV data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Convert one or more images from HSV to RGB.", "type": "API"}, {"name": "tf.image.image_gradients", "docs": "Returns image gradients (dy, dx) for each color channel.\n\n Both output tensors have the same shape as the input: [batch_size, h, w,\n d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in\n location (x, y). That means that dy will always have zeros in the last row,\n and dx will always have zeros in the last column.\n\n Usage Example:\n ```python\n BATCH_SIZE = 1\n IMAGE_HEIGHT = 5\n IMAGE_WIDTH = 5\n CHANNELS = 1\n image = tf.reshape(tf.range(IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS,\n delta=1, dtype=tf.float32),\n shape=(BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))\n dy, dx = tf.image.image_gradients(image)\n print(image[0, :,:,0])\n tf.Tensor(\n [[ 0. 1. 2. 3. 4.]\n [ 5. 6. 7. 8. 9.]\n [10. 11. 12. 13. 14.]\n [15. 16. 17. 18. 19.]\n [20. 21. 22. 23. 24.]], shape=(5, 5), dtype=float32)\n print(dy[0, :,:,0])\n tf.Tensor(\n [[5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [5. 5. 5. 5. 5.]\n [0. 0. 0. 0. 0.]], shape=(5, 5), dtype=float32)\n print(dx[0, :,:,0])\n tf.Tensor(\n [[1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]\n [1. 1. 1. 1. 0.]], shape=(5, 5), dtype=float32)\n ```\n\n Args:\n image: Tensor with shape [batch_size, h, w, d].\n\n Returns:\n Pair of tensors (dy, dx) holding the vertical and horizontal image\n gradients (1-step finite difference).\n\n Raises:\n ValueError: If `image` is not a 4D tensor.\n ", "desc": "Returns image gradients (dy, dx) for each color channel.", "type": "API"}, {"name": "tf.image.is_jpeg", "docs": "Convenience function to check if the 'contents' encodes a JPEG image.\n\n Args:\n contents: 0-D `string`. The encoded image bytes.\n name: A name for the operation (optional)\n\n Returns:\n A scalar boolean tensor indicating if 'contents' may be a JPEG image.\n is_jpeg is susceptible to false positives.\n ", "desc": "Convenience function to check if the 'contents' encodes a JPEG image.", "type": "API"}, {"name": "tf.image.non_max_suppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices = tf.image.non_max_suppression(\n boxes, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n boxes: A 2-D float `Tensor` of shape `[num_boxes, 4]`.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n iou_threshold: A 0-D float tensor representing the threshold for deciding\n whether boxes overlap too much with respect to IOU.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the boxes tensor, where `M <= max_output_size`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.image.non_max_suppression_overlaps", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high overlap with previously selected boxes.\n N-by-n overlap values are supplied as square matrix.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices = tf.image.non_max_suppression_overlaps(\n overlaps, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n overlaps: A 2-D float `Tensor` of shape `[num_boxes, num_boxes]`\n representing the n-by-n box overlap values.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n overlap_threshold: A 0-D float tensor representing the threshold for\n deciding whether boxes overlap too much with respect to the provided\n overlap values.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the overlaps tensor, where `M <= max_output_size`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.image.non_max_suppression_padded", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Performs algorithmically equivalent operation to tf.image.non_max_suppression,\n with the addition of an optional parameter which zero-pads the output to\n be of size `max_output_size`.\n The output of this operation is a tuple containing the set of integers\n indexing into the input collection of bounding boxes representing the selected\n boxes and the number of valid indices in the index set. The bounding box\n coordinates corresponding to the selected indices can then be obtained using\n the `tf.slice` and `tf.gather` operations. For example:\n ```python\n selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(\n boxes, scores, max_output_size, iou_threshold,\n score_threshold, pad_to_max_output_size=True)\n selected_indices = tf.slice(\n selected_indices_padded, tf.constant([0]), num_valid)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n Args:\n boxes: a tensor of rank 2 or higher with a shape of [..., num_boxes, 4].\n Dimensions except the last two are batch dimensions.\n scores: a tensor of rank 1 or higher with a shape of [..., num_boxes].\n max_output_size: a scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non max suppression. Note that setting this\n value to a large number may result in OOM error depending on the system\n workload.\n iou_threshold: a float representing the threshold for deciding whether boxes\n overlap too much with respect to IoU (intersection over union).\n score_threshold: a float representing the threshold for box scores. Boxes\n with a score that is not larger than this threshold will be suppressed.\n pad_to_max_output_size: whether to pad the output idx to max_output_size.\n Must be set to True when the input is a batch of images.\n name: name of operation.\n sorted_input: a boolean indicating whether the input boxes and scores\n are sorted in descending order by the score.\n canonicalized_coordinates: if box coordinates are given as\n `[y_min, x_min, y_max, x_max]`, setting to True eliminate redundant\n computation to canonicalize box coordinates.\n tile_size: an integer representing the number of boxes in a tile, i.e.,\n the maximum number of boxes per image that can be used to suppress other\n boxes in parallel; larger tile_size means larger parallelism and\n potentially more redundant work.\n Returns:\n idx: a tensor with a shape of [..., num_boxes] representing the\n indices selected by non-max suppression. The leading dimensions\n are the batch dimensions of the input boxes. All numbers are within\n [0, num_boxes). For each image (i.e., idx[i]), only the first num_valid[i]\n indices (i.e., idx[i][:num_valid[i]]) are valid.\n num_valid: a tensor of rank 0 or higher with a shape of [...]\n representing the number of valid indices in idx. Its dimensions are the\n batch dimensions of the input boxes.\n Raises:\n ValueError: When set pad_to_max_output_size to False for batched input.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.image.non_max_suppression_with_scores", "docs": "Greedily selects a subset of bounding boxes in descending order of score.\n\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather` operation. For example:\n ```python\n selected_indices, selected_scores = tf.image.non_max_suppression_padded(\n boxes, scores, max_output_size, iou_threshold=1.0, score_threshold=0.1,\n soft_nms_sigma=0.5)\n selected_boxes = tf.gather(boxes, selected_indices)\n ```\n\n This function generalizes the `tf.image.non_max_suppression` op by also\n supporting a Soft-NMS (with Gaussian weighting) mode (c.f.\n Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score\n of other overlapping boxes instead of directly causing them to be pruned.\n Consequently, in contrast to `tf.image.non_max_suppression`,\n `tf.image.non_max_suppression_with_scores` returns the new scores of each\n input box in the second output, `selected_scores`.\n\n To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be\n larger than 0. When `soft_nms_sigma` equals 0, the behavior of\n `tf.image.non_max_suppression_with_scores` is identical to that of\n `tf.image.non_max_suppression` (except for the extra output) both in function\n and in running time.\n\n Note that when `soft_nms_sigma` > 0, Soft-NMS is performed and `iou_threshold`\n is ignored. `iou_threshold` is only used for standard NMS.\n\n Args:\n boxes: A 2-D float `Tensor` of shape `[num_boxes, 4]`.\n scores: A 1-D float `Tensor` of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A scalar integer `Tensor` representing the maximum number\n of boxes to be selected by non-max suppression.\n iou_threshold: A 0-D float tensor representing the threshold for deciding\n whether boxes overlap too much with respect to IOU.\n score_threshold: A 0-D float tensor representing the threshold for deciding\n when to remove boxes based on score.\n soft_nms_sigma: A 0-D float tensor representing the sigma parameter for Soft\n NMS; see Bodla et al (c.f. https://arxiv.org/abs/1704.04503). When\n `soft_nms_sigma=0.0` (which is default), we fall back to standard (hard)\n NMS.\n name: A name for the operation (optional).\n\n Returns:\n selected_indices: A 1-D integer `Tensor` of shape `[M]` representing the\n selected indices from the boxes tensor, where `M <= max_output_size`.\n selected_scores: A 1-D float tensor of shape `[M]` representing the\n corresponding scores for each selected box, where `M <= max_output_size`.\n Scores only differ from corresponding input scores when using Soft NMS\n (i.e. when `soft_nms_sigma>0`)\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score.", "type": "API"}, {"name": "tf.image.pad_to_bounding_box", "docs": "Pad `image` with zeros to the specified `height` and `width`.\n\n Adds `offset_height` rows of zeros on top, `offset_width` columns of\n zeros on the left, and then pads the image on the bottom and right\n with zeros until it has dimensions `target_height`, `target_width`.\n\n This op does nothing if `offset_*` is zero and the image already has size\n `target_height` by `target_width`.\n\n Usage Example:\n\n >>> x = [[[1., 2., 3.],\n ... [4., 5., 6.]],\n ... [[7., 8., 9.],\n ... [10., 11., 12.]]]\n >>> padded_image = tf.image.pad_to_bounding_box(x, 1, 1, 4, 4)\n >>> padded_image\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n offset_height: Number of rows of zeros to add on top.\n offset_width: Number of columns of zeros to add on the left.\n target_height: Height of output image.\n target_width: Width of output image.\n\n Returns:\n If `image` was 4-D, a 4-D float Tensor of shape\n `[batch, target_height, target_width, channels]`\n If `image` was 3-D, a 3-D float Tensor of shape\n `[target_height, target_width, channels]`\n\n Raises:\n ValueError: If the shape of `image` is incompatible with the `offset_*` or\n `target_*` arguments, or either `offset_height` or `offset_width` is\n negative.\n ", "desc": "Pad `image` with zeros to the specified `height` and `width`.", "type": "API"}, {"name": "tf.image.per_image_standardization", "docs": "Linearly scales each image in `image` to have mean 0 and variance 1.\n\n For each 3-D image `x` in `image`, computes `(x - mean) / adjusted_stddev`,\n where\n\n - `mean` is the average of all values in `x`\n - `adjusted_stddev = max(stddev, 1.0/sqrt(N))` is capped away from 0 to\n protect against division by 0 when handling uniform images\n - `N` is the number of elements in `x`\n - `stddev` is the standard deviation of all values in `x`\n\n Example Usage:\n\n >>> image = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> image # 3-D tensor\n \n >>> new_image = tf.image.per_image_standardization(image)\n >>> new_image # 3-D tensor with mean ~= 0 and variance ~= 1\n \n\n Args:\n image: An n-D `Tensor` with at least 3 dimensions, the last 3 of which are\n the dimensions of each image.\n\n Returns:\n A `Tensor` with the same shape as `image` and its dtype is `float32`.\n\n Raises:\n ValueError: The shape of `image` has fewer than 3 dimensions.\n ", "desc": "Linearly scales each image in `image` to have mean 0 and variance 1.", "type": "API"}, {"name": "tf.image.psnr", "docs": "Returns the Peak Signal-to-Noise Ratio between a and b.\n\n This is intended to be used on signals (or images). Produces a PSNR value for\n each image in batch.\n\n The last three dimensions of input are expected to be [height, width, depth].\n\n Example:\n\n ```python\n # Read images from file.\n im1 = tf.decode_png('path/to/im1.png')\n im2 = tf.decode_png('path/to/im2.png')\n # Compute PSNR over tf.uint8 Tensors.\n psnr1 = tf.image.psnr(im1, im2, max_val=255)\n\n # Compute PSNR over tf.float32 Tensors.\n im1 = tf.image.convert_image_dtype(im1, tf.float32)\n im2 = tf.image.convert_image_dtype(im2, tf.float32)\n psnr2 = tf.image.psnr(im1, im2, max_val=1.0)\n # psnr1 and psnr2 both have type tf.float32 and are almost equal.\n ```\n\n Args:\n a: First set of images.\n b: Second set of images.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n name: Namespace to embed the computation in.\n\n Returns:\n The scalar PSNR between a and b. The returned tensor has type `tf.float32`\n and shape [batch_size, 1].\n ", "desc": "Returns the Peak Signal-to-Noise Ratio between a and b.", "type": "API"}, {"name": "tf.image.random_brightness", "docs": "Adjust the brightness of images by a random factor.\n\n Equivalent to `adjust_brightness()` using a `delta` randomly picked in the\n interval `[-max_delta, max_delta)`.\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_brightness`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: An image or images to adjust.\n max_delta: float, must be non-negative.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_brightness(x, 0.2)\n \n\n Returns:\n The brightness-adjusted image(s).\n\n Raises:\n ValueError: if `max_delta` is negative.\n ", "desc": "Adjust the brightness of images by a random factor.", "type": "API"}, {"name": "tf.image.random_contrast", "docs": "Adjust the contrast of an image or images by a random factor.\n\n Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly\n picked in the interval `[lower, upper)`.\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_contrast`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: An image tensor with 3 or more dimensions.\n lower: float. Lower bound for the random contrast factor.\n upper: float. Upper bound for the random contrast factor.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_contrast(x, 0.2, 0.5)\n \n\n Returns:\n The contrast-adjusted image(s).\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the contrast of an image or images by a random factor.", "type": "API"}, {"name": "tf.image.random_crop", "docs": "Randomly crops a tensor to a given size.\n\n Slices a shape `size` portion out of `value` at a uniformly chosen offset.\n Requires `value.shape >= size`.\n\n If a dimension should not be cropped, pass the full size of that dimension.\n For example, RGB images can be cropped with\n `size = [crop_height, crop_width, 3]`.\n\n Example usage:\n\n >>> image = [[1, 2, 3], [4, 5, 6]]\n >>> result = tf.image.random_crop(value=image, size=(1, 3))\n >>> result.shape.as_list()\n [1, 3]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_crop`. Unlike using the `seed` param with\n `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same\n results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n value: Input tensor to crop.\n size: 1-D tensor with size the rank of `value`.\n seed: Python integer. Used to create a random seed. See\n `tf.random.set_seed`\n for behavior.\n name: A name for this operation (optional).\n\n Returns:\n A cropped tensor of the same rank as `value` and shape `size`.\n ", "desc": "Randomly crops a tensor to a given size.", "type": "API"}, {"name": "tf.image.random_flip_left_right", "docs": "Randomly flip an image horizontally (left to right).\n\n With a 1 in 2 chance, outputs the contents of `image` flipped along the\n second dimension, which is `width`. Otherwise output the image as-is.\n When passing a batch of images, each image will be randomly flipped\n independent of other images.\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> tf.image.random_flip_left_right(image, 5).numpy().tolist()\n [[[2], [1]], [[4], [3]]]\n\n Randomly flip multiple images.\n\n >>> images = np.array(\n ... [\n ... [[[1], [2]], [[3], [4]]],\n ... [[[5], [6]], [[7], [8]]]\n ... ])\n >>> tf.image.random_flip_left_right(images, 6).numpy().tolist()\n [[[[2], [1]], [[4], [3]]], [[[5], [6]], [[7], [8]]]]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_flip_left_right`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Returns:\n A tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Randomly flip an image horizontally (left to right).", "type": "API"}, {"name": "tf.image.random_flip_up_down", "docs": "Randomly flips an image vertically (upside down).\n\n With a 1 in 2 chance, outputs the contents of `image` flipped along the first\n dimension, which is `height`. Otherwise, output the image as-is.\n When passing a batch of images, each image will be randomly flipped\n independent of other images.\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> tf.image.random_flip_up_down(image, 3).numpy().tolist()\n [[[3], [4]], [[1], [2]]]\n\n Randomly flip multiple images.\n\n >>> images = np.array(\n ... [\n ... [[[1], [2]], [[3], [4]]],\n ... [[[5], [6]], [[7], [8]]]\n ... ])\n >>> tf.image.random_flip_up_down(images, 4).numpy().tolist()\n [[[[3], [4]], [[1], [2]]], [[[5], [6]], [[7], [8]]]]\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_flip_up_down`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A Python integer. Used to create a random seed. See\n `tf.compat.v1.set_random_seed` for behavior.\n\n Returns:\n A tensor of the same type and shape as `image`.\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Randomly flips an image vertically (upside down).", "type": "API"}, {"name": "tf.image.random_hue", "docs": "Adjust the hue of RGB images by a random factor.\n\n Equivalent to `adjust_hue()` but uses a `delta` randomly\n picked in the interval `[-max_delta, max_delta)`.\n\n `max_delta` must be in the interval `[0, 0.5]`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_hue(x, 0.2)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_hue`. Unlike using the `seed` param with\n `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same\n results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n max_delta: float. The maximum value for the random delta.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `max_delta` is invalid.\n ", "desc": "Adjust the hue of RGB images by a random factor.", "type": "API"}, {"name": "tf.image.random_jpeg_quality", "docs": "Randomly changes jpeg encoding quality for inducing jpeg noise.\n\n `min_jpeg_quality` must be in the interval `[0, 100]` and less than\n `max_jpeg_quality`.\n `max_jpeg_quality` must be in the interval `[0, 100]`.\n\n Usage Example:\n\n >>> x = tf.constant([[[1, 2, 3],\n ... [4, 5, 6]],\n ... [[7, 8, 9],\n ... [10, 11, 12]]], dtype=tf.uint8)\n >>> tf.image.random_jpeg_quality(x, 75, 95)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_jpeg_quality`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: 3D image. Size of the last dimension must be 1 or 3.\n min_jpeg_quality: Minimum jpeg encoding quality to use.\n max_jpeg_quality: Maximum jpeg encoding quality to use.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `min_jpeg_quality` or `max_jpeg_quality` is invalid.\n ", "desc": "Randomly changes jpeg encoding quality for inducing jpeg noise.", "type": "API"}, {"name": "tf.image.random_saturation", "docs": "Adjust the saturation of RGB images by a random factor.\n\n Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly\n picked in the interval `[lower, upper)`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.random_saturation(x, 5, 10)\n \n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_random_saturation`. Unlike using the `seed` param\n with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the\n same results given the same seed independent of how many times the function is\n called, and independent of global seed settings (e.g. tf.random.set_seed).\n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n lower: float. Lower bound for the random saturation factor.\n upper: float. Upper bound for the random saturation factor.\n seed: An operation-specific seed. It will be used in conjunction with the\n graph-level seed to determine the real seeds that will be used in this\n operation. Please see the documentation of set_random_seed for its\n interaction with the graph-level random seed.\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the saturation of RGB images by a random factor.", "type": "API"}, {"name": "tf.image.resize", "docs": "Resize `images` to `size` using the specified `method`.\n\n Resized images will be distorted if their original aspect ratio is not\n the same as `size`. To avoid distortions see\n `tf.image.resize_with_pad`.\n\n >>> image = tf.constant([\n ... [1,0,0,0,0],\n ... [0,1,0,0,0],\n ... [0,0,1,0,0],\n ... [0,0,0,1,0],\n ... [0,0,0,0,1],\n ... ])\n >>> # Add \"batch\" and \"channels\" dimensions\n >>> image = image[tf.newaxis, ..., tf.newaxis]\n >>> image.shape.as_list() # [batch, height, width, channels]\n [1, 5, 5, 1]\n >>> tf.image.resize(image, [3,5])[0,...,0].numpy()\n array([[0.6666667, 0.3333333, 0. , 0. , 0. ],\n [0. , 0. , 1. , 0. , 0. ],\n [0. , 0. , 0. , 0.3333335, 0.6666665]],\n dtype=float32)\n\n It works equally well with a single image instead of a batch of images:\n\n >>> tf.image.resize(image[0], [3,5]).shape.as_list()\n [3, 5, 1]\n\n When `antialias` is true, the sampling filter will anti-alias the input image\n as well as interpolate. When downsampling an image with [anti-aliasing](\n https://en.wikipedia.org/wiki/Spatial_anti-aliasing) the sampling filter\n kernel is scaled in order to properly anti-alias the input image signal.\n `antialias` has no effect when upsampling an image:\n\n >>> a = tf.image.resize(image, [5,10])\n >>> b = tf.image.resize(image, [5,10], antialias=True)\n >>> tf.reduce_max(abs(a - b)).numpy()\n 0.0\n\n The `method` argument expects an item from the `image.ResizeMethod` enum, or\n the string equivalent. The options are:\n\n * `bilinear`: [Bilinear interpolation.](\n https://en.wikipedia.org/wiki/Bilinear_interpolation) If `antialias` is\n true, becomes a hat/tent filter function with radius 1 when downsampling.\n * `lanczos3`: [Lanczos kernel](\n https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 3.\n High-quality practical filter but may have some ringing, especially on\n synthetic images.\n * `lanczos5`: [Lanczos kernel] (\n https://en.wikipedia.org/wiki/Lanczos_resampling) with radius 5.\n Very-high-quality filter but may have stronger ringing.\n * `bicubic`: [Cubic interpolant](\n https://en.wikipedia.org/wiki/Bicubic_interpolation) of Keys. Equivalent to\n Catmull-Rom kernel. Reasonably good quality and faster than Lanczos3Kernel,\n particularly when upsampling.\n * `gaussian`: [Gaussian kernel](\n https://en.wikipedia.org/wiki/Gaussian_filter) with radius 3,\n sigma = 1.5 / 3.0.\n * `nearest`: [Nearest neighbor interpolation.](\n https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)\n `antialias` has no effect when used with nearest neighbor interpolation.\n * `area`: Anti-aliased resampling with area interpolation.\n `antialias` has no effect when used with area interpolation; it\n always anti-aliases.\n * `mitchellcubic`: Mitchell-Netravali Cubic non-interpolating filter.\n For synthetic images (especially those lacking proper prefiltering), less\n ringing than Keys cubic kernel but less sharp.\n\n Note: Near image edges the filtering kernel may be partially outside the\n image boundaries. For these pixels, only input pixels inside the image will be\n included in the filter sum, and the output value will be appropriately\n normalized.\n\n The return value has type `float32`, unless the `method` is\n `ResizeMethod.NEAREST_NEIGHBOR`, then the return dtype is the dtype\n of `images`:\n\n >>> nn = tf.image.resize(image, [5,7], method='nearest')\n >>> nn[0,...,0].numpy()\n array([[1, 0, 0, 0, 0, 0, 0],\n [0, 1, 1, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0],\n [0, 0, 0, 0, 1, 1, 0],\n [0, 0, 0, 0, 0, 0, 1]], dtype=int32)\n\n With `preserve_aspect_ratio=True`, the aspect ratio is preserved, so `size`\n is the maximum for each dimension:\n\n >>> max_10_20 = tf.image.resize(image, [10,20], preserve_aspect_ratio=True)\n >>> max_10_20.shape.as_list()\n [1, 10, 10, 1]\n\n Args:\n images: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new\n size for the images.\n method: An `image.ResizeMethod`, or string equivalent. Defaults to\n `bilinear`.\n preserve_aspect_ratio: Whether to preserve the aspect ratio. If this is set,\n then `images` will be resized to a size that fits in `size` while\n preserving the aspect ratio of the original image. Scales up the image if\n `size` is bigger than the current size of the `image`. Defaults to False.\n antialias: Whether to use an anti-aliasing filter when downsampling an\n image.\n name: A name for this operation (optional).\n\n Raises:\n ValueError: if the shape of `images` is incompatible with the\n shape arguments to this function\n ValueError: if `size` has an invalid shape or type.\n ValueError: if an unsupported resize method is specified.\n\n Returns:\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Resize `images` to `size` using the specified `method`.", "type": "API"}, {"name": "tf.image.resize_with_crop_or_pad", "docs": "Crops and/or pads an image to a target width and height.\n\n Resizes an image to a target width and height by either centrally\n cropping the image or padding it evenly with zeros.\n\n If `width` or `height` is greater than the specified `target_width` or\n `target_height` respectively, this op centrally crops along that dimension.\n\n For example:\n\n >>> image = np.arange(75).reshape(5, 5, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 0, 3, 6, 9, 12],\n [15, 18, 21, 24, 27],\n [30, 33, 36, 39, 42],\n [45, 48, 51, 54, 57],\n [60, 63, 66, 69, 72]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 3, 3) # crop\n >>> # print first channel for demo purposes; centrally cropped output\n >>> image[:,:,0]\n \n\n If `width` or `height` is smaller than the specified `target_width` or\n `target_height` respectively, this op centrally pads with 0 along that\n dimension.\n\n For example:\n\n >>> image = np.arange(1, 28).reshape(3, 3, 3) # create 3-D image input\n >>> image[:,:,0] # print first channel just for demo purposes\n array([[ 1, 4, 7],\n [10, 13, 16],\n [19, 22, 25]])\n >>> image = tf.image.resize_with_crop_or_pad(image, 5, 5) # pad\n >>> # print first channel for demo purposes; we should see 0 paddings\n >>> image[:,:,0]\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n target_height: Target height.\n target_width: Target width.\n\n Raises:\n ValueError: if `target_height` or `target_width` are zero or negative.\n\n Returns:\n Cropped and/or padded image.\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Crops and/or pads an image to a target width and height.", "type": "API"}, {"name": "tf.image.resize_with_pad", "docs": "Resizes and pads an image to a target width and height.\n\n Resizes an image to a target width and height by keeping\n the aspect ratio the same without distortion. If the target\n dimensions don't match the image dimensions, the image\n is resized and then padded with zeroes to match requested\n dimensions.\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n target_height: Target height.\n target_width: Target width.\n method: Method to use for resizing image. See `image.resize()`\n antialias: Whether to use anti-aliasing when resizing. See 'image.resize()'.\n\n Raises:\n ValueError: if `target_height` or `target_width` are zero or negative.\n\n Returns:\n Resized and padded image.\n If `images` was 4-D, a 4-D float Tensor of shape\n `[batch, new_height, new_width, channels]`.\n If `images` was 3-D, a 3-D float Tensor of shape\n `[new_height, new_width, channels]`.\n ", "desc": "Resizes and pads an image to a target width and height.", "type": "API"}, {"name": "tf.image.ResizeMethod", "docs": "See `tf.image.resize` for details.", "desc": "See `tf.image.resize` for details.", "type": "API"}, {"name": "tf.image.rgb_to_grayscale", "docs": "Converts one or more images from RGB to Grayscale.\n\n Outputs a tensor of the same `DType` and rank as `images`. The size of the\n last dimension of the output is 1, containing the Grayscale value of the\n pixels.\n\n >>> original = tf.constant([[[1.0, 2.0, 3.0]]])\n >>> converted = tf.image.rgb_to_grayscale(original)\n >>> print(converted.numpy())\n [[[1.81...]]]\n\n Args:\n images: The RGB tensor to convert. The last dimension must have size 3 and\n should contain RGB values.\n name: A name for the operation (optional).\n\n Returns:\n The converted grayscale image(s).\n ", "desc": "Converts one or more images from RGB to Grayscale.", "type": "API"}, {"name": "tf.image.rgb_to_hsv", "docs": "Converts one or more images from RGB to HSV.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the HSV\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n `output[..., 0]` contains hue, `output[..., 1]` contains saturation, and\n `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0\n corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.\n\n Usage Example:\n\n >>> blue_image = tf.stack([\n ... tf.zeros([5,5]),\n ... tf.zeros([5,5]),\n ... tf.ones([5,5])],\n ... axis=-1)\n >>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image)\n >>> blue_hsv_image[0,0].numpy()\n array([0.6666667, 1. , 1. ], dtype=float32)\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. RGB data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Converts one or more images from RGB to HSV.", "type": "API"}, {"name": "tf.image.rgb_to_yiq", "docs": "Converts one or more images from RGB to YIQ.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the YIQ\n value of the pixels.\n The output is only well defined if the value in images are in [0,1].\n\n Usage Example:\n\n >>> x = tf.constant([[[1.0, 2.0, 3.0]]])\n >>> tf.image.rgb_to_yiq(x)\n \n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from RGB to YIQ.", "type": "API"}, {"name": "tf.image.rgb_to_yuv", "docs": "Converts one or more images from RGB to YUV.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the YUV\n value of the pixels.\n The output is only well defined if the value in images are in [0, 1].\n There are two ways of representing an image: [0, 255] pixel values range or\n [0, 1] (as float) pixel values range. Users need to convert the input image\n into a float [0, 1] range.\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from RGB to YUV.", "type": "API"}, {"name": "tf.image.rot90", "docs": "Rotate image(s) counter-clockwise by 90 degrees.\n\n\n For example:\n\n >>> a=tf.constant([[[1],[2]],\n ... [[3],[4]]])\n >>> # rotating `a` counter clockwise by 90 degrees\n >>> a_rot=tf.image.rot90(a)\n >>> print(a_rot[...,0].numpy())\n [[2 4]\n [1 3]]\n >>> # rotating `a` counter clockwise by 270 degrees\n >>> a_rot=tf.image.rot90(a, k=3)\n >>> print(a_rot[...,0].numpy())\n [[3 1]\n [4 2]]\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n k: A scalar integer tensor. The number of times the image(s) are\n rotated by 90 degrees.\n name: A name for this operation (optional).\n\n Returns:\n A rotated tensor of the same type and shape as `image`.\n\n Raises:\n ValueError: if the shape of `image` not supported.\n ", "desc": "Rotate image(s) counter-clockwise by 90 degrees.", "type": "API"}, {"name": "tf.image.sample_distorted_bounding_box", "docs": "Generate a single randomly distorted bounding box for an image.\n\n Bounding box annotations are often supplied in addition to ground-truth labels\n in image recognition or object localization tasks. A common technique for\n training such a system is to randomly distort an image while preserving\n its content, i.e. *data augmentation*. This Op outputs a randomly distorted\n localization of an object, i.e. bounding box, given an `image_size`,\n `bounding_boxes` and a series of constraints.\n\n The output of this Op is a single bounding box that may be used to crop the\n original image. The output is returned as 3 tensors: `begin`, `size` and\n `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\n image. The latter may be supplied to `tf.image.draw_bounding_boxes` to\n visualize what the bounding box looks like.\n\n Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`.\n The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width\n and the height of the underlying image.\n\n For example,\n\n ```python\n # Generate a single distorted bounding box.\n begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(\n tf.shape(image),\n bounding_boxes=bounding_boxes,\n min_object_covered=0.1)\n\n # Draw the bounding box in an image summary.\n image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),\n bbox_for_draw)\n tf.compat.v1.summary.image('images_with_box', image_with_box)\n\n # Employ the bounding box to distort the image.\n distorted_image = tf.slice(image, begin, size)\n ```\n\n Note that if no bounding box information is available, setting\n `use_image_if_no_bounding_boxes = true` will assume there is a single implicit\n bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\n false and no bounding boxes are supplied, an error is raised.\n\n For producing deterministic results given a `seed` value, use\n `tf.image.stateless_sample_distorted_bounding_box`. Unlike using the `seed`\n param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops\n guarantee the same results given the same seed independent of how many times\n the function is called, and independent of global seed settings\n (e.g. tf.random.set_seed).\n\n Args:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`,\n `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]`\n describing the N bounding boxes associated with the image.\n seed: An optional `int`. Defaults to `0`. If `seed` is set to non-zero, the\n random number generator is seeded by the given `seed`. Otherwise, it is\n seeded by a random seed.\n min_object_covered: A Tensor of type `float32`. Defaults to `0.1`. The\n cropped area of the image must contain at least this fraction of any\n bounding box supplied. The value of this parameter should be non-negative.\n In the case of 0, the cropped area does not need to overlap any of the\n bounding boxes supplied.\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75,\n 1.33]`. The cropped area of the image must have an aspect `ratio = width /\n height` within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`. The\n cropped area of the image must contain a fraction of the supplied image\n within this range.\n max_attempts: An optional `int`. Defaults to `100`. Number of attempts at\n generating a cropped region of the image of the specified constraints.\n After `max_attempts` failures, return the entire image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied. If true, assume an\n implicit bounding box covering the whole input. If false, raise an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[offset_height, offset_width, 0]`. Provide as input to\n `tf.slice`.\n size: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[target_height, target_width, -1]`. Provide as input to\n `tf.slice`.\n bboxes: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing\n the distorted bounding box.\n Provide as input to `tf.image.draw_bounding_boxes`.\n\n Raises:\n ValueError: If no seed is specified and op determinism is enabled.\n ", "desc": "Generate a single randomly distorted bounding box for an image.", "type": "API"}, {"name": "tf.image.sobel_edges", "docs": "Returns a tensor holding Sobel edge maps.\n\n Example usage:\n\n For general usage, `image` would be loaded from a file as below:\n\n ```python\n image_bytes = tf.io.read_file(path_to_image_file)\n image = tf.image.decode_image(image_bytes)\n image = tf.cast(image, tf.float32)\n image = tf.expand_dims(image, 0)\n ```\n But for demo purposes, we are using randomly generated values for `image`:\n\n >>> image = tf.random.uniform(\n ... maxval=255, shape=[1, 28, 28, 3], dtype=tf.float32)\n >>> sobel = tf.image.sobel_edges(image)\n >>> sobel_y = np.asarray(sobel[0, :, :, :, 0]) # sobel in y-direction\n >>> sobel_x = np.asarray(sobel[0, :, :, :, 1]) # sobel in x-direction\n\n For displaying the sobel results, PIL's [Image Module](\n https://pillow.readthedocs.io/en/stable/reference/Image.html) can be used:\n\n ```python\n # Display edge maps for the first channel (at index 0)\n Image.fromarray(sobel_y[..., 0] / 4 + 0.5).show()\n Image.fromarray(sobel_x[..., 0] / 4 + 0.5).show()\n ```\n\n Args:\n image: Image tensor with shape [batch_size, h, w, d] and type float32 or\n float64. The image(s) must be 2x2 or larger.\n\n Returns:\n Tensor holding edge maps for each channel. Returns a tensor with shape\n [batch_size, h, w, d, 2] where the last two dimensions hold [[dy[0], dx[0]],\n [dy[1], dx[1]], ..., [dy[d-1], dx[d-1]]] calculated using the Sobel filter.\n ", "desc": "Returns a tensor holding Sobel edge maps.", "type": "API"}, {"name": "tf.image.ssim", "docs": "Computes SSIM index between img1 and img2.\n\n This function is based on the standard SSIM implementation from:\n Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image\n quality assessment: from error visibility to structural similarity. IEEE\n transactions on image processing.\n\n Note: The true SSIM is only defined on grayscale. This function does not\n perform any colorspace transform. (If the input is already YUV, then it will\n compute YUV SSIM average.)\n\n Details:\n - 11x11 Gaussian filter of width 1.5 is used.\n - k1 = 0.01, k2 = 0.03 as in the original paper.\n\n The image sizes must be at least 11x11 because of the filter size.\n\n Example:\n\n ```python\n # Read images (of size 255 x 255) from file.\n im1 = tf.image.decode_image(tf.io.read_file('path/to/im1.png'))\n im2 = tf.image.decode_image(tf.io.read_file('path/to/im2.png'))\n tf.shape(im1) # `img1.png` has 3 channels; shape is `(255, 255, 3)`\n tf.shape(im2) # `img2.png` has 3 channels; shape is `(255, 255, 3)`\n # Add an outer batch for each image.\n im1 = tf.expand_dims(im1, axis=0)\n im2 = tf.expand_dims(im2, axis=0)\n # Compute SSIM over tf.uint8 Tensors.\n ssim1 = tf.image.ssim(im1, im2, max_val=255, filter_size=11,\n filter_sigma=1.5, k1=0.01, k2=0.03)\n\n # Compute SSIM over tf.float32 Tensors.\n im1 = tf.image.convert_image_dtype(im1, tf.float32)\n im2 = tf.image.convert_image_dtype(im2, tf.float32)\n ssim2 = tf.image.ssim(im1, im2, max_val=1.0, filter_size=11,\n filter_sigma=1.5, k1=0.01, k2=0.03)\n # ssim1 and ssim2 both have type tf.float32 and are almost equal.\n ```\n\n Args:\n img1: First image batch. 4-D Tensor of shape `[batch, height, width,\n channels]` with only Positive Pixel Values.\n img2: Second image batch. 4-D Tensor of shape `[batch, height, width,\n channels]` with only Positive Pixel Values.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n filter_size: Default value 11 (size of gaussian filter).\n filter_sigma: Default value 1.5 (width of gaussian filter).\n k1: Default value 0.01\n k2: Default value 0.03 (SSIM is less sensitivity to K2 for lower values, so\n it would be better if we took the values in the range of 0 < K2 < 0.4).\n\n Returns:\n A tensor containing an SSIM value for each image in batch. Returned SSIM\n values are in range (-1, 1], when pixel values are non-negative. Returns\n a tensor with shape: broadcast(img1.shape[:-3], img2.shape[:-3]).\n ", "desc": "Computes SSIM index between img1 and img2.", "type": "API"}, {"name": "tf.image.ssim_multiscale", "docs": "Computes the MS-SSIM between img1 and img2.\n\n This function assumes that `img1` and `img2` are image batches, i.e. the last\n three dimensions are [height, width, channels].\n\n Note: The true SSIM is only defined on grayscale. This function does not\n perform any colorspace transform. (If the input is already YUV, then it will\n compute YUV SSIM average.)\n\n Original paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. \"Multiscale\n structural similarity for image quality assessment.\" Signals, Systems and\n Computers, 2004.\n\n Args:\n img1: First image batch with only Positive Pixel Values.\n img2: Second image batch with only Positive Pixel Values. Must have the\n same rank as img1.\n max_val: The dynamic range of the images (i.e., the difference between the\n maximum the and minimum allowed values).\n power_factors: Iterable of weights for each of the scales. The number of\n scales used is the length of the list. Index 0 is the unscaled\n resolution's weight and each increasing scale corresponds to the image\n being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363,\n 0.1333), which are the values obtained in the original paper.\n filter_size: Default value 11 (size of gaussian filter).\n filter_sigma: Default value 1.5 (width of gaussian filter).\n k1: Default value 0.01\n k2: Default value 0.03 (SSIM is less sensitivity to K2 for lower values, so\n it would be better if we took the values in the range of 0 < K2 < 0.4).\n\n Returns:\n A tensor containing an MS-SSIM value for each image in batch. The values\n are in range [0, 1]. Returns a tensor with shape:\n broadcast(img1.shape[:-3], img2.shape[:-3]).\n ", "desc": "Computes the MS-SSIM between img1 and img2.", "type": "API"}, {"name": "tf.image.stateless_random_brightness", "docs": "Adjust the brightness of images by a random factor deterministically.\n\n Equivalent to `adjust_brightness()` using a `delta` randomly picked in the\n interval `[-max_delta, max_delta)`.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_brightness(x, 0.2, seed)\n \n\n Args:\n image: An image or images to adjust.\n max_delta: float, must be non-negative.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n The brightness-adjusted image(s).\n\n Raises:\n ValueError: if `max_delta` is negative.\n ", "desc": "Adjust the brightness of images by a random factor deterministically.", "type": "API"}, {"name": "tf.image.stateless_random_contrast", "docs": "Adjust the contrast of images by a random factor deterministically.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Args:\n image: An image tensor with 3 or more dimensions.\n lower: float. Lower bound for the random contrast factor.\n upper: float. Upper bound for the random contrast factor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_contrast(x, 0.2, 0.5, seed)\n \n\n Returns:\n The contrast-adjusted image(s).\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the contrast of images by a random factor deterministically.", "type": "API"}, {"name": "tf.image.stateless_random_crop", "docs": "Randomly crops a tensor to a given size in a deterministic manner.\n\n Slices a shape `size` portion out of `value` at a uniformly chosen offset.\n Requires `value.shape >= size`.\n\n If a dimension should not be cropped, pass the full size of that dimension.\n For example, RGB images can be cropped with\n `size = [crop_height, crop_width, 3]`.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Usage Example:\n\n >>> image = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_crop(value=image, size=(1, 2, 3), seed=seed)\n \n\n Args:\n value: Input tensor to crop.\n size: 1-D tensor with size the rank of `value`.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n name: A name for this operation (optional).\n\n Returns:\n A cropped tensor of the same rank as `value` and shape `size`.\n ", "desc": "Randomly crops a tensor to a given size in a deterministic manner.", "type": "API"}, {"name": "tf.image.stateless_random_flip_left_right", "docs": "Randomly flip an image horizontally (left to right) deterministically.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> seed = (2, 3)\n >>> tf.image.stateless_random_flip_left_right(image, seed).numpy().tolist()\n [[[2], [1]], [[4], [3]]]\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n A tensor of the same type and shape as `image`.\n ", "desc": "Randomly flip an image horizontally (left to right) deterministically.", "type": "API"}, {"name": "tf.image.stateless_random_flip_up_down", "docs": "Randomly flip an image vertically (upside down) deterministically.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Example usage:\n\n >>> image = np.array([[[1], [2]], [[3], [4]]])\n >>> seed = (2, 3)\n >>> tf.image.stateless_random_flip_up_down(image, seed).numpy().tolist()\n [[[3], [4]], [[1], [2]]]\n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n A tensor of the same type and shape as `image`.\n ", "desc": "Randomly flip an image vertically (upside down) deterministically.", "type": "API"}, {"name": "tf.image.stateless_random_hue", "docs": "Adjust the hue of RGB images by a random factor deterministically.\n\n Equivalent to `adjust_hue()` but uses a `delta` randomly picked in the\n interval `[-max_delta, max_delta)`.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n `max_delta` must be in the interval `[0, 0.5]`.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_hue(x, 0.2, seed)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n max_delta: float. The maximum value for the random delta.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `max_delta` is invalid.\n ", "desc": "Adjust the hue of RGB images by a random factor deterministically.", "type": "API"}, {"name": "tf.image.stateless_random_jpeg_quality", "docs": "Deterministically radomize jpeg encoding quality for inducing jpeg noise.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n `min_jpeg_quality` must be in the interval `[0, 100]` and less than\n `max_jpeg_quality`.\n `max_jpeg_quality` must be in the interval `[0, 100]`.\n\n Usage Example:\n\n >>> x = tf.constant([[[1, 2, 3],\n ... [4, 5, 6]],\n ... [[7, 8, 9],\n ... [10, 11, 12]]], dtype=tf.uint8)\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_jpeg_quality(x, 75, 95, seed)\n \n\n Args:\n image: 3D image. Size of the last dimension must be 1 or 3.\n min_jpeg_quality: Minimum jpeg encoding quality to use.\n max_jpeg_quality: Maximum jpeg encoding quality to use.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `min_jpeg_quality` or `max_jpeg_quality` is invalid.\n ", "desc": "Deterministically radomize jpeg encoding quality for inducing jpeg noise.", "type": "API"}, {"name": "tf.image.stateless_random_saturation", "docs": "Adjust the saturation of RGB images by a random factor deterministically.\n\n Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly\n picked in the interval `[lower, upper)`.\n\n Guarantees the same results given the same `seed` independent of how many\n times the function is called, and independent of global seed settings (e.g.\n `tf.random.set_seed`).\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> seed = (1, 2)\n >>> tf.image.stateless_random_saturation(x, 0.5, 1.0, seed)\n \n\n Args:\n image: RGB image or images. The size of the last dimension must be 3.\n lower: float. Lower bound for the random saturation factor.\n upper: float. Upper bound for the random saturation factor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n\n Returns:\n Adjusted image(s), same shape and DType as `image`.\n\n Raises:\n ValueError: if `upper <= lower` or if `lower < 0`.\n ", "desc": "Adjust the saturation of RGB images by a random factor deterministically.", "type": "API"}, {"name": "tf.image.stateless_sample_distorted_bounding_box", "docs": "Generate a randomly distorted bounding box for an image deterministically.\n\n Bounding box annotations are often supplied in addition to ground-truth labels\n in image recognition or object localization tasks. A common technique for\n training such a system is to randomly distort an image while preserving\n its content, i.e. *data augmentation*. This Op, given the same `seed`,\n deterministically outputs a randomly distorted localization of an object, i.e.\n bounding box, given an `image_size`, `bounding_boxes` and a series of\n constraints.\n\n The output of this Op is a single bounding box that may be used to crop the\n original image. The output is returned as 3 tensors: `begin`, `size` and\n `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\n image. The latter may be supplied to `tf.image.draw_bounding_boxes` to\n visualize what the bounding box looks like.\n\n Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`.\n The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width\n and the height of the underlying image.\n\n The output of this Op is guaranteed to be the same given the same `seed` and\n is independent of how many times the function is called, and independent of\n global seed settings (e.g. `tf.random.set_seed`).\n\n Example usage:\n\n >>> image = np.array([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])\n >>> bbox = tf.constant(\n ... [0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])\n >>> seed = (1, 2)\n >>> # Generate a single distorted bounding box.\n >>> bbox_begin, bbox_size, bbox_draw = (\n ... tf.image.stateless_sample_distorted_bounding_box(\n ... tf.shape(image), bounding_boxes=bbox, seed=seed))\n >>> # Employ the bounding box to distort the image.\n >>> tf.slice(image, bbox_begin, bbox_size)\n \n >>> # Draw the bounding box in an image summary.\n >>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])\n >>> tf.image.draw_bounding_boxes(\n ... tf.expand_dims(tf.cast(image, tf.float32),0), bbox_draw, colors)\n \n\n Note that if no bounding box information is available, setting\n `use_image_if_no_bounding_boxes = true` will assume there is a single implicit\n bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\n false and no bounding boxes are supplied, an error is raised.\n\n Args:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`,\n `int16`, `int32`, `int64`. 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`. 3-D with shape `[batch, N, 4]`\n describing the N bounding boxes associated with the image.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n min_object_covered: A Tensor of type `float32`. Defaults to `0.1`. The\n cropped area of the image must contain at least this fraction of any\n bounding box supplied. The value of this parameter should be non-negative.\n In the case of 0, the cropped area does not need to overlap any of the\n bounding boxes supplied.\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75,\n 1.33]`. The cropped area of the image must have an aspect `ratio = width /\n height` within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`. The\n cropped area of the image must contain a fraction of the supplied image\n within this range.\n max_attempts: An optional `int`. Defaults to `100`. Number of attempts at\n generating a cropped region of the image of the specified constraints.\n After `max_attempts` failures, return the entire image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied. If true, assume an\n implicit bounding box covering the whole input. If false, raise an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[offset_height, offset_width, 0]`. Provide as input to\n `tf.slice`.\n size: A `Tensor`. Has the same type as `image_size`. 1-D, containing\n `[target_height, target_width, -1]`. Provide as input to\n `tf.slice`.\n bboxes: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing\n the distorted bounding box.\n Provide as input to `tf.image.draw_bounding_boxes`.\n ", "desc": "Generate a randomly distorted bounding box for an image deterministically.", "type": "API"}, {"name": "tf.image.total_variation", "docs": "Calculate and return the total variation for one or more images.\n\n The total variation is the sum of the absolute differences for neighboring\n pixel-values in the input images. This measures how much noise is in the\n images.\n\n This can be used as a loss-function during optimization so as to suppress\n noise in images. If you have a batch of images, then you should calculate\n the scalar loss-value as the sum:\n `loss = tf.reduce_sum(tf.image.total_variation(images))`\n\n This implements the anisotropic 2-D version of the formula described here:\n\n https://en.wikipedia.org/wiki/Total_variation_denoising\n\n Args:\n images: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n name: A name for the operation (optional).\n\n Raises:\n ValueError: if images.shape is not a 3-D or 4-D vector.\n\n Returns:\n The total variation of `images`.\n\n If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the\n total variation for each image in the batch.\n If `images` was 3-D, return a scalar float with the total variation for\n that image.\n ", "desc": "Calculate and return the total variation for one or more images.", "type": "API"}, {"name": "tf.image.transpose", "docs": "Transpose image(s) by swapping the height and width dimension.\n\n Usage Example:\n\n >>> x = [[[1.0, 2.0, 3.0],\n ... [4.0, 5.0, 6.0]],\n ... [[7.0, 8.0, 9.0],\n ... [10.0, 11.0, 12.0]]]\n >>> tf.image.transpose(x)\n \n\n Args:\n image: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor\n of shape `[height, width, channels]`.\n name: A name for this operation (optional).\n\n Returns:\n If `image` was 4-D, a 4-D float Tensor of shape\n `[batch, width, height, channels]`\n If `image` was 3-D, a 3-D float Tensor of shape\n `[width, height, channels]`\n\n Raises:\n ValueError: if the shape of `image` not supported.\n\n Usage Example:\n\n >>> image = [[[1, 2], [3, 4]],\n ... [[5, 6], [7, 8]],\n ... [[9, 10], [11, 12]]]\n >>> image = tf.constant(image)\n >>> tf.image.transpose(image)\n \n ", "desc": "Transpose image(s) by swapping the height and width dimension.", "type": "API"}, {"name": "tf.image.yiq_to_rgb", "docs": "Converts one or more images from YIQ to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels.\n The output is only well defined if the Y value in images are in [0,1],\n I value are in [-0.5957,0.5957] and Q value are in [-0.5226,0.5226].\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from YIQ to RGB.", "type": "API"}, {"name": "tf.image.yuv_to_rgb", "docs": "Converts one or more images from YUV to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels.\n The output is only well defined if the Y value in images are in [0,1],\n U and V value are in [-0.5,0.5].\n\n As per the above description, you need to scale your YUV images if their\n pixel values are not in the required range. Below given example illustrates\n preprocessing of each channel of images before feeding them to `yuv_to_rgb`.\n\n ```python\n yuv_images = tf.random.uniform(shape=[100, 64, 64, 3], maxval=255)\n last_dimension_axis = len(yuv_images.shape) - 1\n yuv_tensor_images = tf.truediv(\n tf.subtract(\n yuv_images,\n tf.reduce_min(yuv_images)\n ),\n tf.subtract(\n tf.reduce_max(yuv_images),\n tf.reduce_min(yuv_images)\n )\n )\n y, u, v = tf.split(yuv_tensor_images, 3, axis=last_dimension_axis)\n target_uv_min, target_uv_max = -0.5, 0.5\n u = u * (target_uv_max - target_uv_min) + target_uv_min\n v = v * (target_uv_max - target_uv_min) + target_uv_min\n preprocessed_yuv_images = tf.concat([y, u, v], axis=last_dimension_axis)\n rgb_tensor_images = tf.image.yuv_to_rgb(preprocessed_yuv_images)\n ```\n\n Args:\n images: 2-D or higher rank. Image data to convert. Last dimension must be\n size 3.\n\n Returns:\n images: tensor with the same shape as `images`.\n ", "desc": "Converts one or more images from YUV to RGB.", "type": "API"}, {"name": "tf.import_graph_def", "docs": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(op_dict)`. They will be removed in a future version.\nInstructions for updating:\nPlease file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.\n\nThis function provides a way to import a serialized TensorFlow\n[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)\nprotocol buffer, and extract individual objects in the `GraphDef` as\n`tf.Tensor` and `tf.Operation` objects. Once extracted,\nthese objects are placed into the current default `Graph`. See\n`tf.Graph.as_graph_def` for a way to create a `GraphDef`\nproto.\n\nArgs:\n graph_def: A `GraphDef` proto containing operations to be imported into\n the default graph.\n input_map: A dictionary mapping input names (as strings) in `graph_def`\n to `Tensor` objects. The values of the named input tensors in the\n imported graph will be re-mapped to the respective `Tensor` values.\n return_elements: A list of strings containing operation names in\n `graph_def` that will be returned as `Operation` objects; and/or\n tensor names in `graph_def` that will be returned as `Tensor` objects.\n name: (Optional.) A prefix that will be prepended to the names in\n `graph_def`. Note that this does not apply to imported function names.\n Defaults to `\"import\"`.\n op_dict: (Optional.) Deprecated, do not use.\n producer_op_list: (Optional.) An `OpList` proto with the (possibly stripped)\n list of `OpDef`s used by the producer of the graph. If provided,\n unrecognized attrs for ops in `graph_def` that have their default value\n according to `producer_op_list` will be removed. This will allow some more\n `GraphDef`s produced by later binaries to be accepted by earlier binaries.\n\nReturns:\n A list of `Operation` and/or `Tensor` objects from the imported graph,\n corresponding to the names in `return_elements`,\n and None if `returns_elements` is None.\n\nRaises:\n TypeError: If `graph_def` is not a `GraphDef` proto,\n `input_map` is not a dictionary mapping strings to `Tensor` objects,\n or `return_elements` is not a list of strings.\n ValueError: If `input_map`, or `return_elements` contains names that\n do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.\n it refers to an unknown tensor).", "desc": "Imports the graph from `graph_def` into the current default `Graph`. (deprecated arguments)", "type": "API"}, {"name": "tf.IndexedSlices", "docs": "A sparse representation of a set of tensor slices at given indices.\n\n This class is a simple wrapper for a pair of `Tensor` objects:\n\n * `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.\n * `indices`: A 1-D integer `Tensor` with shape `[D0]`.\n\n An `IndexedSlices` is typically used to represent a subset of a larger\n tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`.\n The values in `indices` are the indices in the first dimension of\n the slices that have been extracted from the larger tensor.\n\n The dense tensor `dense` represented by an `IndexedSlices` `slices` has\n\n ```python\n dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]\n ```\n\n The `IndexedSlices` class is used principally in the definition of\n gradients for operations that have sparse gradients\n (e.g. `tf.gather`).\n\n >>> v = tf.Variable([[0.,1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]])\n >>> with tf.GradientTape() as tape:\n ... r = tf.gather(v, [1,3])\n >>> index_slices = tape.gradient(r,v)\n >>> index_slices\n <...IndexedSlices object ...>\n >>> index_slices.indices.numpy()\n array([1, 3], dtype=int32)\n >>> index_slices.values.numpy()\n array([[1., 1., 1.],\n [1., 1., 1.]], dtype=float32)\n\n Contrast this representation with\n `tf.sparse.SparseTensor`,\n which uses multi-dimensional indices and scalar values.\n ", "desc": "A sparse representation of a set of tensor slices at given indices.", "type": "API"}, {"name": "tf.IndexedSlicesSpec", "docs": "Type specification for a `tf.IndexedSlices`.", "desc": "Type specification for a `tf.IndexedSlices`.", "type": "API"}, {"name": "tf.init_scope", "docs": "A context manager that lifts ops out of control-flow scopes and function-building graphs.\n\n There is often a need to lift variable initialization ops out of control-flow\n scopes, function-building graphs, and gradient tapes. Entering an\n `init_scope` is a mechanism for satisfying these desiderata. In particular,\n entering an `init_scope` has three effects:\n\n (1) All control dependencies are cleared the moment the scope is entered;\n this is equivalent to entering the context manager returned from\n `control_dependencies(None)`, which has the side-effect of exiting\n control-flow scopes like `tf.cond` and `tf.while_loop`.\n\n (2) All operations that are created while the scope is active are lifted\n into the lowest context on the `context_stack` that is not building a\n graph function. Here, a context is defined as either a graph or an eager\n context. Every context switch, i.e., every installation of a graph as\n the default graph and every switch into eager mode, is logged in a\n thread-local stack called `context_switches`; the log entry for a\n context switch is popped from the stack when the context is exited.\n Entering an `init_scope` is equivalent to crawling up\n `context_switches`, finding the first context that is not building a\n graph function, and entering it. A caveat is that if graph mode is\n enabled but the default graph stack is empty, then entering an\n `init_scope` will simply install a fresh graph as the default one.\n\n (3) The gradient tape is paused while the scope is active.\n\n When eager execution is enabled, code inside an init_scope block runs with\n eager execution enabled even when tracing a `tf.function`. For example:\n\n ```python\n tf.compat.v1.enable_eager_execution()\n\n @tf.function\n def func():\n # A function constructs TensorFlow graphs,\n # it does not execute eagerly.\n assert not tf.executing_eagerly()\n with tf.init_scope():\n # Initialization runs with eager execution enabled\n assert tf.executing_eagerly()\n ```\n\n Raises:\n RuntimeError: if graph state is incompatible with this initialization.\n ", "desc": "A context manager that lifts ops out of control-flow scopes and function-building graphs.", "type": "API"}, {"name": "tf.initializers", "docs": "", "desc": "", "type": "API"}, {"name": "tf.initializers.Constant", "docs": "Initializer that generates tensors with constant values.\n\n Also available via the shortcut function `tf.keras.initializers.constant`.\n\n Only scalar values are allowed.\n The constant value provided must be convertible to the dtype requested\n when calling the initializer.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Constant(3.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Constant(3.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n value: A Python scalar.\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.initializers.deserialize", "docs": "Return an `Initializer` object from its config.", "desc": "Return an `Initializer` object from its config.", "type": "API"}, {"name": "tf.initializers.get", "docs": "Retrieve a Keras initializer by the identifier.\n\n The `identifier` may be the string name of a initializers function or class (\n case-sensitively).\n\n >>> identifier = 'Ones'\n >>> tf.keras.initializers.deserialize(identifier)\n <...keras.initializers.initializers_v2.Ones...>\n\n You can also specify `config` of the initializer to this function by passing\n dict containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Initializer` class.\n\n >>> cfg = {'class_name': 'Ones', 'config': {}}\n >>> tf.keras.initializers.deserialize(cfg)\n <...keras.initializers.initializers_v2.Ones...>\n\n In the case that the `identifier` is a class, this method will return a new\n instance of the class by its constructor.\n\n Args:\n identifier: String or dict that contains the initializer name or\n configurations.\n\n Returns:\n Initializer instance base on the input identifier.\n\n Raises:\n ValueError: If the input identifier is not a supported type or in a bad\n format.\n ", "desc": "Retrieve a Keras initializer by the identifier.", "type": "API"}, {"name": "tf.initializers.glorot_normal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_normal`.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number of input units in\n the weight tensor and `fan_out` is the number of output units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.initializers.glorot_uniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / (fan_in + fan_out))` (`fan_in` is the number of input units\n in the weight tensor and `fan_out` is the number of output units).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.initializers.GlorotNormal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_normal`.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number of input units in\n the weight tensor and `fan_out` is the number of output units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.initializers.GlorotUniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / (fan_in + fan_out))` (`fan_in` is the number of input units\n in the weight tensor and `fan_out` is the number of output units).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.initializers.he_normal", "docs": "He normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_normal`.\n\n It draws samples from a truncated normal distribution centered on 0 with\n `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the\n weight tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He normal initializer.", "type": "API"}, {"name": "tf.initializers.he_uniform", "docs": "He uniform variance scaling initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He uniform variance scaling initializer.", "type": "API"}, {"name": "tf.initializers.HeNormal", "docs": "He normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_normal`.\n\n It draws samples from a truncated normal distribution centered on 0 with\n `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the\n weight tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He normal initializer.", "type": "API"}, {"name": "tf.initializers.HeUniform", "docs": "He uniform variance scaling initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He uniform variance scaling initializer.", "type": "API"}, {"name": "tf.initializers.Identity", "docs": "Initializer that generates the identity matrix.\n\n Also available via the shortcut function `tf.keras.initializers.identity`.\n\n Only usable for generating 2D matrices.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Identity()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Identity()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n gain: Multiplicative factor to apply to the identity matrix.\n ", "desc": "Initializer that generates the identity matrix.", "type": "API"}, {"name": "tf.initializers.Initializer", "docs": "Initializer base class: all Keras initializers inherit from this class.\n\n Initializers should implement a `__call__` method with the following\n signature:\n\n ```python\n def __call__(self, shape, dtype=None, **kwargs):\n # returns a tensor of shape `shape` and dtype `dtype`\n # containing values drawn from a distribution of your choice.\n ```\n\n Optionally, you an also implement the method `get_config` and the class\n method `from_config` in order to support serialization -- just like with\n any Keras object.\n\n Here's a simple example: a random normal initializer.\n\n ```python\n import tensorflow as tf\n\n class ExampleRandomNormal(tf.keras.initializers.Initializer):\n\n def __init__(self, mean, stddev):\n self.mean = mean\n self.stddev = stddev\n\n def __call__(self, shape, dtype=None, **kwargs):\n return tf.random.normal(\n shape, mean=self.mean, stddev=self.stddev, dtype=dtype)\n\n def get_config(self): # To support serialization\n return {\"mean\": self.mean, \"stddev\": self.stddev}\n ```\n\n Note that we don't have to implement `from_config` in the example above since\n the constructor arguments of the class the keys in the config returned by\n `get_config` are the same. In this case, the default `from_config`\n works fine.\n ", "desc": "Initializer base class: all Keras initializers inherit from this class.", "type": "API"}, {"name": "tf.initializers.lecun_normal", "docs": "Lecun normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_normal`.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun normal initializer.", "type": "API"}, {"name": "tf.initializers.lecun_uniform", "docs": "Lecun uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`,\n where `limit = sqrt(3 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun uniform initializer.", "type": "API"}, {"name": "tf.initializers.LecunNormal", "docs": "Lecun normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_normal`.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun normal initializer.", "type": "API"}, {"name": "tf.initializers.LecunUniform", "docs": "Lecun uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`,\n where `limit = sqrt(3 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun uniform initializer.", "type": "API"}, {"name": "tf.initializers.Ones", "docs": "Initializer that generates tensors initialized to 1.\n\n Also available via the shortcut function `tf.keras.initializers.ones`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Ones()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Ones()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n ", "desc": "Initializer that generates tensors initialized to 1.", "type": "API"}, {"name": "tf.initializers.Orthogonal", "docs": "Initializer that generates an orthogonal matrix.\n\n Also available via the shortcut function `tf.keras.initializers.orthogonal`.\n\n If the shape of the tensor to initialize is two-dimensional, it is initialized\n with an orthogonal matrix obtained from the QR decomposition of a matrix of\n random numbers drawn from a normal distribution.\n If the matrix has fewer rows than columns then the output will have orthogonal\n rows. Otherwise, the output will have orthogonal columns.\n\n If the shape of the tensor to initialize is more than two-dimensional,\n a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`\n is initialized, where `n` is the length of the shape vector.\n The matrix is subsequently reshaped to give a tensor of the desired shape.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Orthogonal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Orthogonal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n gain: multiplicative factor to apply to the orthogonal matrix\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C)\n ", "desc": "Initializer that generates an orthogonal matrix.", "type": "API"}, {"name": "tf.initializers.random_normal", "docs": "Initializer that generates tensors with a normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_normal`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.initializers.random_uniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_uniform`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate (inclusive).\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate (exclusive).\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.initializers.RandomNormal", "docs": "Initializer that generates tensors with a normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_normal`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.initializers.RandomUniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_uniform`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate (inclusive).\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate (exclusive).\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.initializers.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.initializers.truncated_normal", "docs": "Initializer that generates a truncated normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.truncated_normal`.\n\n The values generated are similar to values from a\n `tf.keras.initializers.RandomNormal` initializer except that values more\n than two standard deviations from the mean are\n discarded and re-drawn.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values\n to generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate before truncation.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.initializers.TruncatedNormal", "docs": "Initializer that generates a truncated normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.truncated_normal`.\n\n The values generated are similar to values from a\n `tf.keras.initializers.RandomNormal` initializer except that values more\n than two standard deviations from the mean are\n discarded and re-drawn.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values\n to generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate before truncation.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.initializers.variance_scaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n Also available via the shortcut function\n `tf.keras.initializers.variance_scaling`.\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`, samples are\n drawn from a truncated/untruncated normal distribution with a mean of zero and\n a standard deviation (after truncation, if used) `stddev = sqrt(scale / n)`,\n where `n` is:\n\n - number of input units in the weight tensor, if `mode=\"fan_in\"`\n - number of output units, if `mode=\"fan_out\"`\n - average of the numbers of input and output units, if `mode=\"fan_avg\"`\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within `[-limit, limit]`, where `limit = sqrt(3 * scale / n)`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"truncated_normal\",\n \"untruncated_normal\" and \"uniform\".\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.initializers.VarianceScaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n Also available via the shortcut function\n `tf.keras.initializers.variance_scaling`.\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`, samples are\n drawn from a truncated/untruncated normal distribution with a mean of zero and\n a standard deviation (after truncation, if used) `stddev = sqrt(scale / n)`,\n where `n` is:\n\n - number of input units in the weight tensor, if `mode=\"fan_in\"`\n - number of output units, if `mode=\"fan_out\"`\n - average of the numbers of input and output units, if `mode=\"fan_avg\"`\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within `[-limit, limit]`, where `limit = sqrt(3 * scale / n)`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"truncated_normal\",\n \"untruncated_normal\" and \"uniform\".\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.initializers.Zeros", "docs": "Initializer that generates tensors initialized to 0.\n\n Also available via the shortcut function `tf.keras.initializers.zeros`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Zeros()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Zeros()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n ", "desc": "Initializer that generates tensors initialized to 0.", "type": "API"}, {"name": "tf.inside_function", "docs": "Indicates whether the caller code is executing inside a `tf.function`.\n\n Returns:\n Boolean, True if the caller code is executing inside a `tf.function`\n rather than eagerly.\n\n Example:\n\n >>> tf.inside_function()\n False\n >>> @tf.function\n ... def f():\n ... print(tf.inside_function())\n >>> f()\n True\n ", "desc": "Indicates whether the caller code is executing inside a `tf.function`.", "type": "API"}, {"name": "tf.io", "docs": "Public API for tf.io namespace.\n", "desc": "Public API for tf.io namespace.", "type": "API"}, {"name": "tf.io.decode_and_crop_jpeg", "docs": "Decode and Crop a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n It is equivalent to a combination of decode and crop, but much faster by only\n decoding partial jpeg image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n crop_window: A `Tensor` of type `int32`.\n 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode and Crop a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.io.decode_base64", "docs": "Decode web-safe base64-encoded strings.\n\n Input may or may not have padding at the end. See\n [EncodeBase64](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64)\n for padding. Web-safe means that input must use - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Base64 strings to decode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decode web-safe base64-encoded strings.", "type": "API"}, {"name": "tf.io.decode_bmp", "docs": "Decode the first frame of a BMP-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the BMP-encoded image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The BMP-encoded image.\n channels: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the first frame of a BMP-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.io.decode_compressed", "docs": "Decompress strings.\n\n This op decompresses each element of the `bytes` input `Tensor`, which\n is assumed to be compressed using the given `compression_type`.\n\n The `output` is a string `Tensor` of the same shape as `bytes`,\n each element containing the decompressed data from the corresponding\n element in `bytes`.\n\n Args:\n bytes: A `Tensor` of type `string`.\n A Tensor of string which is compressed.\n compression_type: An optional `string`. Defaults to `\"\"`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decompress strings.", "type": "API"}, {"name": "tf.io.decode_csv", "docs": "Convert CSV records to tensors. Each column maps to one tensor.\n\n RFC 4180 format is expected for the CSV records.\n (https://tools.ietf.org/html/rfc4180)\n Note that we allow leading and trailing spaces with int or float field.\n\n Args:\n records: A `Tensor` of type `string`.\n Each string is a record/row in the csv and all records should have\n the same format.\n record_defaults: A list of `Tensor` objects with specific types.\n Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`.\n One tensor per column of the input record, with either a\n scalar default value for that column or an empty vector if the column is\n required.\n field_delim: An optional `string`. Defaults to `\",\"`.\n char delimiter to separate fields in a record.\n use_quote_delim: An optional `bool`. Defaults to `True`.\n If false, treats double quotation marks as regular\n characters inside of the string fields (ignoring RFC 4180, Section 2,\n Bullet 5).\n na_value: Additional string to recognize as NA/NaN.\n select_cols: Optional sorted list of column indices to select. If specified,\n only this subset of columns will be parsed and returned.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `record_defaults`.\n Each tensor will have the same shape as records.\n\n Raises:\n ValueError: If any of the arguments is malformed.\n ", "desc": "Convert CSV records to tensors. Each column maps to one tensor.", "type": "API"}, {"name": "tf.io.decode_gif", "docs": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.\n\n GIF images with frame or transparency compression are not supported.\n On Linux and MacOS systems, convert animated GIFs from compressed to\n uncompressed by running:\n\n convert $src.gif -coalesce $dst.gif\n\n This op also supports decoding JPEGs and PNGs, though it is cleaner to use\n `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The GIF-encoded image.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.io.decode_image", "docs": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.\n\n Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the\n appropriate operation to convert the input bytes `string` into a `Tensor`\n of type `dtype`.\n\n Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as\n opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D\n arrays `[height, width, num_channels]`. Make sure to take this into account\n when constructing your graph if you are intermixing GIF files with BMP, JPEG,\n and/or PNG files. Alternately, set the `expand_animations` argument of this\n function to `False`, in which case the op will return 3-dimensional tensors\n and will truncate animated GIF files to the first frame.\n\n NOTE: If the first frame of an animated GIF does not occupy the entire\n canvas (maximum frame width x maximum frame height), then it fills the\n unoccupied areas (in the first frame) with zeros (black). For frames after the\n first frame that does not occupy the entire canvas, it uses the previous\n frame to fill the unoccupied areas.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The encoded image bytes.\n channels: An optional `int`. Defaults to `0`. Number of color channels for\n the decoded image.\n dtype: The desired DType of the returned `Tensor`.\n name: A name for the operation (optional)\n expand_animations: An optional `bool`. Defaults to `True`. Controls the\n shape of the returned op's output. If `True`, the returned op will produce\n a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all GIFs,\n whether animated or not. If, `False`, the returned op will produce a 3-D\n tensor for all file types and will truncate animated GIFs to the first\n frame.\n\n Returns:\n `Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on\n the file type and the value of the `expand_animations` parameter.\n\n Raises:\n ValueError: On incorrect number of channels.\n ", "desc": "Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.", "type": "API"}, {"name": "tf.io.decode_jpeg", "docs": "Decode a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n This op also supports decoding PNGs and non-animated GIFs since the interface is\n the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.io.decode_json_example", "docs": "Convert JSON-encoded Example records to binary protocol buffer strings.\n\n Note: This is **not** a general purpose JSON parsing op.\n\n This op converts JSON-serialized `tf.train.Example` (maybe created with\n `json_format.MessageToJson`, following the\n [standard JSON mapping](\n https://developers.google.com/protocol-buffers/docs/proto3#json))\n to a binary-serialized `tf.train.Example` (equivalent to\n `Example.SerializeToString()`) suitable for conversion to tensors with\n `tf.io.parse_example`.\n\n Here is a `tf.train.Example` proto:\n\n >>> example = tf.train.Example(\n ... features=tf.train.Features(\n ... feature={\n ... \"a\": tf.train.Feature(\n ... int64_list=tf.train.Int64List(\n ... value=[1, 1, 3]))}))\n\n Here it is converted to JSON:\n\n >>> from google.protobuf import json_format\n >>> example_json = json_format.MessageToJson(example)\n >>> print(example_json)\n {\n \"features\": {\n \"feature\": {\n \"a\": {\n \"int64List\": {\n \"value\": [\n \"1\",\n \"1\",\n \"3\"\n ]\n }\n }\n }\n }\n }\n\n This op converts the above json string to a binary proto:\n\n >>> example_binary = tf.io.decode_json_example(example_json)\n >>> example_binary.numpy()\n b'\\n\\x0f\\n\\r\\n\\x01a\\x12\\x08\\x1a\\x06\\x08\\x01\\x08\\x01\\x08\\x03'\n\n The OP works on string tensors of andy shape:\n\n >>> tf.io.decode_json_example([\n ... [example_json, example_json],\n ... [example_json, example_json]]).shape.as_list()\n [2, 2]\n\n This resulting binary-string is equivalent to `Example.SerializeToString()`,\n and can be converted to Tensors using `tf.io.parse_example` and related\n functions:\n\n >>> tf.io.parse_example(\n ... serialized=[example_binary.numpy(),\n ... example.SerializeToString()],\n ... features = {'a': tf.io.FixedLenFeature(shape=[3], dtype=tf.int64)})\n {'a': }\n\n Args:\n json_examples: A string tensor containing json-serialized `tf.Example`\n protos.\n name: A name for the op.\n\n Returns:\n A string Tensor containing the binary-serialized `tf.Example` protos.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If the JSON could not be converted to a\n `tf.Example`\n ", "desc": "Convert JSON-encoded Example records to binary protocol buffer strings.", "type": "API"}, {"name": "tf.io.decode_png", "docs": "Decode a PNG-encoded image to a uint8 or uint16 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the PNG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n If needed, the PNG-encoded image is transformed to match the requested number\n of color channels.\n\n This op also supports decoding JPEGs and non-animated GIFs since the interface\n is the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The PNG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Decode a PNG-encoded image to a uint8 or uint16 tensor.", "type": "API"}, {"name": "tf.io.decode_proto", "docs": "The op extracts fields from a serialized protocol buffers message into tensors.\n\n Note: This API is designed for orthogonality rather than human-friendliness. It\n can be used to parse input protos by hand, but it is intended for use in\n generated code.\n\n The `decode_proto` op extracts fields from a serialized protocol buffers\n message into tensors. The fields in `field_names` are decoded and converted\n to the corresponding `output_types` if possible.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n Each output tensor is a dense tensor. This means that it is padded to hold\n the largest number of repeated elements seen in the input minibatch. (The\n shape is also padded by one to prevent zero-sized dimensions). The actual\n repeat counts for each example in the minibatch can be found in the `sizes`\n output. In many cases the output of `decode_proto` is fed immediately into\n tf.squeeze if missing values are not a concern. When using tf.squeeze, always\n pass the squeeze dimension explicitly to avoid surprises.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n - `map` fields are not directly decoded. They are treated as `repeated` fields,\n of the appropriate entry type. The proto-compiler defines entry types for each\n map field. The type-name is the field name, converted to \"CamelCase\" with\n \"Entry\" appended. The `tf.train.Features.FeatureEntry` message is an example of\n one of these implicit `Entry` types.\n\n - `enum` fields should be read as int32.\n\n Both binary and text proto serializations are supported, and can be\n chosen using the `format` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Here is an example:\n\n The, internal, `Summary.Value` proto contains a\n `oneof {float simple_value; Image image; ...}`\n\n >>> from google.protobuf import text_format\n >>>\n >>> # A Summary.Value contains: oneof {float simple_value; Image image}\n >>> values = [\n ... \"simple_value: 2.2\",\n ... \"simple_value: 1.2\",\n ... \"image { height: 128 width: 512 }\",\n ... \"image { height: 256 width: 256 }\",]\n >>> values = [\n ... text_format.Parse(v, tf.compat.v1.Summary.Value()).SerializeToString()\n ... for v in values]\n\n The following can decode both fields from the serialized strings:\n\n >>> sizes, [simple_value, image] = tf.io.decode_proto(\n ... values,\n ... tf.compat.v1.Summary.Value.DESCRIPTOR.full_name,\n ... field_names=['simple_value', 'image'],\n ... output_types=[tf.float32, tf.string])\n\n The `sizes` has the same shape as the input, with an additional axis across the\n fields that were decoded. Here the first column of `sizes` is the size of the\n decoded `simple_value` field:\n\n >>> print(sizes)\n tf.Tensor(\n [[1 0]\n [1 0]\n [0 1]\n [0 1]], shape=(4, 2), dtype=int32)\n\n The result tensors each have one more index than the input byte-strings.\n The valid elements of each result tensor are indicated by\n the appropriate column of `sizes`. The invalid elements are padded with a\n default value:\n\n >>> print(simple_value)\n tf.Tensor(\n [[2.2]\n [1.2]\n [0. ]\n [0. ]], shape=(4, 1), dtype=float32)\n\n Nested protos are extracted as string tensors:\n\n >>> print(image.dtype)\n \n >>> print(image.shape.as_list())\n [4, 1]\n\n To convert to a `tf.RaggedTensor` representation use:\n\n >>> tf.RaggedTensor.from_tensor(simple_value, lengths=sizes[:, 0]).to_list()\n [[2.2], [1.2], [], []]\n\n Args:\n bytes: A `Tensor` of type `string`.\n Tensor of serialized protos with shape `batch_shape`.\n message_type: A `string`. Name of the proto message type to decode.\n field_names: A list of `strings`.\n List of strings containing proto field names. An extension field can be decoded\n by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME.\n output_types: A list of `tf.DTypes`.\n List of TF types to use for the respective field in field_names.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n Either the special value `local://` or a path to a file containing\n a serialized `FileDescriptorSet`.\n message_format: An optional `string`. Defaults to `\"binary\"`.\n Either `binary` or `text`.\n sanitize: An optional `bool`. Defaults to `False`.\n Whether to sanitize the result or not.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sizes, values).\n\n sizes: A `Tensor` of type `int32`.\n values: A list of `Tensor` objects of type `output_types`.\n ", "desc": "The op extracts fields from a serialized protocol buffers message into tensors.", "type": "API"}, {"name": "tf.io.decode_raw", "docs": "Convert raw bytes from input tensor into numeric tensors.\n\n Every component of the input tensor is interpreted as a sequence of bytes.\n These bytes are then decoded as numbers in the format specified by `out_type`.\n\n >>> tf.io.decode_raw(tf.constant(\"1\"), tf.uint8)\n \n >>> tf.io.decode_raw(tf.constant(\"1,2\"), tf.uint8)\n \n\n Note that the rank of the output tensor is always one more than the input one:\n\n >>> tf.io.decode_raw(tf.constant([\"1\",\"2\"]), tf.uint8).shape\n TensorShape([2, 1])\n >>> tf.io.decode_raw(tf.constant([[\"1\"],[\"2\"]]), tf.uint8).shape\n TensorShape([2, 1, 1])\n\n This is because each byte in the input is converted to a new value on the\n output (if output type is `uint8` or `int8`, otherwise chunks of inputs get\n coverted to a new value):\n\n >>> tf.io.decode_raw(tf.constant(\"123\"), tf.uint8)\n \n >>> tf.io.decode_raw(tf.constant(\"1234\"), tf.uint8)\n >> # chuncked output\n >>> tf.io.decode_raw(tf.constant(\"12\"), tf.uint16)\n \n >>> tf.io.decode_raw(tf.constant(\"1234\"), tf.uint16)\n >> # int64 output\n >>> tf.io.decode_raw(tf.constant(\"12345678\"), tf.int64)\n \n >>> tf.io.decode_raw(tf.constant(\"1234567887654321\"), tf.int64)\n \n\n The operation allows specifying endianness via the `little_endian` parameter.\n\n >>> tf.io.decode_raw(tf.constant(\"\\x0a\\x0b\"), tf.int16)\n \n >>> hex(2826)\n '0xb0a'\n >>> tf.io.decode_raw(tf.constant(\"\\x0a\\x0b\"), tf.int16, little_endian=False)\n \n >>> hex(2571)\n '0xa0b'\n\n If the elements of `input_bytes` are of different length, you must specify\n `fixed_length`:\n\n >>> tf.io.decode_raw(tf.constant([[\"1\"],[\"23\"]]), tf.uint8, fixed_length=4)\n \n\n If the `fixed_length` value is larger that the length of the `out_type` dtype,\n multiple values are generated:\n\n >>> tf.io.decode_raw(tf.constant([\"1212\"]), tf.uint16, fixed_length=4)\n >> x=''.join([chr(1), chr(2), chr(3), chr(4)])\n >>> tf.io.decode_raw(x, tf.uint16, fixed_length=2)\n \n >>> hex(513)\n '0x201'\n\n If `little_endian` and `fixed_length` are specified, truncation to the fixed\n length occurs before endianness conversion:\n\n >>> x=''.join([chr(1), chr(2), chr(3), chr(4)])\n >>> tf.io.decode_raw(x, tf.uint16, fixed_length=2, little_endian=False)\n \n >>> hex(258)\n '0x102'\n\n If input values all have the same length, then specifying `fixed_length`\n equal to the size of the strings should not change output:\n\n >>> x = [\"12345678\", \"87654321\"]\n >>> tf.io.decode_raw(x, tf.int16)\n \n >>> tf.io.decode_raw(x, tf.int16, fixed_length=len(x[0]))\n \n\n Args:\n input_bytes:\n Each element of the input Tensor is converted to an array of bytes.\n\n Currently, this must be a tensor of strings (bytes), although semantically\n the operation should support any input.\n out_type:\n `DType` of the output. Acceptable types are `half`, `float`, `double`,\n `int32`, `uint16`, `uint8`, `int16`, `int8`, `int64`.\n little_endian:\n Whether the `input_bytes` data is in little-endian format. Data will be\n converted into host byte order if necessary.\n fixed_length:\n If set, the first `fixed_length` bytes of each element will be converted.\n Data will be zero-padded or truncated to the specified length.\n\n `fixed_length` must be a multiple of the size of `out_type`.\n\n `fixed_length` must be specified if the elements of `input_bytes` are of\n variable length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` object storing the decoded bytes.\n ", "desc": "Convert raw bytes from input tensor into numeric tensors.", "type": "API"}, {"name": "tf.io.deserialize_many_sparse", "docs": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.\n\n The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where\n `N` is the minibatch size and the rows correspond to packed outputs of\n `serialize_sparse`. The ranks of the original `SparseTensor` objects\n must all match. When the final `SparseTensor` is created, it has rank one\n higher than the ranks of the incoming `SparseTensor` objects (they have been\n concatenated along a new row dimension).\n\n The output `SparseTensor` object's shape values for all dimensions but the\n first are the max across the input `SparseTensor` objects' shape values\n for the corresponding dimensions. Its first shape value is `N`, the minibatch\n size.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `sparse.reorder` to restore index ordering.\n\n For example, if the serialized input is a `[2, 3]` matrix representing two\n original `SparseTensor` objects:\n\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n\n and\n\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n\n then the final deserialized `SparseTensor` will be:\n\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n\n Args:\n serialized_sparse: 2-D `Tensor` of type `string` of shape `[N, 3]`.\n The serialized and packed `SparseTensor` objects.\n dtype: The `dtype` of the serialized `SparseTensor` objects.\n rank: (optional) Python int, the rank of the `SparseTensor` objects.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` representing the deserialized `SparseTensor`s,\n concatenated along the `SparseTensor`s' first dimension.\n\n All of the serialized `SparseTensor`s must have had the same rank and type.\n ", "desc": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.", "type": "API"}, {"name": "tf.io.encode_base64", "docs": "Encode strings into web-safe base64 format.\n\n Refer to [this article](https://en.wikipedia.org/wiki/Base64) for more information on\n base64 format. Base64 strings may have padding with '=' at the\n end so that the encoded has length multiple of 4. See Padding section of the\n link above.\n\n Web-safe means that the encoder uses - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Strings to be encoded.\n pad: An optional `bool`. Defaults to `False`.\n Bool whether padding is applied at the ends.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode strings into web-safe base64 format.", "type": "API"}, {"name": "tf.io.encode_jpeg", "docs": "JPEG-encode an image.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n\n The attr `format` can be used to override the color format of the encoded\n output. Values can be:\n\n * `''`: Use a default format based on the number of channels in the image.\n * `grayscale`: Output a grayscale JPEG image. The `channels` dimension\n of `image` must be 1.\n * `rgb`: Output an RGB JPEG image. The `channels` dimension\n of `image` must be 3.\n\n If `format` is not specified or is the empty string, a default format is picked\n in function of the number of channels in `image`:\n\n * 1: Output a grayscale image.\n * 3: Output an RGB image.\n\n Args:\n image: A `Tensor` of type `uint8`.\n 3-D with shape `[height, width, channels]`.\n format: An optional `string` from: `\"\", \"grayscale\", \"rgb\"`. Defaults to `\"\"`.\n Per pixel image format.\n quality: An optional `int`. Defaults to `95`.\n Quality of the compression from 0 to 100 (higher is better and slower).\n progressive: An optional `bool`. Defaults to `False`.\n If True, create a JPEG that loads progressively (coarse to fine).\n optimize_size: An optional `bool`. Defaults to `False`.\n If True, spend CPU/RAM to reduce size with no quality change.\n chroma_downsampling: An optional `bool`. Defaults to `True`.\n See http://en.wikipedia.org/wiki/Chroma_subsampling.\n density_unit: An optional `string` from: `\"in\", \"cm\"`. Defaults to `\"in\"`.\n Unit used to specify `x_density` and `y_density`:\n pixels per inch (`'in'`) or centimeter (`'cm'`).\n x_density: An optional `int`. Defaults to `300`.\n Horizontal pixels per density unit.\n y_density: An optional `int`. Defaults to `300`.\n Vertical pixels per density unit.\n xmp_metadata: An optional `string`. Defaults to `\"\"`.\n If not empty, embed this XMP metadata in the image header.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG-encode an image.", "type": "API"}, {"name": "tf.io.encode_png", "docs": "PNG-encode an image.\n\n `image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`\n where `channels` is:\n\n * 1: for grayscale.\n * 2: for grayscale + alpha.\n * 3: for RGB.\n * 4: for RGBA.\n\n The ZLIB compression level, `compression`, can be -1 for the PNG-encoder\n default or a value from 0 to 9. 9 is the highest compression level,\n generating the smallest output, but is slower.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.\n 3-D with shape `[height, width, channels]`.\n compression: An optional `int`. Defaults to `-1`. Compression level.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "PNG-encode an image.", "type": "API"}, {"name": "tf.io.encode_proto", "docs": "The op serializes protobuf messages provided in the input tensors.\n\n The types of the tensors in `values` must match the schema for the fields\n specified in `field_names`. All the tensors in `values` must have a common\n shape prefix, *batch_shape*.\n\n The `sizes` tensor specifies repeat counts for each field. The repeat count\n (last dimension) of a each tensor in `values` must be greater than or equal\n to corresponding repeat count in `sizes`.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Args:\n sizes: A `Tensor` of type `int32`.\n Tensor of int32 with shape `[batch_shape, len(field_names)]`.\n values: A list of `Tensor` objects.\n List of tensors containing values for the corresponding field.\n field_names: A list of `strings`.\n List of strings containing proto field names.\n message_type: A `string`. Name of the proto message type to decode.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "The op serializes protobuf messages provided in the input tensors.", "type": "API"}, {"name": "tf.io.extract_jpeg_shape", "docs": "Extract the shape information of a JPEG-encoded image.\n\n This op only parses the image header, so it is much faster than DecodeJpeg.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n (Optional) The output type of the operation (int32 or int64).\n Defaults to int32.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Extract the shape information of a JPEG-encoded image.", "type": "API"}, {"name": "tf.io.FixedLenFeature", "docs": "Configuration for parsing a fixed-length input feature.\n\n To treat sparse input as dense, provide a `default_value`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data.\n dtype: Data type of input.\n default_value: Value to be used if an example is missing this feature. It\n must be compatible with `dtype` and of the specified `shape`.\n ", "desc": "Configuration for parsing a fixed-length input feature.", "type": "API"}, {"name": "tf.io.FixedLenSequenceFeature", "docs": "Configuration for parsing a variable-length input feature into a `Tensor`.\n\n The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has\n a static `shape` of `[None] + shape` and the specified `dtype`.\n The resulting `Tensor` of parsing a `batch_size` many `Example`s has\n a static `shape` of `[batch_size, None] + shape` and the specified `dtype`.\n The entries in the `batch` from different `Examples` will be padded with\n `default_value` to the maximum length present in the `batch`.\n\n To treat a sparse input as dense, provide `allow_missing=True`; otherwise,\n the parse functions will fail on any examples missing this feature.\n\n Fields:\n shape: Shape of input data for dimension 2 and higher. First dimension is\n of variable length `None`.\n dtype: Data type of input.\n allow_missing: Whether to allow this feature to be missing from a feature\n list item. Is available only for parsing `SequenceExample` not for\n parsing `Examples`.\n default_value: Scalar value to be used to pad multiple `Example`s to their\n maximum length. Irrelevant for parsing a single `Example` or\n `SequenceExample`. Defaults to \"\" for dtype string and 0 otherwise\n (optional).\n ", "desc": "Configuration for parsing a variable-length input feature into a `Tensor`.", "type": "API"}, {"name": "tf.io.gfile", "docs": "Public API for tf.io.gfile namespace.\n", "desc": "Public API for tf.io.gfile namespace.", "type": "API"}, {"name": "tf.io.gfile.copy", "docs": "Copies data from `src` to `dst`.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.exists(\"/tmp/y\")\n True\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that you need to always specify a file name, even if moving into a new\n directory. This is because some cloud filesystems don't have the concept of a\n directory.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.mkdir(\"/tmp/new_dir\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"/tmp/new_dir/y\")\n >>> tf.io.gfile.exists(\"/tmp/new_dir/y\")\n True\n >>> tf.io.gfile.rmtree(\"/tmp/new_dir\")\n\n If you want to prevent errors if the path already exists, you can use\n `overwrite` argument:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\")\n >>> tf.io.gfile.copy(\"/tmp/x\", \"file:///tmp/y\", overwrite=True)\n >>> tf.io.gfile.remove(\"/tmp/y\")\n\n Note that the above will still result in an error if you try to overwrite a\n directory with a file.\n\n Note that you cannot copy a directory, only file arguments are supported.\n\n Args:\n src: string, name of the file whose contents need to be copied\n dst: string, name of the file to which to copy to\n overwrite: boolean, if false it's an error for `dst` to be occupied by an\n existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Copies data from `src` to `dst`.", "type": "API"}, {"name": "tf.io.gfile.exists", "docs": "Determines whether a path exists or not.\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"/tmp/x\")\n True\n\n You can also specify the URI scheme for selecting a different filesystem:\n\n >>> # for a GCS filesystem path:\n >>> # tf.io.gfile.exists(\"gs://bucket/file\")\n >>> # for a local filesystem:\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.gfile.exists(\"file:///tmp/x\")\n True\n\n This currently returns `True` for existing directories but don't rely on this\n behavior, especially if you are using cloud filesystems (e.g., GCS, S3,\n Hadoop):\n\n >>> tf.io.gfile.exists(\"/tmp\")\n True\n\n Args:\n path: string, a path\n\n Returns:\n True if the path exists, whether it's a file or a directory.\n False if the path does not exist and there are no filesystem errors.\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API.\n ", "desc": "Determines whether a path exists or not.", "type": "API"}, {"name": "tf.io.gfile.GFile", "docs": "File I/O wrappers without thread locking.\n\n The main roles of the `tf.io.gfile` module are:\n\n 1. To provide an API that is close to Python's file I/O objects, and\n 2. To provide an implementation based on TensorFlow's C++ FileSystem API.\n\n The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a `gs://` prefix, and\n HDFS (using an `hdfs://` prefix). TensorFlow exports these as `tf.io.gfile`,\n so that you can use these implementations for saving and loading checkpoints,\n writing to TensorBoard logs, and accessing training data (among other uses).\n However, if all your files are local, you can use the regular Python file\n API without any problem.\n\n *Note*: though similar to Python's I/O implementation, there are semantic\n differences to make `tf.io.gfile` more efficient for backing filesystems. For\n example, a write mode file will not be opened until the first write call to\n minimize RPC invocations in network filesystems.\n\n Once you obtain a `GFile` object, you can use it in most ways as you would any\n Python's file object:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdf\")\n 4\n >>> with tf.io.gfile.GFile(\"/tmp/x\") as f:\n ... f.read()\n 'asdf'\n\n The difference is that you can specify URI schemes to use other filesystems\n (e.g., `gs://` for GCS, `s3://` for S3, etc.), if they are supported. Using\n `file://` as an example, we have:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"qwert\")\n ... f.write(\"asdf\")\n >>> tf.io.gfile.GFile(\"file:///tmp/x\").read()\n 'qwertasdf'\n\n You can also read all lines of a file directly:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> tf.io.gfile.GFile(\"/tmp/x\").readlines()\n ['asdf\\n', 'qwer\\n']\n\n You can iterate over the lines:\n\n >>> with tf.io.gfile.GFile(\"file:///tmp/x\", \"w\") as f:\n ... f.write(\"asdf\\n\")\n ... f.write(\"qwer\\n\")\n >>> for line in tf.io.gfile.GFile(\"/tmp/x\"):\n ... print(line[:-1]) # removes the end of line character\n asdf\n qwer\n\n Random access read is possible if the underlying filesystem supports it:\n\n >>> with open(\"/tmp/x\", \"w\") as f:\n ... f.write(\"asdfqwer\")\n >>> f = tf.io.gfile.GFile(\"/tmp/x\")\n >>> f.read(3)\n 'asd'\n >>> f.seek(4)\n >>> f.tell()\n 4\n >>> f.read(3)\n 'qwe'\n >>> f.tell()\n 7\n >>> f.close()\n ", "desc": "File I/O wrappers without thread locking.", "type": "API"}, {"name": "tf.io.gfile.glob", "docs": "Returns a list of files that match the given pattern(s).\n\n The patterns are defined as strings. Supported patterns are defined\n here. Note that the pattern can be a Python iteratable of string patterns.\n\n The format definition of the pattern is:\n\n **pattern**: `{ term }`\n\n **term**:\n * `'*'`: matches any sequence of non-'/' characters\n * `'?'`: matches a single non-'/' character\n * `'[' [ '^' ] { match-list } ']'`: matches any single\n character (not) on the list\n * `c`: matches character `c` where `c != '*', '?', '\\\\', '['`\n * `'\\\\' c`: matches character `c`\n\n **character range**:\n * `c`: matches character `c` while `c != '\\\\', '-', ']'`\n * `'\\\\' c`: matches character `c`\n * `lo '-' hi`: matches character `c` for `lo <= c <= hi`\n\n Examples:\n\n >>> tf.io.gfile.glob(\"*.py\")\n ... # For example, ['__init__.py']\n\n >>> tf.io.gfile.glob(\"__init__.??\")\n ... # As above\n\n >>> files = {\"*.py\"}\n >>> the_iterator = iter(files)\n >>> tf.io.gfile.glob(the_iterator)\n ... # As above\n\n See the C++ function `GetMatchingPaths` in\n [`core/platform/file_system.h`]\n (../../../core/platform/file_system.h)\n for implementation details.\n\n Args:\n pattern: string or iterable of strings. The glob pattern(s).\n\n Returns:\n A list of strings containing filenames that match the given pattern(s).\n\n Raises:\n errors.OpError: If there are filesystem / directory listing errors.\n errors.NotFoundError: If pattern to be matched is an invalid directory.\n ", "desc": "Returns a list of files that match the given pattern(s).", "type": "API"}, {"name": "tf.io.gfile.isdir", "docs": "Returns whether the path is a directory or not.\n\n Args:\n path: string, path to a potential directory\n\n Returns:\n True, if the path is a directory; False otherwise\n ", "desc": "Returns whether the path is a directory or not.", "type": "API"}, {"name": "tf.io.gfile.listdir", "docs": "Returns a list of entries contained within a directory.\n\n The list is in arbitrary order. It does not contain the special entries \".\"\n and \"..\".\n\n Args:\n path: string, path to a directory\n\n Returns:\n [filename1, filename2, ... filenameN] as strings\n\n Raises:\n errors.NotFoundError if directory doesn't exist\n ", "desc": "Returns a list of entries contained within a directory.", "type": "API"}, {"name": "tf.io.gfile.makedirs", "docs": "Creates a directory and all parent/intermediate directories.\n\n It succeeds if path already exists and is writable.\n\n Args:\n path: string, name of the directory to be created\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory and all parent/intermediate directories.", "type": "API"}, {"name": "tf.io.gfile.mkdir", "docs": "Creates a directory with the name given by `path`.\n\n Args:\n path: string, name of the directory to be created\n\n Notes: The parent directories need to exist. Use `tf.io.gfile.makedirs`\n instead if there is the possibility that the parent dirs don't exist.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Creates a directory with the name given by `path`.", "type": "API"}, {"name": "tf.io.gfile.remove", "docs": "Deletes the path located at 'path'.\n\n Args:\n path: string, a path\n\n Raises:\n errors.OpError: Propagates any errors reported by the FileSystem API. E.g.,\n `NotFoundError` if the path does not exist.\n ", "desc": "Deletes the path located at 'path'.", "type": "API"}, {"name": "tf.io.gfile.rename", "docs": "Rename or move a file / directory.\n\n Args:\n src: string, pathname for a file\n dst: string, pathname to which the file needs to be moved\n overwrite: boolean, if false it's an error for `dst` to be occupied by an\n existing file.\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Rename or move a file / directory.", "type": "API"}, {"name": "tf.io.gfile.rmtree", "docs": "Deletes everything under path recursively.\n\n Args:\n path: string, a path\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Deletes everything under path recursively.", "type": "API"}, {"name": "tf.io.gfile.stat", "docs": "Returns file statistics for a given path.\n\n Args:\n path: string, path to a file\n\n Returns:\n FileStatistics struct that contains information about the path\n\n Raises:\n errors.OpError: If the operation fails.\n ", "desc": "Returns file statistics for a given path.", "type": "API"}, {"name": "tf.io.gfile.walk", "docs": "Recursive directory tree generator for directories.\n\n Args:\n top: string, a Directory name\n topdown: bool, Traverse pre order if True, post order if False.\n onerror: optional handler for errors. Should be a function, it will be\n called with the error as argument. Rethrowing the error aborts the walk.\n Errors that happen while listing directories are ignored.\n\n Yields:\n Each yield is a 3-tuple: the pathname of a directory, followed by lists of\n all its subdirectories and leaf files. That is, each yield looks like:\n `(dirname, [subdirname, subdirname, ...], [filename, filename, ...])`.\n Each item is a string.\n ", "desc": "Recursive directory tree generator for directories.", "type": "API"}, {"name": "tf.io.is_jpeg", "docs": "Convenience function to check if the 'contents' encodes a JPEG image.\n\n Args:\n contents: 0-D `string`. The encoded image bytes.\n name: A name for the operation (optional)\n\n Returns:\n A scalar boolean tensor indicating if 'contents' may be a JPEG image.\n is_jpeg is susceptible to false positives.\n ", "desc": "Convenience function to check if the 'contents' encodes a JPEG image.", "type": "API"}, {"name": "tf.io.match_filenames_once", "docs": "Save the list of files matching pattern, so it is only computed once.\n\n NOTE: The order of the files returned is deterministic.\n\n Args:\n pattern: A file pattern (glob), or 1D tensor of file patterns.\n name: A name for the operations (optional).\n\n Returns:\n A variable that is initialized to the list of files matching the pattern(s).\n ", "desc": "Save the list of files matching pattern, so it is only computed once.", "type": "API"}, {"name": "tf.io.matching_files", "docs": "Returns the set of files matching one or more glob patterns.\n\n Note that this routine only supports wildcard characters in the\n basename portion of the pattern, not in the directory portion.\n Note also that the order of filenames returned is deterministic.\n\n Args:\n pattern: A `Tensor` of type `string`.\n Shell wildcard pattern(s). Scalar or vector of type string.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the set of files matching one or more glob patterns.", "type": "API"}, {"name": "tf.io.parse_example", "docs": "Parses `Example` protos into a `dict` of tensors.\n\n Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n protos given in `serialized`. We refer to `serialized` as a batch with\n `batch_size` many entries of individual `Example` protos.\n\n `example_names` may contain descriptive names for the corresponding serialized\n protos. These may be useful for debugging purposes, but they have no effect on\n the output. If not `None`, `example_names` must be the same length as\n `serialized`.\n\n This op parses serialized examples into a dictionary mapping keys to `Tensor`\n `SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to\n `VarLenFeature`, `SparseFeature`, `RaggedFeature`, and `FixedLenFeature`\n objects. Each `VarLenFeature` and `SparseFeature` is mapped to a\n `SparseTensor`; each `FixedLenFeature` is mapped to a `Tensor`; and each\n `RaggedFeature` is mapped to a `RaggedTensor`.\n\n Each `VarLenFeature` maps to a `SparseTensor` of the specified type\n representing a ragged matrix. Its indices are `[batch, index]` where `batch`\n identifies the example in `serialized`, and `index` is the value's index in\n the list of values associated with that feature and example.\n\n Each `SparseFeature` maps to a `SparseTensor` of the specified type\n representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`.\n Its `values` come from the feature in the examples with key `value_key`.\n A `values[i]` comes from a position `k` in the feature of an example at batch\n entry `batch`. This positional information is recorded in `indices[i]` as\n `[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of\n the feature in the example at with key `SparseFeature.index_key[j]`.\n In other words, we split the indices (except the first index indicating the\n batch entry) of a `SparseTensor` by dimension into different features of the\n `Example`. Due to its complexity a `VarLenFeature` should be preferred over a\n `SparseFeature` whenever possible.\n\n Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or\n `tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.\n\n `FixedLenFeature` entries with a `default_value` are optional. With no default\n value, we will fail if that `Feature` is missing from any example in\n `serialized`.\n\n Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type\n (or `tf.float32` if not specified) and shape\n `(serialized.size(), None) + df.shape`.\n All examples in `serialized` will be padded with `default_value` along the\n second dimension.\n\n Each `RaggedFeature` maps to a `RaggedTensor` of the specified type. It\n is formed by stacking the `RaggedTensor` for each example, where the\n `RaggedTensor` for each individual example is constructed using the tensors\n specified by `RaggedTensor.values_key` and `RaggedTensor.partition`. See\n the `tf.io.RaggedFeature` documentation for details and examples.\n\n Examples:\n\n For example, if one expects a `tf.float32` `VarLenFeature` `ft` and three\n serialized `Example`s are provided:\n\n ```\n serialized = [\n features\n { feature { key: \"ft\" value { float_list { value: [1.0, 2.0] } } } },\n features\n { feature []},\n features\n { feature { key: \"ft\" value { float_list { value: [3.0] } } }\n ]\n ```\n\n then the output will look like:\n\n ```python\n {\"ft\": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],\n values=[1.0, 2.0, 3.0],\n dense_shape=(3, 2)) }\n ```\n\n If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and\n `shape=[]` is used then the output will look like:\n\n ```python\n {\"ft\": [[1.0, 2.0], [3.0, -1.0]]}\n ```\n\n Given two `Example` input protos in `serialized`:\n\n ```\n [\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"knit\", \"big\" ] } } }\n feature { key: \"gps\" value { float_list { value: [] } } }\n },\n features {\n feature { key: \"kw\" value { bytes_list { value: [ \"emmy\" ] } } }\n feature { key: \"dank\" value { int64_list { value: [ 42 ] } } }\n feature { key: \"gps\" value { } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"kw\": VarLenFeature(tf.string),\n \"dank\": VarLenFeature(tf.int64),\n \"gps\": VarLenFeature(tf.float32),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"kw\": SparseTensor(\n indices=[[0, 0], [0, 1], [1, 0]],\n values=[\"knit\", \"big\", \"emmy\"]\n dense_shape=[2, 2]),\n \"dank\": SparseTensor(\n indices=[[1, 0]],\n values=[42],\n dense_shape=[2, 1]),\n \"gps\": SparseTensor(\n indices=[],\n values=[],\n dense_shape=[2, 0]),\n }\n ```\n\n For dense results in two serialized `Example`s:\n\n ```\n [\n features {\n feature { key: \"age\" value { int64_list { value: [ 0 ] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n },\n features {\n feature { key: \"age\" value { int64_list { value: [] } } }\n feature { key: \"gender\" value { bytes_list { value: [ \"f\" ] } } }\n }\n ]\n ```\n\n We can use arguments:\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"age\": FixedLenFeature([], dtype=tf.int64, default_value=-1),\n \"gender\": FixedLenFeature([], dtype=tf.string),\n }\n ```\n\n And the expected output is:\n\n ```python\n {\n \"age\": [[0], [-1]],\n \"gender\": [[\"f\"], [\"f\"]],\n }\n ```\n\n An alternative to `VarLenFeature` to obtain a `SparseTensor` is\n `SparseFeature`. For example, given two `Example` input protos in\n `serialized`:\n\n ```\n [\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 3, 20 ] } } }\n },\n features {\n feature { key: \"val\" value { float_list { value: [ 0.0 ] } } }\n feature { key: \"ix\" value { int64_list { value: [ 42 ] } } }\n }\n ]\n ```\n\n And arguments\n\n ```\n example_names: [\"input0\", \"input1\"],\n features: {\n \"sparse\": SparseFeature(\n index_key=\"ix\", value_key=\"val\", dtype=tf.float32, size=100),\n }\n ```\n\n Then the output is a dictionary:\n\n ```python\n {\n \"sparse\": SparseTensor(\n indices=[[0, 3], [0, 20], [1, 42]],\n values=[0.5, -1.0, 0.0]\n dense_shape=[2, 100]),\n }\n ```\n\n See the `tf.io.RaggedFeature` documentation for examples showing how\n `RaggedFeature` can be used to obtain `RaggedTensor`s.\n\n Args:\n serialized: A vector (1-D Tensor) of strings, a batch of binary\n serialized `Example` protos.\n features: A `dict` mapping feature keys to `FixedLenFeature`,\n `VarLenFeature`, `SparseFeature`, and `RaggedFeature` values.\n example_names: A vector (1-D Tensor) of strings (optional), the names of\n the serialized protos in the batch.\n name: A name for this operation (optional).\n\n Returns:\n A `dict` mapping feature keys to `Tensor`, `SparseTensor`, and\n `RaggedTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses `Example` protos into a `dict` of tensors.", "type": "API"}, {"name": "tf.io.parse_sequence_example", "docs": "Parses a batch of `SequenceExample` protos.\n\n Parses a vector of serialized\n [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n protos given in `serialized`.\n\n This op parses serialized sequence examples into a tuple of dictionaries,\n each mapping keys to `Tensor` and `SparseTensor` objects.\n The first dictionary contains mappings for keys appearing in\n `context_features`, and the second dictionary contains mappings for keys\n appearing in `sequence_features`.\n\n At least one of `context_features` and `sequence_features` must be provided\n and non-empty.\n\n The `context_features` keys are associated with a `SequenceExample` as a\n whole, independent of time / frame. In contrast, the `sequence_features` keys\n provide a way to access variable-length data within the `FeatureList` section\n of the `SequenceExample` proto. While the shapes of `context_features` values\n are fixed with respect to frame, the frame dimension (the first dimension)\n of `sequence_features` values may vary between `SequenceExample` protos,\n and even between `feature_list` keys within the same `SequenceExample`.\n\n `context_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and\n default value.\n\n `sequence_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and\n each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified\n type. The shape will be `(B,T,) + df.dense_shape` for\n `FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the\n length of the associated `FeatureList` in the `SequenceExample`. For instance,\n `FixedLenSequenceFeature([])` yields a scalar 2-D `Tensor` of static shape\n `[None, None]` and dynamic shape `[B, T]`, while\n `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D matrix `Tensor`\n of static shape `[None, None, k]` and dynamic shape `[B, T, k]`.\n\n Like the input, the resulting output tensors have a batch dimension. This\n means that the original per-example shapes of `VarLenFeature`s and\n `FixedLenSequenceFeature`s can be lost. To handle that situation, this op also\n provides dicts of shape tensors as part of the output. There is one dict for\n the context features, and one for the feature_list features. Context features\n of type `FixedLenFeature`s will not be present, since their shapes are already\n known by the caller. In situations where the input `FixedLenSequenceFeature`s\n are of different sequence lengths across examples, the shorter examples will\n be padded with default datatype values: 0 for numeric types, and the empty\n string for string types.\n\n Each `SparseTensor` corresponding to `sequence_features` represents a ragged\n vector. Its indices are `[time, index]`, where `time` is the `FeatureList`\n entry and `index` is the value's index in the list of values associated with\n that time.\n\n `FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`\n entries with `allow_missing=True` are optional; otherwise, we will fail if\n that `Feature` or `FeatureList` is missing from any example in `serialized`.\n\n `example_name` may contain a descriptive name for the corresponding serialized\n proto. This may be useful for debugging purposes, but it has no effect on the\n output. If not `None`, `example_name` must be a scalar.\n\n Args:\n serialized: A vector (1-D Tensor) of type string containing binary\n serialized `SequenceExample` protos.\n context_features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` or `RaggedFeature` values. These features are associated\n with a `SequenceExample` as a whole.\n sequence_features: A `dict` mapping feature keys to\n `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values.\n These features are associated with data within the `FeatureList` section\n of the `SequenceExample` proto.\n example_names: A vector (1-D Tensor) of strings (optional), the name of the\n serialized protos.\n name: A name for this operation (optional).\n\n Returns:\n A tuple of three `dict`s, each mapping keys to `Tensor`s,\n `SparseTensor`s, and `RaggedTensor`. The first dict contains the context\n key/values, the second dict contains the feature_list key/values, and the\n final dict contains the lengths of any dense feature_list features.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a batch of `SequenceExample` protos.", "type": "API"}, {"name": "tf.io.parse_single_example", "docs": "Parses a single `Example` proto.\n\n Similar to `parse_example`, except:\n\n For dense tensors, the returned `Tensor` is identical to the output of\n `parse_example`, except there is no batch dimension, the output shape is the\n same as the shape given in `dense_shape`.\n\n For `SparseTensor`s, the first (batch) column of the indices matrix is removed\n (the indices matrix is a column vector), the values vector is unchanged, and\n the first (`batch_size`) entry of the shape vector is removed (it is now a\n single element vector).\n\n One might see performance advantages by batching `Example` protos with\n `parse_example` instead of using this function directly.\n\n Args:\n serialized: A scalar string Tensor, a single serialized Example.\n features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` values.\n example_names: (Optional) A scalar string Tensor, the associated name.\n name: A name for this operation (optional).\n\n Returns:\n A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `Example` proto.", "type": "API"}, {"name": "tf.io.parse_single_sequence_example", "docs": "Parses a single `SequenceExample` proto.\n\n Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)\n proto given in `serialized`.\n\n This op parses a serialized sequence example into a tuple of dictionaries,\n each mapping keys to `Tensor` and `SparseTensor` objects.\n The first dictionary contains mappings for keys appearing in\n `context_features`, and the second dictionary contains mappings for keys\n appearing in `sequence_features`.\n\n At least one of `context_features` and `sequence_features` must be provided\n and non-empty.\n\n The `context_features` keys are associated with a `SequenceExample` as a\n whole, independent of time / frame. In contrast, the `sequence_features` keys\n provide a way to access variable-length data within the `FeatureList` section\n of the `SequenceExample` proto. While the shapes of `context_features` values\n are fixed with respect to frame, the frame dimension (the first dimension)\n of `sequence_features` values may vary between `SequenceExample` protos,\n and even between `feature_list` keys within the same `SequenceExample`.\n\n `context_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`;\n each `RaggedFeature` is mapped to a `RaggedTensor`; and each `FixedLenFeature`\n is mapped to a `Tensor`, of the specified type, shape, and default value.\n\n `sequence_features` contains `VarLenFeature`, `RaggedFeature`, and\n `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a\n `SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each\n `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type.\n The shape will be `(T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`,\n where `T` is the length of the associated `FeatureList` in the\n `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar\n 1-D `Tensor` of static shape `[None]` and dynamic shape `[T]`, while\n `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor`\n of static shape `[None, k]` and dynamic shape `[T, k]`.\n\n Each `SparseTensor` corresponding to `sequence_features` represents a ragged\n vector. Its indices are `[time, index]`, where `time` is the `FeatureList`\n entry and `index` is the value's index in the list of values associated with\n that time.\n\n `FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`\n entries with `allow_missing=True` are optional; otherwise, we will fail if\n that `Feature` or `FeatureList` is missing from any example in `serialized`.\n\n `example_name` may contain a descriptive name for the corresponding serialized\n proto. This may be useful for debugging purposes, but it has no effect on the\n output. If not `None`, `example_name` must be a scalar.\n\n Note that the batch version of this function, `tf.parse_sequence_example`,\n is written for better memory efficiency and will be faster on large\n `SequenceExample`s.\n\n Args:\n serialized: A scalar (0-D Tensor) of type string, a single binary\n serialized `SequenceExample` proto.\n context_features: A `dict` mapping feature keys to `FixedLenFeature` or\n `VarLenFeature` or `RaggedFeature` values. These features are associated\n with a `SequenceExample` as a whole.\n sequence_features: A `dict` mapping feature keys to\n `FixedLenSequenceFeature` or `VarLenFeature` or `RaggedFeature` values.\n These features are associated with data within the `FeatureList` section\n of the `SequenceExample` proto.\n example_name: A scalar (0-D Tensor) of strings (optional), the name of\n the serialized proto.\n name: A name for this operation (optional).\n\n Returns:\n A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s\n and `RaggedTensor`s.\n\n * The first dict contains the context key/values.\n * The second dict contains the feature_list key/values.\n\n Raises:\n ValueError: if any feature is invalid.\n ", "desc": "Parses a single `SequenceExample` proto.", "type": "API"}, {"name": "tf.io.parse_tensor", "docs": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar string containing a serialized TensorProto proto.\n out_type: A `tf.DType`.\n The type of the serialized tensor. The provided type must match the\n type of the serialized tensor and no implicit conversion will take place.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.", "type": "API"}, {"name": "tf.io.RaggedFeature", "docs": "Configuration for passing a RaggedTensor input feature.\n\n `value_key` specifies the feature key for a variable-length list of values;\n and `partitions` specifies zero or more feature keys for partitioning those\n values into higher dimensions. Each element of `partitions` must be one of\n the following:\n\n * `tf.io.RaggedFeature.RowSplits(key: string)`\n * `tf.io.RaggedFeature.RowLengths(key: string)`\n * `tf.io.RaggedFeature.RowStarts(key: string)`\n * `tf.io.RaggedFeature.RowLimits(key: string)`\n * `tf.io.RaggedFeature.ValueRowIds(key: string)`\n * `tf.io.RaggedFeature.UniformRowLength(length: int)`.\n\n Where `key` is a feature key whose values are used to partition the values.\n Partitions are listed from outermost to innermost.\n\n * If `len(partitions) == 0` (the default), then:\n\n * A feature from a single `tf.Example` is parsed into a 1D `tf.Tensor`.\n * A feature from a batch of `tf.Example`s is parsed into a 2D\n `tf.RaggedTensor`, where the outer dimension is the batch dimension, and\n the inner (ragged) dimension is the feature length in each example.\n\n * If `len(partitions) == 1`, then:\n\n * A feature from a single `tf.Example` is parsed into a 2D\n `tf.RaggedTensor`, where the values taken from the `value_key` are\n separated into rows using the partition key.\n * A feature from a batch of `tf.Example`s is parsed into a 3D\n `tf.RaggedTensor`, where the outer dimension is the batch dimension,\n the two inner dimensions are formed by separating the `value_key` values\n from each example into rows using that example's partition key.\n\n * If `len(partitions) > 1`, then:\n\n * A feature from a single `tf.Example` is parsed into a `tf.RaggedTensor`\n whose rank is `len(partitions)+1`, and whose ragged_rank is\n `len(partitions)`.\n\n * A feature from a batch of `tf.Example`s is parsed into a `tf.RaggedTensor`\n whose rank is `len(partitions)+2` and whose ragged_rank is\n `len(partitions)+1`, where the outer dimension is the batch dimension.\n\n There is one exception: if the final (i.e., innermost) element(s) of\n `partitions` are `UniformRowLength`s, then the values are simply reshaped (as\n a higher-dimensional `tf.Tensor`), rather than being wrapped in a\n `tf.RaggedTensor`.\n\n #### Examples\n\n >>> import google.protobuf.text_format as pbtext\n >>> example_batch = [\n ... pbtext.Merge(r'''\n ... features {\n ... feature {key: \"v\" value {int64_list {value: [3, 1, 4, 1, 5, 9]}}}\n ... feature {key: \"s1\" value {int64_list {value: [0, 2, 3, 3, 6]}}}\n ... feature {key: \"s2\" value {int64_list {value: [0, 2, 3, 4]}}}\n ... }''', tf.train.Example()).SerializeToString(),\n ... pbtext.Merge(r'''\n ... features {\n ... feature {key: \"v\" value {int64_list {value: [2, 7, 1, 8, 2, 8, 1]}}}\n ... feature {key: \"s1\" value {int64_list {value: [0, 3, 4, 5, 7]}}}\n ... feature {key: \"s2\" value {int64_list {value: [0, 1, 1, 4]}}}\n ... }''', tf.train.Example()).SerializeToString()]\n\n >>> features = {\n ... # Zero partitions: returns 1D tf.Tensor for each Example.\n ... 'f1': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64),\n ... # One partition: returns 2D tf.RaggedTensor for each Example.\n ... 'f2': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64, partitions=[\n ... tf.io.RaggedFeature.RowSplits(\"s1\")]),\n ... # Two partitions: returns 3D tf.RaggedTensor for each Example.\n ... 'f3': tf.io.RaggedFeature(value_key=\"v\", dtype=tf.int64, partitions=[\n ... tf.io.RaggedFeature.RowSplits(\"s2\"),\n ... tf.io.RaggedFeature.RowSplits(\"s1\")])\n ... }\n\n >>> feature_dict = tf.io.parse_single_example(example_batch[0], features)\n >>> for (name, val) in sorted(feature_dict.items()):\n ... print('%s: %s' % (name, val))\n f1: tf.Tensor([3 1 4 1 5 9], shape=(6,), dtype=int64)\n f2: \n f3: \n\n >>> feature_dict = tf.io.parse_example(example_batch, features)\n >>> for (name, val) in sorted(feature_dict.items()):\n ... print('%s: %s' % (name, val))\n f1: \n f2: \n f3: \n\n Fields:\n dtype: Data type of the `RaggedTensor`. Must be one of:\n `tf.dtypes.int64`, `tf.dtypes.float32`, `tf.dtypes.string`.\n value_key: (Optional.) Key for a `Feature` in the input `Example`, whose\n parsed `Tensor` will be the resulting `RaggedTensor.flat_values`. If\n not specified, then it defaults to the key for this `RaggedFeature`.\n partitions: (Optional.) A list of objects specifying the row-partitioning\n tensors (from outermost to innermost). Each entry in this list must be\n one of:\n * `tf.io.RaggedFeature.RowSplits(key: string)`\n * `tf.io.RaggedFeature.RowLengths(key: string)`\n * `tf.io.RaggedFeature.RowStarts(key: string)`\n * `tf.io.RaggedFeature.RowLimits(key: string)`\n * `tf.io.RaggedFeature.ValueRowIds(key: string)`\n * `tf.io.RaggedFeature.UniformRowLength(length: int)`.\n Where `key` is a key for a `Feature` in the input `Example`, whose parsed\n `Tensor` will be the resulting row-partitioning tensor.\n row_splits_dtype: (Optional.) Data type for the row-partitioning tensor(s).\n One of `int32` or `int64`. Defaults to `int32`.\n validate: (Optional.) Boolean indicating whether or not to validate that\n the input values form a valid RaggedTensor. Defaults to `False`.\n ", "desc": "Configuration for passing a RaggedTensor input feature.", "type": "API"}, {"name": "tf.io.RaggedFeature.RowLengths", "docs": "RowLengths(key,)", "desc": "RowLengths(key,)", "type": "API"}, {"name": "tf.io.RaggedFeature.RowLimits", "docs": "RowLimits(key,)", "desc": "RowLimits(key,)", "type": "API"}, {"name": "tf.io.RaggedFeature.RowSplits", "docs": "RowSplits(key,)", "desc": "RowSplits(key,)", "type": "API"}, {"name": "tf.io.RaggedFeature.RowStarts", "docs": "RowStarts(key,)", "desc": "RowStarts(key,)", "type": "API"}, {"name": "tf.io.RaggedFeature.UniformRowLength", "docs": "UniformRowLength(length,)", "desc": "UniformRowLength(length,)", "type": "API"}, {"name": "tf.io.RaggedFeature.ValueRowIds", "docs": "ValueRowIds(key,)", "desc": "ValueRowIds(key,)", "type": "API"}, {"name": "tf.io.read_file", "docs": "Reads the contents of file.\n\n This operation returns a tensor with the entire contents of the input\n filename. It does not do any parsing, it just returns the contents as\n they are. Usually, this is the first step in the input pipeline.\n\n Example:\n\n >>> with open(\"/tmp/file.txt\", \"w\") as f:\n ... f.write(\"asdf\")\n ...\n 4\n >>> tf.io.read_file(\"/tmp/file.txt\")\n \n\n Example of using the op in a function to read an image, decode it and reshape\n the tensor containing the pixel data:\n\n >>> @tf.function\n ... def load_image(filename):\n ... raw = tf.io.read_file(filename)\n ... image = tf.image.decode_png(raw, channels=3)\n ... # the `print` executes during tracing.\n ... print(\"Initial shape: \", image.shape)\n ... image.set_shape([28, 28, 3])\n ... print(\"Final shape: \", image.shape)\n ... return image\n\n Args:\n filename: string. filename to read from.\n name: string. Optional name for the op.\n\n Returns:\n A tensor of dtype \"string\", with the file contents.\n ", "desc": "Reads the contents of file.", "type": "API"}, {"name": "tf.io.serialize_many_sparse", "docs": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.\n\n The `SparseTensor` must have rank `R` greater than 1, and the first dimension\n is treated as the minibatch dimension. Elements of the `SparseTensor`\n must be sorted in increasing order of this first dimension. The serialized\n `SparseTensor` objects going into each row of the output `Tensor` will have\n rank `R-1`.\n\n The minibatch size `N` is extracted from `sparse_shape[0]`.\n\n Args:\n sp_input: The input rank `R` `SparseTensor`.\n out_type: The `dtype` to use for serialization.\n name: A name prefix for the returned tensors (optional).\n\n Returns:\n A matrix (2-D `Tensor`) with `N` rows and `3` columns. Each column\n represents serialized `SparseTensor`'s indices, values, and shape\n (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor`.", "type": "API"}, {"name": "tf.io.serialize_sparse", "docs": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.\n\n Args:\n sp_input: The input `SparseTensor`.\n out_type: The `dtype` to use for serialization.\n name: A name prefix for the returned tensors (optional).\n\n Returns:\n A 3-vector (1-D `Tensor`), with each column representing the serialized\n `SparseTensor`'s indices, values, and shape (respectively).\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Serialize a `SparseTensor` into a 3-vector (1-D `Tensor`) object.", "type": "API"}, {"name": "tf.io.serialize_tensor", "docs": "Transforms a Tensor into a serialized TensorProto proto.\n\n This operation transforms data in a `tf.Tensor` into a `tf.Tensor` of type\n `tf.string` containing the data in a binary string format. This operation can\n transform scalar data and linear arrays, but it is most useful in converting\n multidimensional arrays into a format accepted by binary storage formats such\n as a `TFRecord` or `tf.train.Example`.\n\n See also:\n - `tf.io.parse_tensor`: inverse operation of `tf.io.serialize_tensor` that\n transforms a scalar string containing a serialized Tensor into a Tensor of a\n specified type.\n - `tf.ensure_shape`: `parse_tensor` cannot statically determine the shape of\n the parsed tensor. Use `tf.ensure_shape` to set the static shape when running\n under a `tf.function`\n - `.SerializeToString`, serializes a proto to a binary-string\n\n Example of serializing scalar data:\n\n >>> t = tf.constant(1)\n >>> tf.io.serialize_tensor(t)\n \n\n Example of storing non-scalar data into a `tf.train.Example`:\n\n >>> t1 = [[1, 2]]\n >>> t2 = [[7, 8]]\n >>> nonscalar = tf.concat([t1, t2], 0)\n >>> nonscalar\n \n\n Serialize the data using `tf.io.serialize_tensor`.\n\n >>> serialized_nonscalar = tf.io.serialize_tensor(nonscalar)\n >>> serialized_nonscalar\n \n\n Store the data in a `tf.train.Feature`.\n\n >>> feature_of_bytes = tf.train.Feature(\n ... bytes_list=tf.train.BytesList(value=[serialized_nonscalar.numpy()]))\n >>> feature_of_bytes\n bytes_list {\n value: \"\\010...\\000\"\n }\n\n Put the `tf.train.Feature` message into a `tf.train.Example`.\n\n >>> features_for_example = {\n ... 'feature0': feature_of_bytes\n ... }\n >>> example_proto = tf.train.Example(\n ... features=tf.train.Features(feature=features_for_example))\n >>> example_proto\n features {\n feature {\n key: \"feature0\"\n value {\n bytes_list {\n value: \"\\010...\\000\"\n }\n }\n }\n }\n\n Args:\n tensor: A `tf.Tensor`.\n name: string. Optional name for the op.\n\n Returns:\n A Tensor of dtype string.\n ", "desc": "Transforms a Tensor into a serialized TensorProto proto.", "type": "API"}, {"name": "tf.io.SparseFeature", "docs": "Configuration for parsing a sparse input feature from an `Example`.\n\n Note, preferably use `VarLenFeature` (possibly in combination with a\n `SequenceExample`) in order to parse out `SparseTensor`s instead of\n `SparseFeature` due to its simplicity.\n\n Closely mimicking the `SparseTensor` that will be obtained by parsing an\n `Example` with a `SparseFeature` config, a `SparseFeature` contains a\n\n * `value_key`: The name of key for a `Feature` in the `Example` whose parsed\n `Tensor` will be the resulting `SparseTensor.values`.\n\n * `index_key`: A list of names - one for each dimension in the resulting\n `SparseTensor` whose `indices[i][dim]` indicating the position of\n the `i`-th value in the `dim` dimension will be equal to the `i`-th value in\n the Feature with key named `index_key[dim]` in the `Example`.\n\n * `size`: A list of ints for the resulting `SparseTensor.dense_shape`.\n\n For example, we can represent the following 2D `SparseTensor`\n\n ```python\n SparseTensor(indices=[[3, 1], [20, 0]],\n values=[0.5, -1.0]\n dense_shape=[100, 3])\n ```\n\n with an `Example` input proto\n\n ```python\n features {\n feature { key: \"val\" value { float_list { value: [ 0.5, -1.0 ] } } }\n feature { key: \"ix0\" value { int64_list { value: [ 3, 20 ] } } }\n feature { key: \"ix1\" value { int64_list { value: [ 1, 0 ] } } }\n }\n ```\n\n and `SparseFeature` config with 2 `index_key`s\n\n ```python\n SparseFeature(index_key=[\"ix0\", \"ix1\"],\n value_key=\"val\",\n dtype=tf.float32,\n size=[100, 3])\n ```\n\n Fields:\n index_key: A single string name or a list of string names of index features.\n For each key the underlying feature's type must be `int64` and its length\n must always match that of the `value_key` feature.\n To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1\n a list of length `rank` should be used.\n value_key: Name of value feature. The underlying feature's type must\n be `dtype` and its length must always match that of all the `index_key`s'\n features.\n dtype: Data type of the `value_key` feature.\n size: A Python int or list thereof specifying the dense shape. Should be a\n list if and only if `index_key` is a list. In that case the list must be\n equal to the length of `index_key`. Each for each entry `i` all values in\n the `index_key`[i] feature must be in `[0, size[i])`.\n already_sorted: A Python boolean to specify whether the values in\n `value_key` are already sorted by their index position. If so skip\n sorting. False by default (optional).\n ", "desc": "Configuration for parsing a sparse input feature from an `Example`.", "type": "API"}, {"name": "tf.io.TFRecordOptions", "docs": "Options used for manipulating TFRecord files.", "desc": "Options used for manipulating TFRecord files.", "type": "API"}, {"name": "tf.io.TFRecordWriter", "docs": "A class to write records to a TFRecords file.\n\n [TFRecords tutorial](https://www.tensorflow.org/tutorials/load_data/tfrecord)\n\n TFRecords is a binary format which is optimized for high throughput data\n retrieval, generally in conjunction with `tf.data`. `TFRecordWriter` is used\n to write serialized examples to a file for later consumption. The key steps\n are:\n\n Ahead of time:\n\n - [Convert data into a serialized format](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfexample)\n - [Write the serialized data to one or more files](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#tfrecord_files_in_python)\n\n During training or evaluation:\n\n - [Read serialized examples into memory](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n - [Parse (deserialize) examples](\n https://www.tensorflow.org/tutorials/load_data/tfrecord#reading_a_tfrecord_file)\n\n A minimal example is given below:\n\n >>> import tempfile\n >>> example_path = os.path.join(tempfile.gettempdir(), \"example.tfrecords\")\n >>> np.random.seed(0)\n\n >>> # Write the records to a file.\n ... with tf.io.TFRecordWriter(example_path) as file_writer:\n ... for _ in range(4):\n ... x, y = np.random.random(), np.random.random()\n ...\n ... record_bytes = tf.train.Example(features=tf.train.Features(feature={\n ... \"x\": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),\n ... \"y\": tf.train.Feature(float_list=tf.train.FloatList(value=[y])),\n ... })).SerializeToString()\n ... file_writer.write(record_bytes)\n\n >>> # Read the data back out.\n >>> def decode_fn(record_bytes):\n ... return tf.io.parse_single_example(\n ... # Data\n ... record_bytes,\n ...\n ... # Schema\n ... {\"x\": tf.io.FixedLenFeature([], dtype=tf.float32),\n ... \"y\": tf.io.FixedLenFeature([], dtype=tf.float32)}\n ... )\n\n >>> for batch in tf.data.TFRecordDataset([example_path]).map(decode_fn):\n ... print(\"x = {x:.4f}, y = {y:.4f}\".format(**batch))\n x = 0.5488, y = 0.7152\n x = 0.6028, y = 0.5449\n x = 0.4237, y = 0.6459\n x = 0.4376, y = 0.8918\n\n This class implements `__enter__` and `__exit__`, and can be used\n in `with` blocks like a normal file. (See the usage example above.)\n ", "desc": "A class to write records to a TFRecords file.", "type": "API"}, {"name": "tf.io.VarLenFeature", "docs": "Configuration for parsing a variable-length input feature.\n\n Fields:\n dtype: Data type of input.\n ", "desc": "Configuration for parsing a variable-length input feature.", "type": "API"}, {"name": "tf.io.write_file", "docs": "Writes `contents` to the file at input `filename`.\n\n Creates the file and recursively creates directory if it does not exist.\n\n Args:\n filename: A `Tensor` of type `string`.\n scalar. The name of the file to which we write the contents.\n contents: A `Tensor` of type `string`.\n scalar. The content to be written to the output file.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes `contents` to the file at input `filename`.", "type": "API"}, {"name": "tf.io.write_graph", "docs": "Writes a graph proto to a file.\n\n The graph is written as a text proto unless `as_text` is `False`.\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')\n ```\n\n or\n\n ```python\n v = tf.Variable(0, name='my_variable')\n sess = tf.compat.v1.Session()\n tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')\n ```\n\n Args:\n graph_or_graph_def: A `Graph` or a `GraphDef` protocol buffer.\n logdir: Directory where to write the graph. This can refer to remote\n filesystems, such as Google Cloud Storage (GCS).\n name: Filename for the graph.\n as_text: If `True`, writes the graph as an ASCII proto.\n\n Returns:\n The path of the output proto file.\n ", "desc": "Writes a graph proto to a file.", "type": "API"}, {"name": "tf.is_tensor", "docs": "Checks whether `x` is a TF-native type that can be passed to many TF ops.\n\n Use `is_tensor` to differentiate types that can ingested by TensorFlow ops\n without any conversion (e.g., `tf.Tensor`, `tf.SparseTensor`, and\n `tf.RaggedTensor`) from types that need to be converted into tensors before\n they are ingested (e.g., numpy `ndarray` and Python scalars).\n\n For example, in the following code block:\n\n ```python\n if not tf.is_tensor(t):\n t = tf.convert_to_tensor(t)\n return t.shape, t.dtype\n ```\n\n we check to make sure that `t` is a tensor (and convert it if not) before\n accessing its `shape` and `dtype`. (But note that not all TensorFlow native\n types have shapes or dtypes; `tf.data.Dataset` is an example of a TensorFlow\n native type that has neither shape nor dtype.)\n\n Args:\n x: A python object to check.\n\n Returns:\n `True` if `x` is a TensorFlow-native type.\n ", "desc": "Checks whether `x` is a TF-native type that can be passed to many TF ops.", "type": "API"}, {"name": "tf.keras", "docs": "Implementation of the Keras API, the high-level API of TensorFlow.\n\nDetailed documentation and user guides are available at\n[keras.io](https://keras.io).\n\n", "desc": "Implementation of the Keras API, the high-level API of TensorFlow.", "type": "API"}, {"name": "tf.keras.activations", "docs": "Built-in activation functions.\n", "desc": "Built-in activation functions.", "type": "API"}, {"name": "tf.keras.activations.deserialize", "docs": "Returns activation function given a string identifier.\n\n Args:\n name: The name of the activation function.\n custom_objects: Optional `{function_name: function_obj}`\n dictionary listing user-provided activation functions.\n\n Returns:\n Corresponding activation function.\n\n For example:\n\n >>> tf.keras.activations.deserialize('linear')\n \n >>> tf.keras.activations.deserialize('sigmoid')\n \n >>> tf.keras.activations.deserialize('abcd')\n Traceback (most recent call last):\n ...\n ValueError: Unknown activation function:abcd\n\n Raises:\n ValueError: `Unknown activation function` if the input string does not\n denote any defined Tensorflow activation function.\n ", "desc": "Returns activation function given a string identifier.", "type": "API"}, {"name": "tf.keras.activations.elu", "docs": "Exponential Linear Unit.\n\n The exponential linear unit (ELU) with `alpha > 0` is:\n `x` if `x > 0` and\n `alpha * (exp(x) - 1)` if `x < 0`\n The ELU hyperparameter `alpha` controls the value to which an\n ELU saturates for negative net inputs. ELUs diminish the\n vanishing gradient effect.\n\n ELUs have negative values which pushes the mean of the activations\n closer to zero.\n Mean activations that are closer to zero enable faster learning as they\n bring the gradient closer to the natural gradient.\n ELUs saturate to a negative value when the argument gets smaller.\n Saturation means a small derivative which decreases the variation\n and the information that is propagated to the next layer.\n\n Example Usage:\n\n >>> import tensorflow as tf\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu',\n ... input_shape=(28, 28, 1)))\n >>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))\n >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))\n >>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))\n >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))\n\n \n\n Args:\n x: Input tensor.\n alpha: A scalar, slope of negative section. `alpha` controls the value to\n which an ELU saturates for negative net inputs.\n\n Returns:\n The exponential linear unit (ELU) activation function: `x` if `x > 0` and\n `alpha * (exp(x) - 1)` if `x < 0`.\n\n\n Reference:\n [Fast and Accurate Deep Network Learning by Exponential Linear Units\n (ELUs) (Clevert et al, 2016)](https://arxiv.org/abs/1511.07289)\n ", "desc": "Exponential Linear Unit.", "type": "API"}, {"name": "tf.keras.activations.exponential", "docs": "Exponential activation function.\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.exponential(a)\n >>> b.numpy()\n array([0.04978707, 0.36787945, 1., 2.7182817 , 20.085537], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor with exponential activation: `exp(x)`.\n ", "desc": "Exponential activation function.", "type": "API"}, {"name": "tf.keras.activations.gelu", "docs": "Applies the Gaussian error linear unit (GELU) activation function.\n\n Gaussian error linear unit (GELU) computes\n `x * P(X <= x)`, where `P(X) ~ N(0, 1)`.\n The (GELU) nonlinearity weights inputs by their value, rather than gates\n inputs by their sign as in ReLU.\n\n For example:\n\n >>> x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)\n >>> y = tf.keras.activations.gelu(x)\n >>> y.numpy()\n array([-0.00404951, -0.15865529, 0. , 0.8413447 , 2.9959507 ],\n dtype=float32)\n >>> y = tf.keras.activations.gelu(x, approximate=True)\n >>> y.numpy()\n array([-0.00363752, -0.15880796, 0. , 0.841192 , 2.9963627 ],\n dtype=float32)\n\n Args:\n x: Input tensor.\n approximate: A `bool`, whether to enable approximation.\n\n Returns:\n The gaussian error linear activation:\n `0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))`\n if `approximate` is `True` or\n `x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))`,\n where `P(X) ~ N(0, 1)`,\n if `approximate` is `False`.\n\n Reference:\n - [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415)\n ", "desc": "Applies the Gaussian error linear unit (GELU) activation function.", "type": "API"}, {"name": "tf.keras.activations.get", "docs": "Returns function.\n\n Args:\n identifier: Function or string\n\n Returns:\n Function corresponding to the input string or input function.\n\n For example:\n\n >>> tf.keras.activations.get('softmax')\n \n >>> tf.keras.activations.get(tf.keras.activations.softmax)\n \n >>> tf.keras.activations.get(None)\n \n >>> tf.keras.activations.get(abs)\n \n >>> tf.keras.activations.get('abcd')\n Traceback (most recent call last):\n ...\n ValueError: Unknown activation function:abcd\n\n Raises:\n ValueError: Input is an unknown function or string, i.e., the input does\n not denote any defined function.\n ", "desc": "Returns function.", "type": "API"}, {"name": "tf.keras.activations.hard_sigmoid", "docs": "Hard sigmoid activation function.\n\n A faster approximation of the sigmoid activation.\n Piecewise linear approximation of the sigmoid function.\n Ref: 'https://en.wikipedia.org/wiki/Hard_sigmoid'\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.hard_sigmoid(a)\n >>> b.numpy()\n array([0. , 0.3, 0.5, 0.7, 1. ], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The hard sigmoid activation, defined as:\n\n - `if x < -2.5: return 0`\n - `if x > 2.5: return 1`\n - `if -2.5 <= x <= 2.5: return 0.2 * x + 0.5`\n ", "desc": "Hard sigmoid activation function.", "type": "API"}, {"name": "tf.keras.activations.linear", "docs": "Linear activation function (pass-through).\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.linear(a)\n >>> b.numpy()\n array([-3., -1., 0., 1., 3.], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The input, unmodified.\n ", "desc": "Linear activation function (pass-through).", "type": "API"}, {"name": "tf.keras.activations.relu", "docs": "Applies the rectified linear unit activation function.\n\n With default values, this returns the standard ReLU activation:\n `max(x, 0)`, the element-wise maximum of 0 and the input tensor.\n\n Modifying default parameters allows you to use non-zero thresholds,\n change the max value of the activation,\n and to use a non-zero multiple of the input for values below the threshold.\n\n For example:\n\n >>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype = tf.float32)\n >>> tf.keras.activations.relu(foo).numpy()\n array([ 0., 0., 0., 5., 10.], dtype=float32)\n >>> tf.keras.activations.relu(foo, alpha=0.5).numpy()\n array([-5. , -2.5, 0. , 5. , 10. ], dtype=float32)\n >>> tf.keras.activations.relu(foo, max_value=5.).numpy()\n array([0., 0., 0., 5., 5.], dtype=float32)\n >>> tf.keras.activations.relu(foo, threshold=5.).numpy()\n array([-0., -0., 0., 0., 10.], dtype=float32)\n\n Args:\n x: Input `tensor` or `variable`.\n alpha: A `float` that governs the slope for values lower than the\n threshold.\n max_value: A `float` that sets the saturation threshold (the largest value\n the function will return).\n threshold: A `float` giving the threshold value of the activation function\n below which values will be damped or set to zero.\n\n Returns:\n A `Tensor` representing the input tensor,\n transformed by the relu activation function.\n Tensor will be of the same shape and dtype of input `x`.\n ", "desc": "Applies the rectified linear unit activation function.", "type": "API"}, {"name": "tf.keras.activations.selu", "docs": "Scaled Exponential Linear Unit (SELU).\n\n The Scaled Exponential Linear Unit (SELU) activation function is defined as:\n\n - `if x > 0: return scale * x`\n - `if x < 0: return scale * alpha * (exp(x) - 1)`\n\n where `alpha` and `scale` are pre-defined constants\n (`alpha=1.67326324` and `scale=1.05070098`).\n\n Basically, the SELU activation function multiplies `scale` (> 1) with the\n output of the `tf.keras.activations.elu` function to ensure a slope larger\n than one for positive inputs.\n\n The values of `alpha` and `scale` are\n chosen so that the mean and variance of the inputs are preserved\n between two consecutive layers as long as the weights are initialized\n correctly (see `tf.keras.initializers.LecunNormal` initializer)\n and the number of input units is \"large enough\"\n (see reference paper for more information).\n\n Example Usage:\n\n >>> num_classes = 10 # 10-class problem\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',\n ... activation='selu'))\n >>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))\n\n Args:\n x: A tensor or variable to compute the activation function for.\n\n Returns:\n The scaled exponential unit activation: `scale * elu(x, alpha)`.\n\n Notes:\n - To be used together with the\n `tf.keras.initializers.LecunNormal` initializer.\n - To be used together with the dropout variant\n `tf.keras.layers.AlphaDropout` (not regular dropout).\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Scaled Exponential Linear Unit (SELU).", "type": "API"}, {"name": "tf.keras.activations.serialize", "docs": "Returns the string identifier of an activation function.\n\n Args:\n activation : Function object.\n\n Returns:\n String denoting the name attribute of the input function\n\n For example:\n\n >>> tf.keras.activations.serialize(tf.keras.activations.tanh)\n 'tanh'\n >>> tf.keras.activations.serialize(tf.keras.activations.sigmoid)\n 'sigmoid'\n >>> tf.keras.activations.serialize('abcd')\n Traceback (most recent call last):\n ...\n ValueError: ('Cannot serialize', 'abcd')\n\n Raises:\n ValueError: The input function is not a valid one.\n ", "desc": "Returns the string identifier of an activation function.", "type": "API"}, {"name": "tf.keras.activations.sigmoid", "docs": "Sigmoid activation function, `sigmoid(x) = 1 / (1 + exp(-x))`.\n\n Applies the sigmoid activation function. For small values (<-5),\n `sigmoid` returns a value close to zero, and for large values (>5)\n the result of the function gets close to 1.\n\n Sigmoid is equivalent to a 2-element Softmax, where the second element is\n assumed to be zero. The sigmoid function always returns a value between\n 0 and 1.\n\n For example:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.sigmoid(a)\n >>> b.numpy()\n array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01,\n 1.0000000e+00], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor with the sigmoid activation: `1 / (1 + exp(-x))`.\n ", "desc": "Sigmoid activation function, `sigmoid(x) = 1 / (1 + exp(-x))`.", "type": "API"}, {"name": "tf.keras.activations.softmax", "docs": "Softmax converts a vector of values to a probability distribution.\n\n The elements of the output vector are in range (0, 1) and sum to 1.\n\n Each vector is handled independently. The `axis` argument sets which axis\n of the input the function is applied along.\n\n Softmax is often used as the activation for the last\n layer of a classification network because the result could be interpreted as\n a probability distribution.\n\n The softmax of each vector x is computed as\n `exp(x) / tf.reduce_sum(exp(x))`.\n\n The input values in are the log-odds of the resulting probability.\n\n Args:\n x : Input tensor.\n axis: Integer, axis along which the softmax normalization is applied.\n\n Returns:\n Tensor, output of softmax transformation (all values are non-negative\n and sum to 1).\n\n Examples:\n\n **Example 1: standalone usage**\n\n >>> inputs = tf.random.normal(shape=(32, 10))\n >>> outputs = tf.keras.activations.softmax(inputs)\n >>> tf.reduce_sum(outputs[0, :]) # Each sample in the batch now sums to 1\n \n\n **Example 2: usage in a `Dense` layer**\n\n >>> layer = tf.keras.layers.Dense(32, activation=tf.keras.activations.softmax)\n ", "desc": "Softmax converts a vector of values to a probability distribution.", "type": "API"}, {"name": "tf.keras.activations.softplus", "docs": "Softplus activation function, `softplus(x) = log(exp(x) + 1)`.\n\n Example Usage:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.softplus(a)\n >>> b.numpy()\n array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00,\n 2.0000000e+01], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The softplus activation: `log(exp(x) + 1)`.\n ", "desc": "Softplus activation function, `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.keras.activations.softsign", "docs": "Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.\n\n Example Usage:\n\n >>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)\n >>> b = tf.keras.activations.softsign(a)\n >>> b.numpy()\n array([-0.5, 0. , 0.5], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The softsign activation: `x / (abs(x) + 1)`.\n ", "desc": "Softsign activation function, `softsign(x) = x / (abs(x) + 1)`.", "type": "API"}, {"name": "tf.keras.activations.swish", "docs": "Swish activation function, `swish(x) = x * sigmoid(x)`.\n\n Swish activation function which returns `x*sigmoid(x)`.\n It is a smooth, non-monotonic function that consistently matches\n or outperforms ReLU on deep networks, it is unbounded above and\n bounded below.\n\n\n Example Usage:\n\n >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)\n >>> b = tf.keras.activations.swish(a)\n >>> b.numpy()\n array([-4.1223075e-08, -2.6894143e-01, 0.0000000e+00, 7.3105860e-01,\n 2.0000000e+01], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n The swish activation applied to `x` (see reference paper for details).\n\n Reference:\n - [Ramachandran et al., 2017](https://arxiv.org/abs/1710.05941)\n ", "desc": "Swish activation function, `swish(x) = x * sigmoid(x)`.", "type": "API"}, {"name": "tf.keras.activations.tanh", "docs": "Hyperbolic tangent activation function.\n\n For example:\n\n >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32)\n >>> b = tf.keras.activations.tanh(a)\n >>> b.numpy()\n array([-0.9950547, -0.7615942, 0., 0.7615942, 0.9950547], dtype=float32)\n\n Args:\n x: Input tensor.\n\n Returns:\n Tensor of same shape and dtype of input `x`, with tanh activation:\n `tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x)))`.\n ", "desc": "Hyperbolic tangent activation function.", "type": "API"}, {"name": "tf.keras.applications", "docs": "Keras Applications are premade architectures with pre-trained weights.\n", "desc": "Keras Applications are premade architectures with pre-trained weights.", "type": "API"}, {"name": "tf.keras.applications.densenet", "docs": "DenseNet models for Keras.\n\nReference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n", "desc": "DenseNet models for Keras.", "type": "API"}, {"name": "tf.keras.applications.densenet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.densenet.DenseNet121", "docs": "Instantiates the Densenet121 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet121 architecture.", "type": "API"}, {"name": "tf.keras.applications.densenet.DenseNet169", "docs": "Instantiates the Densenet169 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet169 architecture.", "type": "API"}, {"name": "tf.keras.applications.densenet.DenseNet201", "docs": "Instantiates the Densenet201 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet201 architecture.", "type": "API"}, {"name": "tf.keras.applications.densenet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The input pixels values are scaled between 0 and 1 and each channel is\n normalized with respect to the ImageNet dataset.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.DenseNet121", "docs": "Instantiates the Densenet121 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet121 architecture.", "type": "API"}, {"name": "tf.keras.applications.DenseNet169", "docs": "Instantiates the Densenet169 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet169 architecture.", "type": "API"}, {"name": "tf.keras.applications.DenseNet201", "docs": "Instantiates the Densenet201 architecture.\n\n Reference:\n - [Densely Connected Convolutional Networks](\n https://arxiv.org/abs/1608.06993) (CVPR 2017)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For DenseNet, call `tf.keras.applications.densenet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the Densenet201 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet", "docs": "EfficientNet models for Keras.\n\nReference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n", "desc": "EfficientNet models for Keras.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB0", "docs": "Instantiates the EfficientNetB0 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB0 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB1", "docs": "Instantiates the EfficientNetB1 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB1 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB2", "docs": "Instantiates the EfficientNetB2 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB2 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB3", "docs": "Instantiates the EfficientNetB3 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB3 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB4", "docs": "Instantiates the EfficientNetB4 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB4 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB5", "docs": "Instantiates the EfficientNetB5 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB5 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB6", "docs": "Instantiates the EfficientNetB6 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB6 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.EfficientNetB7", "docs": "Instantiates the EfficientNetB7 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB7 architecture.", "type": "API"}, {"name": "tf.keras.applications.efficientnet.preprocess_input", "docs": "A placeholder method for backward compatibility.\n\n The preprocessing logic has been included in the efficientnet model\n implementation. Users are no longer required to call this method to normalize\n the input data. This method does nothing and only kept as a placeholder to\n align the API surface between old and new version of model.\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").{mode}\n\n Returns:\n Unchanged `numpy.array` or `tf.Tensor`.\n ", "desc": "A placeholder method for backward compatibility.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB0", "docs": "Instantiates the EfficientNetB0 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB0 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB1", "docs": "Instantiates the EfficientNetB1 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB1 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB2", "docs": "Instantiates the EfficientNetB2 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB2 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB3", "docs": "Instantiates the EfficientNetB3 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB3 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB4", "docs": "Instantiates the EfficientNetB4 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB4 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB5", "docs": "Instantiates the EfficientNetB5 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB5 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB6", "docs": "Instantiates the EfficientNetB6 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB6 architecture.", "type": "API"}, {"name": "tf.keras.applications.EfficientNetB7", "docs": "Instantiates the EfficientNetB7 architecture.\n\n Reference:\n - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](\n https://arxiv.org/abs/1905.11946) (ICML 2019)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For EfficientNet, input preprocessing is included as part of the model\n (as a `Rescaling` layer), and thus\n `tf.keras.applications.efficientnet.preprocess_input` is actually a\n pass-through function. EfficientNet models expect their inputs to be float\n tensors of pixels with values in the [0-255] range.\n\n Args:\n include_top: Whether to include the fully-connected\n layer at the top of the network. Defaults to True.\n weights: One of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded. Defaults to 'imagenet'.\n input_tensor: Optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False.\n It should have exactly 3 inputs channels.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`. Defaults to None.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Defaults to 1000 (number of\n ImageNet classes).\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n Defaults to 'softmax'.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the EfficientNetB7 architecture.", "type": "API"}, {"name": "tf.keras.applications.imagenet_utils", "docs": "Utilities for ImageNet data preprocessing & prediction decoding.\n", "desc": "Utilities for ImageNet data preprocessing & prediction decoding.", "type": "API"}, {"name": "tf.keras.applications.imagenet_utils.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.imagenet_utils.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n mode: One of \"caffe\", \"tf\" or \"torch\". Defaults to \"caffe\".\n - caffe: will convert the images from RGB to BGR,\n then will zero-center each color channel with\n respect to the ImageNet dataset,\n without scaling.\n - tf: will scale pixels between -1 and 1,\n sample-wise.\n - torch: will scale pixels between 0 and 1 and then\n will normalize each channel with respect to the\n ImageNet dataset.\n \n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n\n Raises:\n \n ValueError: In case of unknown `mode` or `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.inception_resnet_v2", "docs": "Inception-ResNet V2 model for Keras.\n\nReference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n", "desc": "Inception-ResNet V2 model for Keras.", "type": "API"}, {"name": "tf.keras.applications.inception_resnet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.inception_resnet_v2.InceptionResNetV2", "docs": "Instantiates the Inception-ResNet v2 architecture.\n\n Reference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For InceptionResNetV2, call\n `tf.keras.applications.inception_resnet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `inception_resnet_v2.preprocess_input`\n will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is `False` (otherwise the input shape\n has to be `(299, 299, 3)` (with `'channels_last'` data format)\n or `(3, 299, 299)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `'avg'` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `'max'` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is `True`, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception-ResNet v2 architecture.", "type": "API"}, {"name": "tf.keras.applications.inception_resnet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.inception_v3", "docs": "Inception V3 model for Keras.\n\nReference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n", "desc": "Inception V3 model for Keras.", "type": "API"}, {"name": "tf.keras.applications.inception_v3.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.inception_v3.InceptionV3", "docs": "Instantiates the Inception v3 architecture.\n\n Reference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For `InceptionV3`, call `tf.keras.applications.inception_v3.preprocess_input`\n on your inputs before passing them to the model.\n `inception_v3.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: Boolean, whether to include the fully-connected\n layer at the top, as the last layer of the network. Default to `True`.\n weights: One of `None` (random initialization),\n `imagenet` (pre-training on ImageNet),\n or the path to the weights file to be loaded. Default to `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)` (with `channels_last` data format)\n or `(3, 299, 299)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n `input_shape` will be ignored if the `input_tensor` is provided.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Default to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception v3 architecture.", "type": "API"}, {"name": "tf.keras.applications.inception_v3.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.InceptionResNetV2", "docs": "Instantiates the Inception-ResNet v2 architecture.\n\n Reference:\n - [Inception-v4, Inception-ResNet and the Impact of\n Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n (AAAI 2017)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For InceptionResNetV2, call\n `tf.keras.applications.inception_resnet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `inception_resnet_v2.preprocess_input`\n will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is `False` (otherwise the input shape\n has to be `(299, 299, 3)` (with `'channels_last'` data format)\n or `(3, 299, 299)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `'avg'` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `'max'` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is `True`, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception-ResNet v2 architecture.", "type": "API"}, {"name": "tf.keras.applications.InceptionV3", "docs": "Instantiates the Inception v3 architecture.\n\n Reference:\n - [Rethinking the Inception Architecture for Computer Vision](\n http://arxiv.org/abs/1512.00567) (CVPR 2016)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For `InceptionV3`, call `tf.keras.applications.inception_v3.preprocess_input`\n on your inputs before passing them to the model.\n `inception_v3.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: Boolean, whether to include the fully-connected\n layer at the top, as the last layer of the network. Default to `True`.\n weights: One of `None` (random initialization),\n `imagenet` (pre-training on ImageNet),\n or the path to the weights file to be loaded. Default to `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)` (with `channels_last` data format)\n or `(3, 299, 299)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 75.\n E.g. `(150, 150, 3)` would be one valid value.\n `input_shape` will be ignored if the `input_tensor` is provided.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified. Default to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Inception v3 architecture.", "type": "API"}, {"name": "tf.keras.applications.MobileNet", "docs": "Instantiates the MobileNet architecture.\n\n Reference:\n - [MobileNets: Efficient Convolutional Neural Networks\n for Mobile Vision Applications](\n https://arxiv.org/abs/1704.04861)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNet, call `tf.keras.applications.mobilenet.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, only to be specified if `include_top`\n is False (otherwise the input shape has to be `(224, 224, 3)` (with\n `channels_last` data format) or (3, 224, 224) (with `channels_first`\n data format). It should have exactly 3 inputs channels, and width and\n height should be no smaller than 32. E.g. `(200, 200, 3)` would be one\n valid value. Default to `None`.\n `input_shape` will be ignored if the `input_tensor` is provided.\n alpha: Controls the width of the network. This is known as the width\n multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally\n decreases the number of filters in each layer. - If `alpha` > 1.0,\n proportionally increases the number of filters in each layer. - If\n `alpha` = 1, default number of filters from the paper are used at each\n layer. Default to 1.0.\n depth_multiplier: Depth multiplier for depthwise convolution. This is\n called the resolution multiplier in the MobileNet paper. Default to 1.0.\n dropout: Dropout rate. Default to 0.001.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Default to `True`.\n weights: One of `None` (random initialization), 'imagenet' (pre-training\n on ImageNet), or the path to the weights file to be loaded. Default to\n `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`) to\n use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n pooling: Optional pooling mode for feature extraction when `include_top`\n is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: Optional number of classes to classify images into, only to be\n specified if `include_top` is True, and if no `weights` argument is\n specified. Defaults to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNet architecture.", "type": "API"}, {"name": "tf.keras.applications.mobilenet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.mobilenet.MobileNet", "docs": "Instantiates the MobileNet architecture.\n\n Reference:\n - [MobileNets: Efficient Convolutional Neural Networks\n for Mobile Vision Applications](\n https://arxiv.org/abs/1704.04861)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNet, call `tf.keras.applications.mobilenet.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, only to be specified if `include_top`\n is False (otherwise the input shape has to be `(224, 224, 3)` (with\n `channels_last` data format) or (3, 224, 224) (with `channels_first`\n data format). It should have exactly 3 inputs channels, and width and\n height should be no smaller than 32. E.g. `(200, 200, 3)` would be one\n valid value. Default to `None`.\n `input_shape` will be ignored if the `input_tensor` is provided.\n alpha: Controls the width of the network. This is known as the width\n multiplier in the MobileNet paper. - If `alpha` < 1.0, proportionally\n decreases the number of filters in each layer. - If `alpha` > 1.0,\n proportionally increases the number of filters in each layer. - If\n `alpha` = 1, default number of filters from the paper are used at each\n layer. Default to 1.0.\n depth_multiplier: Depth multiplier for depthwise convolution. This is\n called the resolution multiplier in the MobileNet paper. Default to 1.0.\n dropout: Dropout rate. Default to 0.001.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Default to `True`.\n weights: One of `None` (random initialization), 'imagenet' (pre-training\n on ImageNet), or the path to the weights file to be loaded. Default to\n `imagenet`.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`) to\n use as image input for the model. `input_tensor` is useful for sharing\n inputs between multiple different networks. Default to None.\n pooling: Optional pooling mode for feature extraction when `include_top`\n is `False`.\n - `None` (default) means that the output of the model will be\n the 4D tensor output of the last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will be applied.\n classes: Optional number of classes to classify images into, only to be\n specified if `include_top` is True, and if no `weights` argument is\n specified. Defaults to 1000.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNet architecture.", "type": "API"}, {"name": "tf.keras.applications.mobilenet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v2", "docs": "MobileNet v2 models for Keras.\n\nMobileNetV2 is a general architecture and can be used for multiple use cases.\nDepending on the use case, it can use different input layer size and\ndifferent width factors. This allows different width models to reduce\nthe number of multiply-adds and thereby\nreduce inference cost on mobile devices.\n\nMobileNetV2 is very similar to the original MobileNet,\nexcept that it uses inverted residual blocks with\nbottlenecking features. It has a drastically lower\nparameter count than the original MobileNet.\nMobileNets support any input size greater\nthan 32 x 32, with larger image sizes\noffering better performance.\n\nThe number of parameters and number of multiply-adds\ncan be modified by using the `alpha` parameter,\nwhich increases/decreases the number of filters in each layer.\nBy altering the image size and `alpha` parameter,\nall 22 models from the paper can be built, with ImageNet weights provided.\n\nThe paper demonstrates the performance of MobileNets using `alpha` values of\n1.0 (also called 100 % MobileNet), 0.35, 0.5, 0.75, 1.0, 1.3, and 1.4\nFor each of these `alpha` values, weights for 5 different input image sizes\nare provided (224, 192, 160, 128, and 96).\n\nThe following table describes the performance of\nMobileNet on various input sizes:\n------------------------------------------------------------------------\nMACs stands for Multiply Adds\n Classification Checkpoint|MACs (M)|Parameters (M)|Top 1 Accuracy|Top 5 Accuracy\n--------------------------|------------|---------------|---------|----|---------\n| [mobilenet_v2_1.4_224] | 582 | 6.06 | 75.0 | 92.5 |\n| [mobilenet_v2_1.3_224] | 509 | 5.34 | 74.4 | 92.1 |\n| [mobilenet_v2_1.0_224] | 300 | 3.47 | 71.8 | 91.0 |\n| [mobilenet_v2_1.0_192] | 221 | 3.47 | 70.7 | 90.1 |\n| [mobilenet_v2_1.0_160] | 154 | 3.47 | 68.8 | 89.0 |\n| [mobilenet_v2_1.0_128] | 99 | 3.47 | 65.3 | 86.9 |\n| [mobilenet_v2_1.0_96] | 56 | 3.47 | 60.3 | 83.2 |\n| [mobilenet_v2_0.75_224] | 209 | 2.61 | 69.8 | 89.6 |\n| [mobilenet_v2_0.75_192] | 153 | 2.61 | 68.7 | 88.9 |\n| [mobilenet_v2_0.75_160] | 107 | 2.61 | 66.4 | 87.3 |\n| [mobilenet_v2_0.75_128] | 69 | 2.61 | 63.2 | 85.3 |\n| [mobilenet_v2_0.75_96] | 39 | 2.61 | 58.8 | 81.6 |\n| [mobilenet_v2_0.5_224] | 97 | 1.95 | 65.4 | 86.4 |\n| [mobilenet_v2_0.5_192] | 71 | 1.95 | 63.9 | 85.4 |\n| [mobilenet_v2_0.5_160] | 50 | 1.95 | 61.0 | 83.2 |\n| [mobilenet_v2_0.5_128] | 32 | 1.95 | 57.7 | 80.8 |\n| [mobilenet_v2_0.5_96] | 18 | 1.95 | 51.2 | 75.8 |\n| [mobilenet_v2_0.35_224] | 59 | 1.66 | 60.3 | 82.9 |\n| [mobilenet_v2_0.35_192] | 43 | 1.66 | 58.2 | 81.2 |\n| [mobilenet_v2_0.35_160] | 30 | 1.66 | 55.7 | 79.1 |\n| [mobilenet_v2_0.35_128] | 20 | 1.66 | 50.8 | 75.0 |\n| [mobilenet_v2_0.35_96] | 11 | 1.66 | 45.5 | 70.4 |\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n", "desc": "MobileNet v2 models for Keras.", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v2.MobileNetV2", "docs": "Instantiates the MobileNetV2 architecture.\n\n MobileNetV2 is very similar to the original MobileNet,\n except that it uses inverted residual blocks with\n bottlenecking features. It has a drastically lower\n parameter count than the original MobileNet.\n MobileNets support any input size greater\n than 32 x 32, with larger image sizes\n offering better performance.\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV2, call `tf.keras.applications.mobilenet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: Float, larger than zero, controls the width of the network. This is\n known as the width multiplier in the MobileNetV2 paper, but the name is\n kept for consistency with `applications.MobileNetV1` model in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1.0, default number of filters from the paper\n are used at each layer.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization), 'imagenet'\n (pre-training on ImageNet), or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction when\n `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional integer number of classes to classify images into, only to\n be specified if `include_top` is True, and if no `weights` argument is\n specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNetV2 architecture.", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v3", "docs": "MobileNet v3 models for Keras.\n", "desc": "MobileNet v3 models for Keras.", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v3.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.mobilenet_v3.preprocess_input", "docs": "A placeholder method for backward compatibility.\n\n The preprocessing logic has been included in the mobilenet_v3 model\n implementation. Users are no longer required to call this method to normalize\n the input data. This method does nothing and only kept as a placeholder to\n align the API surface between old and new version of model.\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").{mode}\n\n Returns:\n Unchanged `numpy.array` or `tf.Tensor`.\n ", "desc": "A placeholder method for backward compatibility.", "type": "API"}, {"name": "tf.keras.applications.MobileNetV2", "docs": "Instantiates the MobileNetV2 architecture.\n\n MobileNetV2 is very similar to the original MobileNet,\n except that it uses inverted residual blocks with\n bottlenecking features. It has a drastically lower\n parameter count than the original MobileNet.\n MobileNets support any input size greater\n than 32 x 32, with larger image sizes\n offering better performance.\n\n Reference:\n - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](\n https://arxiv.org/abs/1801.04381) (CVPR 2018)\n\n This function returns a Keras image classification model,\n optionally loaded with weights pre-trained on ImageNet.\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV2, call `tf.keras.applications.mobilenet_v2.preprocess_input`\n on your inputs before passing them to the model.\n `mobilenet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: Float, larger than zero, controls the width of the network. This is\n known as the width multiplier in the MobileNetV2 paper, but the name is\n kept for consistency with `applications.MobileNetV1` model in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1.0, default number of filters from the paper\n are used at each layer.\n include_top: Boolean, whether to include the fully-connected layer at the\n top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization), 'imagenet'\n (pre-training on ImageNet), or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction when\n `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional integer number of classes to classify images into, only to\n be specified if `include_top` is True, and if no `weights` argument is\n specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n **kwargs: For backwards compatibility only.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the MobileNetV2 architecture.", "type": "API"}, {"name": "tf.keras.applications.MobileNetV3Large", "docs": "Instantiates the MobileNetV3Large architecture.\n\n Reference:\n - [Searching for MobileNetV3](\n https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)\n\n The following table describes the performance of MobileNets v3:\n ------------------------------------------------------------------------\n MACs stands for Multiply Adds\n\n |Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|\n |---|---|---|---|---|\n | mobilenet_v3_large_1.0_224 | 217 | 5.4 | 75.6 | 51.2 |\n | mobilenet_v3_large_0.75_224 | 155 | 4.0 | 73.3 | 39.8 |\n | mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 | 72.3 | 44.1 |\n | mobilenet_v3_small_1.0_224 | 66 | 2.9 | 68.1 | 15.8 |\n | mobilenet_v3_small_0.75_224 | 44 | 2.4 | 65.4 | 12.8 |\n | mobilenet_v3_small_minimalistic_1.0_224 | 65 | 2.0 | 61.9 | 12.2 |\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV3, by default input preprocessing is included as a part of the\n model (as a `Rescaling` layer), and thus\n `tf.keras.applications.mobilenet_v3.preprocess_input` is actually a\n pass-through function. In this use case, MobileNetV3 models expect their inputs\n to be float tensors of pixels with values in the [0-255] range.\n At the same time, preprocessing as a part of the model (i.e. `Rescaling`\n layer) can be disabled by setting `include_preprocessing` argument to False.\n With preprocessing disabled MobileNetV3 models expect their inputs to be float\n tensors of pixels with values in the [-1, 1] range.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: controls the width of the network. This is known as the\n depth multiplier in the MobileNetV3 paper, but the name is kept for\n consistency with MobileNetV1 in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1, default number of filters from the paper\n are used at each layer.\n minimalistic: In addition to large and small models this module also\n contains so-called minimalistic models, these models have the same\n per-layer dimensions characteristic as MobilenetV3 however, they don't\n utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,\n and 5x5 convolutions). While these models are less efficient on CPU, they\n are much more performant on GPU/DSP.\n include_top: Boolean, whether to include the fully-connected\n layer at the top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Integer, optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n dropout_rate: fraction of the input units to drop on the last layer.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n include_preprocessing: Boolean, whether to include the preprocessing\n layer (`Rescaling`) at the bottom of the network. Defaults to `True`.\n\n Call arguments:\n inputs: A floating point `numpy.array` or a `tf.Tensor`, 4D with 3 color\n channels, with values in the range [0, 255] if `include_preprocessing`\n is True and in the range [-1, 1] otherwise.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the MobileNetV3Large architecture.", "type": "API"}, {"name": "tf.keras.applications.MobileNetV3Small", "docs": "Instantiates the MobileNetV3Small architecture.\n\n Reference:\n - [Searching for MobileNetV3](\n https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)\n\n The following table describes the performance of MobileNets v3:\n ------------------------------------------------------------------------\n MACs stands for Multiply Adds\n\n |Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|\n |---|---|---|---|---|\n | mobilenet_v3_large_1.0_224 | 217 | 5.4 | 75.6 | 51.2 |\n | mobilenet_v3_large_0.75_224 | 155 | 4.0 | 73.3 | 39.8 |\n | mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 | 72.3 | 44.1 |\n | mobilenet_v3_small_1.0_224 | 66 | 2.9 | 68.1 | 15.8 |\n | mobilenet_v3_small_0.75_224 | 44 | 2.4 | 65.4 | 12.8 |\n | mobilenet_v3_small_minimalistic_1.0_224 | 65 | 2.0 | 61.9 | 12.2 |\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For MobileNetV3, by default input preprocessing is included as a part of the\n model (as a `Rescaling` layer), and thus\n `tf.keras.applications.mobilenet_v3.preprocess_input` is actually a\n pass-through function. In this use case, MobileNetV3 models expect their inputs\n to be float tensors of pixels with values in the [0-255] range.\n At the same time, preprocessing as a part of the model (i.e. `Rescaling`\n layer) can be disabled by setting `include_preprocessing` argument to False.\n With preprocessing disabled MobileNetV3 models expect their inputs to be float\n tensors of pixels with values in the [-1, 1] range.\n\n Args:\n input_shape: Optional shape tuple, to be specified if you would\n like to use a model with an input image resolution that is not\n (224, 224, 3).\n It should have exactly 3 inputs channels (224, 224, 3).\n You can also omit this option if you would like\n to infer input_shape from an input_tensor.\n If you choose to include both input_tensor and input_shape then\n input_shape will be used if they match, if the shapes\n do not match then we will throw an error.\n E.g. `(160, 160, 3)` would be one valid value.\n alpha: controls the width of the network. This is known as the\n depth multiplier in the MobileNetV3 paper, but the name is kept for\n consistency with MobileNetV1 in Keras.\n - If `alpha` < 1.0, proportionally decreases the number\n of filters in each layer.\n - If `alpha` > 1.0, proportionally increases the number\n of filters in each layer.\n - If `alpha` = 1, default number of filters from the paper\n are used at each layer.\n minimalistic: In addition to large and small models this module also\n contains so-called minimalistic models, these models have the same\n per-layer dimensions characteristic as MobilenetV3 however, they don't\n utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,\n and 5x5 convolutions). While these models are less efficient on CPU, they\n are much more performant on GPU/DSP.\n include_top: Boolean, whether to include the fully-connected\n layer at the top of the network. Defaults to `True`.\n weights: String, one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: String, optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Integer, optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n dropout_rate: fraction of the input units to drop on the last layer.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n include_preprocessing: Boolean, whether to include the preprocessing\n layer (`Rescaling`) at the bottom of the network. Defaults to `True`.\n\n Call arguments:\n inputs: A floating point `numpy.array` or a `tf.Tensor`, 4D with 3 color\n channels, with values in the range [0, 255] if `include_preprocessing`\n is True and in the range [-1, 1] otherwise.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the MobileNetV3Small architecture.", "type": "API"}, {"name": "tf.keras.applications.nasnet", "docs": "NASNet-A models for Keras.\n\nNASNet refers to Neural Architecture Search Network, a family of models\nthat were designed automatically by learning the model architectures\ndirectly on the dataset of interest.\n\nHere we consider NASNet-A, the highest performance model that was found\nfor the CIFAR-10 dataset, and then extended to ImageNet 2012 dataset,\nobtaining state of the art performance on CIFAR-10 and ImageNet 2012.\nOnly the NASNet-A models, and their respective weights, which are suited\nfor ImageNet 2012 are provided.\n\nThe below table describes the performance on ImageNet 2012:\n--------------------------------------------------------------------------------\n Architecture | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M)\n--------------------------------------------------------------------------------\n| NASNet-A (4 @ 1056) | 74.0 % | 91.6 % | 564 M | 5.3 |\n| NASNet-A (6 @ 4032) | 82.7 % | 96.2 % | 23.8 B | 88.9 |\n--------------------------------------------------------------------------------\n\nReference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n", "desc": "NASNet-A models for Keras.", "type": "API"}, {"name": "tf.keras.applications.nasnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.nasnet.NASNetLarge", "docs": "Instantiates a NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(331, 331, 3)` for NASNetLarge.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (331, 331, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: in case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.keras.applications.nasnet.NASNetMobile", "docs": "Instantiates a Mobile NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` for NASNetMobile\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (224, 224, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: In case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a Mobile NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.keras.applications.nasnet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.NASNetLarge", "docs": "Instantiates a NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(331, 331, 3)` for NASNetLarge.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (331, 331, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: in case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.keras.applications.NASNetMobile", "docs": "Instantiates a Mobile NASNet model in ImageNet mode.\n\n Reference:\n - [Learning Transferable Architectures for Scalable Image Recognition](\n https://arxiv.org/abs/1707.07012) (CVPR 2018)\n\n Optionally loads weights pre-trained on ImageNet.\n Note that the data format convention used by the model is\n the one specified in your Keras config at `~/.keras/keras.json`.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For NASNet, call `tf.keras.applications.nasnet.preprocess_input` on your\n inputs before passing them to the model.\n\n Args:\n input_shape: Optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` for NASNetMobile\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(224, 224, 3)` would be one valid value.\n include_top: Whether to include the fully-connected\n layer at the top of the network.\n weights: `None` (random initialization) or\n `imagenet` (ImageNet weights)\n For loading `imagenet` weights, `input_shape` should be (224, 224, 3)\n input_tensor: Optional Keras tensor (i.e. output of\n `layers.Input()`)\n to use as image input for the model.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model\n will be the 4D tensor output of the\n last convolutional layer.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional layer, and thus\n the output of the model will be a\n 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: Optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n\n Raises:\n ValueError: In case of invalid argument for `weights`,\n or invalid input shape.\n RuntimeError: If attempting to run this model with a\n backend that does not support separable convolutions.\n ", "desc": "Instantiates a Mobile NASNet model in ImageNet mode.", "type": "API"}, {"name": "tf.keras.applications.resnet", "docs": "ResNet models for Keras.\n\nReference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n", "desc": "ResNet models for Keras.", "type": "API"}, {"name": "tf.keras.applications.resnet.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.resnet.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.resnet.ResNet101", "docs": "Instantiates the ResNet101 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet101 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet.ResNet152", "docs": "Instantiates the ResNet152 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet152 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet_v2", "docs": "ResNet v2 models for Keras.\n\nReference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n", "desc": "ResNet v2 models for Keras.", "type": "API"}, {"name": "tf.keras.applications.resnet_v2.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.resnet_v2.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.resnet_v2.ResNet101V2", "docs": "Instantiates the ResNet101V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet101V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet_v2.ResNet152V2", "docs": "Instantiates the ResNet152V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet152V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet_v2.ResNet50V2", "docs": "Instantiates the ResNet50V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet50V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet101", "docs": "Instantiates the ResNet101 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet101 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet101V2", "docs": "Instantiates the ResNet101V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet101V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet152", "docs": "Instantiates the ResNet152 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet152 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet152V2", "docs": "Instantiates the ResNet152V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet152V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.keras.applications.resnet50.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.resnet50.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.resnet50.ResNet50", "docs": "Instantiates the ResNet50 architecture.\n\n Reference:\n - [Deep Residual Learning for Image Recognition](\n https://arxiv.org/abs/1512.03385) (CVPR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNet, call `tf.keras.applications.resnet.preprocess_input` on your\n inputs before passing them to the model.\n `resnet.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A Keras model instance.\n", "desc": "Instantiates the ResNet50 architecture.", "type": "API"}, {"name": "tf.keras.applications.ResNet50V2", "docs": "Instantiates the ResNet50V2 architecture.\n\n Reference:\n - [Identity Mappings in Deep Residual Networks]\n (https://arxiv.org/abs/1603.05027) (CVPR 2016)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For ResNetV2, call `tf.keras.applications.resnet_v2.preprocess_input` on your\n inputs before passing them to the model.\n `resnet_v2.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)` (with `'channels_last'` data format)\n or `(3, 224, 224)` (with `'channels_first'` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n", "desc": "Instantiates the ResNet50V2 architecture.", "type": "API"}, {"name": "tf.keras.applications.VGG16", "docs": "Instantiates the VGG16 model.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG16, call `tf.keras.applications.vgg16.preprocess_input` on your\n inputs before passing them to the model.\n `vgg16.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 input channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG16 model.", "type": "API"}, {"name": "tf.keras.applications.vgg16.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.vgg16.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.vgg16.VGG16", "docs": "Instantiates the VGG16 model.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG16, call `tf.keras.applications.vgg16.preprocess_input` on your\n inputs before passing them to the model.\n `vgg16.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 input channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG16 model.", "type": "API"}, {"name": "tf.keras.applications.VGG19", "docs": "Instantiates the VGG19 architecture.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG19, call `tf.keras.applications.vgg19.preprocess_input` on your\n inputs before passing them to the model.\n `vgg19.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG19 architecture.", "type": "API"}, {"name": "tf.keras.applications.vgg19.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.vgg19.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The images are converted from RGB to BGR, then each color channel is\n zero-centered with respect to the ImageNet dataset, without scaling.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.vgg19.VGG19", "docs": "Instantiates the VGG19 architecture.\n\n Reference:\n - [Very Deep Convolutional Networks for Large-Scale Image Recognition](\n https://arxiv.org/abs/1409.1556) (ICLR 2015)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input size for this model is 224x224.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For VGG19, call `tf.keras.applications.vgg19.preprocess_input` on your\n inputs before passing them to the model.\n `vgg19.preprocess_input` will convert the input images from RGB to BGR,\n then will zero-center each color channel with respect to the ImageNet dataset,\n without scaling.\n\n Args:\n include_top: whether to include the 3 fully-connected\n layers at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(224, 224, 3)`\n (with `channels_last` data format)\n or `(3, 224, 224)` (with `channels_first` data format).\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 32.\n E.g. `(200, 200, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True, and\n if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the VGG19 architecture.", "type": "API"}, {"name": "tf.keras.applications.Xception", "docs": "Instantiates the Xception architecture.\n\n Reference:\n - [Xception: Deep Learning with Depthwise Separable Convolutions](\n https://arxiv.org/abs/1610.02357) (CVPR 2017)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input image size for this model is 299x299.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For Xception, call `tf.keras.applications.xception.preprocess_input` on your\n inputs before passing them to the model.\n `xception.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)`.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 71.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True,\n and if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Xception architecture.", "type": "API"}, {"name": "tf.keras.applications.xception.decode_predictions", "docs": "Decodes the prediction of an ImageNet model.\n\n Args:\n preds: Numpy array encoding a batch of predictions.\n top: Integer, how many top-guesses to return. Defaults to 5.\n\n Returns:\n A list of lists of top class prediction tuples\n `(class_name, class_description, score)`.\n One list of tuples per sample in batch input.\n\n Raises:\n ValueError: In case of invalid shape of the `pred` array\n (must be 2D).\n ", "desc": "Decodes the prediction of an ImageNet model.", "type": "API"}, {"name": "tf.keras.applications.xception.preprocess_input", "docs": "\n Preprocesses a tensor or Numpy array encoding a batch of images.\n\n Usage example with `applications.MobileNet`:\n\n ```python\n i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)\n x = tf.cast(i, tf.float32)\n x = tf.keras.applications.mobilenet.preprocess_input(x)\n core = tf.keras.applications.MobileNet()\n x = core(x)\n model = tf.keras.Model(inputs=[i], outputs=[x])\n\n image = tf.image.decode_png(tf.io.read_file('file.png'))\n result = model(image)\n ```\n\n Args:\n x: A floating point `numpy.array` or a `tf.Tensor`, 3D or 4D with 3 color\n channels, with values in the range [0, 255].\n The preprocessed data are written over the input data\n if the data types are compatible. To avoid this\n behaviour, `numpy.copy(x)` can be used.\n data_format: Optional data format of the image tensor/array. Defaults to\n None, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to \"channels_last\").\n\n Returns:\n Preprocessed `numpy.array` or a `tf.Tensor` with type `float32`.\n \n The inputs pixel values are scaled between -1 and 1, sample-wise.\n\n Raises:\n \n ValueError: In case of unknown `data_format` argument.\n ", "desc": "", "type": "API"}, {"name": "tf.keras.applications.xception.Xception", "docs": "Instantiates the Xception architecture.\n\n Reference:\n - [Xception: Deep Learning with Depthwise Separable Convolutions](\n https://arxiv.org/abs/1610.02357) (CVPR 2017)\n\n For image classification use cases, see\n [this page for detailed examples](\n https://keras.io/api/applications/#usage-examples-for-image-classification-models).\n\n For transfer learning use cases, make sure to read the\n [guide to transfer learning & fine-tuning](\n https://keras.io/guides/transfer_learning/).\n\n The default input image size for this model is 299x299.\n\n Note: each Keras Application expects a specific kind of input preprocessing.\n For Xception, call `tf.keras.applications.xception.preprocess_input` on your\n inputs before passing them to the model.\n `xception.preprocess_input` will scale input pixels between -1 and 1.\n\n Args:\n include_top: whether to include the fully-connected\n layer at the top of the network.\n weights: one of `None` (random initialization),\n 'imagenet' (pre-training on ImageNet),\n or the path to the weights file to be loaded.\n input_tensor: optional Keras tensor\n (i.e. output of `layers.Input()`)\n to use as image input for the model.\n input_shape: optional shape tuple, only to be specified\n if `include_top` is False (otherwise the input shape\n has to be `(299, 299, 3)`.\n It should have exactly 3 inputs channels,\n and width and height should be no smaller than 71.\n E.g. `(150, 150, 3)` would be one valid value.\n pooling: Optional pooling mode for feature extraction\n when `include_top` is `False`.\n - `None` means that the output of the model will be\n the 4D tensor output of the\n last convolutional block.\n - `avg` means that global average pooling\n will be applied to the output of the\n last convolutional block, and thus\n the output of the model will be a 2D tensor.\n - `max` means that global max pooling will\n be applied.\n classes: optional number of classes to classify images\n into, only to be specified if `include_top` is True,\n and if no `weights` argument is specified.\n classifier_activation: A `str` or callable. The activation function to use\n on the \"top\" layer. Ignored unless `include_top=True`. Set\n `classifier_activation=None` to return the logits of the \"top\" layer.\n When loading pretrained weights, `classifier_activation` can only\n be `None` or `\"softmax\"`.\n\n Returns:\n A `keras.Model` instance.\n ", "desc": "Instantiates the Xception architecture.", "type": "API"}, {"name": "tf.keras.backend", "docs": "Keras backend API.\n", "desc": "Keras backend API.", "type": "API"}, {"name": "tf.keras.backend.clear_session", "docs": "Resets all state generated by Keras.\n\n Keras manages a global state, which it uses to implement the Functional\n model-building API and to uniquify autogenerated layer names.\n\n If you are creating many models in a loop, this global state will consume\n an increasing amount of memory over time, and you may want to clear it.\n Calling `clear_session()` releases the global state: this helps avoid clutter\n from old models and layers, especially when memory is limited.\n\n Example 1: calling `clear_session()` when creating models in a loop\n\n ```python\n for _ in range(100):\n # Without `clear_session()`, each iteration of this loop will\n # slightly increase the size of the global state managed by Keras\n model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])\n\n for _ in range(100):\n # With `clear_session()` called at the beginning,\n # Keras starts with a blank state at each iteration\n # and memory consumption is constant over time.\n tf.keras.backend.clear_session()\n model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])\n ```\n\n Example 2: resetting the layer name generation counter\n\n >>> import tensorflow as tf\n >>> layers = [tf.keras.layers.Dense(10) for _ in range(10)]\n >>> new_layer = tf.keras.layers.Dense(10)\n >>> print(new_layer.name)\n dense_10\n >>> tf.keras.backend.set_learning_phase(1)\n >>> print(tf.keras.backend.learning_phase())\n 1\n >>> tf.keras.backend.clear_session()\n >>> new_layer = tf.keras.layers.Dense(10)\n >>> print(new_layer.name)\n dense\n ", "desc": "Resets all state generated by Keras.", "type": "API"}, {"name": "tf.keras.backend.epsilon", "docs": "Returns the value of the fuzz factor used in numeric expressions.\n\n Returns:\n A float.\n\n Example:\n >>> tf.keras.backend.epsilon()\n 1e-07\n ", "desc": "Returns the value of the fuzz factor used in numeric expressions.", "type": "API"}, {"name": "tf.keras.backend.floatx", "docs": "Returns the default float type, as a string.\n\n E.g. `'float16'`, `'float32'`, `'float64'`.\n\n Returns:\n String, the current default float type.\n\n Example:\n >>> tf.keras.backend.floatx()\n 'float32'\n ", "desc": "Returns the default float type, as a string.", "type": "API"}, {"name": "tf.keras.backend.get_uid", "docs": "Associates a string prefix with an integer counter in a TensorFlow graph.\n\n Args:\n prefix: String prefix to index.\n\n Returns:\n Unique integer ID.\n\n Example:\n\n >>> get_uid('dense')\n 1\n >>> get_uid('dense')\n 2\n\n ", "desc": "Associates a string prefix with an integer counter in a TensorFlow graph.", "type": "API"}, {"name": "tf.keras.backend.image_data_format", "docs": "Returns the default image data format convention.\n\n Returns:\n A string, either `'channels_first'` or `'channels_last'`\n\n Example:\n >>> tf.keras.backend.image_data_format()\n 'channels_last'\n ", "desc": "Returns the default image data format convention.", "type": "API"}, {"name": "tf.keras.backend.is_keras_tensor", "docs": "Returns whether `x` is a Keras tensor.\n\n A \"Keras tensor\" is a tensor that was returned by a Keras layer,\n (`Layer` class) or by `Input`.\n\n Args:\n x: A candidate tensor.\n\n Returns:\n A boolean: Whether the argument is a Keras tensor.\n\n Raises:\n ValueError: In case `x` is not a symbolic tensor.\n\n Examples:\n\n >>> np_var = np.array([1, 2])\n >>> # A numpy array is not a symbolic tensor.\n >>> tf.keras.backend.is_keras_tensor(np_var)\n Traceback (most recent call last):\n ...\n ValueError: Unexpectedly found an instance of type ``.\n Expected a symbolic tensor instance.\n >>> keras_var = tf.keras.backend.variable(np_var)\n >>> # A variable created with the keras backend is not a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_var)\n False\n >>> keras_placeholder = tf.keras.backend.placeholder(shape=(2, 4, 5))\n >>> # A placeholder is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_placeholder)\n True\n >>> keras_input = tf.keras.layers.Input([10])\n >>> # An Input is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_input)\n True\n >>> keras_layer_output = tf.keras.layers.Dense(10)(keras_input)\n >>> # Any Keras layer output is a Keras tensor.\n >>> tf.keras.backend.is_keras_tensor(keras_layer_output)\n True\n\n ", "desc": "Returns whether `x` is a Keras tensor.", "type": "API"}, {"name": "tf.keras.backend.reset_uids", "docs": "Resets graph identifiers.\n ", "desc": "Resets graph identifiers.", "type": "API"}, {"name": "tf.keras.backend.rnn", "docs": "Iterates over the time dimension of a tensor.\n\n Args:\n step_function: RNN step function.\n Args;\n input; Tensor with shape `(samples, ...)` (no time dimension),\n representing input for the batch of samples at a certain\n time step.\n states; List of tensors.\n Returns;\n output; Tensor with shape `(samples, output_dim)`\n (no time dimension).\n new_states; List of tensors, same length and shapes\n as 'states'. The first state in the list must be the\n output tensor at the previous timestep.\n inputs: Tensor of temporal data of shape `(samples, time, ...)`\n (at least 3D), or nested tensors, and each of which has shape\n `(samples, time, ...)`.\n initial_states: Tensor with shape `(samples, state_size)`\n (no time dimension), containing the initial values for the states used\n in the step function. In the case that state_size is in a nested\n shape, the shape of initial_states will also follow the nested\n structure.\n go_backwards: Boolean. If True, do the iteration over the time\n dimension in reverse order and return the reversed sequence.\n mask: Binary tensor with shape `(samples, time, 1)`,\n with a zero for every element that is masked.\n constants: List of constant values passed at each step.\n unroll: Whether to unroll the RNN or to use a symbolic `while_loop`.\n input_length: An integer or a 1-D Tensor, depending on whether\n the time dimension is fixed-length or not. In case of variable length\n input, it is used for masking in case there's no mask specified.\n time_major: Boolean. If true, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n zero_output_for_mask: Boolean. If True, the output for masked timestep\n will be zeros, whereas in the False case, output from previous\n timestep is returned.\n return_all_outputs: Boolean. If True, return the recurrent outputs for all\n timesteps in the sequence. If False, only return the output for the\n last timestep (which consumes less memory).\n\n Returns:\n A tuple, `(last_output, outputs, new_states)`.\n last_output: the latest output of the rnn, of shape `(samples, ...)`\n outputs:\n - If `return_all_outputs=True`: a tensor with shape\n `(samples, time, ...)` where each entry `outputs[s, t]` is the\n output of the step function at time `t` for sample `s`\n - Else, a tensor equal to `last_output` with shape\n `(samples, 1, ...)`\n new_states: list of tensors, latest states returned by\n the step function, of shape `(samples, ...)`.\n\n Raises:\n ValueError: if input dimension is less than 3.\n ValueError: if `unroll` is `True` but input timestep is not a fixed\n number.\n ValueError: if `mask` is provided (not `None`) but states is not provided\n (`len(states)` == 0).\n ", "desc": "Iterates over the time dimension of a tensor.", "type": "API"}, {"name": "tf.keras.backend.set_epsilon", "docs": "Sets the value of the fuzz factor used in numeric expressions.\n\n Args:\n value: float. New value of epsilon.\n\n Example:\n >>> tf.keras.backend.epsilon()\n 1e-07\n >>> tf.keras.backend.set_epsilon(1e-5)\n >>> tf.keras.backend.epsilon()\n 1e-05\n >>> tf.keras.backend.set_epsilon(1e-7)\n ", "desc": "Sets the value of the fuzz factor used in numeric expressions.", "type": "API"}, {"name": "tf.keras.backend.set_floatx", "docs": "Sets the default float type.\n\n Note: It is not recommended to set this to float16 for training, as this will\n likely cause numeric stability issues. Instead, mixed precision, which is\n using a mix of float16 and float32, can be used by calling\n `tf.keras.mixed_precision.set_global_policy('mixed_float16')`. See the\n [mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) for details.\n\n Args:\n value: String; `'float16'`, `'float32'`, or `'float64'`.\n\n Example:\n >>> tf.keras.backend.floatx()\n 'float32'\n >>> tf.keras.backend.set_floatx('float64')\n >>> tf.keras.backend.floatx()\n 'float64'\n >>> tf.keras.backend.set_floatx('float32')\n\n Raises:\n ValueError: In case of invalid value.\n ", "desc": "Sets the default float type.", "type": "API"}, {"name": "tf.keras.backend.set_image_data_format", "docs": "Sets the value of the image data format convention.\n\n Args:\n data_format: string. `'channels_first'` or `'channels_last'`.\n\n Example:\n >>> tf.keras.backend.image_data_format()\n 'channels_last'\n >>> tf.keras.backend.set_image_data_format('channels_first')\n >>> tf.keras.backend.image_data_format()\n 'channels_first'\n >>> tf.keras.backend.set_image_data_format('channels_last')\n\n Raises:\n ValueError: In case of invalid `data_format` value.\n ", "desc": "Sets the value of the image data format convention.", "type": "API"}, {"name": "tf.keras.callbacks", "docs": "Callbacks: utilities called at certain points during model training.\n", "desc": "Callbacks: utilities called at certain points during model training.", "type": "API"}, {"name": "tf.keras.callbacks.BaseLogger", "docs": "Callback that accumulates epoch averages of metrics.\n\n This callback is automatically applied to every Keras model.\n\n Args:\n stateful_metrics: Iterable of string names of metrics that\n should *not* be averaged over an epoch.\n Metrics in this list will be logged as-is in `on_epoch_end`.\n All others will be averaged in `on_epoch_end`.\n ", "desc": "Callback that accumulates epoch averages of metrics.", "type": "API"}, {"name": "tf.keras.callbacks.Callback", "docs": "Abstract base class used to build new callbacks.\n\n Callbacks can be passed to keras methods such as `fit`, `evaluate`, and\n `predict` in order to hook into the various stages of the model training and\n inference lifecycle.\n\n To create a custom callback, subclass `keras.callbacks.Callback` and override\n the method associated with the stage of interest. See\n https://www.tensorflow.org/guide/keras/custom_callback for more information.\n\n Example:\n\n >>> training_finished = False\n >>> class MyCallback(tf.keras.callbacks.Callback):\n ... def on_train_end(self, logs=None):\n ... global training_finished\n ... training_finished = True\n >>> model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])\n >>> model.compile(loss='mean_squared_error')\n >>> model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]),\n ... callbacks=[MyCallback()])\n >>> assert training_finished == True\n\n If you want to use `Callback` objects in a custom training loop:\n\n 1. You should pack all your callbacks into a single `callbacks.CallbackList`\n so they can all be called together.\n 2. You will need to manually call all the `on_*` methods at the appropriate\n locations in your loop. Like this:\n\n ```\n callbacks = tf.keras.callbacks.CallbackList([...])\n callbacks.append(...)\n\n callbacks.on_train_begin(...)\n for epoch in range(EPOCHS):\n callbacks.on_epoch_begin(epoch)\n for i, data in dataset.enumerate():\n callbacks.on_train_batch_begin(i)\n batch_logs = model.train_step(data)\n callbacks.on_train_batch_end(i, batch_logs)\n epoch_logs = ...\n callbacks.on_epoch_end(epoch, epoch_logs)\n final_logs=...\n callbacks.on_train_end(final_logs)\n ```\n\n Attributes:\n params: Dict. Training parameters\n (eg. verbosity, batch size, number of epochs...).\n model: Instance of `keras.models.Model`.\n Reference of the model being trained.\n\n The `logs` dictionary that callback methods\n take as argument will contain keys for quantities relevant to\n the current batch or epoch (see method-specific docstrings).\n ", "desc": "Abstract base class used to build new callbacks.", "type": "API"}, {"name": "tf.keras.callbacks.CallbackList", "docs": "Container abstracting a list of callbacks.", "desc": "Container abstracting a list of callbacks.", "type": "API"}, {"name": "tf.keras.callbacks.CSVLogger", "docs": "Callback that streams epoch results to a CSV file.\n\n Supports all values that can be represented as a string,\n including 1D iterables such as `np.ndarray`.\n\n Example:\n\n ```python\n csv_logger = CSVLogger('training.log')\n model.fit(X_train, Y_train, callbacks=[csv_logger])\n ```\n\n Args:\n filename: Filename of the CSV file, e.g. `'run/log.csv'`.\n separator: String used to separate elements in the CSV file.\n append: Boolean. True: append if file exists (useful for continuing\n training). False: overwrite existing file.\n ", "desc": "Callback that streams epoch results to a CSV file.", "type": "API"}, {"name": "tf.keras.callbacks.EarlyStopping", "docs": "Stop training when a monitored metric has stopped improving.\n\n Assuming the goal of a training is to minimize the loss. With this, the\n metric to be monitored would be `'loss'`, and mode would be `'min'`. A\n `model.fit()` training loop will check at end of every epoch whether\n the loss is no longer decreasing, considering the `min_delta` and\n `patience` if applicable. Once it's found no longer decreasing,\n `model.stop_training` is marked True and the training terminates.\n\n The quantity to be monitored needs to be available in `logs` dict.\n To make it so, pass the loss or metrics at `model.compile()`.\n\n Args:\n monitor: Quantity to be monitored.\n min_delta: Minimum change in the monitored quantity\n to qualify as an improvement, i.e. an absolute\n change of less than min_delta, will count as no\n improvement.\n patience: Number of epochs with no improvement\n after which training will be stopped.\n verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1\n displays messages when the callback takes an action.\n mode: One of `{\"auto\", \"min\", \"max\"}`. In `min` mode,\n training will stop when the quantity\n monitored has stopped decreasing; in `\"max\"`\n mode it will stop when the quantity\n monitored has stopped increasing; in `\"auto\"`\n mode, the direction is automatically inferred\n from the name of the monitored quantity.\n baseline: Baseline value for the monitored quantity.\n Training will stop if the model doesn't show improvement over the\n baseline.\n restore_best_weights: Whether to restore model weights from\n the epoch with the best value of the monitored quantity.\n If False, the model weights obtained at the last step of\n training are used. An epoch will be restored regardless\n of the performance relative to the `baseline`. If no epoch\n improves on `baseline`, training will run for `patience`\n epochs and restore weights from the best epoch in that set.\n\n Example:\n\n >>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)\n >>> # This callback will stop the training when there is no improvement in\n >>> # the loss for three consecutive epochs.\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=10, batch_size=1, callbacks=[callback],\n ... verbose=0)\n >>> len(history.history['loss']) # Only 4 epochs are run.\n 4\n ", "desc": "Stop training when a monitored metric has stopped improving.", "type": "API"}, {"name": "tf.keras.callbacks.experimental", "docs": "Public API for tf.keras.callbacks.experimental namespace.\n", "desc": "Public API for tf.keras.callbacks.experimental namespace.", "type": "API"}, {"name": "tf.keras.callbacks.experimental.BackupAndRestore", "docs": "Deprecated. Please use `tf.keras.callbacks.BackupAndRestore` instead.\n\n Caution: `tf.keras.callbacks.experimental.BackupAndRestore` endpoint is\n deprecated and will be removed in a future release. Please use\n `tf.keras.callbacks.BackupAndRestore`.\n ", "desc": "Deprecated. Please use `tf.keras.callbacks.BackupAndRestore` instead.", "type": "API"}, {"name": "tf.keras.callbacks.History", "docs": "Callback that records events into a `History` object.\n\n This callback is automatically applied to\n every Keras model. The `History` object\n gets returned by the `fit` method of models.\n\n Example:\n\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=10, verbose=1)\n >>> print(history.params)\n {'verbose': 1, 'epochs': 10, 'steps': 1}\n >>> # check the keys of history object\n >>> print(history.history.keys())\n dict_keys(['loss'])\n\n ", "desc": "Callback that records events into a `History` object.", "type": "API"}, {"name": "tf.keras.callbacks.LambdaCallback", "docs": "Callback for creating simple, custom callbacks on-the-fly.\n\n This callback is constructed with anonymous functions that will be called\n at the appropriate time (during `Model.{fit | evaluate | predict}`).\n Note that the callbacks expects positional arguments, as:\n\n - `on_epoch_begin` and `on_epoch_end` expect two positional arguments:\n `epoch`, `logs`\n - `on_batch_begin` and `on_batch_end` expect two positional arguments:\n `batch`, `logs`\n - `on_train_begin` and `on_train_end` expect one positional argument:\n `logs`\n\n Args:\n on_epoch_begin: called at the beginning of every epoch.\n on_epoch_end: called at the end of every epoch.\n on_batch_begin: called at the beginning of every batch.\n on_batch_end: called at the end of every batch.\n on_train_begin: called at the beginning of model training.\n on_train_end: called at the end of model training.\n\n Example:\n\n ```python\n # Print the batch number at the beginning of every batch.\n batch_print_callback = LambdaCallback(\n on_batch_begin=lambda batch,logs: print(batch))\n\n # Stream the epoch loss to a file in JSON format. The file content\n # is not well-formed JSON but rather has a JSON object per line.\n import json\n json_log = open('loss_log.json', mode='wt', buffering=1)\n json_logging_callback = LambdaCallback(\n on_epoch_end=lambda epoch, logs: json_log.write(\n json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\\n'),\n on_train_end=lambda logs: json_log.close()\n )\n\n # Terminate some processes after having finished model training.\n processes = ...\n cleanup_callback = LambdaCallback(\n on_train_end=lambda logs: [\n p.terminate() for p in processes if p.is_alive()])\n\n model.fit(...,\n callbacks=[batch_print_callback,\n json_logging_callback,\n cleanup_callback])\n ```\n ", "desc": "Callback for creating simple, custom callbacks on-the-fly.", "type": "API"}, {"name": "tf.keras.callbacks.LearningRateScheduler", "docs": "Learning rate scheduler.\n\n At the beginning of every epoch, this callback gets the updated learning rate\n value from `schedule` function provided at `__init__`, with the current epoch\n and current learning rate, and applies the updated learning rate\n on the optimizer.\n\n Args:\n schedule: a function that takes an epoch index (integer, indexed from 0)\n and current learning rate (float) as inputs and returns a new\n learning rate as output (float).\n verbose: int. 0: quiet, 1: update messages.\n\n Example:\n\n >>> # This function keeps the initial learning rate for the first ten epochs\n >>> # and decreases it exponentially after that.\n >>> def scheduler(epoch, lr):\n ... if epoch < 10:\n ... return lr\n ... else:\n ... return lr * tf.math.exp(-0.1)\n >>>\n >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])\n >>> model.compile(tf.keras.optimizers.SGD(), loss='mse')\n >>> round(model.optimizer.lr.numpy(), 5)\n 0.01\n\n >>> callback = tf.keras.callbacks.LearningRateScheduler(scheduler)\n >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\n ... epochs=15, callbacks=[callback], verbose=0)\n >>> round(model.optimizer.lr.numpy(), 5)\n 0.00607\n\n ", "desc": "Learning rate scheduler.", "type": "API"}, {"name": "tf.keras.callbacks.ModelCheckpoint", "docs": "Callback to save the Keras model or model weights at some frequency.\n\n `ModelCheckpoint` callback is used in conjunction with training using\n `model.fit()` to save a model or weights (in a checkpoint file) at some\n interval, so the model or weights can be loaded later to continue the training\n from the state saved.\n\n A few options this callback provides include:\n\n - Whether to only keep the model that has achieved the \"best performance\" so\n far, or whether to save the model at the end of every epoch regardless of\n performance.\n - Definition of 'best'; which quantity to monitor and whether it should be\n maximized or minimized.\n - The frequency it should save at. Currently, the callback supports saving at\n the end of every epoch, or after a fixed number of training batches.\n - Whether only weights are saved, or the whole model is saved.\n\n Note: If you get `WARNING:tensorflow:Can save best model only with \n available, skipping` see the description of the `monitor` argument for\n details on how to get this right.\n\n Example:\n\n ```python\n model.compile(loss=..., optimizer=...,\n metrics=['accuracy'])\n\n EPOCHS = 10\n checkpoint_filepath = '/tmp/checkpoint'\n model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_filepath,\n save_weights_only=True,\n monitor='val_accuracy',\n mode='max',\n save_best_only=True)\n\n # Model weights are saved at the end of every epoch, if it's the best seen\n # so far.\n model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback])\n\n # The model weights (that are considered the best) are loaded into the model.\n model.load_weights(checkpoint_filepath)\n ```\n\n Args:\n filepath: string or `PathLike`, path to save the model file. e.g.\n filepath = os.path.join(working_dir, 'ckpt', file_name). `filepath`\n can contain named formatting options, which will be filled the value of\n `epoch` and keys in `logs` (passed in `on_epoch_end`). For example: if\n `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`, then the model\n checkpoints will be saved with the epoch number and the validation loss\n in the filename. The directory of the filepath should not be reused by\n any other callbacks to avoid conflicts.\n monitor: The metric name to monitor. Typically the metrics are set by the\n `Model.compile` method. Note:\n\n * Prefix the name with `\"val_`\" to monitor validation metrics.\n * Use `\"loss\"` or \"`val_loss`\" to monitor the model's total loss.\n * If you specify metrics as strings, like `\"accuracy\"`, pass the same\n string (with or without the `\"val_\"` prefix).\n * If you pass `metrics.Metric` objects, `monitor` should be set to\n `metric.name`\n * If you're not sure about the metric names you can check the contents\n of the `history.history` dictionary returned by\n `history = model.fit()`\n * Multi-output models set additional prefixes on the metric names.\n\n verbose: Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1\n displays messages when the callback takes an action.\n save_best_only: if `save_best_only=True`, it only saves when the model\n is considered the \"best\" and the latest best model according to the\n quantity monitored will not be overwritten. If `filepath` doesn't\n contain formatting options like `{epoch}` then `filepath` will be\n overwritten by each new better model.\n mode: one of {'auto', 'min', 'max'}. If `save_best_only=True`, the\n decision to overwrite the current save file is made based on either\n the maximization or the minimization of the monitored quantity.\n For `val_acc`, this should be `max`, for `val_loss` this should be\n `min`, etc. In `auto` mode, the mode is set to `max` if the quantities\n monitored are 'acc' or start with 'fmeasure' and are set to `min` for\n the rest of the quantities.\n save_weights_only: if True, then only the model's weights will be saved\n (`model.save_weights(filepath)`), else the full model is saved\n (`model.save(filepath)`).\n save_freq: `'epoch'` or integer. When using `'epoch'`, the callback saves\n the model after each epoch. When using integer, the callback saves the\n model at end of this many batches. If the `Model` is compiled with\n `steps_per_execution=N`, then the saving criteria will be\n checked every Nth batch. Note that if the saving isn't aligned to\n epochs, the monitored metric may potentially be less reliable (it\n could reflect as little as 1 batch, since the metrics get reset every\n epoch). Defaults to `'epoch'`.\n options: Optional `tf.train.CheckpointOptions` object if\n `save_weights_only` is true or optional `tf.saved_model.SaveOptions`\n object if `save_weights_only` is false.\n initial_value_threshold: Floating point initial \"best\" value of the metric\n to be monitored. Only applies if `save_best_value=True`. Only overwrites\n the model weights already saved if the performance of current\n model is better than this value.\n **kwargs: Additional arguments for backwards compatibility. Possible key\n is `period`.\n ", "desc": "Callback to save the Keras model or model weights at some frequency.", "type": "API"}, {"name": "tf.keras.callbacks.ProgbarLogger", "docs": "Callback that prints metrics to stdout.\n\n Args:\n count_mode: One of `\"steps\"` or `\"samples\"`.\n Whether the progress bar should\n count samples seen or steps (batches) seen.\n stateful_metrics: Iterable of string names of metrics that\n should *not* be averaged over an epoch.\n Metrics in this list will be logged as-is.\n All others will be averaged over time (e.g. loss, etc).\n If not provided, defaults to the `Model`'s metrics.\n\n Raises:\n ValueError: In case of invalid `count_mode`.\n ", "desc": "Callback that prints metrics to stdout.", "type": "API"}, {"name": "tf.keras.callbacks.ReduceLROnPlateau", "docs": "Reduce learning rate when a metric has stopped improving.\n\n Models often benefit from reducing the learning rate by a factor\n of 2-10 once learning stagnates. This callback monitors a\n quantity and if no improvement is seen for a 'patience' number\n of epochs, the learning rate is reduced.\n\n Example:\n\n ```python\n reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,\n patience=5, min_lr=0.001)\n model.fit(X_train, Y_train, callbacks=[reduce_lr])\n ```\n\n Args:\n monitor: quantity to be monitored.\n factor: factor by which the learning rate will be reduced.\n `new_lr = lr * factor`.\n patience: number of epochs with no improvement after which learning rate\n will be reduced.\n verbose: int. 0: quiet, 1: update messages.\n mode: one of `{'auto', 'min', 'max'}`. In `'min'` mode,\n the learning rate will be reduced when the\n quantity monitored has stopped decreasing; in `'max'` mode it will be\n reduced when the quantity monitored has stopped increasing; in `'auto'`\n mode, the direction is automatically inferred from the name of the\n monitored quantity.\n min_delta: threshold for measuring the new optimum, to only focus on\n significant changes.\n cooldown: number of epochs to wait before resuming normal operation after\n lr has been reduced.\n min_lr: lower bound on the learning rate.\n ", "desc": "Reduce learning rate when a metric has stopped improving.", "type": "API"}, {"name": "tf.keras.callbacks.RemoteMonitor", "docs": "Callback used to stream events to a server.\n\n Requires the `requests` library.\n Events are sent to `root + '/publish/epoch/end/'` by default. Calls are\n HTTP POST, with a `data` argument which is a\n JSON-encoded dictionary of event data.\n If `send_as_json=True`, the content type of the request will be\n `\"application/json\"`.\n Otherwise the serialized JSON will be sent within a form.\n\n Args:\n root: String; root url of the target server.\n path: String; path relative to `root` to which the events will be sent.\n field: String; JSON field under which the data will be stored.\n The field is used only if the payload is sent within a form\n (i.e. send_as_json is set to False).\n headers: Dictionary; optional custom HTTP headers.\n send_as_json: Boolean; whether the request should be\n sent as `\"application/json\"`.\n ", "desc": "Callback used to stream events to a server.", "type": "API"}, {"name": "tf.keras.callbacks.TensorBoard", "docs": "Enable visualizations for TensorBoard.\n\n TensorBoard is a visualization tool provided with TensorFlow.\n\n This callback logs events for TensorBoard, including:\n\n * Metrics summary plots\n * Training graph visualization\n * Weight histograms\n * Sampled profiling\n\n When used in `Model.evaluate`, in addition to epoch summaries, there will be\n a summary that records evaluation metrics vs `Model.optimizer.iterations`\n written. The metric names will be prepended with `evaluation`, with\n `Model.optimizer.iterations` being the step in the visualized TensorBoard.\n\n If you have installed TensorFlow with pip, you should be able\n to launch TensorBoard from the command line:\n\n ```\n tensorboard --logdir=path_to_your_logs\n ```\n\n You can find more information about TensorBoard\n [here](https://www.tensorflow.org/get_started/summaries_and_tensorboard).\n\n Args:\n log_dir: the path of the directory where to save the log files to be\n parsed by TensorBoard. e.g. log_dir = os.path.join(working_dir, 'logs')\n This directory should not be reused by any other callbacks.\n histogram_freq: frequency (in epochs) at which to compute\n weight histograms for the layers of the model. If set to 0, histograms\n won't be computed. Validation data (or split) must be specified for\n histogram visualizations.\n write_graph: whether to visualize the graph in TensorBoard. The log file\n can become quite large when write_graph is set to True.\n write_images: whether to write model weights to visualize as image in\n TensorBoard.\n write_steps_per_second: whether to log the training steps per second into\n Tensorboard. This supports both epoch and batch frequency logging.\n update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`,\n writes the losses and metrics to TensorBoard after each batch. The same\n applies for `'epoch'`. If using an integer, let's say `1000`, the\n callback will write the metrics and losses to TensorBoard every 1000\n batches. Note that writing too frequently to TensorBoard can slow down\n your training.\n profile_batch: Profile the batch(es) to sample compute characteristics.\n profile_batch must be a non-negative integer or a tuple of integers.\n A pair of positive integers signify a range of batches to profile.\n By default, profiling is disabled.\n embeddings_freq: frequency (in epochs) at which embedding layers will be\n visualized. If set to 0, embeddings won't be visualized.\n embeddings_metadata: Dictionary which maps embedding layer names to the\n filename of a file in which to save metadata for the embedding layer.\n In case the same metadata file is to be\n used for all embedding layers, a single filename can be passed.\n\n Examples:\n\n Basic usage:\n\n ```python\n tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=\"./logs\")\n model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n # Then run the tensorboard command to view the visualizations.\n ```\n\n Custom batch-level summaries in a subclassed Model:\n\n ```python\n class MyModel(tf.keras.Model):\n\n def build(self, _):\n self.dense = tf.keras.layers.Dense(10)\n\n def call(self, x):\n outputs = self.dense(x)\n tf.summary.histogram('outputs', outputs)\n return outputs\n\n model = MyModel()\n model.compile('sgd', 'mse')\n\n # Make sure to set `update_freq=N` to log a batch-level summary every N batches.\n # In addition to any `tf.summary` contained in `Model.call`, metrics added in\n # `Model.compile` will be logged every N batches.\n tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)\n model.fit(x_train, y_train, callbacks=[tb_callback])\n ```\n\n Custom batch-level summaries in a Functional API Model:\n\n ```python\n def my_summary(x):\n tf.summary.histogram('x', x)\n return x\n\n inputs = tf.keras.Input(10)\n x = tf.keras.layers.Dense(10)(inputs)\n outputs = tf.keras.layers.Lambda(my_summary)(x)\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', 'mse')\n\n # Make sure to set `update_freq=N` to log a batch-level summary every N batches.\n # In addition to any `tf.summary` contained in `Model.call`, metrics added in\n # `Model.compile` will be logged every N batches.\n tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)\n model.fit(x_train, y_train, callbacks=[tb_callback])\n ```\n\n Profiling:\n\n ```python\n # Profile a single batch, e.g. the 5th batch.\n tensorboard_callback = tf.keras.callbacks.TensorBoard(\n log_dir='./logs', profile_batch=5)\n model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n\n # Profile a range of batches, e.g. from 10 to 20.\n tensorboard_callback = tf.keras.callbacks.TensorBoard(\n log_dir='./logs', profile_batch=(10,20))\n model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n ```\n ", "desc": "Enable visualizations for TensorBoard.", "type": "API"}, {"name": "tf.keras.callbacks.TerminateOnNaN", "docs": "Callback that terminates training when a NaN loss is encountered.\n ", "desc": "Callback that terminates training when a NaN loss is encountered.", "type": "API"}, {"name": "tf.keras.constraints", "docs": "Constraints: functions that impose constraints on weight values.\n", "desc": "Constraints: functions that impose constraints on weight values.", "type": "API"}, {"name": "tf.keras.constraints.Constraint", "docs": "Base class for weight constraints.\n\n A `Constraint` instance works like a stateless function.\n Users who subclass this\n class should override the `__call__` method, which takes a single\n weight parameter and return a projected version of that parameter\n (e.g. normalized or clipped). Constraints can be used with various Keras\n layers via the `kernel_constraint` or `bias_constraint` arguments.\n\n Here's a simple example of a non-negative weight constraint:\n\n >>> class NonNegative(tf.keras.constraints.Constraint):\n ...\n ... def __call__(self, w):\n ... return w * tf.cast(tf.math.greater_equal(w, 0.), w.dtype)\n\n >>> weight = tf.constant((-1.0, 1.0))\n >>> NonNegative()(weight)\n \n\n >>> tf.keras.layers.Dense(4, kernel_constraint=NonNegative())\n ", "desc": "Base class for weight constraints.", "type": "API"}, {"name": "tf.keras.constraints.deserialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.keras.constraints.get", "docs": "Retrieves a Keras constraint function.", "desc": "Retrieves a Keras constraint function.", "type": "API"}, {"name": "tf.keras.constraints.max_norm", "docs": "MaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have a norm less than or equal to a desired value.\n\n Also available via the shortcut function `tf.keras.constraints.max_norm`.\n\n Args:\n max_value: the maximum norm value for the incoming weights.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n\n ", "desc": "MaxNorm weight constraint.", "type": "API"}, {"name": "tf.keras.constraints.MaxNorm", "docs": "MaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have a norm less than or equal to a desired value.\n\n Also available via the shortcut function `tf.keras.constraints.max_norm`.\n\n Args:\n max_value: the maximum norm value for the incoming weights.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n\n ", "desc": "MaxNorm weight constraint.", "type": "API"}, {"name": "tf.keras.constraints.min_max_norm", "docs": "MinMaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have the norm between a lower bound and an upper bound.\n\n Also available via the shortcut function `tf.keras.constraints.min_max_norm`.\n\n Args:\n min_value: the minimum norm for the incoming weights.\n max_value: the maximum norm for the incoming weights.\n rate: rate for enforcing the constraint: weights will be\n rescaled to yield\n `(1 - rate) * norm + rate * norm.clip(min_value, max_value)`.\n Effectively, this means that rate=1.0 stands for strict\n enforcement of the constraint, while rate<1.0 means that\n weights will be rescaled at each step to slowly move\n towards a value inside the desired interval.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "MinMaxNorm weight constraint.", "type": "API"}, {"name": "tf.keras.constraints.MinMaxNorm", "docs": "MinMaxNorm weight constraint.\n\n Constrains the weights incident to each hidden unit\n to have the norm between a lower bound and an upper bound.\n\n Also available via the shortcut function `tf.keras.constraints.min_max_norm`.\n\n Args:\n min_value: the minimum norm for the incoming weights.\n max_value: the maximum norm for the incoming weights.\n rate: rate for enforcing the constraint: weights will be\n rescaled to yield\n `(1 - rate) * norm + rate * norm.clip(min_value, max_value)`.\n Effectively, this means that rate=1.0 stands for strict\n enforcement of the constraint, while rate<1.0 means that\n weights will be rescaled at each step to slowly move\n towards a value inside the desired interval.\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "MinMaxNorm weight constraint.", "type": "API"}, {"name": "tf.keras.constraints.non_neg", "docs": "Constrains the weights to be non-negative.\n\n Also available via the shortcut function `tf.keras.constraints.non_neg`.\n ", "desc": "Constrains the weights to be non-negative.", "type": "API"}, {"name": "tf.keras.constraints.NonNeg", "docs": "Constrains the weights to be non-negative.\n\n Also available via the shortcut function `tf.keras.constraints.non_neg`.\n ", "desc": "Constrains the weights to be non-negative.", "type": "API"}, {"name": "tf.keras.constraints.radial_constraint", "docs": "Constrains `Conv2D` kernel weights to be the same for each radius.\n\n Also available via the shortcut function\n `tf.keras.constraints.radial_constraint`.\n\n For example, the desired output for the following 4-by-4 kernel:\n\n ```\n kernel = [[v_00, v_01, v_02, v_03],\n [v_10, v_11, v_12, v_13],\n [v_20, v_21, v_22, v_23],\n [v_30, v_31, v_32, v_33]]\n ```\n\n is this::\n\n ```\n kernel = [[v_11, v_11, v_11, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_11, v_11, v_11]]\n ```\n\n This constraint can be applied to any `Conv2D` layer version, including\n `Conv2DTranspose` and `SeparableConv2D`, and with either `\"channels_last\"` or\n `\"channels_first\"` data format. The method assumes the weight tensor is of\n shape `(rows, cols, input_depth, output_depth)`.\n ", "desc": "Constrains `Conv2D` kernel weights to be the same for each radius.", "type": "API"}, {"name": "tf.keras.constraints.RadialConstraint", "docs": "Constrains `Conv2D` kernel weights to be the same for each radius.\n\n Also available via the shortcut function\n `tf.keras.constraints.radial_constraint`.\n\n For example, the desired output for the following 4-by-4 kernel:\n\n ```\n kernel = [[v_00, v_01, v_02, v_03],\n [v_10, v_11, v_12, v_13],\n [v_20, v_21, v_22, v_23],\n [v_30, v_31, v_32, v_33]]\n ```\n\n is this::\n\n ```\n kernel = [[v_11, v_11, v_11, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_33, v_33, v_11],\n [v_11, v_11, v_11, v_11]]\n ```\n\n This constraint can be applied to any `Conv2D` layer version, including\n `Conv2DTranspose` and `SeparableConv2D`, and with either `\"channels_last\"` or\n `\"channels_first\"` data format. The method assumes the weight tensor is of\n shape `(rows, cols, input_depth, output_depth)`.\n ", "desc": "Constrains `Conv2D` kernel weights to be the same for each radius.", "type": "API"}, {"name": "tf.keras.constraints.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.keras.constraints.unit_norm", "docs": "Constrains the weights incident to each hidden unit to have unit norm.\n\n Also available via the shortcut function `tf.keras.constraints.unit_norm`.\n\n Args:\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "Constrains the weights incident to each hidden unit to have unit norm.", "type": "API"}, {"name": "tf.keras.constraints.UnitNorm", "docs": "Constrains the weights incident to each hidden unit to have unit norm.\n\n Also available via the shortcut function `tf.keras.constraints.unit_norm`.\n\n Args:\n axis: integer, axis along which to calculate weight norms.\n For instance, in a `Dense` layer the weight matrix\n has shape `(input_dim, output_dim)`,\n set `axis` to `0` to constrain each weight vector\n of length `(input_dim,)`.\n In a `Conv2D` layer with `data_format=\"channels_last\"`,\n the weight tensor has shape\n `(rows, cols, input_depth, output_depth)`,\n set `axis` to `[0, 1, 2]`\n to constrain the weights of each filter tensor of size\n `(rows, cols, input_depth)`.\n ", "desc": "Constrains the weights incident to each hidden unit to have unit norm.", "type": "API"}, {"name": "tf.keras.datasets", "docs": "Small NumPy datasets for debugging/testing.\n", "desc": "Small NumPy datasets for debugging/testing.", "type": "API"}, {"name": "tf.keras.datasets.boston_housing", "docs": "Boston housing price regression dataset.\n", "desc": "Boston housing price regression dataset.", "type": "API"}, {"name": "tf.keras.datasets.boston_housing.load_data", "docs": "Loads the Boston Housing dataset.\n\n This is a dataset taken from the StatLib library which is maintained at\n Carnegie Mellon University.\n\n Samples contain 13 attributes of houses at different locations around the\n Boston suburbs in the late 1970s. Targets are the median values of\n the houses at a location (in k$).\n\n The attributes themselves are defined in the\n [StatLib website](http://lib.stat.cmu.edu/datasets/boston).\n\n Args:\n path: path where to cache the dataset locally\n (relative to `~/.keras/datasets`).\n test_split: fraction of the data to reserve as test set.\n seed: Random seed for shuffling the data\n before computing the test split.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: numpy arrays with shape `(num_samples, 13)`\n containing either the training samples (for x_train),\n or test samples (for y_train).\n\n **y_train, y_test**: numpy arrays of shape `(num_samples,)` containing the\n target scalars. The targets are float scalars typically between 10 and\n 50 that represent the home prices in k$.\n ", "desc": "Loads the Boston Housing dataset.", "type": "API"}, {"name": "tf.keras.datasets.cifar10", "docs": "CIFAR10 small images classification dataset.\n", "desc": "CIFAR10 small images classification dataset.", "type": "API"}, {"name": "tf.keras.datasets.cifar10.load_data", "docs": "Loads the CIFAR10 dataset.\n\n This is a dataset of 50,000 32x32 color training images and 10,000 test\n images, labeled over 10 categories. See more info at the\n [CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).\n\n The classes are:\n\n | Label | Description |\n |:-----:|-------------|\n | 0 | airplane |\n | 1 | automobile |\n | 2 | bird |\n | 3 | cat |\n | 4 | deer |\n | 5 | dog |\n | 6 | frog |\n | 7 | horse |\n | 8 | ship |\n | 9 | truck |\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(50000, 32, 32, 3)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(50000, 1)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n `(10000, 32, 32, 3)`, containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(10000, 1)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()\n assert x_train.shape == (50000, 32, 32, 3)\n assert x_test.shape == (10000, 32, 32, 3)\n assert y_train.shape == (50000, 1)\n assert y_test.shape == (10000, 1)\n ```\n ", "desc": "Loads the CIFAR10 dataset.", "type": "API"}, {"name": "tf.keras.datasets.cifar100", "docs": "CIFAR100 small images classification dataset.\n", "desc": "CIFAR100 small images classification dataset.", "type": "API"}, {"name": "tf.keras.datasets.cifar100.load_data", "docs": "Loads the CIFAR100 dataset.\n\n This is a dataset of 50,000 32x32 color training images and\n 10,000 test images, labeled over 100 fine-grained classes that are\n grouped into 20 coarse-grained classes. See more info at the\n [CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).\n\n Args:\n label_mode: one of \"fine\", \"coarse\". If it is \"fine\" the category labels\n are the fine-grained labels, if it is \"coarse\" the output labels are the\n coarse-grained superclasses.\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(50000, 32, 32, 3)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-99)\n with shape `(50000, 1)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n `(10000, 32, 32, 3)`, containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-99)\n with shape `(10000, 1)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()\n assert x_train.shape == (50000, 32, 32, 3)\n assert x_test.shape == (10000, 32, 32, 3)\n assert y_train.shape == (50000, 1)\n assert y_test.shape == (10000, 1)\n ```\n ", "desc": "Loads the CIFAR100 dataset.", "type": "API"}, {"name": "tf.keras.datasets.fashion_mnist", "docs": "Fashion-MNIST dataset.\n", "desc": "Fashion-MNIST dataset.", "type": "API"}, {"name": "tf.keras.datasets.fashion_mnist.load_data", "docs": "Loads the Fashion-MNIST dataset.\n\n This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,\n along with a test set of 10,000 images. This dataset can be used as\n a drop-in replacement for MNIST.\n\n The classes are:\n\n | Label | Description |\n |:-----:|-------------|\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(60000, 28, 28)`, containing the training data.\n\n **y_train**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(60000,)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n (10000, 28, 28), containing the test data.\n\n **y_test**: uint8 NumPy array of labels (integers in range 0-9)\n with shape `(10000,)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n assert x_train.shape == (60000, 28, 28)\n assert x_test.shape == (10000, 28, 28)\n assert y_train.shape == (60000,)\n assert y_test.shape == (10000,)\n ```\n\n License:\n The copyright for Fashion-MNIST is held by Zalando SE.\n Fashion-MNIST is licensed under the [MIT license](\n https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE).\n\n ", "desc": "Loads the Fashion-MNIST dataset.", "type": "API"}, {"name": "tf.keras.datasets.imdb", "docs": "IMDB sentiment classification dataset.\n", "desc": "IMDB sentiment classification dataset.", "type": "API"}, {"name": "tf.keras.datasets.imdb.get_word_index", "docs": "Retrieves a dict mapping words to their index in the IMDB dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. Keys are word strings, values are their index.\n\n Example:\n\n ```python\n # Retrieve the training sequences.\n (x_train, _), _ = keras.datasets.imdb.load_data()\n # Retrieve the word index file mapping words to indices\n word_index = keras.datasets.imdb.get_word_index()\n # Reverse the word index to obtain a dict mapping indices to words\n inverted_word_index = dict((i, word) for (word, i) in word_index.items())\n # Decode the first sequence in the dataset\n decoded_sequence = \" \".join(inverted_word_index[i] for i in x_train[0])\n ```\n ", "desc": "Retrieves a dict mapping words to their index in the IMDB dataset.", "type": "API"}, {"name": "tf.keras.datasets.imdb.load_data", "docs": "Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).\n\n This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment\n (positive/negative). Reviews have been preprocessed, and each review is\n encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode any unknown word.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to None, so all words are kept.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. Defaults to 0, so no words are\n skipped.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. Defaults to None, which\n means no truncation.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. Defaults to 1 because 0 is usually the padding character.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. Index actual words with this index and higher.\n **kwargs: Used for backwards compatibility.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **y_train, y_test**: lists of integer labels (1 or 0).\n\n Raises:\n ValueError: in case `maxlen` is so low\n that no input sequence could be kept.\n\n Note that the 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n ", "desc": "Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).", "type": "API"}, {"name": "tf.keras.datasets.mnist", "docs": "MNIST handwritten digits dataset.\n", "desc": "MNIST handwritten digits dataset.", "type": "API"}, {"name": "tf.keras.datasets.mnist.load_data", "docs": "Loads the MNIST dataset.\n\n This is a dataset of 60,000 28x28 grayscale images of the 10 digits,\n along with a test set of 10,000 images.\n More info can be found at the\n [MNIST homepage](http://yann.lecun.com/exdb/mnist/).\n\n Args:\n path: path where to cache the dataset locally\n (relative to `~/.keras/datasets`).\n\n Returns:\n Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train**: uint8 NumPy array of grayscale image data with shapes\n `(60000, 28, 28)`, containing the training data. Pixel values range\n from 0 to 255.\n\n **y_train**: uint8 NumPy array of digit labels (integers in range 0-9)\n with shape `(60000,)` for the training data.\n\n **x_test**: uint8 NumPy array of grayscale image data with shapes\n (10000, 28, 28), containing the test data. Pixel values range\n from 0 to 255.\n\n **y_test**: uint8 NumPy array of digit labels (integers in range 0-9)\n with shape `(10000,)` for the test data.\n\n Example:\n\n ```python\n (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n assert x_train.shape == (60000, 28, 28)\n assert x_test.shape == (10000, 28, 28)\n assert y_train.shape == (60000,)\n assert y_test.shape == (10000,)\n ```\n\n License:\n Yann LeCun and Corinna Cortes hold the copyright of MNIST dataset,\n which is a derivative work from original NIST datasets.\n MNIST dataset is made available under the terms of the\n [Creative Commons Attribution-Share Alike 3.0 license.](\n https://creativecommons.org/licenses/by-sa/3.0/)\n ", "desc": "Loads the MNIST dataset.", "type": "API"}, {"name": "tf.keras.datasets.reuters", "docs": "Reuters topic classification dataset.\n", "desc": "Reuters topic classification dataset.", "type": "API"}, {"name": "tf.keras.datasets.reuters.get_word_index", "docs": "Retrieves a dict mapping words to their index in the Reuters dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. Keys are word strings, values are their index.\n ", "desc": "Retrieves a dict mapping words to their index in the Reuters dataset.", "type": "API"}, {"name": "tf.keras.datasets.reuters.load_data", "docs": "Loads the Reuters newswire classification dataset.\n\n This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics.\n\n This was originally generated by parsing and preprocessing the classic\n Reuters-21578 dataset, but the preprocessing code is no longer packaged\n with Keras. See this\n [github discussion](https://github.com/keras-team/keras/issues/12072)\n for more info.\n\n Each newswire is encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode any unknown word.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to None, so all words are kept.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. Defaults to 0, so no words are\n skipped.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. Defaults to None, which\n means no truncation.\n test_split: Float between 0 and 1. Fraction of the dataset to be used\n as test data. Defaults to 0.2, meaning 20% of the dataset is used as\n test data.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. Defaults to 1 because 0 is usually the padding character.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. Index actual words with this index and higher.\n **kwargs: Used for backwards compatibility.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **x_train, x_test**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **y_train, y_test**: lists of integer labels (1 or 0).\n\n Note: The 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n ", "desc": "Loads the Reuters newswire classification dataset.", "type": "API"}, {"name": "tf.keras.estimator", "docs": "Keras estimator API.\n", "desc": "Keras estimator API.", "type": "API"}, {"name": "tf.keras.estimator.model_to_estimator", "docs": "Constructs an `Estimator` instance from given keras model.\n\n If you use infrastructure or other tooling that relies on Estimators, you can\n still build a Keras model and use model_to_estimator to convert the Keras\n model to an Estimator for use with downstream systems.\n\n For usage example, please see:\n [Creating estimators from Keras Models](\n https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models).\n\n Sample Weights:\n Estimators returned by `model_to_estimator` are configured so that they can\n handle sample weights (similar to `keras_model.fit(x, y, sample_weights)`).\n\n To pass sample weights when training or evaluating the Estimator, the first\n item returned by the input function should be a dictionary with keys\n `features` and `sample_weights`. Example below:\n\n ```python\n keras_model = tf.keras.Model(...)\n keras_model.compile(...)\n\n estimator = tf.keras.estimator.model_to_estimator(keras_model)\n\n def input_fn():\n return dataset_ops.Dataset.from_tensors(\n ({'features': features, 'sample_weights': sample_weights},\n targets))\n\n estimator.train(input_fn, steps=1)\n ```\n\n Example with customized export signature:\n ```python\n inputs = {'a': tf.keras.Input(..., name='a'),\n 'b': tf.keras.Input(..., name='b')}\n outputs = {'c': tf.keras.layers.Dense(..., name='c')(inputs['a']),\n 'd': tf.keras.layers.Dense(..., name='d')(inputs['b'])}\n keras_model = tf.keras.Model(inputs, outputs)\n keras_model.compile(...)\n export_outputs = {'c': tf.estimator.export.RegressionOutput,\n 'd': tf.estimator.export.ClassificationOutput}\n\n estimator = tf.keras.estimator.model_to_estimator(\n keras_model, export_outputs=export_outputs)\n\n def input_fn():\n return dataset_ops.Dataset.from_tensors(\n ({'features': features, 'sample_weights': sample_weights},\n targets))\n\n estimator.train(input_fn, steps=1)\n ```\n\n Note: We do not support creating weighted metrics in Keras and converting them\n to weighted metrics in the Estimator API using `model_to_estimator`.\n You will have to create these metrics directly on the estimator spec using the\n `add_metrics` function.\n\n To customize the estimator `eval_metric_ops` names, you can pass in the\n `metric_names_map` dictionary mapping the keras model output metric names\n to the custom names as follows:\n\n ```python\n input_a = tf.keras.layers.Input(shape=(16,), name='input_a')\n input_b = tf.keras.layers.Input(shape=(16,), name='input_b')\n dense = tf.keras.layers.Dense(8, name='dense_1')\n interm_a = dense(input_a)\n interm_b = dense(input_b)\n merged = tf.keras.layers.concatenate([interm_a, interm_b], name='merge')\n output_a = tf.keras.layers.Dense(3, activation='softmax', name='dense_2')(\n merged)\n output_b = tf.keras.layers.Dense(2, activation='softmax', name='dense_3')(\n merged)\n keras_model = tf.keras.models.Model(\n inputs=[input_a, input_b], outputs=[output_a, output_b])\n keras_model.compile(\n loss='categorical_crossentropy',\n optimizer='rmsprop',\n metrics={\n 'dense_2': 'categorical_accuracy',\n 'dense_3': 'categorical_accuracy'\n })\n\n metric_names_map = {\n 'dense_2_categorical_accuracy': 'acc_1',\n 'dense_3_categorical_accuracy': 'acc_2',\n }\n keras_est = tf.keras.estimator.model_to_estimator(\n keras_model=keras_model,\n config=config,\n metric_names_map=metric_names_map)\n ```\n\n Args:\n keras_model: A compiled Keras model object. This argument is mutually\n exclusive with `keras_model_path`. Estimator's `model_fn` uses the\n structure of the model to clone the model. Defaults to `None`.\n keras_model_path: Path to a compiled Keras model saved on disk, in HDF5\n format, which can be generated with the `save()` method of a Keras model.\n This argument is mutually exclusive with `keras_model`.\n Defaults to `None`.\n custom_objects: Dictionary for cloning customized objects. This is\n used with classes that is not part of this pip package. For example, if\n user maintains a `relu6` class that inherits from `tf.keras.layers.Layer`,\n then pass `custom_objects={'relu6': relu6}`. Defaults to `None`.\n model_dir: Directory to save `Estimator` model parameters, graph, summary\n files for TensorBoard, etc. If unset a directory will be created with\n `tempfile.mkdtemp`\n config: `RunConfig` to config `Estimator`. Allows setting up things in\n `model_fn` based on configuration such as `num_ps_replicas`, or\n `model_dir`. Defaults to `None`. If both `config.model_dir` and the\n `model_dir` argument (above) are specified the `model_dir` **argument**\n takes precedence.\n checkpoint_format: Sets the format of the checkpoint saved by the estimator\n when training. May be `saver` or `checkpoint`, depending on whether to\n save checkpoints from `tf.compat.v1.train.Saver` or `tf.train.Checkpoint`.\n The default is `checkpoint`. Estimators use name-based `tf.train.Saver`\n checkpoints, while Keras models use object-based checkpoints from\n `tf.train.Checkpoint`. Currently, saving object-based checkpoints from\n `model_to_estimator` is only supported by Functional and Sequential\n models. Defaults to 'checkpoint'.\n metric_names_map: Optional dictionary mapping Keras model output metric\n names to custom names. This can be used to override the default Keras\n model output metrics names in a multi IO model use case and provide custom\n names for the `eval_metric_ops` in Estimator.\n The Keras model metric names can be obtained using `model.metrics_names`\n excluding any loss metrics such as total loss and output losses.\n For example, if your Keras model has two outputs `out_1` and `out_2`,\n with `mse` loss and `acc` metric, then `model.metrics_names` will be\n `['loss', 'out_1_loss', 'out_2_loss', 'out_1_acc', 'out_2_acc']`.\n The model metric names excluding the loss metrics will be\n `['out_1_acc', 'out_2_acc']`.\n export_outputs: Optional dictionary. This can be used to override the\n default Keras model output exports in a multi IO model use case and\n provide custom names for the `export_outputs` in\n `tf.estimator.EstimatorSpec`. Default is None, which is equivalent to\n {'serving_default': `tf.estimator.export.PredictOutput`}. If not None,\n the keys must match the keys of `model.output_names`.\n A dict `{name: output}` where:\n * name: An arbitrary name for this output.\n * output: an `ExportOutput` class such as `ClassificationOutput`,\n `RegressionOutput`, or `PredictOutput`. Single-headed models only need\n to specify one entry in this dictionary. Multi-headed models should\n specify one entry for each head, one of which must be named using\n `tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY`\n If no entry is provided, a default `PredictOutput` mapping to\n `predictions` will be created.\n\n Returns:\n An Estimator from given keras model.\n\n Raises:\n ValueError: If neither keras_model nor keras_model_path was given.\n ValueError: If both keras_model and keras_model_path was given.\n ValueError: If the keras_model_path is a GCS URI.\n ValueError: If keras_model has not been compiled.\n ValueError: If an invalid checkpoint_format was given.\n ", "desc": "Constructs an `Estimator` instance from given keras model.", "type": "API"}, {"name": "tf.keras.experimental", "docs": "Public API for tf.keras.experimental namespace.\n", "desc": "Public API for tf.keras.experimental namespace.", "type": "API"}, {"name": "tf.keras.experimental.CosineDecay", "docs": "A LearningRateSchedule that uses a cosine decay schedule.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n return initial_learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(\n initial_learning_rate, decay_steps)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule.", "type": "API"}, {"name": "tf.keras.experimental.CosineDecayRestarts", "docs": "A LearningRateSchedule that uses a cosine decay schedule with restarts.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function with\n restarts to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n\n The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more\n steps and with `m_mul` times initial learning rate as the new learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed_fn = (\n tf.keras.optimizers.schedules.CosineDecayRestarts(\n initial_learning_rate,\n first_decay_steps))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule with restarts.", "type": "API"}, {"name": "tf.keras.experimental.LinearModel", "docs": "Linear Model for regression and classification problems.\n\n This model approximates the following function:\n $$y = \\beta + \\sum_{i=1}^{N} w_{i} * x_{i}$$\n where $$\\beta$$ is the bias and $$w_{i}$$ is the weight for each feature.\n\n Example:\n\n ```python\n model = LinearModel()\n model.compile(optimizer='sgd', loss='mse')\n model.fit(x, y, epochs=epochs)\n ```\n\n This model accepts sparse float inputs as well:\n\n Example:\n ```python\n model = LinearModel()\n opt = tf.keras.optimizers.Adam()\n loss_fn = tf.keras.losses.MeanSquaredError()\n with tf.GradientTape() as tape:\n output = model(sparse_input)\n loss = tf.reduce_mean(loss_fn(target, output))\n grads = tape.gradient(loss, model.weights)\n opt.apply_gradients(zip(grads, model.weights))\n ```\n\n ", "desc": "Linear Model for regression and classification problems.", "type": "API"}, {"name": "tf.keras.experimental.SequenceFeatures", "docs": "A layer for sequence input.\n\n All `feature_columns` must be sequence dense columns with the same\n `sequence_length`. The output of this method can be fed into sequence\n networks, such as RNN.\n\n The output of this method is a 3D `Tensor` of shape `[batch_size, T, D]`.\n `T` is the maximum sequence length for this batch, which could differ from\n batch to batch.\n\n If multiple `feature_columns` are given with `Di` `num_elements` each, their\n outputs are concatenated. So, the final `Tensor` has shape\n `[batch_size, T, D0 + D1 + ... + Dn]`.\n\n Example:\n\n ```python\n\n import tensorflow as tf\n\n # Behavior of some cells or feature columns may depend on whether we are in\n # training or inference mode, e.g. applying dropout.\n training = True\n rating = tf.feature_column.sequence_numeric_column('rating')\n watches = tf.feature_column.sequence_categorical_column_with_identity(\n 'watches', num_buckets=1000)\n watches_embedding = tf.feature_column.embedding_column(watches,\n dimension=10)\n columns = [rating, watches_embedding]\n\n features = {\n 'rating': tf.sparse.from_dense([[1.0,1.1, 0, 0, 0],\n [2.0,2.1,2.2, 2.3, 2.5]]),\n 'watches': tf.sparse.from_dense([[2, 85, 0, 0, 0],[33,78, 2, 73, 1]])\n }\n\n sequence_input_layer = tf.keras.experimental.SequenceFeatures(columns)\n sequence_input, sequence_length = sequence_input_layer(\n features, training=training)\n sequence_length_mask = tf.sequence_mask(sequence_length)\n hidden_size = 32\n rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)\n rnn_layer = tf.keras.layers.RNN(rnn_cell)\n outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)\n ```\n ", "desc": "A layer for sequence input.", "type": "API"}, {"name": "tf.keras.experimental.WideDeepModel", "docs": "Wide & Deep Model for regression and classification problems.\n\n This model jointly train a linear and a dnn model.\n\n Example:\n\n ```python\n linear_model = LinearModel()\n dnn_model = keras.Sequential([keras.layers.Dense(units=64),\n keras.layers.Dense(units=1)])\n combined_model = WideDeepModel(linear_model, dnn_model)\n combined_model.compile(optimizer=['sgd', 'adam'], 'mse', ['mse'])\n # define dnn_inputs and linear_inputs as separate numpy arrays or\n # a single numpy array if dnn_inputs is same as linear_inputs.\n combined_model.fit([linear_inputs, dnn_inputs], y, epochs)\n # or define a single `tf.data.Dataset` that contains a single tensor or\n # separate tensors for dnn_inputs and linear_inputs.\n dataset = tf.data.Dataset.from_tensors(([linear_inputs, dnn_inputs], y))\n combined_model.fit(dataset, epochs)\n ```\n\n Both linear and dnn model can be pre-compiled and trained separately\n before jointly training:\n\n Example:\n ```python\n linear_model = LinearModel()\n linear_model.compile('adagrad', 'mse')\n linear_model.fit(linear_inputs, y, epochs)\n dnn_model = keras.Sequential([keras.layers.Dense(units=1)])\n dnn_model.compile('rmsprop', 'mse')\n dnn_model.fit(dnn_inputs, y, epochs)\n combined_model = WideDeepModel(linear_model, dnn_model)\n combined_model.compile(optimizer=['sgd', 'adam'], 'mse', ['mse'])\n combined_model.fit([linear_inputs, dnn_inputs], y, epochs)\n ```\n\n ", "desc": "Wide & Deep Model for regression and classification problems.", "type": "API"}, {"name": "tf.keras.initializers", "docs": "Keras initializer serialization / deserialization.\n", "desc": "Keras initializer serialization / deserialization.", "type": "API"}, {"name": "tf.keras.initializers.Constant", "docs": "Initializer that generates tensors with constant values.\n\n Also available via the shortcut function `tf.keras.initializers.constant`.\n\n Only scalar values are allowed.\n The constant value provided must be convertible to the dtype requested\n when calling the initializer.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Constant(3.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Constant(3.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n value: A Python scalar.\n ", "desc": "Initializer that generates tensors with constant values.", "type": "API"}, {"name": "tf.keras.initializers.deserialize", "docs": "Return an `Initializer` object from its config.", "desc": "Return an `Initializer` object from its config.", "type": "API"}, {"name": "tf.keras.initializers.get", "docs": "Retrieve a Keras initializer by the identifier.\n\n The `identifier` may be the string name of a initializers function or class (\n case-sensitively).\n\n >>> identifier = 'Ones'\n >>> tf.keras.initializers.deserialize(identifier)\n <...keras.initializers.initializers_v2.Ones...>\n\n You can also specify `config` of the initializer to this function by passing\n dict containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Initializer` class.\n\n >>> cfg = {'class_name': 'Ones', 'config': {}}\n >>> tf.keras.initializers.deserialize(cfg)\n <...keras.initializers.initializers_v2.Ones...>\n\n In the case that the `identifier` is a class, this method will return a new\n instance of the class by its constructor.\n\n Args:\n identifier: String or dict that contains the initializer name or\n configurations.\n\n Returns:\n Initializer instance base on the input identifier.\n\n Raises:\n ValueError: If the input identifier is not a supported type or in a bad\n format.\n ", "desc": "Retrieve a Keras initializer by the identifier.", "type": "API"}, {"name": "tf.keras.initializers.glorot_normal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_normal`.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number of input units in\n the weight tensor and `fan_out` is the number of output units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.glorot_uniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / (fan_in + fan_out))` (`fan_in` is the number of input units\n in the weight tensor and `fan_out` is the number of output units).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.keras.initializers.GlorotNormal", "docs": "The Glorot normal initializer, also called Xavier normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_normal`.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(2 / (fan_in + fan_out))` where `fan_in` is the number of input units in\n the weight tensor and `fan_out` is the number of output units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot normal initializer, also called Xavier normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.GlorotUniform", "docs": "The Glorot uniform initializer, also called Xavier uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.glorot_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / (fan_in + fan_out))` (`fan_in` is the number of input units\n in the weight tensor and `fan_out` is the number of output units).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.GlorotUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html)\n ", "desc": "The Glorot uniform initializer, also called Xavier uniform initializer.", "type": "API"}, {"name": "tf.keras.initializers.he_normal", "docs": "He normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_normal`.\n\n It draws samples from a truncated normal distribution centered on 0 with\n `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the\n weight tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.he_uniform", "docs": "He uniform variance scaling initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He uniform variance scaling initializer.", "type": "API"}, {"name": "tf.keras.initializers.HeNormal", "docs": "He normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_normal`.\n\n It draws samples from a truncated normal distribution centered on 0 with\n `stddev = sqrt(2 / fan_in)` where `fan_in` is the number of input units in the\n weight tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.HeUniform", "docs": "He uniform variance scaling initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.he_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`, where\n `limit = sqrt(6 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.HeUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [He et al., 2015](https://arxiv.org/abs/1502.01852)\n ", "desc": "He uniform variance scaling initializer.", "type": "API"}, {"name": "tf.keras.initializers.Identity", "docs": "Initializer that generates the identity matrix.\n\n Also available via the shortcut function `tf.keras.initializers.identity`.\n\n Only usable for generating 2D matrices.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Identity()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Identity()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n gain: Multiplicative factor to apply to the identity matrix.\n ", "desc": "Initializer that generates the identity matrix.", "type": "API"}, {"name": "tf.keras.initializers.Initializer", "docs": "Initializer base class: all Keras initializers inherit from this class.\n\n Initializers should implement a `__call__` method with the following\n signature:\n\n ```python\n def __call__(self, shape, dtype=None, **kwargs):\n # returns a tensor of shape `shape` and dtype `dtype`\n # containing values drawn from a distribution of your choice.\n ```\n\n Optionally, you an also implement the method `get_config` and the class\n method `from_config` in order to support serialization -- just like with\n any Keras object.\n\n Here's a simple example: a random normal initializer.\n\n ```python\n import tensorflow as tf\n\n class ExampleRandomNormal(tf.keras.initializers.Initializer):\n\n def __init__(self, mean, stddev):\n self.mean = mean\n self.stddev = stddev\n\n def __call__(self, shape, dtype=None, **kwargs):\n return tf.random.normal(\n shape, mean=self.mean, stddev=self.stddev, dtype=dtype)\n\n def get_config(self): # To support serialization\n return {\"mean\": self.mean, \"stddev\": self.stddev}\n ```\n\n Note that we don't have to implement `from_config` in the example above since\n the constructor arguments of the class the keys in the config returned by\n `get_config` are the same. In this case, the default `from_config`\n works fine.\n ", "desc": "Initializer base class: all Keras initializers inherit from this class.", "type": "API"}, {"name": "tf.keras.initializers.lecun_normal", "docs": "Lecun normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_normal`.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.lecun_uniform", "docs": "Lecun uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`,\n where `limit = sqrt(3 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun uniform initializer.", "type": "API"}, {"name": "tf.keras.initializers.LecunNormal", "docs": "Lecun normal initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_normal`.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Draws samples from a truncated normal distribution centered on 0 with `stddev\n = sqrt(1 / fan_in)` where `fan_in` is the number of input units in the weight\n tensor.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunNormal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun normal initializer.", "type": "API"}, {"name": "tf.keras.initializers.LecunUniform", "docs": "Lecun uniform initializer.\n\n Also available via the shortcut function\n `tf.keras.initializers.lecun_uniform`.\n\n Draws samples from a uniform distribution within `[-limit, limit]`,\n where `limit = sqrt(3 / fan_in)` (`fan_in` is the number of input units in the\n weight tensor).\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.LecunUniform()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)\n ", "desc": "Lecun uniform initializer.", "type": "API"}, {"name": "tf.keras.initializers.Ones", "docs": "Initializer that generates tensors initialized to 1.\n\n Also available via the shortcut function `tf.keras.initializers.ones`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Ones()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Ones()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n ", "desc": "Initializer that generates tensors initialized to 1.", "type": "API"}, {"name": "tf.keras.initializers.Orthogonal", "docs": "Initializer that generates an orthogonal matrix.\n\n Also available via the shortcut function `tf.keras.initializers.orthogonal`.\n\n If the shape of the tensor to initialize is two-dimensional, it is initialized\n with an orthogonal matrix obtained from the QR decomposition of a matrix of\n random numbers drawn from a normal distribution.\n If the matrix has fewer rows than columns then the output will have orthogonal\n rows. Otherwise, the output will have orthogonal columns.\n\n If the shape of the tensor to initialize is more than two-dimensional,\n a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`\n is initialized, where `n` is the length of the shape vector.\n The matrix is subsequently reshaped to give a tensor of the desired shape.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Orthogonal()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Orthogonal()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n gain: multiplicative factor to apply to the orthogonal matrix\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n\n References:\n - [Saxe et al., 2014](https://openreview.net/forum?id=_wzZwKpTDF_9C)\n ", "desc": "Initializer that generates an orthogonal matrix.", "type": "API"}, {"name": "tf.keras.initializers.random_normal", "docs": "Initializer that generates tensors with a normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_normal`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.keras.initializers.random_uniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_uniform`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate (inclusive).\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate (exclusive).\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.keras.initializers.RandomNormal", "docs": "Initializer that generates tensors with a normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_normal`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values to\n generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the random\n values to generate.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a normal distribution.", "type": "API"}, {"name": "tf.keras.initializers.RandomUniform", "docs": "Initializer that generates tensors with a uniform distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.random_uniform`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n minval: A python scalar or a scalar tensor. Lower bound of the range of\n random values to generate (inclusive).\n maxval: A python scalar or a scalar tensor. Upper bound of the range of\n random values to generate (exclusive).\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates tensors with a uniform distribution.", "type": "API"}, {"name": "tf.keras.initializers.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.keras.initializers.truncated_normal", "docs": "Initializer that generates a truncated normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.truncated_normal`.\n\n The values generated are similar to values from a\n `tf.keras.initializers.RandomNormal` initializer except that values more\n than two standard deviations from the mean are\n discarded and re-drawn.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values\n to generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate before truncation.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.keras.initializers.TruncatedNormal", "docs": "Initializer that generates a truncated normal distribution.\n\n Also available via the shortcut function\n `tf.keras.initializers.truncated_normal`.\n\n The values generated are similar to values from a\n `tf.keras.initializers.RandomNormal` initializer except that values more\n than two standard deviations from the mean are\n discarded and re-drawn.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.)\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n mean: a python scalar or a scalar tensor. Mean of the random values\n to generate.\n stddev: a python scalar or a scalar tensor. Standard deviation of the\n random values to generate before truncation.\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer that generates a truncated normal distribution.", "type": "API"}, {"name": "tf.keras.initializers.variance_scaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n Also available via the shortcut function\n `tf.keras.initializers.variance_scaling`.\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`, samples are\n drawn from a truncated/untruncated normal distribution with a mean of zero and\n a standard deviation (after truncation, if used) `stddev = sqrt(scale / n)`,\n where `n` is:\n\n - number of input units in the weight tensor, if `mode=\"fan_in\"`\n - number of output units, if `mode=\"fan_out\"`\n - average of the numbers of input and output units, if `mode=\"fan_avg\"`\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within `[-limit, limit]`, where `limit = sqrt(3 * scale / n)`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"truncated_normal\",\n \"untruncated_normal\" and \"uniform\".\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.keras.initializers.VarianceScaling", "docs": "Initializer capable of adapting its scale to the shape of weights tensors.\n\n Also available via the shortcut function\n `tf.keras.initializers.variance_scaling`.\n\n With `distribution=\"truncated_normal\" or \"untruncated_normal\"`, samples are\n drawn from a truncated/untruncated normal distribution with a mean of zero and\n a standard deviation (after truncation, if used) `stddev = sqrt(scale / n)`,\n where `n` is:\n\n - number of input units in the weight tensor, if `mode=\"fan_in\"`\n - number of output units, if `mode=\"fan_out\"`\n - average of the numbers of input and output units, if `mode=\"fan_avg\"`\n\n With `distribution=\"uniform\"`, samples are drawn from a uniform distribution\n within `[-limit, limit]`, where `limit = sqrt(3 * scale / n)`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.VarianceScaling(\n ... scale=0.1, mode='fan_in', distribution='uniform')\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n\n Args:\n scale: Scaling factor (positive float).\n mode: One of \"fan_in\", \"fan_out\", \"fan_avg\".\n distribution: Random distribution to use. One of \"truncated_normal\",\n \"untruncated_normal\" and \"uniform\".\n seed: A Python integer. Used to make the behavior of the initializer\n deterministic. Note that a seeded\n initializer will not produce the same random values across multiple calls,\n but multiple initializers will produce the same sequence when constructed\n with the same seed value.\n ", "desc": "Initializer capable of adapting its scale to the shape of weights tensors.", "type": "API"}, {"name": "tf.keras.initializers.Zeros", "docs": "Initializer that generates tensors initialized to 0.\n\n Also available via the shortcut function `tf.keras.initializers.zeros`.\n\n Examples:\n\n >>> # Standalone usage:\n >>> initializer = tf.keras.initializers.Zeros()\n >>> values = initializer(shape=(2, 2))\n\n >>> # Usage in a Keras layer:\n >>> initializer = tf.keras.initializers.Zeros()\n >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)\n ", "desc": "Initializer that generates tensors initialized to 0.", "type": "API"}, {"name": "tf.keras.Input", "docs": "`Input()` is used to instantiate a Keras tensor.\n\n A Keras tensor is a symbolic tensor-like object,\n which we augment with certain attributes that allow us to build a Keras model\n just by knowing the inputs and outputs of the model.\n\n For instance, if `a`, `b` and `c` are Keras tensors,\n it becomes possible to do:\n `model = Model(input=[a, b], output=c)`\n\n Args:\n shape: A shape tuple (integers), not including the batch size.\n For instance, `shape=(32,)` indicates that the expected input\n will be batches of 32-dimensional vectors. Elements of this tuple\n can be None; 'None' elements represent dimensions where the shape is\n not known.\n batch_size: optional static batch size (integer).\n name: An optional name string for the layer.\n Should be unique in a model (do not reuse the same name twice).\n It will be autogenerated if it isn't provided.\n dtype: The data type expected by the input, as a string\n (`float32`, `float64`, `int32`...)\n sparse: A boolean specifying whether the placeholder to be created is\n sparse. Only one of 'ragged' and 'sparse' can be True. Note that,\n if `sparse` is False, sparse tensors can still be passed into the\n input - they will be densified with a default value of 0.\n tensor: Optional existing tensor to wrap into the `Input` layer.\n If set, the layer will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n ragged: A boolean specifying whether the placeholder to be created is\n ragged. Only one of 'ragged' and 'sparse' can be True. In this case,\n values of 'None' in the 'shape' argument represent ragged dimensions.\n For more information about RaggedTensors, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensors).\n type_spec: A `tf.TypeSpec` object to create the input placeholder from.\n When provided, all other args except name must be None.\n **kwargs: deprecated arguments support. Supports `batch_shape` and\n `batch_input_shape`.\n\n Returns:\n A `tensor`.\n\n Example:\n\n ```python\n # this is a logistic regression in Keras\n x = Input(shape=(32,))\n y = Dense(16, activation='softmax')(x)\n model = Model(x, y)\n ```\n\n Note that even if eager execution is enabled,\n `Input` produces a symbolic tensor-like object (i.e. a placeholder).\n This symbolic tensor-like object can be used with lower-level\n TensorFlow ops that take tensors as inputs, as such:\n\n ```python\n x = Input(shape=(32,))\n y = tf.square(x) # This op will be treated like a layer\n model = Model(x, y)\n ```\n\n (This behavior does not work for higher-order TensorFlow APIs such as\n control flow and being directly watched by a `tf.GradientTape`).\n\n However, the resulting model will not track any variables that were\n used as inputs to TensorFlow ops. All variable usages must happen within\n Keras layers to make sure they will be tracked by the model's weights.\n\n The Keras Input can also create a placeholder from an arbitrary `tf.TypeSpec`,\n e.g:\n\n ```python\n x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],\n dtype=tf.float32, ragged_rank=1))\n y = x.values\n model = Model(x, y)\n ```\n When passing an arbitrary `tf.TypeSpec`, it must represent the signature of an\n entire batch instead of just one example.\n\n Raises:\n ValueError: If both `sparse` and `ragged` are provided.\n ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`) are\n provided.\n ValueError: If `shape`, `tensor` and `type_spec` are None.\n ValueError: If arguments besides `type_spec` are non-None while `type_spec`\n is passed.\n ValueError: if any unrecognized parameters are provided.\n ", "desc": "`Input()` is used to instantiate a Keras tensor.", "type": "API"}, {"name": "tf.keras.layers", "docs": "Keras layers API.\n", "desc": "Keras layers API.", "type": "API"}, {"name": "tf.keras.layers.AbstractRNNCell", "docs": "Abstract object representing an RNN cell.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This is the base class for implementing RNN cells with custom behavior.\n\n Every `RNNCell` must have the properties below and implement `call` with\n the signature `(output, next_state) = call(input, state)`.\n\n Examples:\n\n ```python\n class MinimalRNNCell(AbstractRNNCell):\n\n def __init__(self, units, **kwargs):\n self.units = units\n super(MinimalRNNCell, self).__init__(**kwargs)\n\n @property\n def state_size(self):\n return self.units\n\n def build(self, input_shape):\n self.kernel = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='uniform',\n name='kernel')\n self.recurrent_kernel = self.add_weight(\n shape=(self.units, self.units),\n initializer='uniform',\n name='recurrent_kernel')\n self.built = True\n\n def call(self, inputs, states):\n prev_output = states[0]\n h = backend.dot(inputs, self.kernel)\n output = h + backend.dot(prev_output, self.recurrent_kernel)\n return output, output\n ```\n\n This definition of cell differs from the definition used in the literature.\n In the literature, 'cell' refers to an object with a single scalar output.\n This definition refers to a horizontal array of such units.\n\n An RNN cell, in the most abstract setting, is anything that has\n a state and performs some operation that takes a matrix of inputs.\n This operation results in an output matrix with `self.output_size` columns.\n If `self.state_size` is an integer, this operation also results in a new\n state matrix with `self.state_size` columns. If `self.state_size` is a\n (possibly nested tuple of) TensorShape object(s), then it should return a\n matching structure of Tensors having shape `[batch_size].concatenate(s)`\n for each `s` in `self.batch_size`.\n ", "desc": "Abstract object representing an RNN cell.", "type": "API"}, {"name": "tf.keras.layers.Activation", "docs": "Applies an activation function to an output.\n\n Args:\n activation: Activation function, such as `tf.nn.relu`, or string name of\n built-in activation function, such as \"relu\".\n\n Usage:\n\n >>> layer = tf.keras.layers.Activation('relu')\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.Activation(tf.nn.relu)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Applies an activation function to an output.", "type": "API"}, {"name": "tf.keras.layers.ActivityRegularization", "docs": "Layer that applies an update to the cost function based input activity.\n\n Args:\n l1: L1 regularization factor (positive float).\n l2: L2 regularization factor (positive float).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Layer that applies an update to the cost function based input activity.", "type": "API"}, {"name": "tf.keras.layers.Add", "docs": "Layer that adds a list of inputs.\n\n It takes as input a list of tensors,\n all of the same shape, and returns\n a single tensor (also of the same shape).\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x1 = tf.random.normal(input_shape)\n >>> x2 = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Add()([x1, x2])\n >>> print(y.shape)\n (2, 3, 4)\n\n Used in a functional model:\n\n >>> input1 = tf.keras.layers.Input(shape=(16,))\n >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)\n >>> input2 = tf.keras.layers.Input(shape=(32,))\n >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)\n >>> # equivalent to `added = tf.keras.layers.add([x1, x2])`\n >>> added = tf.keras.layers.Add()([x1, x2])\n >>> out = tf.keras.layers.Dense(4)(added)\n >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)\n\n ", "desc": "Layer that adds a list of inputs.", "type": "API"}, {"name": "tf.keras.layers.AdditiveAttention", "docs": "Additive attention layer, a.k.a. Bahdanau-style attention.\n\n Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of\n shape `[batch_size, Tv, dim]` and `key` tensor of shape\n `[batch_size, Tv, dim]`. The calculation follows the steps:\n\n 1. Reshape `query` and `key` into shapes `[batch_size, Tq, 1, dim]`\n and `[batch_size, 1, Tv, dim]` respectively.\n 2. Calculate scores with shape `[batch_size, Tq, Tv]` as a non-linear\n sum: `scores = tf.reduce_sum(tf.tanh(query + key), axis=-1)`\n 3. Use scores to calculate a distribution with shape\n `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`.\n 4. Use `distribution` to create a linear combination of `value` with\n shape `[batch_size, Tq, dim]`:\n `return tf.matmul(distribution, value)`.\n\n Args:\n use_scale: If `True`, will create a variable to scale the attention scores.\n causal: Boolean. Set to `True` for decoder self-attention. Adds a mask such\n that position `i` cannot attend to positions `j > i`. This prevents the\n flow of information from the future towards the past.\n Defaults to `False`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the\n attention scores. Defaults to 0.0.\n\n Call Args:\n\n inputs: List of the following tensors:\n * query: Query `Tensor` of shape `[batch_size, Tq, dim]`.\n * value: Value `Tensor` of shape `[batch_size, Tv, dim]`.\n * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not\n given, will use `value` for both `key` and `value`, which is the\n most common case.\n mask: List of the following tensors:\n * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`.\n If given, the output will be zero at the positions where\n `mask==False`.\n * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`.\n If given, will apply the mask such that values at positions where\n `mask==False` do not contribute to the result.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n return_attention_scores: bool, it `True`, returns the attention scores\n (after masking and softmax) as an additional output argument.\n\n Output:\n\n Attention outputs of shape `[batch_size, Tq, dim]`.\n [Optional] Attention scores after masking and softmax with shape\n `[batch_size, Tq, Tv]`.\n\n The meaning of `query`, `value` and `key` depend on the application. In the\n case of text similarity, for example, `query` is the sequence embeddings of\n the first piece of text and `value` is the sequence embeddings of the second\n piece of text. `key` is usually the same tensor as `value`.\n\n Here is a code example for using `AdditiveAttention` in a CNN+Attention\n network:\n\n ```python\n # Variable-length int sequences.\n query_input = tf.keras.Input(shape=(None,), dtype='int32')\n value_input = tf.keras.Input(shape=(None,), dtype='int32')\n\n # Embedding lookup.\n token_embedding = tf.keras.layers.Embedding(max_tokens, dimension)\n # Query embeddings of shape [batch_size, Tq, dimension].\n query_embeddings = token_embedding(query_input)\n # Value embeddings of shape [batch_size, Tv, dimension].\n value_embeddings = token_embedding(value_input)\n\n # CNN layer.\n cnn_layer = tf.keras.layers.Conv1D(\n filters=100,\n kernel_size=4,\n # Use 'same' padding so outputs have the same shape as inputs.\n padding='same')\n # Query encoding of shape [batch_size, Tq, filters].\n query_seq_encoding = cnn_layer(query_embeddings)\n # Value encoding of shape [batch_size, Tv, filters].\n value_seq_encoding = cnn_layer(value_embeddings)\n\n # Query-value attention of shape [batch_size, Tq, filters].\n query_value_attention_seq = tf.keras.layers.AdditiveAttention()(\n [query_seq_encoding, value_seq_encoding])\n\n # Reduce over the sequence axis to produce encodings of shape\n # [batch_size, filters].\n query_encoding = tf.keras.layers.GlobalAveragePooling1D()(\n query_seq_encoding)\n query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(\n query_value_attention_seq)\n\n # Concatenate query and document encodings to produce a DNN input layer.\n input_layer = tf.keras.layers.Concatenate()(\n [query_encoding, query_value_attention])\n\n # Add DNN layers, and create Model.\n # ...\n ```\n ", "desc": "Additive attention layer, a.k.a. Bahdanau-style attention.", "type": "API"}, {"name": "tf.keras.layers.AlphaDropout", "docs": "Applies Alpha Dropout to the input.\n\n Alpha Dropout is a `Dropout` that keeps mean and variance of inputs\n to their original values, in order to ensure the self-normalizing property\n even after this dropout.\n Alpha Dropout fits well to Scaled Exponential Linear Units\n by randomly setting activations to the negative saturation value.\n\n Args:\n rate: float, drop probability (as with `Dropout`).\n The multiplicative noise will have\n standard deviation `sqrt(rate / (1 - rate))`.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Applies Alpha Dropout to the input.", "type": "API"}, {"name": "tf.keras.layers.Attention", "docs": "Dot-product attention layer, a.k.a. Luong-style attention.\n\n Inputs are `query` tensor of shape `[batch_size, Tq, dim]`, `value` tensor of\n shape `[batch_size, Tv, dim]` and `key` tensor of shape\n `[batch_size, Tv, dim]`. The calculation follows the steps:\n\n 1. Calculate scores with shape `[batch_size, Tq, Tv]` as a `query`-`key` dot\n product: `scores = tf.matmul(query, key, transpose_b=True)`.\n 2. Use scores to calculate a distribution with shape\n `[batch_size, Tq, Tv]`: `distribution = tf.nn.softmax(scores)`.\n 3. Use `distribution` to create a linear combination of `value` with\n shape `[batch_size, Tq, dim]`:\n `return tf.matmul(distribution, value)`.\n\n Args:\n use_scale: If `True`, will create a scalar variable to scale the attention\n scores.\n causal: Boolean. Set to `True` for decoder self-attention. Adds a mask such\n that position `i` cannot attend to positions `j > i`. This prevents the\n flow of information from the future towards the past.\n Defaults to `False`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the\n attention scores. Defaults to 0.0.\n score_mode: Function to use to compute attention scores, one of\n `{\"dot\", \"concat\"}`. `\"dot\"` refers to the dot product between the query\n and key vectors. `\"concat\"` refers to the hyperbolic tangent of the\n concatenation of the query and key vectors.\n\n Call Args:\n\n inputs: List of the following tensors:\n * query: Query `Tensor` of shape `[batch_size, Tq, dim]`.\n * value: Value `Tensor` of shape `[batch_size, Tv, dim]`.\n * key: Optional key `Tensor` of shape `[batch_size, Tv, dim]`. If not\n given, will use `value` for both `key` and `value`, which is the\n most common case.\n mask: List of the following tensors:\n * query_mask: A boolean mask `Tensor` of shape `[batch_size, Tq]`.\n If given, the output will be zero at the positions where\n `mask==False`.\n * value_mask: A boolean mask `Tensor` of shape `[batch_size, Tv]`.\n If given, will apply the mask such that values at positions where\n `mask==False` do not contribute to the result.\n return_attention_scores: bool, it `True`, returns the attention scores\n (after masking and softmax) as an additional output argument.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n\n Output:\n\n Attention outputs of shape `[batch_size, Tq, dim]`.\n [Optional] Attention scores after masking and softmax with shape\n `[batch_size, Tq, Tv]`.\n\n The meaning of `query`, `value` and `key` depend on the application. In the\n case of text similarity, for example, `query` is the sequence embeddings of\n the first piece of text and `value` is the sequence embeddings of the second\n piece of text. `key` is usually the same tensor as `value`.\n\n Here is a code example for using `Attention` in a CNN+Attention network:\n\n ```python\n # Variable-length int sequences.\n query_input = tf.keras.Input(shape=(None,), dtype='int32')\n value_input = tf.keras.Input(shape=(None,), dtype='int32')\n\n # Embedding lookup.\n token_embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=64)\n # Query embeddings of shape [batch_size, Tq, dimension].\n query_embeddings = token_embedding(query_input)\n # Value embeddings of shape [batch_size, Tv, dimension].\n value_embeddings = token_embedding(value_input)\n\n # CNN layer.\n cnn_layer = tf.keras.layers.Conv1D(\n filters=100,\n kernel_size=4,\n # Use 'same' padding so outputs have the same shape as inputs.\n padding='same')\n # Query encoding of shape [batch_size, Tq, filters].\n query_seq_encoding = cnn_layer(query_embeddings)\n # Value encoding of shape [batch_size, Tv, filters].\n value_seq_encoding = cnn_layer(value_embeddings)\n\n # Query-value attention of shape [batch_size, Tq, filters].\n query_value_attention_seq = tf.keras.layers.Attention()(\n [query_seq_encoding, value_seq_encoding])\n\n # Reduce over the sequence axis to produce encodings of shape\n # [batch_size, filters].\n query_encoding = tf.keras.layers.GlobalAveragePooling1D()(\n query_seq_encoding)\n query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(\n query_value_attention_seq)\n\n # Concatenate query and document encodings to produce a DNN input layer.\n input_layer = tf.keras.layers.Concatenate()(\n [query_encoding, query_value_attention])\n\n # Add DNN layers, and create Model.\n # ...\n ```\n ", "desc": "Dot-product attention layer, a.k.a. Luong-style attention.", "type": "API"}, {"name": "tf.keras.layers.Average", "docs": "Layer that averages a list of inputs element-wise.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n Example:\n\n >>> x1 = np.ones((2, 2))\n >>> x2 = np.zeros((2, 2))\n >>> y = tf.keras.layers.Average()([x1, x2])\n >>> y.numpy().tolist()\n [[0.5, 0.5], [0.5, 0.5]]\n\n Usage in a functional model:\n\n >>> input1 = tf.keras.layers.Input(shape=(16,))\n >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)\n >>> input2 = tf.keras.layers.Input(shape=(32,))\n >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)\n >>> avg = tf.keras.layers.Average()([x1, x2])\n >>> out = tf.keras.layers.Dense(4)(avg)\n >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)\n\n Raises:\n ValueError: If there is a shape mismatch between the inputs and the shapes\n cannot be broadcasted to match.\n ", "desc": "Layer that averages a list of inputs element-wise.", "type": "API"}, {"name": "tf.keras.layers.AveragePooling1D", "docs": "Average pooling for temporal data.\n\n Downsamples the input representation by taking the average value over the\n window defined by `pool_size`. The window is shifted by `strides`. The\n resulting output when using \"valid\" padding option has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the \"same\" padding option is:\n `output_shape = input_shape / strides`\n\n For example, for strides=1 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=2 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=1 and padding=\"same\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> avg_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the average pooling windows.\n strides: Integer, or None. Factor by which to downscale.\n E.g. 2 will halve the input.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Average pooling for temporal data.", "type": "API"}, {"name": "tf.keras.layers.AveragePooling2D", "docs": "Average pooling operation for spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output when using `\"valid\"` padding option has a shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `stride=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `strides=(1, 1)` and `padding=\"same\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> avg_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n factors by which to downscale (vertical, horizontal).\n `(2, 2)` will halve the input in both spatial dimension.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n ", "desc": "Average pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.AveragePooling3D", "docs": "Average pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.AveragePooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Average pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.layers.AvgPool1D", "docs": "Average pooling for temporal data.\n\n Downsamples the input representation by taking the average value over the\n window defined by `pool_size`. The window is shifted by `strides`. The\n resulting output when using \"valid\" padding option has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the \"same\" padding option is:\n `output_shape = input_shape / strides`\n\n For example, for strides=1 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=2 and padding=\"valid\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> avg_pool_1d(x)\n \n\n For example, for strides=1 and padding=\"same\":\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> x\n \n >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> avg_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the average pooling windows.\n strides: Integer, or None. Factor by which to downscale.\n E.g. 2 will halve the input.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Average pooling for temporal data.", "type": "API"}, {"name": "tf.keras.layers.AvgPool2D", "docs": "Average pooling operation for spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output when using `\"valid\"` padding option has a shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `stride=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> avg_pool_2d(x)\n \n\n For example, for `strides=(1, 1)` and `padding=\"same\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> avg_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n factors by which to downscale (vertical, horizontal).\n `(2, 2)` will halve the input in both spatial dimension.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n ", "desc": "Average pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.AvgPool3D", "docs": "Average pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the average value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.AveragePooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Average pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.layers.BatchNormalization", "docs": "Layer that normalizes its inputs.\n\n Batch normalization applies a transformation that maintains the mean output\n close to 0 and the output standard deviation close to 1.\n\n Importantly, batch normalization works differently during training and\n during inference.\n\n **During training** (i.e. when using `fit()` or when calling the layer/model\n with the argument `training=True`), the layer normalizes its output using\n the mean and standard deviation of the current batch of inputs. That is to\n say, for each channel being normalized, the layer returns\n `gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta`, where:\n\n - `epsilon` is small constant (configurable as part of the constructor\n arguments)\n - `gamma` is a learned scaling factor (initialized as 1), which\n can be disabled by passing `scale=False` to the constructor.\n - `beta` is a learned offset factor (initialized as 0), which\n can be disabled by passing `center=False` to the constructor.\n\n **During inference** (i.e. when using `evaluate()` or `predict()` or when\n calling the layer/model with the argument `training=False` (which is the\n default), the layer normalizes its output using a moving average of the\n mean and standard deviation of the batches it has seen during training. That\n is to say, it returns\n `gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta`.\n\n `self.moving_mean` and `self.moving_var` are non-trainable variables that\n are updated each time the layer in called in training mode, as such:\n\n - `moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)`\n - `moving_var = moving_var * momentum + var(batch) * (1 - momentum)`\n\n As such, the layer will only normalize its inputs during inference\n *after having been trained on data that has similar statistics as the\n inference data*.\n\n Args:\n axis: Integer, the axis that should be normalized (typically the features\n axis). For instance, after a `Conv2D` layer with\n `data_format=\"channels_first\"`, set `axis=1` in `BatchNormalization`.\n momentum: Momentum for the moving average.\n epsilon: Small float added to variance to avoid dividing by zero.\n center: If True, add offset of `beta` to normalized tensor. If False, `beta`\n is ignored.\n scale: If True, multiply by `gamma`. If False, `gamma` is not used. When the\n next layer is linear (also e.g. `nn.relu`), this can be disabled since the\n scaling will be done by the next layer.\n beta_initializer: Initializer for the beta weight.\n gamma_initializer: Initializer for the gamma weight.\n moving_mean_initializer: Initializer for the moving mean.\n moving_variance_initializer: Initializer for the moving variance.\n beta_regularizer: Optional regularizer for the beta weight.\n gamma_regularizer: Optional regularizer for the gamma weight.\n beta_constraint: Optional constraint for the beta weight.\n gamma_constraint: Optional constraint for the gamma weight.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode.\n - `training=True`: The layer will normalize its inputs using the mean and\n variance of the current batch of inputs.\n - `training=False`: The layer will normalize its inputs using the mean and\n variance of its moving statistics, learned during training.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape` (tuple of\n integers, does not include the samples axis) when using this layer as the\n first layer in a model.\n\n Output shape:\n Same shape as input.\n\n Reference:\n - [Ioffe and Szegedy, 2015](https://arxiv.org/abs/1502.03167).\n\n **About setting `layer.trainable = False` on a `BatchNormalization` layer:**\n\n The meaning of setting `layer.trainable = False` is to freeze the layer,\n i.e. its internal state will not change during training:\n its trainable weights will not be updated\n during `fit()` or `train_on_batch()`, and its state updates will not be run.\n\n Usually, this does not necessarily mean that the layer is run in inference\n mode (which is normally controlled by the `training` argument that can\n be passed when calling a layer). \"Frozen state\" and \"inference mode\"\n are two separate concepts.\n\n However, in the case of the `BatchNormalization` layer, **setting\n `trainable = False` on the layer means that the layer will be\n subsequently run in inference mode** (meaning that it will use\n the moving mean and the moving variance to normalize the current batch,\n rather than using the mean and variance of the current batch).\n\n This behavior has been introduced in TensorFlow 2.0, in order\n to enable `layer.trainable = False` to produce the most commonly\n expected behavior in the convnet fine-tuning use case.\n\n Note that:\n - Setting `trainable` on an model containing other layers will\n recursively set the `trainable` value of all inner layers.\n - If the value of the `trainable`\n attribute is changed after calling `compile()` on a model,\n the new value doesn't take effect for this model\n until `compile()` is called again.\n ", "desc": "Layer that normalizes its inputs.", "type": "API"}, {"name": "tf.keras.layers.Bidirectional", "docs": "Bidirectional wrapper for RNNs.\n\n Args:\n layer: `keras.layers.RNN` instance, such as `keras.layers.LSTM` or\n `keras.layers.GRU`. It could also be a `keras.layers.Layer` instance\n that meets the following criteria:\n 1. Be a sequence-processing layer (accepts 3D+ inputs).\n 2. Have a `go_backwards`, `return_sequences` and `return_state`\n attribute (with the same semantics as for the `RNN` class).\n 3. Have an `input_spec` attribute.\n 4. Implement serialization via `get_config()` and `from_config()`.\n Note that the recommended way to create new RNN layers is to write a\n custom RNN cell and use it with `keras.layers.RNN`, instead of\n subclassing `keras.layers.Layer` directly.\n - When the `returns_sequences` is true, the output of the masked timestep\n will be zero regardless of the layer's original `zero_output_for_mask`\n value.\n merge_mode: Mode by which outputs of the forward and backward RNNs will be\n combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the\n outputs will not be combined, they will be returned as a list. Default\n value is 'concat'.\n backward_layer: Optional `keras.layers.RNN`, or `keras.layers.Layer`\n instance to be used to handle backwards input processing.\n If `backward_layer` is not provided, the layer instance passed as the\n `layer` argument will be used to generate the backward layer\n automatically.\n Note that the provided `backward_layer` layer should have properties\n matching those of the `layer` argument, in particular it should have the\n same values for `stateful`, `return_states`, `return_sequences`, etc.\n In addition, `backward_layer` and `layer` should have different\n `go_backwards` argument values.\n A `ValueError` will be raised if these requirements are not met.\n\n Call arguments:\n The call arguments for this layer are the same as those of the wrapped RNN\n layer.\n Beware that when passing the `initial_state` argument during the call of\n this layer, the first half in the list of elements in the `initial_state`\n list will be passed to the forward RNN call and the last half in the list\n of elements will be passed to the backward RNN call.\n\n Raises:\n ValueError:\n 1. If `layer` or `backward_layer` is not a `Layer` instance.\n 2. In case of invalid `merge_mode` argument.\n 3. If `backward_layer` has mismatched properties compared to `layer`.\n\n Examples:\n\n ```python\n model = Sequential()\n model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5, 10)))\n model.add(Bidirectional(LSTM(10)))\n model.add(Dense(5))\n model.add(Activation('softmax'))\n model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\n # With custom backward layer\n model = Sequential()\n forward_layer = LSTM(10, return_sequences=True)\n backward_layer = LSTM(10, activation='relu', return_sequences=True,\n go_backwards=True)\n model.add(Bidirectional(forward_layer, backward_layer=backward_layer,\n input_shape=(5, 10)))\n model.add(Dense(5))\n model.add(Activation('softmax'))\n model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n ```\n ", "desc": "Bidirectional wrapper for RNNs.", "type": "API"}, {"name": "tf.keras.layers.Concatenate", "docs": "Layer that concatenates a list of inputs.\n\n It takes as input a list of tensors, all of the same shape except\n for the concatenation axis, and returns a single tensor that is the\n concatenation of all inputs.\n\n >>> x = np.arange(20).reshape(2, 2, 5)\n >>> print(x)\n [[[ 0 1 2 3 4]\n [ 5 6 7 8 9]]\n [[10 11 12 13 14]\n [15 16 17 18 19]]]\n >>> y = np.arange(20, 30).reshape(2, 1, 5)\n >>> print(y)\n [[[20 21 22 23 24]]\n [[25 26 27 28 29]]]\n >>> tf.keras.layers.Concatenate(axis=1)([x, y])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> concatted = tf.keras.layers.Concatenate()([x1, x2])\n >>> concatted.shape\n TensorShape([5, 16])\n\n ", "desc": "Layer that concatenates a list of inputs.", "type": "API"}, {"name": "tf.keras.layers.Conv1D", "docs": "1D convolution layer (e.g. temporal convolution).\n\n This layer creates a convolution kernel that is convolved\n with the layer input over a single spatial (or temporal) dimension\n to produce a tensor of outputs.\n If `use_bias` is True, a bias vector is created and added to the outputs.\n Finally, if `activation` is not `None`,\n it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide an `input_shape` argument\n (tuple of integers or `None`, e.g.\n `(10, 128)` for sequences of 10 vectors of 128-dimensional vectors,\n or `(None, 128)` for variable-length sequences of 128-dimensional vectors.\n\n Examples:\n\n >>> # The inputs are 128-length vectors with 10 timesteps, and the batch size\n >>> # is 4.\n >>> input_shape = (4, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu',input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 8, 32)\n\n >>> # With extended batch shape [4, 7] (e.g. weather data where batch\n >>> # dimensions correspond to spatial location and the third dimension\n >>> # corresponds to time.)\n >>> input_shape = (4, 7, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 8, 32)\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer,\n specifying the length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer,\n specifying the stride length of the convolution.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"` or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n `\"causal\"` results in causal (dilated) convolutions, e.g. `output[t]`\n does not depend on `input[t+1:]`. Useful when modeling temporal data\n where the model should not violate the temporal order.\n See [WaveNet: A Generative Model for Raw Audio, section\n 2.1](https://arxiv.org/abs/1609.03499).\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n dilation_rate: an integer or tuple/list of a single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any `strides` value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved\n separately with `filters / groups` filters. The output is the\n concatenation of all the `groups` results along the channel axis.\n Input channels and `filters` must both be divisible by `groups`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3+D tensor with shape: `batch_shape + (steps, input_dim)`\n\n Output shape:\n 3+D tensor with shape: `batch_shape + (new_steps, filters)`\n `steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "1D convolution layer (e.g. temporal convolution).", "type": "API"}, {"name": "tf.keras.layers.Conv1DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 3)` for data with 128 time steps and 3 channels.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer length of the 1D convolution window.\n strides: An integer specifying the stride of the convolution along the\n time dimension. Specifying a stride value != 1 is incompatible with\n specifying a `dilation_rate` value != 1. Defaults to 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer specifying the amount of padding along\n the time dimension of the output tensor.\n The amount of output padding must be lower than the stride.\n If set to `None` (default), the output shape is inferred.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: an integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying a `dilation_rate` value != 1 is\n incompatible with specifying a stride value != 1.\n Also dilation rate larger than 1 is not currently supported.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, steps, channels)`\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, new_steps, filters)`\n If `output_padding` is specified:\n ```\n new_timesteps = ((timesteps - 1) * strides + kernel_size -\n 2 * padding + output_padding)\n ```\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep learning](\n https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional Networks](\n https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.Conv2D", "docs": "2D convolution layer (e.g. spatial convolution over images).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`. You can use `None` when\n a dimension has variable size.\n\n Examples:\n\n >>> # The inputs are 28x28 RGB images with `channels_last` and the batch\n >>> # size is 4.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 2)\n\n >>> # With `dilation_rate` as 2.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 24, 24, 2)\n\n >>> # With `padding` as \"same\".\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', padding=\"same\", input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 28, 28, 2)\n\n >>> # With extended batch shape [4, 7]:\n >>> input_shape = (4, 7, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 2)\n\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input. When `padding=\"same\"` and\n `strides=1`, the output has the same size as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be `channels_last`.\n dilation_rate: an integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4+D tensor with shape: `batch_shape + (channels, rows, cols)` if\n `data_format='channels_first'`\n or 4+D tensor with shape: `batch_shape + (rows, cols, channels)` if\n `data_format='channels_last'`.\n\n Output shape:\n 4+D tensor with shape: `batch_shape + (filters, new_rows, new_cols)` if\n `data_format='channels_first'` or 4+D tensor with shape: `batch_shape +\n (new_rows, new_cols, filters)` if `data_format='channels_last'`. `rows`\n and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4+ representing\n `activation(conv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is `\"causal\"`.\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "2D convolution layer (e.g. spatial convolution over images).", "type": "API"}, {"name": "tf.keras.layers.Conv2DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 2 integers,\n specifying the amount of padding along the height and width\n of the output tensor.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer, specifying the dilation rate for all spatial\n dimensions for dilated convolution. Specifying different dilation rates\n for different dimensions is not supported.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified:\n ```\n new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n ```\n\n Returns:\n A tensor of rank 4 representing\n `activation(conv2dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.Conv3D", "docs": "3D convolution layer (e.g. spatial convolution over volumes).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes\n with a single channel,\n in `data_format=\"channels_last\"`.\n\n Examples:\n\n >>> # The inputs are 28x28x28 volumes with a single channel, and the\n >>> # batch size is 4\n >>> input_shape =(4, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 26, 2)\n\n >>> # With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames,\n >>> # with 7 frames per video.\n >>> input_shape = (4, 7, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 26, 2)\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the depth,\n height and width of the 3D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides of\n the convolution along each spatial dimension. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `batch_shape + (spatial_dim1, spatial_dim2,\n spatial_dim3, channels)` while `channels_first` corresponds to inputs with\n shape `batch_shape + (channels, spatial_dim1, spatial_dim2,\n spatial_dim3)`. It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`. If you never set it, then it\n will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 5+D tensor with shape: `batch_shape + (channels, conv_dim1, conv_dim2,\n conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (conv_dim1, conv_dim2, conv_dim3,\n channels)` if data_format='channels_last'.\n\n Output shape:\n 5+D tensor with shape: `batch_shape + (filters, new_conv_dim1,\n new_conv_dim2, new_conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (new_conv_dim1, new_conv_dim2,\n new_conv_dim3, filters)` if data_format='channels_last'. `new_conv_dim1`,\n `new_conv_dim2` and `new_conv_dim3` values might have changed due to\n padding.\n\n Returns:\n A tensor of rank 5+ representing\n `activation(conv3d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "3D convolution layer (e.g. spatial convolution over volumes).", "type": "API"}, {"name": "tf.keras.layers.Conv3DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels\n if `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the convolution along the depth, height\n and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 3 integers,\n specifying the amount of padding along the depth, height, and\n width.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, depth, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix\n (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 5D tensor with shape:\n `(batch_size, channels, depth, rows, cols)` if data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, depth, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 5D tensor with shape:\n `(batch_size, filters, new_depth, new_rows, new_cols)` if\n data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, new_depth, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n `depth` and `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified::\n ```\n new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] +\n output_padding[2])\n ```\n\n Returns:\n A tensor of rank 5 representing\n `activation(conv3dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.ConvLSTM2D", "docs": "2D Convolutional LSTM.\n\n Similar to an LSTM layer, but the input transformations\n and recurrent transformations are both convolutional.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of n integers, specifying the\n dimensions of the convolution window.\n strides: An integer or tuple/list of n integers, specifying the strides of\n the convolution. Specifying any stride value != 1 is incompatible with\n specifying any `dilation_rate` value != 1.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive). `\"valid\"` means no\n padding. `\"same\"` results in padding evenly to the left/right or up/down\n of the input such that output has the same height/width dimension as the\n input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch, time, ..., channels)` while `channels_first`\n corresponds to inputs with shape `(batch, time, channels, ...)`. It\n defaults to the `image_data_format` value found in your Keras config file\n at `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n dilation_rate: An integer or tuple/list of n integers, specifying the\n dilation rate to use for dilated convolution. Currently, specifying any\n `dilation_rate` value != 1 is incompatible with specifying any `strides`\n value != 1.\n activation: Activation function to use. By default hyperbolic tangent\n activation function is applied (`tanh(x)`).\n recurrent_activation: Activation function to use for the recurrent step.\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs.\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n bias_initializer: Initializer for the bias vector.\n unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at\n initialization. Use in combination with `bias_initializer=\"zeros\"`. This\n is recommended in [Jozefowicz et al., 2015](\n http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n return_sequences: Boolean. Whether to return the last output in the output\n sequence, or the full sequence. (default False)\n return_state: Boolean Whether to return the last state in addition to the\n output. (default False)\n go_backwards: Boolean (default False). If True, process the input sequence\n backwards.\n stateful: Boolean (default False). If True, the last state for each sample\n at index i in a batch will be used as initial state for the sample of\n index i in the following batch.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state.\n Call arguments:\n inputs: A 5D tensor.\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether a\n given timestep should be masked.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or `recurrent_dropout`\n are set.\n initial_state: List of initial state tensors to be passed to the first call\n of the cell.\n Input shape: - If data_format='channels_first'\n 5D tensor with shape: `(samples, time, channels, rows, cols)` - If\n data_format='channels_last'\n 5D tensor with shape: `(samples, time, rows, cols, channels)`\n Output shape:\n - If `return_state`: a list of tensors. The first tensor is the output. The\n remaining tensors are the last states,\n each 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'. `rows` and `cols` values might have changed\n due to padding.\n - If `return_sequences`: 5D tensor with shape: `(samples, timesteps,\n filters, new_rows, new_cols)` if data_format='channels_first'\n or shape: `(samples, timesteps, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n - Else, 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n\n Raises:\n ValueError: in case of invalid constructor arguments.\n\n References:\n - [Shi et al., 2015](http://arxiv.org/abs/1506.04214v1)\n (the current implementation does not include the feedback loop on the\n cells output).\n ", "desc": "2D Convolutional LSTM.", "type": "API"}, {"name": "tf.keras.layers.Convolution1D", "docs": "1D convolution layer (e.g. temporal convolution).\n\n This layer creates a convolution kernel that is convolved\n with the layer input over a single spatial (or temporal) dimension\n to produce a tensor of outputs.\n If `use_bias` is True, a bias vector is created and added to the outputs.\n Finally, if `activation` is not `None`,\n it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide an `input_shape` argument\n (tuple of integers or `None`, e.g.\n `(10, 128)` for sequences of 10 vectors of 128-dimensional vectors,\n or `(None, 128)` for variable-length sequences of 128-dimensional vectors.\n\n Examples:\n\n >>> # The inputs are 128-length vectors with 10 timesteps, and the batch size\n >>> # is 4.\n >>> input_shape = (4, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu',input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 8, 32)\n\n >>> # With extended batch shape [4, 7] (e.g. weather data where batch\n >>> # dimensions correspond to spatial location and the third dimension\n >>> # corresponds to time.)\n >>> input_shape = (4, 7, 10, 128)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv1D(\n ... 32, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 8, 32)\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer,\n specifying the length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer,\n specifying the stride length of the convolution.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"` or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n `\"causal\"` results in causal (dilated) convolutions, e.g. `output[t]`\n does not depend on `input[t+1:]`. Useful when modeling temporal data\n where the model should not violate the temporal order.\n See [WaveNet: A Generative Model for Raw Audio, section\n 2.1](https://arxiv.org/abs/1609.03499).\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n dilation_rate: an integer or tuple/list of a single integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any `strides` value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved\n separately with `filters / groups` filters. The output is the\n concatenation of all the `groups` results along the channel axis.\n Input channels and `filters` must both be divisible by `groups`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3+D tensor with shape: `batch_shape + (steps, input_dim)`\n\n Output shape:\n 3+D tensor with shape: `batch_shape + (new_steps, filters)`\n `steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "1D convolution layer (e.g. temporal convolution).", "type": "API"}, {"name": "tf.keras.layers.Convolution1DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 3)` for data with 128 time steps and 3 channels.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer length of the 1D convolution window.\n strides: An integer specifying the stride of the convolution along the\n time dimension. Specifying a stride value != 1 is incompatible with\n specifying a `dilation_rate` value != 1. Defaults to 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer specifying the amount of padding along\n the time dimension of the output tensor.\n The amount of output padding must be lower than the stride.\n If set to `None` (default), the output shape is inferred.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: an integer, specifying\n the dilation rate to use for dilated convolution.\n Currently, specifying a `dilation_rate` value != 1 is\n incompatible with specifying a stride value != 1.\n Also dilation rate larger than 1 is not currently supported.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, steps, channels)`\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, new_steps, filters)`\n If `output_padding` is specified:\n ```\n new_timesteps = ((timesteps - 1) * strides + kernel_size -\n 2 * padding + output_padding)\n ```\n\n Returns:\n A tensor of rank 3 representing\n `activation(conv1dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep learning](\n https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional Networks](\n https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.Convolution2D", "docs": "2D convolution layer (e.g. spatial convolution over images).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`. You can use `None` when\n a dimension has variable size.\n\n Examples:\n\n >>> # The inputs are 28x28 RGB images with `channels_last` and the batch\n >>> # size is 4.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 2)\n\n >>> # With `dilation_rate` as 2.\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 24, 24, 2)\n\n >>> # With `padding` as \"same\".\n >>> input_shape = (4, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', padding=\"same\", input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 28, 28, 2)\n\n >>> # With extended batch shape [4, 7]:\n >>> input_shape = (4, 7, 28, 28, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv2D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 2)\n\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input. When `padding=\"same\"` and\n `strides=1`, the output has the same size as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be `channels_last`.\n dilation_rate: an integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4+D tensor with shape: `batch_shape + (channels, rows, cols)` if\n `data_format='channels_first'`\n or 4+D tensor with shape: `batch_shape + (rows, cols, channels)` if\n `data_format='channels_last'`.\n\n Output shape:\n 4+D tensor with shape: `batch_shape + (filters, new_rows, new_cols)` if\n `data_format='channels_first'` or 4+D tensor with shape: `batch_shape +\n (new_rows, new_cols, filters)` if `data_format='channels_last'`. `rows`\n and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4+ representing\n `activation(conv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is `\"causal\"`.\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "2D convolution layer (e.g. spatial convolution over images).", "type": "API"}, {"name": "tf.keras.layers.Convolution2DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures\n in `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 2 integers,\n specifying the amount of padding along the height and width\n of the output tensor.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer, specifying the dilation rate for all spatial\n dimensions for dilated convolution. Specifying different dilation rates\n for different dimensions is not supported.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified:\n ```\n new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n ```\n\n Returns:\n A tensor of rank 4 representing\n `activation(conv2dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.Convolution3D", "docs": "3D convolution layer (e.g. spatial convolution over volumes).\n\n This layer creates a convolution kernel that is convolved\n with the layer input to produce a tensor of\n outputs. If `use_bias` is True,\n a bias vector is created and added to the outputs. Finally, if\n `activation` is not `None`, it is applied to the outputs as well.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 1)` for 128x128x128 volumes\n with a single channel,\n in `data_format=\"channels_last\"`.\n\n Examples:\n\n >>> # The inputs are 28x28x28 volumes with a single channel, and the\n >>> # batch size is 4\n >>> input_shape =(4, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[1:])(x)\n >>> print(y.shape)\n (4, 26, 26, 26, 2)\n\n >>> # With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames,\n >>> # with 7 frames per video.\n >>> input_shape = (4, 7, 28, 28, 28, 1)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.Conv3D(\n ... 2, 3, activation='relu', input_shape=input_shape[2:])(x)\n >>> print(y.shape)\n (4, 7, 26, 26, 26, 2)\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number of\n output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the depth,\n height and width of the 3D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 3 integers, specifying the strides of\n the convolution along each spatial dimension. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `batch_shape + (spatial_dim1, spatial_dim2,\n spatial_dim3, channels)` while `channels_first` corresponds to inputs with\n shape `batch_shape + (channels, spatial_dim1, spatial_dim2,\n spatial_dim3)`. It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`. If you never set it, then it\n will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying the\n dilation rate to use for dilated convolution. Can be a single integer to\n specify the same value for all spatial dimensions. Currently, specifying\n any `dilation_rate` value != 1 is incompatible with specifying any stride\n value != 1.\n groups: A positive integer specifying the number of groups in which the\n input is split along the channel axis. Each group is convolved separately\n with `filters / groups` filters. The output is the concatenation of all\n the `groups` results along the channel axis. Input channels and `filters`\n must both be divisible by `groups`.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix (see\n `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 5+D tensor with shape: `batch_shape + (channels, conv_dim1, conv_dim2,\n conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (conv_dim1, conv_dim2, conv_dim3,\n channels)` if data_format='channels_last'.\n\n Output shape:\n 5+D tensor with shape: `batch_shape + (filters, new_conv_dim1,\n new_conv_dim2, new_conv_dim3)` if data_format='channels_first'\n or 5+D tensor with shape: `batch_shape + (new_conv_dim1, new_conv_dim2,\n new_conv_dim3, filters)` if data_format='channels_last'. `new_conv_dim1`,\n `new_conv_dim2` and `new_conv_dim3` values might have changed due to\n padding.\n\n Returns:\n A tensor of rank 5+ representing\n `activation(conv3d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n ", "desc": "3D convolution layer (e.g. spatial convolution over volumes).", "type": "API"}, {"name": "tf.keras.layers.Convolution3DTranspose", "docs": "Transposed convolution layer (sometimes called Deconvolution).\n\n The need for transposed convolutions generally arises\n from the desire to use a transformation going in the opposite direction\n of a normal convolution, i.e., from something that has the shape of the\n output of some convolution to something that has the shape of its input\n while maintaining a connectivity pattern that is compatible with\n said convolution.\n\n When using this layer as the first layer in a model,\n provide the keyword argument `input_shape`\n (tuple of integers or `None`, does not include the sample axis),\n e.g. `input_shape=(128, 128, 128, 3)` for a 128x128x128 volume with 3 channels\n if `data_format=\"channels_last\"`.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 3 integers, specifying the\n depth, height and width of the 3D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 3 integers,\n specifying the strides of the convolution along the depth, height\n and width.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n output_padding: An integer or tuple/list of 3 integers,\n specifying the amount of padding along the depth, height, and\n width.\n Can be a single integer to specify the same value for all\n spatial dimensions.\n The amount of output padding along a given dimension must be\n lower than the stride along that same dimension.\n If set to `None` (default), the output shape is inferred.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, depth, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, depth, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: an integer or tuple/list of 3 integers, specifying\n the dilation rate to use for dilated convolution.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n Currently, specifying any `dilation_rate` value != 1 is\n incompatible with specifying any stride value != 1.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix\n (see `keras.initializers`). Defaults to 'glorot_uniform'.\n bias_initializer: Initializer for the bias vector\n (see `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix\n (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 5D tensor with shape:\n `(batch_size, channels, depth, rows, cols)` if data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, depth, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 5D tensor with shape:\n `(batch_size, filters, new_depth, new_rows, new_cols)` if\n data_format='channels_first'\n or 5D tensor with shape:\n `(batch_size, new_depth, new_rows, new_cols, filters)` if\n data_format='channels_last'.\n `depth` and `rows` and `cols` values might have changed due to padding.\n If `output_padding` is specified::\n ```\n new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] +\n output_padding[0])\n new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] +\n output_padding[1])\n new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] +\n output_padding[2])\n ```\n\n Returns:\n A tensor of rank 5 representing\n `activation(conv3dtranspose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n\n References:\n - [A guide to convolution arithmetic for deep\n learning](https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional\n Networks](https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n ", "desc": "Transposed convolution layer (sometimes called Deconvolution).", "type": "API"}, {"name": "tf.keras.layers.Cropping1D", "docs": "Cropping layer for 1D input (e.g. temporal sequence).\n\n It crops along the time dimension (axis 1).\n\n Examples:\n\n >>> input_shape = (2, 3, 2)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1]\n [ 2 3]\n [ 4 5]]\n [[ 6 7]\n [ 8 9]\n [10 11]]]\n >>> y = tf.keras.layers.Cropping1D(cropping=1)(x)\n >>> print(y)\n tf.Tensor(\n [[[2 3]]\n [[8 9]]], shape=(2, 1, 2), dtype=int64)\n\n Args:\n cropping: Int or tuple of int (length 2)\n How many units should be trimmed off at the beginning and end of\n the cropping dimension (axis 1).\n If a single int is provided, the same value will be used for both.\n\n Input shape:\n 3D tensor with shape `(batch_size, axis_to_crop, features)`\n\n Output shape:\n 3D tensor with shape `(batch_size, cropped_axis, features)`\n ", "desc": "Cropping layer for 1D input (e.g. temporal sequence).", "type": "API"}, {"name": "tf.keras.layers.Cropping2D", "docs": "Cropping layer for 2D input (e.g. picture).\n\n It crops along spatial dimensions, i.e. height and width.\n\n Examples:\n\n >>> input_shape = (2, 28, 28, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.Cropping2D(cropping=((2, 2), (4, 4)))(x)\n >>> print(y.shape)\n (2, 24, 20, 3)\n\n Args:\n cropping: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.\n - If int: the same symmetric cropping\n is applied to height and width.\n - If tuple of 2 ints:\n interpreted as two different\n symmetric cropping values for height and width:\n `(symmetric_height_crop, symmetric_width_crop)`.\n - If tuple of 2 tuples of 2 ints:\n interpreted as\n `((top_crop, bottom_crop), (left_crop, right_crop))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, cropped_rows, cropped_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, cropped_rows, cropped_cols)`\n ", "desc": "Cropping layer for 2D input (e.g. picture).", "type": "API"}, {"name": "tf.keras.layers.Cropping3D", "docs": "Cropping layer for 3D data (e.g. spatial or spatio-temporal).\n\n Examples:\n\n >>> input_shape = (2, 28, 28, 10, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.Cropping3D(cropping=(2, 4, 2))(x)\n >>> print(y.shape)\n (2, 24, 20, 6, 3)\n\n Args:\n cropping: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.\n - If int: the same symmetric cropping\n is applied to depth, height, and width.\n - If tuple of 3 ints: interpreted as two different\n symmetric cropping values for depth, height, and width:\n `(symmetric_dim1_crop, symmetric_dim2_crop, symmetric_dim3_crop)`.\n - If tuple of 3 tuples of 2 ints: interpreted as\n `((left_dim1_crop, right_dim1_crop), (left_dim2_crop,\n right_dim2_crop), (left_dim3_crop, right_dim3_crop))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_axis_to_crop, second_axis_to_crop,\n third_axis_to_crop)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_cropped_axis, second_cropped_axis, third_cropped_axis,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_cropped_axis, second_cropped_axis,\n third_cropped_axis)`\n ", "desc": "Cropping layer for 3D data (e.g. spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.layers.Dense", "docs": "Just your regular densely-connected NN layer.\n\n `Dense` implements the operation:\n `output = activation(dot(input, kernel) + bias)`\n where `activation` is the element-wise activation function\n passed as the `activation` argument, `kernel` is a weights matrix\n created by the layer, and `bias` is a bias vector created by the layer\n (only applicable if `use_bias` is `True`). These are all attributes of\n `Dense`.\n\n Note: If the input to the layer has a rank greater than 2, then `Dense`\n computes the dot product between the `inputs` and the `kernel` along the\n last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`).\n For example, if input has dimensions `(batch_size, d0, d1)`,\n then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates\n along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)`\n (there are `batch_size * d0` such sub-tensors).\n The output in this case will have shape `(batch_size, d0, units)`.\n\n Besides, layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n When a popular kwarg `input_shape` is passed, then keras will create\n an input layer to insert before the current layer. This can be treated\n equivalent to explicitly defining an `InputLayer`.\n\n Example:\n\n >>> # Create a `Sequential` model and add a Dense layer as the first layer.\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.Input(shape=(16,)))\n >>> model.add(tf.keras.layers.Dense(32, activation='relu'))\n >>> # Now the model will take as input arrays of shape (None, 16)\n >>> # and output arrays of shape (None, 32).\n >>> # Note that after the first layer, you don't need to specify\n >>> # the size of the input anymore:\n >>> model.add(tf.keras.layers.Dense(32))\n >>> model.output_shape\n (None, 32)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to\n the `kernel` weights matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\").\n kernel_constraint: Constraint function applied to\n the `kernel` weights matrix.\n bias_constraint: Constraint function applied to the bias vector.\n\n Input shape:\n N-D tensor with shape: `(batch_size, ..., input_dim)`.\n The most common situation would be\n a 2D input with shape `(batch_size, input_dim)`.\n\n Output shape:\n N-D tensor with shape: `(batch_size, ..., units)`.\n For instance, for a 2D input with shape `(batch_size, input_dim)`,\n the output would have shape `(batch_size, units)`.\n ", "desc": "Just your regular densely-connected NN layer.", "type": "API"}, {"name": "tf.keras.layers.DenseFeatures", "docs": "A layer that produces a dense `Tensor` based on given `feature_columns`.\n\n Generally a single example in training data is described with FeatureColumns.\n At the first layer of the model, this column oriented data should be converted\n to a single `Tensor`.\n\n This layer can be called multiple times with different features.\n\n This is the V2 version of this layer that uses name_scopes to create\n variables instead of variable_scopes. But this approach currently lacks\n support for partitioned variables. In that case, use the V1 version instead.\n\n Example:\n\n ```python\n price = tf.feature_column.numeric_column('price')\n keywords_embedded = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_hash_bucket(\"keywords\", 10000),\n dimensions=16)\n columns = [price, keywords_embedded, ...]\n feature_layer = tf.keras.layers.DenseFeatures(columns)\n\n features = tf.io.parse_example(\n ..., features=tf.feature_column.make_parse_example_spec(columns))\n dense_tensor = feature_layer(features)\n for units in [128, 64, 32]:\n dense_tensor = tf.keras.layers.Dense(units, activation='relu')(dense_tensor)\n prediction = tf.keras.layers.Dense(1)(dense_tensor)\n ```\n ", "desc": "A layer that produces a dense `Tensor` based on given `feature_columns`.", "type": "API"}, {"name": "tf.keras.layers.DepthwiseConv2D", "docs": "Depthwise 2D convolution.\n\n Depthwise convolution is a type of convolution in which each input channel is\n convolved with a different kernel (called a depthwise kernel). You\n can understand depthwise convolution as the first step in a depthwise\n separable convolution.\n\n It is implemented via the following steps:\n\n - Split the input into individual channels.\n - Convolve each channel with an individual depthwise kernel with\n `depth_multiplier` output channels.\n - Concatenate the convolved outputs along the channels axis.\n\n Unlike a regular 2D convolution, depthwise convolution does not mix\n information across different input channels.\n\n The `depth_multiplier` argument determines how many filter are applied to one\n input channel. As such, it controls the amount of output channels that are\n generated per input channel in the depthwise step.\n\n Args:\n kernel_size: An integer or tuple/list of 2 integers, specifying the height\n and width of the 2D convolution window. Can be a single integer to specify\n the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the height and width. Can be a single integer to\n specify the same value for all spatial dimensions. Specifying any stride\n value != 1 is incompatible with specifying any `dilation_rate` value != 1.\n padding: one of `'valid'` or `'same'` (case-insensitive). `\"valid\"` means no\n padding. `\"same\"` results in padding with zeros evenly to the left/right\n or up/down of the input such that output has the same height/width\n dimension as the input.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs. `channels_last` corresponds\n to inputs with shape `(batch_size, height, width, channels)` while\n `channels_first` corresponds to inputs with shape `(batch_size, channels,\n height, width)`. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be 'channels_last'.\n dilation_rate: An integer or tuple/list of 2 integers, specifying the\n dilation rate to use for dilated convolution. Currently, specifying any\n `dilation_rate` value != 1 is incompatible with specifying any `strides`\n value != 1.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: Initializer for the depthwise kernel matrix (see\n `keras.initializers`). If None, the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). If None, the default initializer ('zeros') will be\n used.\n depthwise_regularizer: Regularizer function applied to the depthwise kernel\n matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its 'activation') (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to the depthwise kernel\n matrix (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector (see\n `keras.constraints`).\n\n Input shape:\n 4D tensor with shape: `[batch_size, channels, rows, cols]` if\n data_format='channels_first'\n or 4D tensor with shape: `[batch_size, rows, cols, channels]` if\n data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape: `[batch_size, channels * depth_multiplier, new_rows,\n new_cols]` if `data_format='channels_first'`\n or 4D tensor with shape: `[batch_size,\n new_rows, new_cols, channels * depth_multiplier]` if\n `data_format='channels_last'`. `rows` and `cols` values might have changed\n due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(depthwiseconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ValueError: when both `strides` > 1 and `dilation_rate` > 1.\n ", "desc": "Depthwise 2D convolution.", "type": "API"}, {"name": "tf.keras.layers.deserialize", "docs": "Instantiates a layer from a config dictionary.\n\n Args:\n config: dict of the form {'class_name': str, 'config': dict}\n custom_objects: dict mapping class names (or function names) of custom\n (non-Keras) objects to class/functions\n\n Returns:\n Layer instance (may be Model, Sequential, Network, Layer...)\n\n Example:\n\n ```python\n # Configuration of Dense(32, activation='relu')\n config = {\n 'class_name': 'Dense',\n 'config': {\n 'activation': 'relu',\n 'activity_regularizer': None,\n 'bias_constraint': None,\n 'bias_initializer': {'class_name': 'Zeros', 'config': {}},\n 'bias_regularizer': None,\n 'dtype': 'float32',\n 'kernel_constraint': None,\n 'kernel_initializer': {'class_name': 'GlorotUniform',\n 'config': {'seed': None}},\n 'kernel_regularizer': None,\n 'name': 'dense',\n 'trainable': True,\n 'units': 32,\n 'use_bias': True\n }\n }\n dense_layer = tf.keras.layers.deserialize(config)\n ```\n ", "desc": "Instantiates a layer from a config dictionary.", "type": "API"}, {"name": "tf.keras.layers.Dot", "docs": "Layer that computes a dot product between samples in two tensors.\n\n E.g. if applied to a list of two tensors `a` and `b` of shape\n `(batch_size, n)`, the output will be a tensor of shape `(batch_size, 1)`\n where each entry `i` will be the dot product between\n `a[i]` and `b[i]`.\n\n >>> x = np.arange(10).reshape(1, 5, 2)\n >>> print(x)\n [[[0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]]]\n >>> y = np.arange(10, 20).reshape(1, 2, 5)\n >>> print(y)\n [[[10 11 12 13 14]\n [15 16 17 18 19]]]\n >>> tf.keras.layers.Dot(axes=(1, 2))([x, y])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> dotted = tf.keras.layers.Dot(axes=1)([x1, x2])\n >>> dotted.shape\n TensorShape([5, 1])\n\n\n ", "desc": "Layer that computes a dot product between samples in two tensors.", "type": "API"}, {"name": "tf.keras.layers.Dropout", "docs": "Applies Dropout to the input.\n\n The Dropout layer randomly sets input units to 0 with a frequency of `rate`\n at each step during training time, which helps prevent overfitting.\n Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over\n all inputs is unchanged.\n\n Note that the Dropout layer only applies when `training` is set to True\n such that no values are dropped during inference. When using `model.fit`,\n `training` will be appropriately set to True automatically, and in other\n contexts, you can set the kwarg explicitly to True when calling the layer.\n\n (This is in contrast to setting `trainable=False` for a Dropout layer.\n `trainable` does not affect the layer's behavior, as Dropout does\n not have any variables/weights that can be frozen during training.)\n\n >>> tf.random.set_seed(0)\n >>> layer = tf.keras.layers.Dropout(.2, input_shape=(2,))\n >>> data = np.arange(10).reshape(5, 2).astype(np.float32)\n >>> print(data)\n [[0. 1.]\n [2. 3.]\n [4. 5.]\n [6. 7.]\n [8. 9.]]\n >>> outputs = layer(data, training=True)\n >>> print(outputs)\n tf.Tensor(\n [[ 0. 1.25]\n [ 2.5 3.75]\n [ 5. 6.25]\n [ 7.5 8.75]\n [10. 0. ]], shape=(5, 2), dtype=float32)\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n noise_shape: 1D integer tensor representing the shape of the\n binary dropout mask that will be multiplied with the input.\n For instance, if your inputs have shape\n `(batch_size, timesteps, features)` and\n you want the dropout mask to be the same for all timesteps,\n you can use `noise_shape=(batch_size, 1, features)`.\n seed: A Python integer to use as random seed.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n ", "desc": "Applies Dropout to the input.", "type": "API"}, {"name": "tf.keras.layers.ELU", "docs": "Exponential Linear Unit.\n\n It follows:\n\n ```\n f(x) = alpha * (exp(x) - 1.) for x < 0\n f(x) = x for x >= 0\n ```\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha: Scale for the negative factor.\n ", "desc": "Exponential Linear Unit.", "type": "API"}, {"name": "tf.keras.layers.Embedding", "docs": "Turns positive integers (indexes) into dense vectors of fixed size.\n\n e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]`\n\n This layer can only be used on positive integer inputs of a fixed range. The\n `tf.keras.layers.TextVectorization`, `tf.keras.layers.StringLookup`,\n and `tf.keras.layers.IntegerLookup` preprocessing layers can help prepare\n inputs for an `Embedding` layer.\n\n This layer accepts `tf.Tensor` and `tf.RaggedTensor` inputs. It cannot be\n called with `tf.SparseTensor` input.\n\n Example:\n\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Embedding(1000, 64, input_length=10))\n >>> # The model will take as input an integer matrix of size (batch,\n >>> # input_length), and the largest integer (i.e. word index) in the input\n >>> # should be no larger than 999 (vocabulary size).\n >>> # Now model.output_shape is (None, 10, 64), where `None` is the batch\n >>> # dimension.\n >>> input_array = np.random.randint(1000, size=(32, 10))\n >>> model.compile('rmsprop', 'mse')\n >>> output_array = model.predict(input_array)\n >>> print(output_array.shape)\n (32, 10, 64)\n\n Args:\n input_dim: Integer. Size of the vocabulary,\n i.e. maximum integer index + 1.\n output_dim: Integer. Dimension of the dense embedding.\n embeddings_initializer: Initializer for the `embeddings`\n matrix (see `keras.initializers`).\n embeddings_regularizer: Regularizer function applied to\n the `embeddings` matrix (see `keras.regularizers`).\n embeddings_constraint: Constraint function applied to\n the `embeddings` matrix (see `keras.constraints`).\n mask_zero: Boolean, whether or not the input value 0 is a special \"padding\"\n value that should be masked out.\n This is useful when using recurrent layers\n which may take variable length input.\n If this is `True`, then all subsequent layers\n in the model need to support masking or an exception will be raised.\n If mask_zero is set to True, as a consequence, index 0 cannot be\n used in the vocabulary (input_dim should equal size of\n vocabulary + 1).\n input_length: Length of input sequences, when it is constant.\n This argument is required if you are going to connect\n `Flatten` then `Dense` layers upstream\n (without it, the shape of the dense outputs cannot be computed).\n\n Input shape:\n 2D tensor with shape: `(batch_size, input_length)`.\n\n Output shape:\n 3D tensor with shape: `(batch_size, input_length, output_dim)`.\n\n **Note on variable placement:**\n By default, if a GPU is available, the embedding matrix will be placed on\n the GPU. This achieves the best performance, but it might cause issues:\n\n - You may be using an optimizer that does not support sparse GPU kernels.\n In this case you will see an error upon training your model.\n - Your embedding matrix may be too large to fit on your GPU. In this case\n you will see an Out Of Memory (OOM) error.\n\n In such cases, you should place the embedding matrix on the CPU memory.\n You can do so with a device scope, as such:\n\n ```python\n with tf.device('cpu:0'):\n embedding_layer = Embedding(...)\n embedding_layer.build()\n ```\n\n The pre-built `embedding_layer` instance can then be added to a `Sequential`\n model (e.g. `model.add(embedding_layer)`), called in a Functional model\n (e.g. `x = embedding_layer(x)`), or used in a subclassed model.\n ", "desc": "Turns positive integers (indexes) into dense vectors of fixed size.", "type": "API"}, {"name": "tf.keras.layers.experimental", "docs": "Public API for tf.keras.layers.experimental namespace.\n", "desc": "Public API for tf.keras.layers.experimental namespace.", "type": "API"}, {"name": "tf.keras.layers.experimental.EinsumDense", "docs": "A layer that uses tf.einsum as the backing computation.\n\n This layer can perform einsum calculations of arbitrary dimensionality.\n\n Args:\n equation: An equation describing the einsum to perform. This equation must\n be a valid einsum string of the form `ab,bc->ac`, `...ab,bc->...ac`, or\n `ab...,bc->ac...` where 'ab', 'bc', and 'ac' can be any valid einsum axis\n expression sequence.\n output_shape: The expected shape of the output tensor (excluding the batch\n dimension and any dimensions represented by ellipses). You can specify\n None for any dimension that is unknown or can be inferred from the input\n shape.\n activation: Activation function to use. If you don't specify anything, no\n activation is applied (that is, a \"linear\" activation: `a(x) = x`).\n bias_axes: A string containing the output dimension(s) to apply a bias to.\n Each character in the `bias_axes` string should correspond to a character\n in the output portion of the `equation` string.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\")..\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix.\n bias_constraint: Constraint function applied to the bias vector.\n\n Examples:\n\n **Biased dense layer with einsums**\n\n This example shows how to instantiate a standard Keras dense layer using\n einsum operations. This example is equivalent to\n `tf.keras.layers.Dense(64, use_bias=True)`.\n\n >>> layer = EinsumDense(\"ab,bc->ac\", output_shape=64, bias_axes=\"c\")\n >>> input_tensor = tf.keras.Input(shape=[32])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 64) dtype=...>\n\n **Applying a dense layer to a sequence**\n\n This example shows how to instantiate a layer that applies the same dense\n operation to every element in a sequence. Here, the 'output_shape' has two\n values (since there are two non-batch dimensions in the output); the first\n dimension in the output_shape is `None`, because the sequence dimension `b`\n has an unknown shape.\n\n >>> layer = EinsumDense(\"abc,cd->abd\",\n ... output_shape=(None, 64),\n ... bias_axes=\"d\")\n >>> input_tensor = tf.keras.Input(shape=[32, 128])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 32, 64) dtype=...>\n\n **Applying a dense layer to a sequence using ellipses**\n\n This example shows how to instantiate a layer that applies the same dense\n operation to every element in a sequence, but uses the ellipsis notation\n instead of specifying the batch and sequence dimensions.\n\n Because we are using ellipsis notation and have specified only one axis, the\n output_shape arg is a single value. When instantiated in this way, the layer\n can handle any number of sequence dimensions - including the case where no\n sequence dimension exists.\n\n >>> layer = EinsumDense(\"...x,xy->...y\", output_shape=64, bias_axes=\"y\")\n >>> input_tensor = tf.keras.Input(shape=[32, 128])\n >>> output_tensor = layer(input_tensor)\n >>> output_tensor\n <... shape=(None, 32, 64) dtype=...>\n ", "desc": "A layer that uses tf.einsum as the backing computation.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing", "docs": "Public API for tf.keras.layers.experimental.preprocessing namespace.\n", "desc": "Public API for tf.keras.layers.experimental.preprocessing namespace.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.CategoryEncoding", "docs": "A preprocessing layer which encodes integer features.\n\n This layer provides options for condensing data into a categorical encoding\n when the total number of tokens are known in advance. It accepts integer\n values as inputs, and it outputs a dense or sparse representation of those\n inputs. For integer inputs where the total number of tokens is not known, use\n `tf.keras.layers.IntegerLookup` instead.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Examples:\n\n **One-hot encoding data**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"one_hot\")\n >>> layer([3, 2, 0, 1])\n \n\n **Multi-hot encoding data**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"multi_hot\")\n >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]])\n \n\n **Using weighted inputs in `\"count\"` mode**\n\n >>> layer = tf.keras.layers.CategoryEncoding(\n ... num_tokens=4, output_mode=\"count\")\n >>> count_weights = np.array([[.1, .2], [.1, .1], [.2, .3], [.4, .2]])\n >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]], count_weights=count_weights)\n \n\n Args:\n num_tokens: The total number of tokens the layer should support. All inputs\n to the layer must integers in the range `0 <= value < num_tokens`, or an\n error will be thrown.\n output_mode: Specification for the output of the layer.\n Defaults to `\"multi_hot\"`. Values can be `\"one_hot\"`, `\"multi_hot\"` or\n `\"count\"`, configuring the layer as follows:\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array of `num_tokens` size, containing a 1 at the element index. If\n the last dimension is size 1, will encode on that dimension. If the\n last dimension is not size 1, will append a new dimension for the\n encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n of `num_tokens` size, containing a 1 for each vocabulary term present\n in the sample. Treats the last dimension as the sample dimension, if\n input shape is `(..., sample_length)`, output shape will be\n `(..., num_tokens)`.\n - `\"count\"`: Like `\"multi_hot\"`, but the int array contains a count of\n the number of times the token at that index appeared in the sample.\n For all output modes, currently only output up to rank 2 is supported.\n sparse: Boolean. If true, returns a `SparseTensor` instead of a dense\n `Tensor`. Defaults to `False`.\n\n Call arguments:\n inputs: A 1D or 2D tensor of integer inputs.\n count_weights: A tensor in the same shape as `inputs` indicating the\n weight for each sample value when summing up in `count` mode. Not used in\n `\"multi_hot\"` or `\"one_hot\"` modes.\n ", "desc": "A preprocessing layer which encodes integer features.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.CenterCrop", "docs": "A preprocessing layer which crops images.\n\n This layers crops the central portion of the images to a target size. If an\n image is smaller than the target size, it will be resized and cropped so as to\n return the largest possible window in the image that matches the target aspect\n ratio.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., target_height, target_width, channels)`.\n\n If the input height/width is even and the target height/width is odd (or\n inversely), the input image is left-padded by 1 pixel.\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n ", "desc": "A preprocessing layer which crops images.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.Discretization", "docs": "A preprocessing layer which buckets continuous features by ranges.\n\n This layer will place each element of its input data into one of several\n contiguous ranges and output an integer index indicating which range each\n element was placed in.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n Any `tf.Tensor` or `tf.RaggedTensor` of dimension 2 or higher.\n\n Output shape:\n Same as input shape.\n\n Arguments:\n bin_boundaries: A list of bin boundaries. The leftmost and rightmost bins\n will always extend to `-inf` and `inf`, so `bin_boundaries=[0., 1., 2.]`\n generates bins `(-inf, 0.)`, `[0., 1.)`, `[1., 2.)`, and `[2., +inf)`. If\n this option is set, `adapt()` should not be called.\n num_bins: The integer number of bins to compute. If this option is set,\n `adapt()` should be called to learn the bin boundaries.\n epsilon: Error tolerance, typically a small fraction close to zero (e.g.\n 0.01). Higher values of epsilon increase the quantile approximation, and\n hence result in more unequal buckets, but could improve performance\n and resource consumption.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, or `\"count\"`\n configuring the layer as follows:\n - `\"int\"`: Return the discritized bin indices directly.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as `num_bins`, containing a 1 at the input's bin\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as `num_bins`, containing a 1 for each bin index\n index present in the sample. Treats the last dimension as the sample\n dimension, if input shape is `(..., sample_length)`, output shape will\n be `(..., num_tokens)`.\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the bin index appeared in the sample.\n sparse: Boolean. Only applicable to `\"one_hot\"`, `\"multi_hot\"`,\n and `\"count\"` output modes. If True, returns a `SparseTensor` instead of\n a dense `Tensor`. Defaults to False.\n\n Examples:\n\n Bucketize float values based on provided buckets.\n >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])\n >>> layer = tf.keras.layers.Discretization(bin_boundaries=[0., 1., 2.])\n >>> layer(input)\n \n\n Bucketize float values based on a number of buckets to compute.\n >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])\n >>> layer = tf.keras.layers.Discretization(num_bins=4, epsilon=0.01)\n >>> layer.adapt(input)\n >>> layer(input)\n \n ", "desc": "A preprocessing layer which buckets continuous features by ranges.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.Hashing", "docs": "A preprocessing layer which hashes and bins categorical features.\n\n This layer transforms categorical inputs to hashed output. It element-wise\n converts a ints or strings to ints in a fixed range. The stable hash\n function uses `tensorflow::ops::Fingerprint` to produce the same output\n consistently across all platforms.\n\n This layer uses [FarmHash64](https://github.com/google/farmhash) by default,\n which provides a consistent hashed output across different platforms and is\n stable across invocations, regardless of device and context, by mixing the\n input bits thoroughly.\n\n If you want to obfuscate the hashed output, you can also pass a random `salt`\n argument in the constructor. In that case, the layer will use the\n [SipHash64](https://github.com/google/highwayhash) hash function, with\n the `salt` value serving as additional input to the hash function.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n **Example (FarmHash64)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3)\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n **Example (FarmHash64) with a mask value**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, mask_value='')\n >>> inp = [['A'], ['B'], [''], ['C'], ['D']]\n >>> layer(inp)\n \n\n **Example (SipHash64)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, salt=[133, 137])\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n **Example (Siphash64 with a single integer, same as `salt=[133, 133]`)**\n\n >>> layer = tf.keras.layers.Hashing(num_bins=3, salt=133)\n >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]\n >>> layer(inp)\n \n\n Args:\n num_bins: Number of hash bins. Note that this includes the `mask_value` bin,\n so the effective number of bins is `(num_bins - 1)` if `mask_value` is\n set.\n mask_value: A value that represents masked inputs, which are mapped to\n index 0. Defaults to None, meaning no mask term will be added and the\n hashing will start at index 0.\n salt: A single unsigned integer or None.\n If passed, the hash function used will be SipHash64, with these values\n used as an additional input (known as a \"salt\" in cryptography).\n These should be non-zero. Defaults to `None` (in that\n case, the FarmHash64 hash function is used). It also supports\n tuple/list of 2 unsigned integer numbers, see reference paper for details.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, or `\"count\"`\n configuring the layer as follows:\n - `\"int\"`: Return the integer bin indices directly.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as `num_bins`, containing a 1 at the input's bin\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as `num_bins`, containing a 1 for each bin index\n index present in the sample. Treats the last dimension as the sample\n dimension, if input shape is `(..., sample_length)`, output shape will\n be `(..., num_tokens)`.\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the bin index appeared in the sample.\n sparse: Boolean. Only applicable to `\"one_hot\"`, `\"multi_hot\"`,\n and `\"count\"` output modes. If True, returns a `SparseTensor` instead of\n a dense `Tensor`. Defaults to False.\n **kwargs: Keyword arguments to construct a layer.\n\n Input shape:\n A single or list of string, int32 or int64 `Tensor`,\n `SparseTensor` or `RaggedTensor` of shape `(batch_size, ...,)`\n\n Output shape:\n An int64 `Tensor`, `SparseTensor` or `RaggedTensor` of shape\n `(batch_size, ...)`. If any input is `RaggedTensor` then output is\n `RaggedTensor`, otherwise if any input is `SparseTensor` then output is\n `SparseTensor`, otherwise the output is `Tensor`.\n\n Reference:\n - [SipHash with salt](https://www.131002.net/siphash/siphash.pdf)\n\n ", "desc": "A preprocessing layer which hashes and bins categorical features.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.IntegerLookup", "docs": "A preprocessing layer which maps integer features to contiguous ranges.\n\n This layer maps a set of arbitrary integer input tokens into indexed\n integer output via a table-based vocabulary lookup. The layer's output indices\n will be contiguously arranged up to the maximum vocab size, even if the input\n tokens are non-continguous or unbounded. The layer supports multiple options\n for encoding the output via `output_mode`, and has optional support for\n out-of-vocabulary (OOV) tokens and masking.\n\n The vocabulary for the layer must be either supplied on construction or\n learned via `adapt()`. During `adapt()`, the layer will analyze a data set,\n determine the frequency of individual integer tokens, and create a vocabulary\n from them. If the vocabulary is capped in size, the most frequent tokens will\n be used to create the vocabulary and all others will be treated as OOV.\n\n There are two possible output modes for the layer.\n When `output_mode` is `\"int\"`,\n input integers are converted to their index in the vocabulary (an integer).\n When `output_mode` is `\"multi_hot\"`, `\"count\"`, or `\"tf_idf\"`, input integers\n are encoded into an array where each dimension corresponds to an element in\n the vocabulary.\n\n The vocabulary can optionally contain a mask token as well as an OOV token\n (which can optionally occupy multiple indices in the vocabulary, as set\n by `num_oov_indices`).\n The position of these tokens in the vocabulary is fixed. When `output_mode` is\n `\"int\"`, the vocabulary will begin with the mask token at index 0, followed by\n OOV indices, followed by the rest of the vocabulary. When `output_mode` is\n `\"multi_hot\"`, `\"count\"`, or `\"tf_idf\"` the vocabulary will begin with OOV\n indices and instances of the mask token will be dropped.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n max_tokens: Maximum size of the vocabulary for this layer. This should only\n be specified when adapting the vocabulary or when setting\n `pad_to_max_tokens=True`. If None, there is no cap on the size of the\n vocabulary. Note that this size includes the OOV and mask tokens. Defaults\n to None.\n num_oov_indices: The number of out-of-vocabulary tokens to use. If this\n value is more than 1, OOV inputs are modulated to determine their OOV\n value. If this value is 0, OOV inputs will cause an error when calling the\n layer. Defaults to 1.\n mask_token: An integer token that represents masked inputs. When\n `output_mode` is `\"int\"`, the token is included in vocabulary and mapped\n to index 0. In other output modes, the token will not appear in the\n vocabulary and instances of the mask token in the input will be dropped.\n If set to None, no mask term will be added. Defaults to None.\n oov_token: Only used when `invert` is True. The token to return for OOV\n indices. Defaults to -1.\n vocabulary: Optional. Either an array of integers or a string path to a text\n file. If passing an array, can pass a tuple, list, 1D numpy array, or 1D\n tensor containing the integer vocbulary terms. If passing a file path, the\n file should contain one line per term in the vocabulary. If this argument\n is set, there is no need to `adapt()` the layer.\n vocabulary_dtype: The dtype of the vocabulary terms, for example\n `\"int64\"` or `\"int32\"`. Defaults to `\"int64\"`.\n idf_weights: Only valid when `output_mode` is `\"tf_idf\"`. A tuple, list, 1D\n numpy array, or 1D tensor or the same length as the vocabulary, containing\n the floating point inverse document frequency weights, which will be\n multiplied by per sample term counts for the final `tf_idf` weight. If the\n `vocabulary` argument is set, and `output_mode` is `\"tf_idf\"`, this\n argument must be supplied.\n invert: Only valid when `output_mode` is `\"int\"`. If True, this layer will\n map indices to vocabulary items instead of mapping vocabulary items to\n indices. Default to False.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, `\"count\"`, or\n `\"tf_idf\"` configuring the layer as follows:\n - `\"int\"`: Return the vocabulary indices of the input tokens.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as the vocabulary, containing a 1 at the element\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as the vocabulary, containing a 1 for each vocabulary\n term present in the sample. Treats the last dimension as the sample\n dimension, if input shape is (..., sample_length), output shape will\n be (..., num_tokens).\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the token at that index appeared in the sample.\n - `\"tf_idf\"`: As `\"multi_hot\"`, but the TF-IDF algorithm is applied to\n find the value in each token slot.\n For `\"int\"` output, any shape of input and output is supported. For all\n other output modes, currently only output up to rank 2 is supported.\n pad_to_max_tokens: Only applicable when `output_mode` is `\"multi_hot\"`,\n `\"count\"`, or `\"tf_idf\"`. If True, the output will have its feature axis\n padded to `max_tokens` even if the number of unique tokens in the\n vocabulary is less than max_tokens, resulting in a tensor of shape\n [batch_size, max_tokens] regardless of vocabulary size. Defaults to False.\n sparse: Boolean. Only applicable when `output_mode` is `\"multi_hot\"`,\n `\"count\"`, or `\"tf_idf\"`. If True, returns a `SparseTensor` instead of a\n dense `Tensor`. Defaults to False.\n\n Examples:\n\n **Creating a lookup layer with a known vocabulary**\n\n This example creates a lookup layer with a pre-existing vocabulary.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(vocabulary=vocab)\n >>> layer(data)\n \n\n **Creating a lookup layer with an adapted vocabulary**\n\n This example creates a lookup layer and generates the vocabulary by analyzing\n the dataset.\n\n >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]])\n >>> layer = tf.keras.layers.IntegerLookup()\n >>> layer.adapt(data)\n >>> layer.get_vocabulary()\n [-1, 42, 1138, 1000, 36, 12]\n\n Note that the OOV token -1 have been added to the vocabulary. The remaining\n tokens are sorted by frequency (42, which has 2 occurrences, is first) then\n by inverse sort order.\n\n >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]])\n >>> layer = tf.keras.layers.IntegerLookup()\n >>> layer.adapt(data)\n >>> layer(data)\n \n\n\n **Lookups with multiple OOV indices**\n\n This example demonstrates how to use a lookup layer with multiple OOV indices.\n When a layer is created with more than one OOV index, any OOV tokens are\n hashed into the number of OOV buckets, distributing OOV tokens in a\n deterministic fashion across the set.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[12, 1138, 42], [37, 1000, 36]])\n >>> layer = tf.keras.layers.IntegerLookup(vocabulary=vocab, num_oov_indices=2)\n >>> layer(data)\n \n\n Note that the output for OOV token 37 is 1, while the output for OOV token\n 1000 is 0. The in-vocab terms have their output index increased by 1 from\n earlier examples (12 maps to 2, etc) in order to make space for the extra OOV\n token.\n\n **One-hot output**\n\n Configure the layer with `output_mode='one_hot'`. Note that the first\n `num_oov_indices` dimensions in the ont_hot encoding represent OOV values.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([12, 36, 1138, 42, 7]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(\n ... vocabulary=vocab, output_mode='one_hot')\n >>> layer(data)\n \n\n **Multi-hot output**\n\n Configure the layer with `output_mode='multi_hot'`. Note that the first\n `num_oov_indices` dimensions in the multi_hot encoding represent OOV tokens\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(\n ... vocabulary=vocab, output_mode='multi_hot')\n >>> layer(data)\n \n\n **Token count output**\n\n Configure the layer with `output_mode='count'`. As with multi_hot output, the\n first `num_oov_indices` dimensions in the output represent OOV tokens.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(\n ... vocabulary=vocab, output_mode='count')\n >>> layer(data)\n \n\n **TF-IDF output**\n\n Configure the layer with `output_mode='tf_idf'`. As with multi_hot output, the\n first `num_oov_indices` dimensions in the output represent OOV tokens.\n\n Each token bin will output `token_count * idf_weight`, where the idf weights\n are the inverse document frequency weights per token. These should be provided\n along with the vocabulary. Note that the `idf_weight` for OOV tokens will\n default to the average of all idf weights passed in.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> idf_weights = [0.25, 0.75, 0.6, 0.4]\n >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(\n ... output_mode='tf_idf', vocabulary=vocab, idf_weights=idf_weights)\n >>> layer(data)\n \n\n To specify the idf weights for oov tokens, you will need to pass the entire\n vocabularly including the leading oov token.\n\n >>> vocab = [-1, 12, 36, 1138, 42]\n >>> idf_weights = [0.9, 0.25, 0.75, 0.6, 0.4]\n >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens\n >>> layer = tf.keras.layers.IntegerLookup(\n ... output_mode='tf_idf', vocabulary=vocab, idf_weights=idf_weights)\n >>> layer(data)\n \n\n When adapting the layer in tf_idf mode, each input sample will be considered a\n document, and idf weight per token will be calculated as\n `log(1 + num_documents / (1 + token_document_count))`.\n\n **Inverse lookup**\n\n This example demonstrates how to map indices to tokens using this layer. (You\n can also use `adapt()` with `inverse=True`, but for simplicity we'll pass the\n vocab in this example.)\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[1, 3, 4], [4, 0, 2]])\n >>> layer = tf.keras.layers.IntegerLookup(vocabulary=vocab, invert=True)\n >>> layer(data)\n \n\n Note that the first index correspond to the oov token by default.\n\n\n **Forward and inverse lookup pairs**\n\n This example demonstrates how to use the vocabulary of a standard lookup\n layer to create an inverse lookup layer.\n\n >>> vocab = [12, 36, 1138, 42]\n >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]])\n >>> layer = tf.keras.layers.IntegerLookup(vocabulary=vocab)\n >>> i_layer = tf.keras.layers.IntegerLookup(\n ... vocabulary=layer.get_vocabulary(), invert=True)\n >>> int_data = layer(data)\n >>> i_layer(int_data)\n \n\n In this example, the input token 1000 resulted in an output of -1, since\n 1000 was not in the vocabulary - it got represented as an OOV, and all OOV\n tokens are returned as -1 in the inverse layer. Also, note that for the\n inverse to work, you must have already set the forward layer vocabulary\n either directly or via `adapt()` before calling `get_vocabulary()`.\n ", "desc": "A preprocessing layer which maps integer features to contiguous ranges.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.Normalization", "docs": "A preprocessing layer which normalizes continuous features.\n\n This layer will shift and scale inputs into a distribution centered around\n 0 with standard deviation 1. It accomplishes this by precomputing the mean and\n variance of the data, and calling `(input - mean) / sqrt(var)` at runtime.\n\n The mean and variance values for the layer must be either supplied on\n construction or learned via `adapt()`. `adapt()` will compute the mean and\n variance of the data and store them as the layer's weights. `adapt()` should\n be called before `fit()`, `evaluate()`, or `predict()`.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n axis: Integer, tuple of integers, or None. The axis or axes that should\n have a separate mean and variance for each index in the shape. For\n example, if shape is `(None, 5)` and `axis=1`, the layer will track 5\n separate mean and variance values for the last axis. If `axis` is set to\n `None`, the layer will normalize all elements in the input by a scalar\n mean and variance. Defaults to -1, where the last axis of the input is\n assumed to be a feature dimension and is normalized per index. Note that\n in the specific case of batched scalar inputs where the only axis is the\n batch axis, the default will normalize each index in the batch\n separately. In this case, consider passing `axis=None`.\n mean: The mean value(s) to use during normalization. The passed value(s)\n will be broadcast to the shape of the kept axes above; if the value(s)\n cannot be broadcast, an error will be raised when this layer's `build()`\n method is called.\n variance: The variance value(s) to use during normalization. The passed\n value(s) will be broadcast to the shape of the kept axes above; if the\n value(s) cannot be broadcast, an error will be raised when this layer's\n `build()` method is called.\n\n Examples:\n\n Calculate a global mean and variance by analyzing the dataset in `adapt()`.\n\n >>> adapt_data = np.array([1., 2., 3., 4., 5.], dtype='float32')\n >>> input_data = np.array([1., 2., 3.], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(axis=None)\n >>> layer.adapt(adapt_data)\n >>> layer(input_data)\n \n\n Calculate a mean and variance for each index on the last axis.\n\n >>> adapt_data = np.array([[0., 7., 4.],\n ... [2., 9., 6.],\n ... [0., 7., 4.],\n ... [2., 9., 6.]], dtype='float32')\n >>> input_data = np.array([[0., 7., 4.]], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(axis=-1)\n >>> layer.adapt(adapt_data)\n >>> layer(input_data)\n \n\n Pass the mean and variance directly.\n\n >>> input_data = np.array([[1.], [2.], [3.]], dtype='float32')\n >>> layer = tf.keras.layers.Normalization(mean=3., variance=2.)\n >>> layer(input_data)\n \n ", "desc": "A preprocessing layer which normalizes continuous features.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.PreprocessingLayer", "docs": "Base class for Preprocessing Layers.\n\n **Don't use this class directly: it's an abstract base class!** You may\n be looking for one of the many built-in\n [preprocessing layers](https://keras.io/guides/preprocessing_layers/)\n instead.\n\n Preprocessing layers are layers whose state gets computed before model\n training starts. They do not get updated during training.\n Most preprocessing layers implement an `adapt()` method for state computation.\n\n The `PreprocessingLayer` class is the base class you would subclass to\n implement your own preprocessing layers.\n ", "desc": "Base class for Preprocessing Layers.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomContrast", "docs": "A preprocessing layer which randomly adjusts contrast during training.\n\n This layer will randomly adjust the contrast of an image or images by a random\n factor. Contrast is adjusted independently for each channel of each image\n during training.\n\n For each channel, this layer computes the mean of the image pixels in the\n channel and then adjusts each component `x` of each pixel to\n `(x - mean) * contrast_factor + mean`.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n in integer or floating point dtype. By default, the layer will output floats.\n The output value will be clipped to the range `[0, 255]`, the valid\n range of RGB colors.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Arguments:\n factor: a positive float represented as fraction of value, or a tuple of\n size 2 representing lower and upper bound. When represented as a single\n float, lower = upper. The contrast factor will be randomly picked between\n `[1.0 - lower, 1.0 + upper]`. For any pixel x in the channel, the output\n will be `(x - mean) * factor + mean` where `mean` is the mean value of the\n channel.\n seed: Integer. Used to create a random seed.\n ", "desc": "A preprocessing layer which randomly adjusts contrast during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomCrop", "docs": "A preprocessing layer which randomly crops images during training.\n\n During training, this layer will randomly choose a location to crop images\n down to a target size. The layer will crop all the images in the same batch to\n the same cropping location.\n\n At inference time, and during training if an input image is smaller than the\n target size, the input will be resized and cropped so as to return the largest\n possible window in the image that matches the target aspect ratio. If you need\n to apply random cropping at inference time, set `training` to True when\n calling the layer.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., target_height, target_width, channels)`.\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n seed: Integer. Used to create a random seed.\n ", "desc": "A preprocessing layer which randomly crops images during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomFlip", "docs": "A preprocessing layer which randomly flips images during training.\n\n This layer will flip the images horizontally and or vertically based on the\n `mode` attribute. During inference time, the output will be identical to\n input. Call the layer with `training=True` to flip the input.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Arguments:\n mode: String indicating which flip mode to use. Can be `\"horizontal\"`,\n `\"vertical\"`, or `\"horizontal_and_vertical\"`. Defaults to\n `\"horizontal_and_vertical\"`. `\"horizontal\"` is a left-right flip and\n `\"vertical\"` is a top-bottom flip.\n seed: Integer. Used to create a random seed.\n ", "desc": "A preprocessing layer which randomly flips images during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomHeight", "docs": "A preprocessing layer which randomly varies image height during training.\n\n This layer adjusts the height of a batch of images by a random factor.\n The input should be a 3D (unbatched) or 4D (batched) tensor in the\n `\"channels_last\"` image data format. Input pixel values can be of any range\n (e.g. `[0., 1.)` or `[0, 255]`) and of interger or floating point dtype. By\n default, the layer will output floats.\n\n\n By default, this layer is inactive during inference.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n factor: A positive float (fraction of original height), or a tuple of size 2\n representing lower and upper bound for resizing vertically. When\n represented as a single float, this value is used for both the upper and\n lower bound. For instance, `factor=(0.2, 0.3)` results in an output with\n height changed by a random amount in the range `[20%, 30%]`.\n `factor=(-0.2, 0.3)` results in an output with height changed by a random\n amount in the range `[-20%, +30%]`. `factor=0.2` results in an output with\n height changed by a random amount in the range `[-20%, +20%]`.\n interpolation: String, the interpolation method. Defaults to `\"bilinear\"`.\n Supports `\"bilinear\"`, `\"nearest\"`, `\"bicubic\"`, `\"area\"`,\n `\"lanczos3\"`, `\"lanczos5\"`, `\"gaussian\"`, `\"mitchellcubic\"`.\n seed: Integer. Used to create a random seed.\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., random_height, width, channels)`.\n ", "desc": "A preprocessing layer which randomly varies image height during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomRotation", "docs": "A preprocessing layer which randomly rotates images during training.\n\n This layer will apply random rotations to each image, filling empty space\n according to `fill_mode`.\n\n By default, random rotations are only applied during training.\n At inference time, the layer does nothing. If you need to apply random\n rotations at inference time, set `training` to True when calling the layer.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format\n\n Arguments:\n factor: a float represented as fraction of 2 Pi, or a tuple of size 2\n representing lower and upper bound for rotating clockwise and\n counter-clockwise. A positive values means rotating counter clock-wise,\n while a negative value means clock-wise. When represented as a single\n float, this value is used for both the upper and lower bound. For\n instance, `factor=(-0.2, 0.3)` results in an output rotation by a random\n amount in the range `[-20% * 2pi, 30% * 2pi]`. `factor=0.2` results in an\n output rotating by a random amount in the range `[-20% * 2pi, 20% * 2pi]`.\n fill_mode: Points outside the boundaries of the input are filled according\n to the given mode (one of `{\"constant\", \"reflect\", \"wrap\", \"nearest\"}`).\n - *reflect*: `(d c b a | a b c d | d c b a)` The input is extended by\n reflecting about the edge of the last pixel.\n - *constant*: `(k k k k | a b c d | k k k k)` The input is extended by\n filling all values beyond the edge with the same constant value k = 0.\n - *wrap*: `(a b c d | a b c d | a b c d)` The input is extended by\n wrapping around to the opposite edge.\n - *nearest*: `(a a a a | a b c d | d d d d)` The input is extended by the\n nearest pixel.\n interpolation: Interpolation mode. Supported values: `\"nearest\"`,\n `\"bilinear\"`.\n seed: Integer. Used to create a random seed.\n fill_value: a float represents the value to be filled outside the boundaries\n when `fill_mode=\"constant\"`.\n ", "desc": "A preprocessing layer which randomly rotates images during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomTranslation", "docs": "A preprocessing layer which randomly translates images during training.\n\n This layer will apply random translations to each image during training,\n filling empty space according to `fill_mode`.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n height_factor: a float represented as fraction of value, or a tuple of size\n 2 representing lower and upper bound for shifting vertically. A negative\n value means shifting image up, while a positive value means shifting image\n down. When represented as a single positive float, this value is used for\n both the upper and lower bound. For instance, `height_factor=(-0.2, 0.3)`\n results in an output shifted by a random amount in the range\n `[-20%, +30%]`.\n `height_factor=0.2` results in an output height shifted by a random amount\n in the range `[-20%, +20%]`.\n width_factor: a float represented as fraction of value, or a tuple of size 2\n representing lower and upper bound for shifting horizontally. A negative\n value means shifting image left, while a positive value means shifting\n image right. When represented as a single positive float, this value is\n used for both the upper and lower bound. For instance,\n `width_factor=(-0.2, 0.3)` results in an output shifted left by 20%, and\n shifted right by 30%. `width_factor=0.2` results in an output height\n shifted left or right by 20%.\n fill_mode: Points outside the boundaries of the input are filled according\n to the given mode (one of `{\"constant\", \"reflect\", \"wrap\", \"nearest\"}`).\n - *reflect*: `(d c b a | a b c d | d c b a)` The input is extended by\n reflecting about the edge of the last pixel.\n - *constant*: `(k k k k | a b c d | k k k k)` The input is extended by\n filling all values beyond the edge with the same constant value k = 0.\n - *wrap*: `(a b c d | a b c d | a b c d)` The input is extended by\n wrapping around to the opposite edge.\n - *nearest*: `(a a a a | a b c d | d d d d)` The input is extended by the\n nearest pixel.\n interpolation: Interpolation mode. Supported values: `\"nearest\"`,\n `\"bilinear\"`.\n seed: Integer. Used to create a random seed.\n fill_value: a float represents the value to be filled outside the boundaries\n when `fill_mode=\"constant\"`.\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n ", "desc": "A preprocessing layer which randomly translates images during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomWidth", "docs": "A preprocessing layer which randomly varies image width during training.\n\n This layer will randomly adjusts the width of a batch of images of a\n batch of images by a random factor. The input should be a 3D (unbatched) or\n 4D (batched) tensor in the `\"channels_last\"` image data format. Input pixel\n values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and of interger or\n floating point dtype. By default, the layer will output floats.\n\n By default, this layer is inactive during inference.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n factor: A positive float (fraction of original width), or a tuple of size 2\n representing lower and upper bound for resizing vertically. When\n represented as a single float, this value is used for both the upper and\n lower bound. For instance, `factor=(0.2, 0.3)` results in an output with\n width changed by a random amount in the range `[20%, 30%]`. `factor=(-0.2,\n 0.3)` results in an output with width changed by a random amount in the\n range `[-20%, +30%]`. `factor=0.2` results in an output with width changed\n by a random amount in the range `[-20%, +20%]`.\n interpolation: String, the interpolation method. Defaults to `bilinear`.\n Supports `\"bilinear\"`, `\"nearest\"`, `\"bicubic\"`, `\"area\"`, `\"lanczos3\"`,\n `\"lanczos5\"`, `\"gaussian\"`, `\"mitchellcubic\"`.\n seed: Integer. Used to create a random seed.\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, random_width, channels)`.\n ", "desc": "A preprocessing layer which randomly varies image width during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.RandomZoom", "docs": "A preprocessing layer which randomly zooms images during training.\n\n This layer will randomly zoom in or out on each axis of an image\n independently, filling empty space according to `fill_mode`.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\n of interger or floating point dtype. By default, the layer will output floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n height_factor: a float represented as fraction of value, or a tuple of size\n 2 representing lower and upper bound for zooming vertically. When\n represented as a single float, this value is used for both the upper and\n lower bound. A positive value means zooming out, while a negative value\n means zooming in. For instance, `height_factor=(0.2, 0.3)` result in an\n output zoomed out by a random amount in the range `[+20%, +30%]`.\n `height_factor=(-0.3, -0.2)` result in an output zoomed in by a random\n amount in the range `[+20%, +30%]`.\n width_factor: a float represented as fraction of value, or a tuple of size 2\n representing lower and upper bound for zooming horizontally. When\n represented as a single float, this value is used for both the upper and\n lower bound. For instance, `width_factor=(0.2, 0.3)` result in an output\n zooming out between 20% to 30%. `width_factor=(-0.3, -0.2)` result in an\n output zooming in between 20% to 30%. Defaults to `None`, i.e., zooming\n vertical and horizontal directions by preserving the aspect ratio.\n fill_mode: Points outside the boundaries of the input are filled according\n to the given mode (one of `{\"constant\", \"reflect\", \"wrap\", \"nearest\"}`).\n - *reflect*: `(d c b a | a b c d | d c b a)` The input is extended by\n reflecting about the edge of the last pixel.\n - *constant*: `(k k k k | a b c d | k k k k)` The input is extended by\n filling all values beyond the edge with the same constant value k = 0.\n - *wrap*: `(a b c d | a b c d | a b c d)` The input is extended by\n wrapping around to the opposite edge.\n - *nearest*: `(a a a a | a b c d | d d d d)` The input is extended by the\n nearest pixel.\n interpolation: Interpolation mode. Supported values: `\"nearest\"`,\n `\"bilinear\"`.\n seed: Integer. Used to create a random seed.\n fill_value: a float represents the value to be filled outside the boundaries\n when `fill_mode=\"constant\"`.\n\n Example:\n\n >>> input_img = np.random.random((32, 224, 224, 3))\n >>> layer = tf.keras.layers.RandomZoom(.5, .2)\n >>> out_img = layer(input_img)\n >>> out_img.shape\n TensorShape([32, 224, 224, 3])\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format.\n ", "desc": "A preprocessing layer which randomly zooms images during training.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.Rescaling", "docs": "A preprocessing layer which rescales input values to a new range.\n\n This layer rescales every value of an input (often an image) by multiplying by\n `scale` and adding `offset`.\n\n For instance:\n\n 1. To rescale an input in the ``[0, 255]`` range\n to be in the `[0, 1]` range, you would pass `scale=1./255`.\n\n 2. To rescale an input in the ``[0, 255]`` range to be in the `[-1, 1]` range,\n you would pass `scale=1./127.5, offset=-1`.\n\n The rescaling is applied both during training and inference. Inputs can be\n of integer or floating point dtype, and by default the layer will output\n floats.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Input shape:\n Arbitrary.\n\n Output shape:\n Same as input.\n\n Args:\n scale: Float, the scale to apply to the inputs.\n offset: Float, the offset to apply to the inputs.\n ", "desc": "A preprocessing layer which rescales input values to a new range.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.Resizing", "docs": "A preprocessing layer which resizes images.\n\n This layer resizes an image input to a target height and width. The input\n should be a 4D (batched) or 3D (unbatched) tensor in `\"channels_last\"` format.\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and of\n interger or floating point dtype. By default, the layer will output floats.\n\n This layer can be called on tf.RaggedTensor batches of input images of\n distinct sizes, and will resize the outputs to dense tensors of uniform size.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n interpolation: String, the interpolation method. Defaults to `\"bilinear\"`.\n Supports `\"bilinear\"`, `\"nearest\"`, `\"bicubic\"`, `\"area\"`, `\"lanczos3\"`,\n `\"lanczos5\"`, `\"gaussian\"`, `\"mitchellcubic\"`.\n crop_to_aspect_ratio: If True, resize the images without aspect\n ratio distortion. When the original aspect ratio differs from the target\n aspect ratio, the output image will be cropped so as to return the largest\n possible window in the image (of size `(height, width)`) that matches\n the target aspect ratio. By default (`crop_to_aspect_ratio=False`),\n aspect ratio may not be preserved.\n ", "desc": "A preprocessing layer which resizes images.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.StringLookup", "docs": "A preprocessing layer which maps string features to integer indices.\n\n This layer translates a set of arbitrary strings into integer output via a\n table-based vocabulary lookup. This layer will perform no splitting or\n transformation of input strings. For a layer than can split and tokenize\n natural language, see the `TextVectorization` layer.\n\n The vocabulary for the layer must be either supplied on construction or\n learned via `adapt()`. During `adapt()`, the layer will analyze a data set,\n determine the frequency of individual strings tokens, and create a vocabulary\n from them. If the vocabulary is capped in size, the most frequent tokens will\n be used to create the vocabulary and all others will be treated as\n out-of-vocabulary (OOV).\n\n There are two possible output modes for the layer.\n When `output_mode` is `\"int\"`,\n input strings are converted to their index in the vocabulary (an integer).\n When `output_mode` is `\"multi_hot\"`, `\"count\"`, or `\"tf_idf\"`, input strings\n are encoded into an array where each dimension corresponds to an element in\n the vocabulary.\n\n The vocabulary can optionally contain a mask token as well as an OOV token\n (which can optionally occupy multiple indices in the vocabulary, as set\n by `num_oov_indices`).\n The position of these tokens in the vocabulary is fixed. When `output_mode` is\n `\"int\"`, the vocabulary will begin with the mask token (if set), followed by\n OOV indices, followed by the rest of the vocabulary. When `output_mode` is\n `\"multi_hot\"`, `\"count\"`, or `\"tf_idf\"` the vocabulary will begin with OOV\n indices and instances of the mask token will be dropped.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n max_tokens: Maximum size of the vocabulary for this layer. This should only\n be specified when adapting the vocabulary or when setting\n `pad_to_max_tokens=True`. If None, there is no cap on the size of the\n vocabulary. Note that this size includes the OOV and mask tokens. Defaults\n to None.\n num_oov_indices: The number of out-of-vocabulary tokens to use. If this\n value is more than 1, OOV inputs are hashed to determine their OOV value.\n If this value is 0, OOV inputs will cause an error when calling the layer.\n Defaults to 1.\n mask_token: A token that represents masked inputs. When `output_mode` is\n `\"int\"`, the token is included in vocabulary and mapped to index 0. In\n other output modes, the token will not appear in the vocabulary and\n instances of the mask token in the input will be dropped. If set to None,\n no mask term will be added. Defaults to `None`.\n oov_token: Only used when `invert` is True. The token to return for OOV\n indices. Defaults to `\"[UNK]\"`.\n vocabulary: Optional. Either an array of strings or a string path to a text\n file. If passing an array, can pass a tuple, list, 1D numpy array, or 1D\n tensor containing the string vocbulary terms. If passing a file path, the\n file should contain one line per term in the vocabulary. If this argument\n is set, there is no need to `adapt()` the layer.\n idf_weights: Only valid when `output_mode` is `\"tf_idf\"`. A tuple, list, 1D\n numpy array, or 1D tensor or the same length as the vocabulary, containing\n the floating point inverse document frequency weights, which will be\n multiplied by per sample term counts for the final `tf_idf` weight. If the\n `vocabulary` argument is set, and `output_mode` is `\"tf_idf\"`, this\n argument must be supplied.\n invert: Only valid when `output_mode` is `\"int\"`. If True, this layer will\n map indices to vocabulary items instead of mapping vocabulary items to\n indices. Default to False.\n output_mode: Specification for the output of the layer. Defaults to `\"int\"`.\n Values can be `\"int\"`, `\"one_hot\"`, `\"multi_hot\"`, `\"count\"`, or\n `\"tf_idf\"` configuring the layer as follows:\n - `\"int\"`: Return the raw integer indices of the input tokens.\n - `\"one_hot\"`: Encodes each individual element in the input into an\n array the same size as the vocabulary, containing a 1 at the element\n index. If the last dimension is size 1, will encode on that dimension.\n If the last dimension is not size 1, will append a new dimension for\n the encoded output.\n - `\"multi_hot\"`: Encodes each sample in the input into a single array\n the same size as the vocabulary, containing a 1 for each vocabulary\n term present in the sample. Treats the last dimension as the sample\n dimension, if input shape is (..., sample_length), output shape will\n be (..., num_tokens).\n - `\"count\"`: As `\"multi_hot\"`, but the int array contains a count of the\n number of times the token at that index appeared in the sample.\n - `\"tf_idf\"`: As `\"multi_hot\"`, but the TF-IDF algorithm is applied to\n find the value in each token slot.\n For `\"int\"` output, any shape of input and output is supported. For all\n other output modes, currently only output up to rank 2 is supported.\n pad_to_max_tokens: Only applicable when `output_mode` is `\"multi_hot\"`,\n `\"count\"`, or `\"tf_idf\"`. If True, the output will have its feature axis\n padded to `max_tokens` even if the number of unique tokens in the\n vocabulary is less than max_tokens, resulting in a tensor of shape\n [batch_size, max_tokens] regardless of vocabulary size. Defaults to False.\n sparse: Boolean. Only applicable when `output_mode` is `\"multi_hot\"`,\n `\"count\"`, or `\"tf_idf\"`. If True, returns a `SparseTensor` instead of a\n dense `Tensor`. Defaults to False.\n\n Examples:\n\n **Creating a lookup layer with a known vocabulary**\n\n This example creates a lookup layer with a pre-existing vocabulary.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[\"a\", \"c\", \"d\"], [\"d\", \"z\", \"b\"]])\n >>> layer = tf.keras.layers.StringLookup(vocabulary=vocab)\n >>> layer(data)\n \n\n **Creating a lookup layer with an adapted vocabulary**\n\n This example creates a lookup layer and generates the vocabulary by analyzing\n the dataset.\n\n >>> data = tf.constant([[\"a\", \"c\", \"d\"], [\"d\", \"z\", \"b\"]])\n >>> layer = tf.keras.layers.StringLookup()\n >>> layer.adapt(data)\n >>> layer.get_vocabulary()\n ['[UNK]', 'd', 'z', 'c', 'b', 'a']\n\n Note that the OOV token `\"[UNK]\"` has been added to the vocabulary.\n The remaining tokens are sorted by frequency\n (`\"d\"`, which has 2 occurrences, is first) then by inverse sort order.\n\n >>> data = tf.constant([[\"a\", \"c\", \"d\"], [\"d\", \"z\", \"b\"]])\n >>> layer = tf.keras.layers.StringLookup()\n >>> layer.adapt(data)\n >>> layer(data)\n \n\n **Lookups with multiple OOV indices**\n\n This example demonstrates how to use a lookup layer with multiple OOV indices.\n When a layer is created with more than one OOV index, any OOV values are\n hashed into the number of OOV buckets, distributing OOV values in a\n deterministic fashion across the set.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[\"a\", \"c\", \"d\"], [\"m\", \"z\", \"b\"]])\n >>> layer = tf.keras.layers.StringLookup(vocabulary=vocab, num_oov_indices=2)\n >>> layer(data)\n \n\n Note that the output for OOV value 'm' is 0, while the output for OOV value\n 'z' is 1. The in-vocab terms have their output index increased by 1 from\n earlier examples (a maps to 2, etc) in order to make space for the extra OOV\n value.\n\n **One-hot output**\n\n Configure the layer with `output_mode='one_hot'`. Note that the first\n `num_oov_indices` dimensions in the ont_hot encoding represent OOV values.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([\"a\", \"b\", \"c\", \"d\", \"z\"])\n >>> layer = tf.keras.layers.StringLookup(\n ... vocabulary=vocab, output_mode='one_hot')\n >>> layer(data)\n \n\n **Multi-hot output**\n\n Configure the layer with `output_mode='multi_hot'`. Note that the first\n `num_oov_indices` dimensions in the multi_hot encoding represent OOV values.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[\"a\", \"c\", \"d\", \"d\"], [\"d\", \"z\", \"b\", \"z\"]])\n >>> layer = tf.keras.layers.StringLookup(\n ... vocabulary=vocab, output_mode='multi_hot')\n >>> layer(data)\n \n\n **Token count output**\n\n Configure the layer with `output_mode='count'`. As with multi_hot output, the\n first `num_oov_indices` dimensions in the output represent OOV values.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[\"a\", \"c\", \"d\", \"d\"], [\"d\", \"z\", \"b\", \"z\"]])\n >>> layer = tf.keras.layers.StringLookup(\n ... vocabulary=vocab, output_mode='count')\n >>> layer(data)\n \n\n **TF-IDF output**\n\n Configure the layer with `output_mode=\"tf_idf\"`. As with multi_hot output, the\n first `num_oov_indices` dimensions in the output represent OOV values.\n\n Each token bin will output `token_count * idf_weight`, where the idf weights\n are the inverse document frequency weights per token. These should be provided\n along with the vocabulary. Note that the `idf_weight` for OOV values will\n default to the average of all idf weights passed in.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> idf_weights = [0.25, 0.75, 0.6, 0.4]\n >>> data = tf.constant([[\"a\", \"c\", \"d\", \"d\"], [\"d\", \"z\", \"b\", \"z\"]])\n >>> layer = tf.keras.layers.StringLookup(output_mode=\"tf_idf\")\n >>> layer.set_vocabulary(vocab, idf_weights=idf_weights)\n >>> layer(data)\n \n\n To specify the idf weights for oov values, you will need to pass the entire\n vocabularly including the leading oov token.\n\n >>> vocab = [\"[UNK]\", \"a\", \"b\", \"c\", \"d\"]\n >>> idf_weights = [0.9, 0.25, 0.75, 0.6, 0.4]\n >>> data = tf.constant([[\"a\", \"c\", \"d\", \"d\"], [\"d\", \"z\", \"b\", \"z\"]])\n >>> layer = tf.keras.layers.StringLookup(output_mode=\"tf_idf\")\n >>> layer.set_vocabulary(vocab, idf_weights=idf_weights)\n >>> layer(data)\n \n\n When adapting the layer in `\"tf_idf\"` mode, each input sample will be\n considered a document, and IDF weight per token will be calculated as\n `log(1 + num_documents / (1 + token_document_count))`.\n\n **Inverse lookup**\n\n This example demonstrates how to map indices to strings using this layer. (You\n can also use `adapt()` with `inverse=True`, but for simplicity we'll pass the\n vocab in this example.)\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[1, 3, 4], [4, 0, 2]])\n >>> layer = tf.keras.layers.StringLookup(vocabulary=vocab, invert=True)\n >>> layer(data)\n \n\n Note that the first index correspond to the oov token by default.\n\n\n **Forward and inverse lookup pairs**\n\n This example demonstrates how to use the vocabulary of a standard lookup\n layer to create an inverse lookup layer.\n\n >>> vocab = [\"a\", \"b\", \"c\", \"d\"]\n >>> data = tf.constant([[\"a\", \"c\", \"d\"], [\"d\", \"z\", \"b\"]])\n >>> layer = tf.keras.layers.StringLookup(vocabulary=vocab)\n >>> i_layer = tf.keras.layers.StringLookup(vocabulary=vocab, invert=True)\n >>> int_data = layer(data)\n >>> i_layer(int_data)\n \n\n In this example, the input value `\"z\"` resulted in an output of `\"[UNK]\"`,\n since 1000 was not in the vocabulary - it got represented as an OOV, and all\n OOV values are returned as `\"[UNK]\"` in the inverse layer. Also, note that\n for the inverse to work, you must have already set the forward layer\n vocabulary either directly or via `adapt()` before calling `get_vocabulary()`.\n ", "desc": "A preprocessing layer which maps string features to integer indices.", "type": "API"}, {"name": "tf.keras.layers.experimental.preprocessing.TextVectorization", "docs": "A preprocessing layer which maps text features to integer sequences.\n\n This layer has basic options for managing text in a Keras model. It transforms\n a batch of strings (one example = one string) into either a list of token\n indices (one example = 1D tensor of integer token indices) or a dense\n representation (one example = 1D tensor of float values representing data\n about the example's tokens). This layer is meant to handle natural language\n inputs. To handle simple string inputs (categorical strings or pre-tokenized\n strings) see `tf.keras.layers.StringLookup`.\n\n The vocabulary for the layer must be either supplied on construction or\n learned via `adapt()`. When this layer is adapted, it will analyze the\n dataset, determine the frequency of individual string values, and create a\n vocabulary from them. This vocabulary can have unlimited size or be capped,\n depending on the configuration options for this layer; if there are more\n unique values in the input than the maximum vocabulary size, the most frequent\n terms will be used to create the vocabulary.\n\n The processing of each example contains the following steps:\n\n 1. Standardize each example (usually lowercasing + punctuation stripping)\n 2. Split each example into substrings (usually words)\n 3. Recombine substrings into tokens (usually ngrams)\n 4. Index tokens (associate a unique int value with each token)\n 5. Transform each example using this index, either into a vector of ints or\n a dense float vector.\n\n Some notes on passing callables to customize splitting and normalization for\n this layer:\n\n 1. Any callable can be passed to this Layer, but if you want to serialize\n this object you should only pass functions that are registered Keras\n serializables (see `tf.keras.utils.register_keras_serializable` for more\n details).\n 2. When using a custom callable for `standardize`, the data received\n by the callable will be exactly as passed to this layer. The callable\n should return a tensor of the same shape as the input.\n 3. When using a custom callable for `split`, the data received by the\n callable will have the 1st dimension squeezed out - instead of\n `[[\"string to split\"], [\"another string to split\"]]`, the Callable will\n see `[\"string to split\", \"another string to split\"]`. The callable should\n return a Tensor with the first dimension containing the split tokens -\n in this example, we should see something like `[[\"string\", \"to\",\n \"split\"], [\"another\", \"string\", \"to\", \"split\"]]`. This makes the callable\n site natively compatible with `tf.strings.split()`.\n\n For an overview and full list of preprocessing layers, see the preprocessing\n [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n max_tokens: Maximum size of the vocabulary for this layer. This should only\n be specified when adapting a vocabulary or when setting\n `pad_to_max_tokens=True`. Note that this vocabulary\n contains 1 OOV token, so the effective number of tokens is `(max_tokens -\n 1 - (1 if output_mode == \"int\" else 0))`.\n standardize: Optional specification for standardization to apply to the\n input text. Values can be:\n - `None`: No standardization.\n - `\"lower_and_strip_punctuation\"`: Text will be lowercased and all\n punctuation removed.\n - `\"lower\"`: Text will be lowercased.\n - `\"strip_punctuation\"`: All punctuation will be removed.\n - Callable: Inputs will passed to the callable function, which should\n standardized and returned.\n split: Optional specification for splitting the input text. Values can be:\n - `None`: No splitting.\n - `\"whitespace\"`: Split on whitespace.\n - `\"character\"`: Split on each unicode character.\n - Callable: Standardized inputs will passed to the callable function,\n which should split and returned.\n ngrams: Optional specification for ngrams to create from the possibly-split\n input text. Values can be None, an integer or tuple of integers; passing\n an integer will create ngrams up to that integer, and passing a tuple of\n integers will create ngrams for the specified values in the tuple. Passing\n None means that no ngrams will be created.\n output_mode: Optional specification for the output of the layer. Values can\n be `\"int\"`, `\"multi_hot\"`, `\"count\"` or `\"tf_idf\"`, configuring the layer\n as follows:\n - `\"int\"`: Outputs integer indices, one integer index per split string\n token. When `output_mode == \"int\"`, 0 is reserved for masked\n locations; this reduces the vocab size to\n `max_tokens - 2` instead of `max_tokens - 1`.\n - `\"multi_hot\"`: Outputs a single int array per batch, of either\n vocab_size or max_tokens size, containing 1s in all elements where the\n token mapped to that index exists at least once in the batch item.\n - `\"count\"`: Like `\"multi_hot\"`, but the int array contains a count of\n the number of times the token at that index appeared in the\n batch item.\n - `\"tf_idf\"`: Like `\"multi_hot\"`, but the TF-IDF algorithm is applied to\n find the value in each token slot.\n For `\"int\"` output, any shape of input and output is supported. For all\n other output modes, currently only rank 1 inputs (and rank 2 outputs after\n splitting) are supported.\n output_sequence_length: Only valid in INT mode. If set, the output will have\n its time dimension padded or truncated to exactly `output_sequence_length`\n values, resulting in a tensor of shape\n `(batch_size, output_sequence_length)` regardless of how many tokens\n resulted from the splitting step. Defaults to None.\n pad_to_max_tokens: Only valid in `\"multi_hot\"`, `\"count\"`, and `\"tf_idf\"`\n modes. If True, the output will have its feature axis padded to\n `max_tokens` even if the number of unique tokens in the vocabulary is less\n than max_tokens, resulting in a tensor of shape `(batch_size, max_tokens)`\n regardless of vocabulary size. Defaults to False.\n vocabulary: Optional. Either an array of strings or a string path to a text\n file. If passing an array, can pass a tuple, list, 1D numpy array, or 1D\n tensor containing the string vocbulary terms. If passing a file path, the\n file should contain one line per term in the vocabulary. If this argument\n is set, there is no need to `adapt()` the layer.\n idf_weights: Only valid when `output_mode` is `\"tf_idf\"`. A tuple, list, 1D\n numpy array, or 1D tensor or the same length as the vocabulary, containing\n the floating point inverse document frequency weights, which will be\n multiplied by per sample term counts for the final `tf_idf` weight. If the\n `vocabulary` argument is set, and `output_mode` is `\"tf_idf\"`, this\n argument must be supplied.\n ragged: Boolean. Only applicable to `\"int\"` output mode. If True, returns a\n `RaggedTensor` instead of a dense `Tensor`, where each sequence may have a\n different length after string splitting. Defaults to False.\n sparse: Boolean. Only applicable to `\"multi_hot\"`, `\"count\"`, and\n `\"tf_idf\"` output modes. If True, returns a `SparseTensor` instead of a\n dense `Tensor`. Defaults to False.\n\n Example:\n\n This example instantiates a `TextVectorization` layer that lowercases text,\n splits on whitespace, strips punctuation, and outputs integer vocab indices.\n\n >>> text_dataset = tf.data.Dataset.from_tensor_slices([\"foo\", \"bar\", \"baz\"])\n >>> max_features = 5000 # Maximum vocab size.\n >>> max_len = 4 # Sequence length to pad the outputs to.\n >>>\n >>> # Create the layer.\n >>> vectorize_layer = tf.keras.layers.TextVectorization(\n ... max_tokens=max_features,\n ... output_mode='int',\n ... output_sequence_length=max_len)\n >>>\n >>> # Now that the vocab layer has been created, call `adapt` on the text-only\n >>> # dataset to create the vocabulary. You don't have to batch, but for large\n >>> # datasets this means we're not keeping spare copies of the dataset.\n >>> vectorize_layer.adapt(text_dataset.batch(64))\n >>>\n >>> # Create the model that uses the vectorize text layer\n >>> model = tf.keras.models.Sequential()\n >>>\n >>> # Start by creating an explicit input layer. It needs to have a shape of\n >>> # (1,) (because we need to guarantee that there is exactly one string\n >>> # input per batch), and the dtype needs to be 'string'.\n >>> model.add(tf.keras.Input(shape=(1,), dtype=tf.string))\n >>>\n >>> # The first layer in our model is the vectorization layer. After this\n >>> # layer, we have a tensor of shape (batch_size, max_len) containing vocab\n >>> # indices.\n >>> model.add(vectorize_layer)\n >>>\n >>> # Now, the model can map strings to integers, and you can add an embedding\n >>> # layer to map these integers to learned embeddings.\n >>> input_data = [[\"foo qux bar\"], [\"qux baz\"]]\n >>> model.predict(input_data)\n array([[2, 1, 4, 0],\n [1, 3, 0, 0]])\n\n Example:\n\n This example instantiates a `TextVectorization` layer by passing a list\n of vocabulary terms to the layer's `__init__()` method.\n\n >>> vocab_data = [\"earth\", \"wind\", \"and\", \"fire\"]\n >>> max_len = 4 # Sequence length to pad the outputs to.\n >>>\n >>> # Create the layer, passing the vocab directly. You can also pass the\n >>> # vocabulary arg a path to a file containing one vocabulary word per\n >>> # line.\n >>> vectorize_layer = tf.keras.layers.TextVectorization(\n ... max_tokens=max_features,\n ... output_mode='int',\n ... output_sequence_length=max_len,\n ... vocabulary=vocab_data)\n >>>\n >>> # Because we've passed the vocabulary directly, we don't need to adapt\n >>> # the layer - the vocabulary is already set. The vocabulary contains the\n >>> # padding token ('') and OOV token ('[UNK]') as well as the passed tokens.\n >>> vectorize_layer.get_vocabulary()\n ['', '[UNK]', 'earth', 'wind', 'and', 'fire']\n\n ", "desc": "A preprocessing layer which maps text features to integer sequences.", "type": "API"}, {"name": "tf.keras.layers.experimental.RandomFourierFeatures", "docs": "Layer that projects its inputs into a random feature space.\n\n This layer implements a mapping from input space to a space with `output_dim`\n dimensions, which approximates shift-invariant kernels. A kernel function\n `K(x, y)` is shift-invariant if `K(x, y) == k(x - y)` for some function `k`.\n Many popular Radial Basis Functions (RBF), including Gaussian and\n Laplacian kernels, are shift-invariant.\n\n The implementation of this layer is based on the following paper:\n [\"Random Features for Large-Scale Kernel Machines\"](\n https://people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf)\n by Ali Rahimi and Ben Recht.\n\n The distribution from which the parameters of the random features map (layer)\n are sampled determines which shift-invariant kernel the layer approximates\n (see paper for more details). You can use the distribution of your\n choice. The layer supports out-of-the-box\n approximations of the following two RBF kernels:\n\n - Gaussian: `K(x, y) == exp(- square(x - y) / (2 * square(scale)))`\n - Laplacian: `K(x, y) = exp(-abs(x - y) / scale))`\n\n **Note:** Unlike what is described in the paper and unlike what is used in\n the Scikit-Learn implementation, the output of this layer does not apply\n the `sqrt(2 / D)` normalization factor.\n\n **Usage:** Typically, this layer is used to \"kernelize\" linear models by\n applying a non-linear transformation (this layer) to the input features and\n then training a linear model on top of the transformed features. Depending on\n the loss function of the linear model, the composition of this layer and the\n linear model results to models that are equivalent (up to approximation) to\n kernel SVMs (for hinge loss), kernel logistic regression (for logistic loss),\n kernel linear regression (for squared loss), etc.\n\n Examples:\n\n A kernel multinomial logistic regression model with Gaussian kernel for MNIST:\n\n ```python\n model = keras.Sequential([\n keras.Input(shape=(784,)),\n RandomFourierFeatures(\n output_dim=4096,\n scale=10.,\n kernel_initializer='gaussian'),\n layers.Dense(units=10, activation='softmax'),\n ])\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['categorical_accuracy']\n )\n ```\n\n A quasi-SVM classifier for MNIST:\n\n ```python\n model = keras.Sequential([\n keras.Input(shape=(784,)),\n RandomFourierFeatures(\n output_dim=4096,\n scale=10.,\n kernel_initializer='gaussian'),\n layers.Dense(units=10),\n ])\n model.compile(\n optimizer='adam',\n loss='hinge',\n metrics=['categorical_accuracy']\n )\n ```\n\n To use another kernel, just replace the layer creation line with:\n\n ```python\n random_features_layer = RandomFourierFeatures(\n output_dim=500,\n kernel_initializer=,\n scale=...,\n ...)\n ```\n\n Args:\n output_dim: Positive integer, the dimension of the layer's output, i.e., the\n number of random features used to approximate the kernel.\n kernel_initializer: Determines the distribution of the parameters of the\n random features map (and therefore the kernel approximated by the layer).\n It can be either a string identifier or a Keras `Initializer` instance.\n Currently only 'gaussian' and 'laplacian' are supported string\n identifiers (case insensitive). Note that the kernel matrix is not\n trainable.\n scale: For Gaussian and Laplacian kernels, this corresponds to a scaling\n factor of the corresponding kernel approximated by the layer (see concrete\n definitions above). When provided, it should be a positive float. If None,\n a default value is used: if the kernel initializer is set to \"gaussian\",\n `scale` defaults to `sqrt(input_dim / 2)`, otherwise, it defaults to 1.0.\n Both the approximation error of the kernel and the classification quality\n are sensitive to this parameter. If `trainable` is set to `True`, this\n parameter is learned end-to-end during training and the provided value\n serves as the initial value.\n **Note:** When features from this layer are fed to a linear model,\n by making `scale` trainable, the resulting optimization problem is\n no longer convex (even if the loss function used by the linear model\n is convex).\n trainable: Whether the scaling parameter of the layer should be trainable.\n Defaults to `False`.\n name: String, name to use for this layer.\n ", "desc": "Layer that projects its inputs into a random feature space.", "type": "API"}, {"name": "tf.keras.layers.experimental.SyncBatchNormalization", "docs": "Normalize and scale inputs or activations synchronously across replicas.\n\n Applies batch normalization to activations of the previous layer at each batch\n by synchronizing the global batch statistics across all devices that are\n training the model. For specific details about batch normalization please\n refer to the `tf.keras.layers.BatchNormalization` layer docs.\n\n If this layer is used when using tf.distribute strategy to train models\n across devices/workers, there will be an allreduce call to aggregate batch\n statistics across all replicas at every training step. Without tf.distribute\n strategy, this layer behaves as a regular `tf.keras.layers.BatchNormalization`\n layer.\n\n Example usage:\n\n ```python\n strategy = tf.distribute.MirroredStrategy()\n\n with strategy.scope():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(16))\n model.add(tf.keras.layers.experimental.SyncBatchNormalization())\n ```\n\n Args:\n axis: Integer, the axis that should be normalized\n (typically the features axis).\n For instance, after a `Conv2D` layer with\n `data_format=\"channels_first\"`,\n set `axis=1` in `BatchNormalization`.\n momentum: Momentum for the moving average.\n epsilon: Small float added to variance to avoid dividing by zero.\n center: If True, add offset of `beta` to normalized tensor.\n If False, `beta` is ignored.\n scale: If True, multiply by `gamma`.\n If False, `gamma` is not used.\n When the next layer is linear (also e.g. `nn.relu`),\n this can be disabled since the scaling\n will be done by the next layer.\n beta_initializer: Initializer for the beta weight.\n gamma_initializer: Initializer for the gamma weight.\n moving_mean_initializer: Initializer for the moving mean.\n moving_variance_initializer: Initializer for the moving variance.\n beta_regularizer: Optional regularizer for the beta weight.\n gamma_regularizer: Optional regularizer for the gamma weight.\n beta_constraint: Optional constraint for the beta weight.\n gamma_constraint: Optional constraint for the gamma weight.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode.\n - `training=True`: The layer will normalize its inputs using the\n mean and variance of the current batch of inputs.\n - `training=False`: The layer will normalize its inputs using the\n mean and variance of its moving statistics, learned during training.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n\n ", "desc": "Normalize and scale inputs or activations synchronously across replicas.", "type": "API"}, {"name": "tf.keras.layers.Flatten", "docs": "Flattens the input. Does not affect the batch size.\n\n Note: If inputs are shaped `(batch,)` without a feature axis, then\n flattening adds an extra channel dimension and output shape is `(batch, 1)`.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, ..., channels)` while `channels_first` corresponds to\n inputs with shape `(batch, channels, ...)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Example:\n\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Conv2D(64, 3, 3, input_shape=(3, 32, 32)))\n >>> model.output_shape\n (None, 1, 10, 64)\n\n >>> model.add(Flatten())\n >>> model.output_shape\n (None, 640)\n\n ", "desc": "Flattens the input. Does not affect the batch size.", "type": "API"}, {"name": "tf.keras.layers.GaussianDropout", "docs": "Apply multiplicative 1-centered Gaussian noise.\n\n As it is a regularization layer, it is only active at training time.\n\n Args:\n rate: Float, drop probability (as with `Dropout`).\n The multiplicative noise will have\n standard deviation `sqrt(rate / (1 - rate))`.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Apply multiplicative 1-centered Gaussian noise.", "type": "API"}, {"name": "tf.keras.layers.GaussianNoise", "docs": "Apply additive zero-centered Gaussian noise.\n\n This is useful to mitigate overfitting\n (you could see it as a form of random data augmentation).\n Gaussian Noise (GS) is a natural choice as corruption process\n for real valued inputs.\n\n As it is a regularization layer, it is only active at training time.\n\n Args:\n stddev: Float, standard deviation of the noise distribution.\n seed: Integer, optional random seed to enable deterministic behavior.\n\n Call arguments:\n inputs: Input tensor (of any rank).\n training: Python boolean indicating whether the layer should behave in\n training mode (adding noise) or in inference mode (doing nothing).\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as input.\n ", "desc": "Apply additive zero-centered Gaussian noise.", "type": "API"}, {"name": "tf.keras.layers.GlobalAveragePooling1D", "docs": "Global average pooling operation for temporal data.\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling1D()(x)\n >>> print(y.shape)\n (2, 4)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(batch_size, steps)` indicating whether\n a given step should be masked (excluded from the average).\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global average pooling operation for temporal data.", "type": "API"}, {"name": "tf.keras.layers.GlobalAveragePooling2D", "docs": "Global average pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global average pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.GlobalAveragePooling3D", "docs": "Global Average pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Average pooling operation for 3D data.", "type": "API"}, {"name": "tf.keras.layers.GlobalAvgPool1D", "docs": "Global average pooling operation for temporal data.\n\n Examples:\n\n >>> input_shape = (2, 3, 4)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling1D()(x)\n >>> print(y.shape)\n (2, 4)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Call arguments:\n inputs: A 3D tensor.\n mask: Binary tensor of shape `(batch_size, steps)` indicating whether\n a given step should be masked (excluded from the average).\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global average pooling operation for temporal data.", "type": "API"}, {"name": "tf.keras.layers.GlobalAvgPool2D", "docs": "Global average pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalAveragePooling2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global average pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.GlobalAvgPool3D", "docs": "Global Average pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_mean` or `np.mean`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Average pooling operation for 3D data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPool1D", "docs": "Global max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over\n the time dimension.\n\n For example:\n\n >>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> x = tf.reshape(x, [3, 3, 1])\n >>> x\n \n >>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D()\n >>> max_pool_1d(x)\n \n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPool2D", "docs": "Global max pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalMaxPool2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global max pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPool3D", "docs": "Global Max pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Max pooling operation for 3D data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPooling1D", "docs": "Global max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over\n the time dimension.\n\n For example:\n\n >>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> x = tf.reshape(x, [3, 3, 1])\n >>> x\n \n >>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D()\n >>> max_pool_1d(x)\n \n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n keepdims: A boolean, whether to keep the temporal dimension or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the temporal dimension are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape:\n `(batch_size, steps, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape:\n `(batch_size, features, steps)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, features)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, 1, features)`\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, 1)`\n ", "desc": "Global max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPooling2D", "docs": "Global max pooling operation for spatial data.\n\n Examples:\n\n >>> input_shape = (2, 4, 5, 3)\n >>> x = tf.random.normal(input_shape)\n >>> y = tf.keras.layers.GlobalMaxPool2D()(x)\n >>> print(y.shape)\n (2, 3)\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, 1, 1)`\n ", "desc": "Global max pooling operation for spatial data.", "type": "API"}, {"name": "tf.keras.layers.GlobalMaxPooling3D", "docs": "Global Max pooling operation for 3D data.\n\n Args:\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n keepdims: A boolean, whether to keep the spatial dimensions or not.\n If `keepdims` is `False` (default), the rank of the tensor is reduced\n for spatial dimensions.\n If `keepdims` is `True`, the spatial dimensions are retained with\n length 1.\n The behavior is the same as for `tf.reduce_max` or `np.max`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `keepdims`=False:\n 2D tensor with shape `(batch_size, channels)`.\n - If `keepdims`=True:\n - If `data_format='channels_last'`:\n 5D tensor with shape `(batch_size, 1, 1, 1, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape `(batch_size, channels, 1, 1, 1)`\n ", "desc": "Global Max pooling operation for 3D data.", "type": "API"}, {"name": "tf.keras.layers.GRU", "docs": "Gated Recurrent Unit - Cho et al. 2014.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Based on available runtime hardware and constraints, this layer\n will choose different implementations (cuDNN-based or pure-TensorFlow)\n to maximize the performance. If a GPU is available and all\n the arguments to the layer meet the requirement of the cuDNN kernel\n (see below for details), the layer will use a fast cuDNN implementation.\n\n The requirements to use the cuDNN implementation are:\n\n 1. `activation` == `tanh`\n 2. `recurrent_activation` == `sigmoid`\n 3. `recurrent_dropout` == 0\n 4. `unroll` is `False`\n 5. `use_bias` is `True`\n 6. `reset_after` is `True`\n 7. Inputs, if use masking, are strictly right-padded.\n 8. Eager execution is enabled in the outermost context.\n\n There are two variants of the GRU implementation. The default one is based on\n [v3](https://arxiv.org/abs/1406.1078v3) and has reset gate applied to hidden\n state before matrix multiplication. The other one is based on\n [original](https://arxiv.org/abs/1406.1078v1) and has the order reversed.\n\n The second variant is compatible with CuDNNGRU (GPU-only) and allows\n inference on CPU. Thus it has separate biases for `kernel` and\n `recurrent_kernel`. To use this variant, set `reset_after=True` and\n `recurrent_activation='sigmoid'`.\n\n For example:\n\n >>> inputs = tf.random.normal([32, 10, 8])\n >>> gru = tf.keras.layers.GRU(4)\n >>> output = gru(inputs)\n >>> print(output.shape)\n (32, 4)\n >>> gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True)\n >>> whole_sequence_output, final_state = gru(inputs)\n >>> print(whole_sequence_output.shape)\n (32, 10, 4)\n >>> print(final_state.shape)\n (32, 4)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use\n for the recurrent step.\n Default: sigmoid (`sigmoid`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent\n state. Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\"). Default: `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n return_sequences: Boolean. Whether to return the last output\n in the output sequence, or the full sequence. Default: `False`.\n return_state: Boolean. Whether to return the last state in addition to the\n output. Default: `False`.\n go_backwards: Boolean (default `False`).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default False). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default False).\n If True, the network will be unrolled,\n else a symbolic loop will be used.\n Unrolling can speed-up a RNN,\n although it tends to be more memory-intensive.\n Unrolling is only suitable for short sequences.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `[timesteps, batch, feature]`, whereas in the False case, it will be\n `[batch, timesteps, feature]`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n reset_after: GRU convention (whether to apply reset gate after or\n before matrix multiplication). False = \"before\",\n True = \"after\" (default and cuDNN compatible).\n\n Call arguments:\n inputs: A 3D tensor, with shape `[batch, timesteps, feature]`.\n mask: Binary tensor of shape `[samples, timesteps]` indicating whether\n a given timestep should be masked (optional, defaults to `None`).\n An individual `True` entry indicates that the corresponding timestep\n should be utilized, while a `False` entry indicates that the\n corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used (optional, defaults to `None`).\n initial_state: List of initial state tensors to be passed to the first\n call of the cell (optional, defaults to `None` which causes creation\n of zero-filled initial state tensors).\n ", "desc": "Gated Recurrent Unit - Cho et al. 2014.", "type": "API"}, {"name": "tf.keras.layers.GRUCell", "docs": "Cell class for the GRU layer.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This class processes one step within the whole time sequence input, whereas\n `tf.keras.layer.GRU` processes the whole sequence.\n\n For example:\n\n >>> inputs = tf.random.normal([32, 10, 8])\n >>> rnn = tf.keras.layers.RNN(tf.keras.layers.GRUCell(4))\n >>> output = rnn(inputs)\n >>> print(output.shape)\n (32, 4)\n >>> rnn = tf.keras.layers.RNN(\n ... tf.keras.layers.GRUCell(4),\n ... return_sequences=True,\n ... return_state=True)\n >>> whole_sequence_output, final_state = rnn(inputs)\n >>> print(whole_sequence_output.shape)\n (32, 10, 4)\n >>> print(final_state.shape)\n (32, 4)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use. Default: hyperbolic tangent\n (`tanh`). If you pass None, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use for the recurrent step.\n Default: sigmoid (`sigmoid`). If you pass `None`, no activation is\n applied (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the\n linear transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n reset_after: GRU convention (whether to apply reset gate after or\n before matrix multiplication). False = \"before\",\n True = \"after\" (default and cuDNN compatible).\n\n Call arguments:\n inputs: A 2D tensor, with shape of `[batch, feature]`.\n states: A 2D tensor with shape of `[batch, units]`, which is the state from\n the previous time step. For timestep 0, the initial state provided by user\n will be feed to cell.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n ", "desc": "Cell class for the GRU layer.", "type": "API"}, {"name": "tf.keras.layers.Input", "docs": "`Input()` is used to instantiate a Keras tensor.\n\n A Keras tensor is a symbolic tensor-like object,\n which we augment with certain attributes that allow us to build a Keras model\n just by knowing the inputs and outputs of the model.\n\n For instance, if `a`, `b` and `c` are Keras tensors,\n it becomes possible to do:\n `model = Model(input=[a, b], output=c)`\n\n Args:\n shape: A shape tuple (integers), not including the batch size.\n For instance, `shape=(32,)` indicates that the expected input\n will be batches of 32-dimensional vectors. Elements of this tuple\n can be None; 'None' elements represent dimensions where the shape is\n not known.\n batch_size: optional static batch size (integer).\n name: An optional name string for the layer.\n Should be unique in a model (do not reuse the same name twice).\n It will be autogenerated if it isn't provided.\n dtype: The data type expected by the input, as a string\n (`float32`, `float64`, `int32`...)\n sparse: A boolean specifying whether the placeholder to be created is\n sparse. Only one of 'ragged' and 'sparse' can be True. Note that,\n if `sparse` is False, sparse tensors can still be passed into the\n input - they will be densified with a default value of 0.\n tensor: Optional existing tensor to wrap into the `Input` layer.\n If set, the layer will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n ragged: A boolean specifying whether the placeholder to be created is\n ragged. Only one of 'ragged' and 'sparse' can be True. In this case,\n values of 'None' in the 'shape' argument represent ragged dimensions.\n For more information about RaggedTensors, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensors).\n type_spec: A `tf.TypeSpec` object to create the input placeholder from.\n When provided, all other args except name must be None.\n **kwargs: deprecated arguments support. Supports `batch_shape` and\n `batch_input_shape`.\n\n Returns:\n A `tensor`.\n\n Example:\n\n ```python\n # this is a logistic regression in Keras\n x = Input(shape=(32,))\n y = Dense(16, activation='softmax')(x)\n model = Model(x, y)\n ```\n\n Note that even if eager execution is enabled,\n `Input` produces a symbolic tensor-like object (i.e. a placeholder).\n This symbolic tensor-like object can be used with lower-level\n TensorFlow ops that take tensors as inputs, as such:\n\n ```python\n x = Input(shape=(32,))\n y = tf.square(x) # This op will be treated like a layer\n model = Model(x, y)\n ```\n\n (This behavior does not work for higher-order TensorFlow APIs such as\n control flow and being directly watched by a `tf.GradientTape`).\n\n However, the resulting model will not track any variables that were\n used as inputs to TensorFlow ops. All variable usages must happen within\n Keras layers to make sure they will be tracked by the model's weights.\n\n The Keras Input can also create a placeholder from an arbitrary `tf.TypeSpec`,\n e.g:\n\n ```python\n x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],\n dtype=tf.float32, ragged_rank=1))\n y = x.values\n model = Model(x, y)\n ```\n When passing an arbitrary `tf.TypeSpec`, it must represent the signature of an\n entire batch instead of just one example.\n\n Raises:\n ValueError: If both `sparse` and `ragged` are provided.\n ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`) are\n provided.\n ValueError: If `shape`, `tensor` and `type_spec` are None.\n ValueError: If arguments besides `type_spec` are non-None while `type_spec`\n is passed.\n ValueError: if any unrecognized parameters are provided.\n ", "desc": "`Input()` is used to instantiate a Keras tensor.", "type": "API"}, {"name": "tf.keras.layers.InputLayer", "docs": "Layer to be used as an entry point into a Network (a graph of layers).\n\n It can either wrap an existing tensor (pass an `input_tensor` argument)\n or create a placeholder tensor (pass arguments `input_shape`, and\n optionally, `dtype`).\n\n It is generally recommend to use the Keras Functional model via `Input`,\n (which creates an `InputLayer`) without directly using `InputLayer`.\n\n When using `InputLayer` with the Keras Sequential model, it can be skipped by\n moving the `input_shape` parameter to the first layer after the `InputLayer`.\n\n This class can create placeholders for `tf.Tensors`, `tf.SparseTensors`, and\n `tf.RaggedTensors` by choosing `sparse=True` or `ragged=True`. Note that\n `sparse` and `ragged` can't be configured to `True` at the same time.\n Usage:\n\n ```python\n # With explicit InputLayer.\n model = tf.keras.Sequential([\n tf.keras.layers.InputLayer(input_shape=(4,)),\n tf.keras.layers.Dense(8)])\n model.compile(tf.optimizers.RMSprop(0.001), loss='mse')\n model.fit(np.zeros((10, 4)),\n np.ones((10, 8)))\n\n # Without InputLayer and let the first layer to have the input_shape.\n # Keras will add a input for the model behind the scene.\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(8, input_shape=(4,))])\n model.compile(tf.optimizers.RMSprop(0.001), loss='mse')\n model.fit(np.zeros((10, 4)),\n np.ones((10, 8)))\n ```\n\n Args:\n input_shape: Shape tuple (not including the batch axis), or `TensorShape`\n instance (not including the batch axis).\n batch_size: Optional input batch size (integer or `None`).\n dtype: Optional datatype of the input. When not provided, the Keras\n default `float` type will be used.\n input_tensor: Optional tensor to use as layer input. If set, the layer\n will use the `tf.TypeSpec` of this tensor rather\n than creating a new placeholder tensor.\n sparse: Boolean, whether the placeholder created is meant to be sparse.\n Default to `False`.\n ragged: Boolean, whether the placeholder created is meant to be ragged.\n In this case, values of `None` in the `shape` argument represent\n ragged dimensions. For more information about `tf.RaggedTensor`, see\n [this guide](https://www.tensorflow.org/guide/ragged_tensor).\n Default to `False`.\n type_spec: A `tf.TypeSpec` object to create Input from. This `tf.TypeSpec`\n represents the entire batch. When provided, all other args except\n name must be `None`.\n name: Optional name of the layer (string).\n ", "desc": "Layer to be used as an entry point into a Network (a graph of layers).", "type": "API"}, {"name": "tf.keras.layers.InputSpec", "docs": "Specifies the rank, dtype and shape of every input to a layer.\n\n Layers can expose (if appropriate) an `input_spec` attribute:\n an instance of `InputSpec`, or a nested structure of `InputSpec` instances\n (one per input tensor). These objects enable the layer to run input\n compatibility checks for input structure, input rank, input shape, and\n input dtype.\n\n A None entry in a shape is compatible with any dimension,\n a None shape is compatible with any shape.\n\n Args:\n dtype: Expected DataType of the input.\n shape: Shape tuple, expected shape of the input\n (may include None for unchecked axes). Includes the batch size.\n ndim: Integer, expected rank of the input.\n max_ndim: Integer, maximum rank of the input.\n min_ndim: Integer, minimum rank of the input.\n axes: Dictionary mapping integer axes to\n a specific dimension value.\n allow_last_axis_squeeze: If True, then allow inputs of rank N+1 as long\n as the last axis of the input is 1, as well as inputs of rank N-1\n as long as the last axis of the spec is 1.\n name: Expected key corresponding to this input when passing data as\n a dictionary.\n\n Example:\n\n ```python\n class MyLayer(Layer):\n def __init__(self):\n super(MyLayer, self).__init__()\n # The layer will accept inputs with shape (?, 28, 28) & (?, 28, 28, 1)\n # and raise an appropriate error message otherwise.\n self.input_spec = InputSpec(\n shape=(None, 28, 28, 1),\n allow_last_axis_squeeze=True)\n ```\n ", "desc": "Specifies the rank, dtype and shape of every input to a layer.", "type": "API"}, {"name": "tf.keras.layers.Lambda", "docs": "Wraps arbitrary expressions as a `Layer` object.\n\n The `Lambda` layer exists so that arbitrary expressions can be used\n as a `Layer` when constructing `Sequential`\n and Functional API models. `Lambda` layers are best suited for simple\n operations or quick experimentation. For more advanced use cases, follow\n [this guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models)\n for subclassing `tf.keras.layers.Layer`.\n\n WARNING: `tf.keras.layers.Lambda` layers have (de)serialization limitations!\n\n The main reason to subclass `tf.keras.layers.Layer` instead of using a\n `Lambda` layer is saving and inspecting a Model. `Lambda` layers\n are saved by serializing the Python bytecode, which is fundamentally\n non-portable. They should only be loaded in the same environment where\n they were saved. Subclassed layers can be saved in a more portable way\n by overriding their `get_config` method. Models that rely on\n subclassed Layers are also often easier to visualize and reason about.\n\n Examples:\n\n ```python\n # add a x -> x^2 layer\n model.add(Lambda(lambda x: x ** 2))\n ```\n ```python\n # add a layer that returns the concatenation\n # of the positive part of the input and\n # the opposite of the negative part\n\n def antirectifier(x):\n x -= K.mean(x, axis=1, keepdims=True)\n x = K.l2_normalize(x, axis=1)\n pos = K.relu(x)\n neg = K.relu(-x)\n return K.concatenate([pos, neg], axis=1)\n\n model.add(Lambda(antirectifier))\n ```\n\n Variables:\n While it is possible to use Variables with Lambda layers, this practice is\n discouraged as it can easily lead to bugs. For instance, consider the\n following layer:\n\n ```python\n scale = tf.Variable(1.)\n scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)\n ```\n\n Because scale_layer does not directly track the `scale` variable, it will\n not appear in `scale_layer.trainable_weights` and will therefore not be\n trained if `scale_layer` is used in a Model.\n\n A better pattern is to write a subclassed Layer:\n\n ```python\n class ScaleLayer(tf.keras.layers.Layer):\n def __init__(self):\n super(ScaleLayer, self).__init__()\n self.scale = tf.Variable(1.)\n\n def call(self, inputs):\n return inputs * self.scale\n ```\n\n In general, Lambda layers can be convenient for simple stateless\n computation, but anything more complex should use a subclass Layer instead.\n\n Args:\n function: The function to be evaluated. Takes input tensor as first\n argument.\n output_shape: Expected output shape from function. This argument can be\n inferred if not explicitly provided. Can be a tuple or function. If a\n tuple, it only specifies the first dimension onward;\n sample dimension is assumed either the same as the input: `output_shape =\n (input_shape[0], ) + output_shape` or, the input is `None` and\n the sample dimension is also `None`: `output_shape = (None, ) +\n output_shape` If a function, it specifies the entire shape as a function\n of the\n input shape: `output_shape = f(input_shape)`\n mask: Either None (indicating no masking) or a callable with the same\n signature as the `compute_mask` layer method, or a tensor that will be\n returned as output mask regardless of what the input is.\n arguments: Optional dictionary of keyword arguments to be passed to the\n function.\n Input shape: Arbitrary. Use the keyword argument input_shape (tuple of\n integers, does not include the samples axis) when using this layer as the\n first layer in a model.\n Output shape: Specified by `output_shape` argument\n ", "desc": "Wraps arbitrary expressions as a `Layer` object.", "type": "API"}, {"name": "tf.keras.layers.Layer", "docs": "This is the class from which all layers inherit.\n\n A layer is a callable object that takes as input one or more tensors and\n that outputs one or more tensors. It involves *computation*, defined\n in the `call()` method, and a *state* (weight variables). State can be\n created in various places, at the convenience of the subclass implementer:\n\n * in `__init__()`;\n * in the optional `build()` method, which is invoked by the first\n `__call__()` to the layer, and supplies the shape(s) of the input(s),\n which may not have been known at initialization time;\n * in the first invocation of `call()`, with some caveats discussed\n below.\n\n Users will just instantiate a layer and then treat it as a callable.\n\n Args:\n trainable: Boolean, whether the layer's variables should be trainable.\n name: String name of the layer.\n dtype: The dtype of the layer's computations and weights. Can also be a\n `tf.keras.mixed_precision.Policy`, which allows the computation and weight\n dtype to differ. Default of `None` means to use\n `tf.keras.mixed_precision.global_policy()`, which is a float32 policy\n unless set to different value.\n dynamic: Set this to `True` if your layer should only be run eagerly, and\n should not be used to generate a static computation graph.\n This would be the case for a Tree-RNN or a recursive network,\n for example, or generally for any layer that manipulates tensors\n using Python control flow. If `False`, we assume that the layer can\n safely be used to generate a static computation graph.\n\n Attributes:\n name: The name of the layer (string).\n dtype: The dtype of the layer's weights.\n variable_dtype: Alias of `dtype`.\n compute_dtype: The dtype of the layer's computations. Layers automatically\n cast inputs to this dtype which causes the computations and output to also\n be in this dtype. When mixed precision is used with a\n `tf.keras.mixed_precision.Policy`, this will be different than\n `variable_dtype`.\n dtype_policy: The layer's dtype policy. See the\n `tf.keras.mixed_precision.Policy` documentation for details.\n trainable_weights: List of variables to be included in backprop.\n non_trainable_weights: List of variables that should not be\n included in backprop.\n weights: The concatenation of the lists trainable_weights and\n non_trainable_weights (in this order).\n trainable: Whether the layer should be trained (boolean), i.e. whether\n its potentially-trainable weights should be returned as part of\n `layer.trainable_weights`.\n input_spec: Optional (list of) `InputSpec` object(s) specifying the\n constraints on inputs that can be accepted by the layer.\n\n We recommend that descendants of `Layer` implement the following methods:\n\n * `__init__()`: Defines custom layer attributes, and creates layer weights\n that do not depend on input shapes, using `add_weight()`, or other state.\n * `build(self, input_shape)`: This method can be used to create weights that\n depend on the shape(s) of the input(s), using `add_weight()`, or other\n state. `__call__()` will automatically build the layer (if it has not been\n built yet) by calling `build()`.\n * `call(self, inputs, *args, **kwargs)`: Called in `__call__` after making\n sure `build()` has been called. `call()` performs the logic of applying the\n layer to the `inputs`. The first invocation may additionally create state\n that could not be conveniently created in `build()`; see its docstring\n for details.\n Two reserved keyword arguments you can optionally use in `call()` are:\n - `training` (boolean, whether the call is in inference mode or training\n mode). See more details in [the layer/model subclassing guide](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models#privileged_training_argument_in_the_call_method)\n - `mask` (boolean tensor encoding masked timesteps in the input, used\n in RNN layers). See more details in [the layer/model subclassing guide](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models#privileged_mask_argument_in_the_call_method)\n A typical signature for this method is `call(self, inputs)`, and user could\n optionally add `training` and `mask` if the layer need them. `*args` and\n `**kwargs` is only useful for future extension when more input parameters\n are planned to be added.\n * `get_config(self)`: Returns a dictionary containing the configuration used\n to initialize this layer. If the keys differ from the arguments\n in `__init__`, then override `from_config(self)` as well.\n This method is used when saving\n the layer or a model that contains this layer.\n\n Examples:\n\n Here's a basic example: a layer with two variables, `w` and `b`,\n that returns `y = w . x + b`.\n It shows how to implement `build()` and `call()`.\n Variables set as attributes of a layer are tracked as weights\n of the layers (in `layer.weights`).\n\n ```python\n class SimpleDense(Layer):\n\n def __init__(self, units=32):\n super(SimpleDense, self).__init__()\n self.units = units\n\n def build(self, input_shape): # Create the state of the layer (weights)\n w_init = tf.random_normal_initializer()\n self.w = tf.Variable(\n initial_value=w_init(shape=(input_shape[-1], self.units),\n dtype='float32'),\n trainable=True)\n b_init = tf.zeros_initializer()\n self.b = tf.Variable(\n initial_value=b_init(shape=(self.units,), dtype='float32'),\n trainable=True)\n\n def call(self, inputs): # Defines the computation from inputs to outputs\n return tf.matmul(inputs, self.w) + self.b\n\n # Instantiates the layer.\n linear_layer = SimpleDense(4)\n\n # This will also call `build(input_shape)` and create the weights.\n y = linear_layer(tf.ones((2, 2)))\n assert len(linear_layer.weights) == 2\n\n # These weights are trainable, so they're listed in `trainable_weights`:\n assert len(linear_layer.trainable_weights) == 2\n ```\n\n Note that the method `add_weight()` offers a shortcut to create weights:\n\n ```python\n class SimpleDense(Layer):\n\n def __init__(self, units=32):\n super(SimpleDense, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='random_normal',\n trainable=True)\n self.b = self.add_weight(shape=(self.units,),\n initializer='random_normal',\n trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n ```\n\n Besides trainable weights, updated via backpropagation during training,\n layers can also have non-trainable weights. These weights are meant to\n be updated manually during `call()`. Here's a example layer that computes\n the running sum of its inputs:\n\n ```python\n class ComputeSum(Layer):\n\n def __init__(self, input_dim):\n super(ComputeSum, self).__init__()\n # Create a non-trainable weight.\n self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),\n trainable=False)\n\n def call(self, inputs):\n self.total.assign_add(tf.reduce_sum(inputs, axis=0))\n return self.total\n\n my_sum = ComputeSum(2)\n x = tf.ones((2, 2))\n\n y = my_sum(x)\n print(y.numpy()) # [2. 2.]\n\n y = my_sum(x)\n print(y.numpy()) # [4. 4.]\n\n assert my_sum.weights == [my_sum.total]\n assert my_sum.non_trainable_weights == [my_sum.total]\n assert my_sum.trainable_weights == []\n ```\n\n For more information about creating layers, see the guide\n [Making new Layers and Models via subclassing](\n https://www.tensorflow.org/guide/keras/custom_layers_and_models)\n ", "desc": "This is the class from which all layers inherit.", "type": "API"}, {"name": "tf.keras.layers.LayerNormalization", "docs": "Layer normalization layer (Ba et al., 2016).\n\n Normalize the activations of the previous layer for each given example in a\n batch independently, rather than across a batch like Batch Normalization.\n i.e. applies a transformation that maintains the mean activation within each\n example close to 0 and the activation standard deviation close to 1.\n\n Given a tensor `inputs`, moments are calculated and normalization\n is performed across the axes specified in `axis`.\n\n Example:\n\n >>> data = tf.constant(np.arange(10).reshape(5, 2) * 10, dtype=tf.float32)\n >>> print(data)\n tf.Tensor(\n [[ 0. 10.]\n [20. 30.]\n [40. 50.]\n [60. 70.]\n [80. 90.]], shape=(5, 2), dtype=float32)\n\n >>> layer = tf.keras.layers.LayerNormalization(axis=1)\n >>> output = layer(data)\n >>> print(output)\n tf.Tensor(\n [[-1. 1.]\n [-1. 1.]\n [-1. 1.]\n [-1. 1.]\n [-1. 1.]], shape=(5, 2), dtype=float32)\n\n Notice that with Layer Normalization the normalization happens across the\n axes *within* each example, rather than across different examples in the\n batch.\n\n If `scale` or `center` are enabled, the layer will scale the normalized\n outputs by broadcasting them with a trainable variable `gamma`, and center\n the outputs by broadcasting with a trainable variable `beta`. `gamma` will\n default to a ones tensor and `beta` will default to a zeros tensor, so that\n centering and scaling are no-ops before training has begun.\n\n So, with scaling and centering enabled the normalization equations\n are as follows:\n\n Let the intermediate activations for a mini-batch to be the `inputs`.\n\n For each sample `x_i` in `inputs` with `k` features, we compute the mean and\n variance of the sample:\n\n ```python\n mean_i = sum(x_i[j] for j in range(k)) / k\n var_i = sum((x_i[j] - mean_i) ** 2 for j in range(k)) / k\n ```\n\n and then compute a normalized `x_i_normalized`, including a small factor\n `epsilon` for numerical stability.\n\n ```python\n x_i_normalized = (x_i - mean_i) / sqrt(var_i + epsilon)\n ```\n\n And finally `x_i_normalized ` is linearly transformed by `gamma` and `beta`,\n which are learned parameters:\n\n ```python\n output_i = x_i_normalized * gamma + beta\n ```\n\n `gamma` and `beta` will span the axes of `inputs` specified in `axis`, and\n this part of the inputs' shape must be fully defined.\n\n For example:\n\n >>> layer = tf.keras.layers.LayerNormalization(axis=[1, 2, 3])\n >>> layer.build([5, 20, 30, 40])\n >>> print(layer.beta.shape)\n (20, 30, 40)\n >>> print(layer.gamma.shape)\n (20, 30, 40)\n\n Note that other implementations of layer normalization may choose to define\n `gamma` and `beta` over a separate set of axes from the axes being\n normalized across. For example, Group Normalization\n ([Wu et al. 2018](https://arxiv.org/abs/1803.08494)) with group size of 1\n corresponds to a Layer Normalization that normalizes across height, width,\n and channel and has `gamma` and `beta` span only the channel dimension.\n So, this Layer Normalization implementation will not match a Group\n Normalization layer with group size set to 1.\n\n Args:\n axis: Integer or List/Tuple. The axis or axes to normalize across. Typically\n this is the features axis/axes. The left-out axes are typically the batch\n axis/axes. This argument defaults to `-1`, the last dimension in the\n input.\n epsilon: Small float added to variance to avoid dividing by zero. Defaults\n to 1e-3\n center: If True, add offset of `beta` to normalized tensor. If False, `beta`\n is ignored. Defaults to True.\n scale: If True, multiply by `gamma`. If False, `gamma` is not used. Defaults\n to True. When the next layer is linear (also e.g. `nn.relu`), this can be\n disabled since the scaling will be done by the next layer.\n beta_initializer: Initializer for the beta weight. Defaults to zeros.\n gamma_initializer: Initializer for the gamma weight. Defaults to ones.\n beta_regularizer: Optional regularizer for the beta weight. None by default.\n gamma_regularizer: Optional regularizer for the gamma weight. None by\n default.\n beta_constraint: Optional constraint for the beta weight. None by default.\n gamma_constraint: Optional constraint for the gamma weight. None by default.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape` (tuple of\n integers, does not include the samples axis) when using this layer as the\n first layer in a model.\n\n Output shape:\n Same shape as input.\n\n Reference:\n - [Lei Ba et al., 2016](https://arxiv.org/abs/1607.06450).\n ", "desc": "Layer normalization layer (Ba et al., 2016).", "type": "API"}, {"name": "tf.keras.layers.LeakyReLU", "docs": "Leaky version of a Rectified Linear Unit.\n\n It allows a small gradient when the unit is not active:\n\n ```\n f(x) = alpha * x if x < 0\n f(x) = x if x >= 0\n ```\n\n Usage:\n\n >>> layer = tf.keras.layers.LeakyReLU()\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-0.9, -0.3, 0.0, 2.0]\n >>> layer = tf.keras.layers.LeakyReLU(alpha=0.1)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-0.3, -0.1, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha: Float >= 0. Negative slope coefficient. Default to 0.3.\n\n ", "desc": "Leaky version of a Rectified Linear Unit.", "type": "API"}, {"name": "tf.keras.layers.LocallyConnected1D", "docs": "Locally-connected layer for 1D inputs.\n\n The `LocallyConnected1D` layer works similarly to\n the `Conv1D` layer, except that weights are unshared,\n that is, a different set of filters is applied at each different patch\n of the input.\n\n Note: layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n\n Example:\n ```python\n # apply a unshared weight convolution 1d of length 3 to a sequence with\n # 10 timesteps, with 64 output filters\n model = Sequential()\n model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))\n # now model.output_shape == (None, 8, 64)\n # add a new conv1d on top\n model.add(LocallyConnected1D(32, 3))\n # now model.output_shape == (None, 6, 32)\n ```\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of output filters in the convolution).\n kernel_size: An integer or tuple/list of a single integer, specifying the\n length of the 1D convolution window.\n strides: An integer or tuple/list of a single integer, specifying the\n stride length of the convolution.\n padding: Currently only supports `\"valid\"` (case-insensitive). `\"same\"`\n may be supported in the future. `\"valid\"` means no padding.\n data_format: A string, one of `channels_last` (default) or\n `channels_first`. The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape `(batch, length,\n channels)` while `channels_first` corresponds to inputs with shape\n `(batch, channels, length)`. It defaults to the `image_data_format`\n value found in your Keras config file at `~/.keras/keras.json`. If you\n never set it, then it will be \"channels_last\".\n activation: Activation function to use. If you don't specify anything, no\n activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\")..\n kernel_constraint: Constraint function applied to the kernel matrix.\n bias_constraint: Constraint function applied to the bias vector.\n implementation: implementation mode, either `1`, `2`, or `3`. `1` loops\n over input spatial locations to perform the forward pass. It is\n memory-efficient but performs a lot of (small) ops. `2` stores layer\n weights in a dense but sparsely-populated 2D matrix and implements the\n forward pass as a single matrix-multiply. It uses a lot of RAM but\n performs few (large) ops. `3` stores layer weights in a sparse tensor\n and implements the forward pass as a single sparse matrix-multiply.\n How to choose:\n `1`: large, dense models,\n `2`: small models,\n `3`: large, sparse models, where \"large\" stands for large\n input/output activations (i.e. many `filters`, `input_filters`,\n large `input_size`, `output_size`), and \"sparse\" stands for few\n connections between inputs and outputs, i.e. small ratio `filters *\n input_filters * kernel_size / (input_size * strides)`, where inputs\n to and outputs of the layer are assumed to have shapes `(input_size,\n input_filters)`, `(output_size, filters)` respectively. It is\n recommended to benchmark each in the setting of interest to pick the\n most efficient one (in terms of speed and memory usage). Correct\n choice of implementation can lead to dramatic speed improvements\n (e.g. 50X), potentially at the expense of RAM. Also, only\n `padding=\"valid\"` is supported by `implementation=1`.\n Input shape:\n 3D tensor with shape: `(batch_size, steps, input_dim)`\n Output shape:\n 3D tensor with shape: `(batch_size, new_steps, filters)` `steps` value\n might have changed due to padding or strides.\n ", "desc": "Locally-connected layer for 1D inputs.", "type": "API"}, {"name": "tf.keras.layers.LocallyConnected2D", "docs": "Locally-connected layer for 2D inputs.\n\n The `LocallyConnected2D` layer works similarly\n to the `Conv2D` layer, except that weights are unshared,\n that is, a different set of filters is applied at each\n different patch of the input.\n\n Note: layer attributes cannot be modified after the layer has been called\n once (except the `trainable` attribute).\n\n Examples:\n ```python\n # apply a 3x3 unshared weights convolution with 64 output filters on a\n 32x32 image\n # with `data_format=\"channels_last\"`:\n model = Sequential()\n model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))\n # now model.output_shape == (None, 30, 30, 64)\n # notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64\n parameters\n\n # add a 3x3 unshared weights convolution on top, with 32 output filters:\n model.add(LocallyConnected2D(32, (3, 3)))\n # now model.output_shape == (None, 28, 28, 32)\n ```\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the width\n and height of the 2D convolution window. Can be a single integer to\n specify the same value for all spatial dimensions.\n strides: An integer or tuple/list of 2 integers, specifying the strides of\n the convolution along the width and height. Can be a single integer to\n specify the same value for all spatial dimensions.\n padding: Currently only support `\"valid\"` (case-insensitive). `\"same\"`\n will be supported in future. `\"valid\"` means no padding.\n data_format: A string, one of `channels_last` (default) or\n `channels_first`. The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape `(batch, height, width,\n channels)` while `channels_first` corresponds to inputs with shape\n `(batch, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n activation: Activation function to use. If you don't specify anything, no\n activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix.\n bias_initializer: Initializer for the bias vector.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix.\n bias_regularizer: Regularizer function applied to the bias vector.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\").\n kernel_constraint: Constraint function applied to the kernel matrix.\n bias_constraint: Constraint function applied to the bias vector.\n implementation: implementation mode, either `1`, `2`, or `3`. `1` loops\n over input spatial locations to perform the forward pass. It is\n memory-efficient but performs a lot of (small) ops. `2` stores layer\n weights in a dense but sparsely-populated 2D matrix and implements the\n forward pass as a single matrix-multiply. It uses a lot of RAM but\n performs few (large) ops. `3` stores layer weights in a sparse tensor\n and implements the forward pass as a single sparse matrix-multiply.\n How to choose:\n `1`: large, dense models,\n `2`: small models,\n `3`: large, sparse models, where \"large\" stands for large\n input/output activations (i.e. many `filters`, `input_filters`,\n large `np.prod(input_size)`, `np.prod(output_size)`), and \"sparse\"\n stands for few connections between inputs and outputs, i.e. small\n ratio `filters * input_filters * np.prod(kernel_size) /\n (np.prod(input_size) * np.prod(strides))`, where inputs to and\n outputs of the layer are assumed to have shapes `input_size +\n (input_filters,)`, `output_size + (filters,)` respectively. It is\n recommended to benchmark each in the setting of interest to pick the\n most efficient one (in terms of speed and memory usage). Correct\n choice of implementation can lead to dramatic speed improvements\n (e.g. 50X), potentially at the expense of RAM. Also, only\n `padding=\"valid\"` is supported by `implementation=1`.\n Input shape:\n 4D tensor with shape: `(samples, channels, rows, cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, rows, cols, channels)` if\n data_format='channels_last'.\n Output shape:\n 4D tensor with shape: `(samples, filters, new_rows, new_cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, new_rows, new_cols, filters)` if\n data_format='channels_last'. `rows` and `cols` values might have changed\n due to padding.\n ", "desc": "Locally-connected layer for 2D inputs.", "type": "API"}, {"name": "tf.keras.layers.LSTM", "docs": "Long Short-Term Memory layer - Hochreiter 1997.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Based on available runtime hardware and constraints, this layer\n will choose different implementations (cuDNN-based or pure-TensorFlow)\n to maximize the performance. If a GPU is available and all\n the arguments to the layer meet the requirement of the cuDNN kernel\n (see below for details), the layer will use a fast cuDNN implementation.\n\n The requirements to use the cuDNN implementation are:\n\n 1. `activation` == `tanh`\n 2. `recurrent_activation` == `sigmoid`\n 3. `recurrent_dropout` == 0\n 4. `unroll` is `False`\n 5. `use_bias` is `True`\n 6. Inputs, if use masking, are strictly right-padded.\n 7. Eager execution is enabled in the outermost context.\n\n For example:\n\n >>> inputs = tf.random.normal([32, 10, 8])\n >>> lstm = tf.keras.layers.LSTM(4)\n >>> output = lstm(inputs)\n >>> print(output.shape)\n (32, 4)\n >>> lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)\n >>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs)\n >>> print(whole_seq_output.shape)\n (32, 10, 4)\n >>> print(final_memory_state.shape)\n (32, 4)\n >>> print(final_carry_state.shape)\n (32, 4)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`). If you pass `None`, no activation\n is applied (ie. \"linear\" activation: `a(x) = x`).\n recurrent_activation: Activation function to use for the recurrent step.\n Default: sigmoid (`sigmoid`). If you pass `None`, no activation is\n applied (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs. Default: `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n unit_forget_bias: Boolean (default `True`). If True, add 1 to the bias of\n the forget gate at initialization. Setting it to true will also force\n `bias_initializer=\"zeros\"`. This is recommended in [Jozefowicz et\n al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\"). Default: `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n return_sequences: Boolean. Whether to return the last output. in the output\n sequence, or the full sequence. Default: `False`.\n return_state: Boolean. Whether to return the last state in addition to the\n output. Default: `False`.\n go_backwards: Boolean (default `False`). If True, process the input sequence\n backwards and return the reversed sequence.\n stateful: Boolean (default `False`). If True, the last state for each sample\n at index i in a batch will be used as initial state for the sample of\n index i in the following batch.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `[timesteps, batch, feature]`, whereas in the False case, it will be\n `[batch, timesteps, feature]`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n unroll: Boolean (default `False`). If True, the network will be unrolled,\n else a symbolic loop will be used. Unrolling can speed-up a RNN, although\n it tends to be more memory-intensive. Unrolling is only suitable for short\n sequences.\n\n Call arguments:\n inputs: A 3D tensor with shape `[batch, timesteps, feature]`.\n mask: Binary tensor of shape `[batch, timesteps]` indicating whether\n a given timestep should be masked (optional, defaults to `None`).\n An individual `True` entry indicates that the corresponding timestep\n should be utilized, while a `False` entry indicates that the corresponding\n timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used (optional, defaults to `None`).\n initial_state: List of initial state tensors to be passed to the first\n call of the cell (optional, defaults to `None` which causes creation\n of zero-filled initial state tensors).\n ", "desc": "Long Short-Term Memory layer - Hochreiter 1997.", "type": "API"}, {"name": "tf.keras.layers.LSTMCell", "docs": "Cell class for the LSTM layer.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This class processes one step within the whole time sequence input, whereas\n `tf.keras.layer.LSTM` processes the whole sequence.\n\n For example:\n\n >>> inputs = tf.random.normal([32, 10, 8])\n >>> rnn = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(4))\n >>> output = rnn(inputs)\n >>> print(output.shape)\n (32, 4)\n >>> rnn = tf.keras.layers.RNN(\n ... tf.keras.layers.LSTMCell(4),\n ... return_sequences=True,\n ... return_state=True)\n >>> whole_seq_output, final_memory_state, final_carry_state = rnn(inputs)\n >>> print(whole_seq_output.shape)\n (32, 10, 4)\n >>> print(final_memory_state.shape)\n (32, 4)\n >>> print(final_carry_state.shape)\n (32, 4)\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use. Default: hyperbolic tangent\n (`tanh`). If you pass `None`, no activation is applied (ie. \"linear\"\n activation: `a(x) = x`).\n recurrent_activation: Activation function to use for the recurrent step.\n Default: sigmoid (`sigmoid`). If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix, used for\n the linear transformation of the inputs. Default: `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel` weights\n matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n unit_forget_bias: Boolean (default `True`). If True, add 1 to the bias of\n the forget gate at initialization. Setting it to true will also force\n `bias_initializer=\"zeros\"`. This is recommended in [Jozefowicz et\n al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to\n the `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n\n Call arguments:\n inputs: A 2D tensor, with shape of `[batch, feature]`.\n states: List of 2 tensors that corresponding to the cell's units. Both of\n them have shape `[batch, units]`, the first tensor is the memory state\n from previous time step, the second tensor is the carry state from\n previous time step. For timestep 0, the initial state provided by user\n will be feed to cell.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n ", "desc": "Cell class for the LSTM layer.", "type": "API"}, {"name": "tf.keras.layers.Masking", "docs": "Masks a sequence by using a mask value to skip timesteps.\n\n For each timestep in the input tensor (dimension #1 in the tensor),\n if all values in the input tensor at that timestep\n are equal to `mask_value`, then the timestep will be masked (skipped)\n in all downstream layers (as long as they support masking).\n\n If any downstream layer does not support masking yet receives such\n an input mask, an exception will be raised.\n\n Example:\n\n Consider a Numpy data array `x` of shape `(samples, timesteps, features)`,\n to be fed to an LSTM layer. You want to mask timestep #3 and #5 because you\n lack data for these timesteps. You can:\n\n - Set `x[:, 3, :] = 0.` and `x[:, 5, :] = 0.`\n - Insert a `Masking` layer with `mask_value=0.` before the LSTM layer:\n\n ```python\n samples, timesteps, features = 32, 10, 8\n inputs = np.random.random([samples, timesteps, features]).astype(np.float32)\n inputs[:, 3, :] = 0.\n inputs[:, 5, :] = 0.\n\n model = tf.keras.models.Sequential()\n model.add(tf.keras.layers.Masking(mask_value=0.,\n input_shape=(timesteps, features)))\n model.add(tf.keras.layers.LSTM(32))\n\n output = model(inputs)\n # The time step 3 and 5 will be skipped from LSTM calculation.\n ```\n\n See [the masking and padding guide](\n https://www.tensorflow.org/guide/keras/masking_and_padding)\n for more details.\n ", "desc": "Masks a sequence by using a mask value to skip timesteps.", "type": "API"}, {"name": "tf.keras.layers.Maximum", "docs": "Layer that computes the maximum (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Maximum()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> maxed = tf.keras.layers.Maximum()([x1, x2])\n >>> maxed.shape\n TensorShape([5, 8])\n ", "desc": "Layer that computes the maximum (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.keras.layers.MaxPool1D", "docs": "Max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over a\n spatial window of size `pool_size`. The window is shifted by `strides`. The\n resulting output, when using the `\"valid\"` padding option, has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = input_shape / strides`\n\n For example, for `strides=1` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=2` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=1` and `padding=\"same\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> max_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the max pooling window.\n strides: Integer, or None. Specifies how much the pooling window moves\n for each pooling step.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.keras.layers.MaxPool2D", "docs": "Max pooling operation for 2D spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output,\n when using the `\"valid\"` padding option, has a spatial shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> max_pool_2d(x)\n \n\n For example, for `strides=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> max_pool_2d(x)\n \n\n Usage Example:\n\n >>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]],\n ... [[2.], [2.], [3.], [2.]],\n ... [[4.], [1.], [1.], [1.]],\n ... [[2.], [2.], [1.], [4.]]]])\n >>> output = tf.constant([[[[1], [0]],\n ... [[0], [1]]]])\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... input_shape=(4, 4, 1)))\n >>> model.compile('adam', 'mean_squared_error')\n >>> model.predict(input_image, steps=1)\n array([[[[2.],\n [4.]],\n [[4.],\n [4.]]]], dtype=float32)\n\n For example, for stride=(1, 1) and padding=\"same\":\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> max_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n window size over which to take the maximum.\n `(2, 2)` will take the max value over a 2x2 pooling window.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values. Specifies how far the pooling window moves\n for each pooling step. If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n\n Returns:\n A tensor of rank 4 representing the maximum pooled values. See above for\n output shape.\n ", "desc": "Max pooling operation for 2D spatial data.", "type": "API"}, {"name": "tf.keras.layers.MaxPool3D", "docs": "Max pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: Tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.MaxPooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Max pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.layers.MaxPooling1D", "docs": "Max pooling operation for 1D temporal data.\n\n Downsamples the input representation by taking the maximum value over a\n spatial window of size `pool_size`. The window is shifted by `strides`. The\n resulting output, when using the `\"valid\"` padding option, has a shape of:\n `output_shape = (input_shape - pool_size + 1) / strides)`\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = input_shape / strides`\n\n For example, for `strides=1` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=2` and `padding=\"valid\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=2, padding='valid')\n >>> max_pool_1d(x)\n \n\n For example, for `strides=1` and `padding=\"same\"`:\n\n >>> x = tf.constant([1., 2., 3., 4., 5.])\n >>> x = tf.reshape(x, [1, 5, 1])\n >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2,\n ... strides=1, padding='same')\n >>> max_pool_1d(x)\n \n\n Args:\n pool_size: Integer, size of the max pooling window.\n strides: Integer, or None. Specifies how much the pooling window moves\n for each pooling step.\n If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, steps, features)` while `channels_first`\n corresponds to inputs with shape\n `(batch, features, steps)`.\n\n Input shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, steps)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 3D tensor with shape `(batch_size, downsampled_steps, features)`.\n - If `data_format='channels_first'`:\n 3D tensor with shape `(batch_size, features, downsampled_steps)`.\n ", "desc": "Max pooling operation for 1D temporal data.", "type": "API"}, {"name": "tf.keras.layers.MaxPooling2D", "docs": "Max pooling operation for 2D spatial data.\n\n Downsamples the input along its spatial dimensions (height and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n The resulting output,\n when using the `\"valid\"` padding option, has a spatial shape\n (number of rows or columns) of:\n `output_shape = math.floor((input_shape - pool_size) / strides) + 1`\n (when `input_shape >= pool_size`)\n\n The resulting output shape when using the `\"same\"` padding option is:\n `output_shape = math.floor((input_shape - 1) / strides) + 1`\n\n For example, for `strides=(1, 1)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='valid')\n >>> max_pool_2d(x)\n \n\n For example, for `strides=(2, 2)` and `padding=\"valid\"`:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = tf.reshape(x, [1, 3, 4, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(2, 2), padding='valid')\n >>> max_pool_2d(x)\n \n\n Usage Example:\n\n >>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]],\n ... [[2.], [2.], [3.], [2.]],\n ... [[4.], [1.], [1.], [1.]],\n ... [[2.], [2.], [1.], [4.]]]])\n >>> output = tf.constant([[[[1], [0]],\n ... [[0], [1]]]])\n >>> model = tf.keras.models.Sequential()\n >>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... input_shape=(4, 4, 1)))\n >>> model.compile('adam', 'mean_squared_error')\n >>> model.predict(input_image, steps=1)\n array([[[[2.],\n [4.]],\n [[4.],\n [4.]]]], dtype=float32)\n\n For example, for stride=(1, 1) and padding=\"same\":\n\n >>> x = tf.constant([[1., 2., 3.],\n ... [4., 5., 6.],\n ... [7., 8., 9.]])\n >>> x = tf.reshape(x, [1, 3, 3, 1])\n >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),\n ... strides=(1, 1), padding='same')\n >>> max_pool_2d(x)\n \n\n Args:\n pool_size: integer or tuple of 2 integers,\n window size over which to take the maximum.\n `(2, 2)` will take the max value over a 2x2 pooling window.\n If only one integer is specified, the same window length\n will be used for both dimensions.\n strides: Integer, tuple of 2 integers, or None.\n Strides values. Specifies how far the pooling window moves\n for each pooling step. If None, it will default to `pool_size`.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, rows, cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, rows, cols)`.\n\n Output shape:\n - If `data_format='channels_last'`:\n 4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.\n - If `data_format='channels_first'`:\n 4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.\n\n Returns:\n A tensor of rank 4 representing the maximum pooled values. See above for\n output shape.\n ", "desc": "Max pooling operation for 2D spatial data.", "type": "API"}, {"name": "tf.keras.layers.MaxPooling3D", "docs": "Max pooling operation for 3D data (spatial or spatio-temporal).\n\n Downsamples the input along its spatial dimensions (depth, height, and width)\n by taking the maximum value over an input window\n (of size defined by `pool_size`) for each channel of the input.\n The window is shifted by `strides` along each dimension.\n\n Args:\n pool_size: Tuple of 3 integers,\n factors by which to downscale (dim1, dim2, dim3).\n `(2, 2, 2)` will halve the size of the 3D input in each dimension.\n strides: tuple of 3 integers, or None. Strides values.\n padding: One of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`\n\n Output shape:\n - If `data_format='channels_last'`:\n 5D tensor with shape:\n `(batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)`\n - If `data_format='channels_first'`:\n 5D tensor with shape:\n `(batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)`\n\n Example:\n\n ```python\n depth = 30\n height = 30\n width = 30\n input_channels = 3\n\n inputs = tf.keras.Input(shape=(depth, height, width, input_channels))\n layer = tf.keras.layers.MaxPooling3D(pool_size=3)\n outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)\n ```\n ", "desc": "Max pooling operation for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.layers.Minimum", "docs": "Layer that computes the minimum (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Minimum()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> minned = tf.keras.layers.Minimum()([x1, x2])\n >>> minned.shape\n TensorShape([5, 8])\n ", "desc": "Layer that computes the minimum (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.keras.layers.MultiHeadAttention", "docs": "MultiHeadAttention layer.\n\n This is an implementation of multi-headed attention as described in the paper\n \"Attention is all you Need\" (Vaswani et al., 2017).\n If `query`, `key,` `value` are the same, then\n this is self-attention. Each timestep in `query` attends to the\n corresponding sequence in `key`, and returns a fixed-width vector.\n\n This layer first projects `query`, `key` and `value`. These are\n (effectively) a list of tensors of length `num_attention_heads`, where the\n corresponding shapes are `(batch_size, , key_dim)`,\n `(batch_size, , key_dim)`,\n `(batch_size, , value_dim)`.\n\n Then, the query and key tensors are dot-producted and scaled. These are\n softmaxed to obtain attention probabilities. The value tensors are then\n interpolated by these probabilities, then concatenated back to a single\n tensor.\n\n Finally, the result tensor with the last dimension as value_dim can take an\n linear projection and return.\n\n When using MultiHeadAttention inside a custom Layer, the custom Layer must\n implement `build()` and call MultiHeadAttention's `_build_from_signature()`.\n This enables weights to be restored correctly when the model is loaded.\n TODO(b/172609172): link to documentation about calling custom build functions\n when used in a custom Layer.\n\n Examples:\n\n Performs 1D cross-attention over two sequence inputs with an attention mask.\n Returns the additional attention weights over heads.\n\n >>> layer = MultiHeadAttention(num_heads=2, key_dim=2)\n >>> target = tf.keras.Input(shape=[8, 16])\n >>> source = tf.keras.Input(shape=[4, 16])\n >>> output_tensor, weights = layer(target, source,\n ... return_attention_scores=True)\n >>> print(output_tensor.shape)\n (None, 8, 16)\n >>> print(weights.shape)\n (None, 2, 8, 4)\n\n Performs 2D self-attention over a 5D input tensor on axes 2 and 3.\n\n >>> layer = MultiHeadAttention(num_heads=2, key_dim=2, attention_axes=(2, 3))\n >>> input_tensor = tf.keras.Input(shape=[5, 3, 4, 16])\n >>> output_tensor = layer(input_tensor, input_tensor)\n >>> print(output_tensor.shape)\n (None, 5, 3, 4, 16)\n\n Args:\n num_heads: Number of attention heads.\n key_dim: Size of each attention head for query and key.\n value_dim: Size of each attention head for value.\n dropout: Dropout probability.\n use_bias: Boolean, whether the dense layers use bias vectors/matrices.\n output_shape: The expected shape of an output tensor, besides the batch and\n sequence dims. If not specified, projects back to the key feature dim.\n attention_axes: axes over which the attention is applied. `None` means\n attention over all axes, but batch, heads, and features.\n kernel_initializer: Initializer for dense layer kernels.\n bias_initializer: Initializer for dense layer biases.\n kernel_regularizer: Regularizer for dense layer kernels.\n bias_regularizer: Regularizer for dense layer biases.\n activity_regularizer: Regularizer for dense layer activity.\n kernel_constraint: Constraint for dense layer kernels.\n bias_constraint: Constraint for dense layer kernels.\n\n Call arguments:\n query: Query `Tensor` of shape `(B, T, dim)`.\n value: Value `Tensor` of shape `(B, S, dim)`.\n key: Optional key `Tensor` of shape `(B, S, dim)`. If not given, will use\n `value` for both `key` and `value`, which is the most common case.\n attention_mask: a boolean mask of shape `(B, T, S)`, that prevents\n attention to certain positions. The boolean mask specifies which query\n elements can attend to which key elements, 1 indicates attention and 0\n indicates no attention. Broadcasting can happen for the missing batch\n dimensions and the head dimension.\n return_attention_scores: A boolean to indicate whether the output should\n be `(attention_output, attention_scores)` if `True`, or `attention_output`\n if `False`. Defaults to `False`.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (no dropout).\n Defaults to either using the training mode of the parent layer/model,\n or False (inference) if there is no parent layer.\n\n Returns:\n attention_output: The result of the computation, of shape `(B, T, E)`,\n where `T` is for target sequence shapes and `E` is the query input last\n dimension if `output_shape` is `None`. Otherwise, the multi-head outputs\n are project to the shape specified by `output_shape`.\n attention_scores: [Optional] multi-head attention coefficients over\n attention axes.\n ", "desc": "MultiHeadAttention layer.", "type": "API"}, {"name": "tf.keras.layers.Multiply", "docs": "Layer that multiplies (element-wise) a list of inputs.\n\n It takes as input a list of tensors, all of the same shape, and returns\n a single tensor (also of the same shape).\n\n >>> tf.keras.layers.Multiply()([np.arange(5).reshape(5, 1),\n ... np.arange(5, 10).reshape(5, 1)])\n \n\n >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))\n >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))\n >>> multiplied = tf.keras.layers.Multiply()([x1, x2])\n >>> multiplied.shape\n TensorShape([5, 8])\n ", "desc": "Layer that multiplies (element-wise) a list of inputs.", "type": "API"}, {"name": "tf.keras.layers.Permute", "docs": "Permutes the dimensions of the input according to a given pattern.\n\n Useful e.g. connecting RNNs and convnets.\n\n Example:\n\n ```python\n model = Sequential()\n model.add(Permute((2, 1), input_shape=(10, 64)))\n # now: model.output_shape == (None, 64, 10)\n # note: `None` is the batch dimension\n ```\n\n Args:\n dims: Tuple of integers. Permutation pattern does not include the\n samples dimension. Indexing starts at 1.\n For instance, `(2, 1)` permutes the first and second dimensions\n of the input.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same as the input shape, but with the dimensions re-ordered according\n to the specified pattern.\n ", "desc": "Permutes the dimensions of the input according to a given pattern.", "type": "API"}, {"name": "tf.keras.layers.PReLU", "docs": "Parametric Rectified Linear Unit.\n\n It follows:\n\n ```\n f(x) = alpha * x for x < 0\n f(x) = x for x >= 0\n ```\n\n where `alpha` is a learned array with the same shape as x.\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n alpha_initializer: Initializer function for the weights.\n alpha_regularizer: Regularizer for the weights.\n alpha_constraint: Constraint for the weights.\n shared_axes: The axes along which to share learnable\n parameters for the activation function.\n For example, if the incoming feature maps\n are from a 2D convolution\n with output shape `(batch, height, width, channels)`,\n and you wish to share parameters across space\n so that each filter only has one set of parameters,\n set `shared_axes=[1, 2]`.\n ", "desc": "Parametric Rectified Linear Unit.", "type": "API"}, {"name": "tf.keras.layers.ReLU", "docs": "Rectified Linear Unit activation function.\n\n With default values, it returns element-wise `max(x, 0)`.\n\n Otherwise, it follows:\n\n ```\n f(x) = max_value if x >= max_value\n f(x) = x if threshold <= x < max_value\n f(x) = negative_slope * (x - threshold) otherwise\n ```\n\n Usage:\n\n >>> layer = tf.keras.layers.ReLU()\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.ReLU(max_value=1.0)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 1.0]\n >>> layer = tf.keras.layers.ReLU(negative_slope=1.0)\n >>> output = layer([-3.0, -1.0, 0.0, 2.0])\n >>> list(output.numpy())\n [-3.0, -1.0, 0.0, 2.0]\n >>> layer = tf.keras.layers.ReLU(threshold=1.5)\n >>> output = layer([-3.0, -1.0, 1.0, 2.0])\n >>> list(output.numpy())\n [0.0, 0.0, 0.0, 2.0]\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the batch axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n max_value: Float >= 0. Maximum activation value. Default to None, which\n means unlimited.\n negative_slope: Float >= 0. Negative slope coefficient. Default to 0.\n threshold: Float >= 0. Threshold value for thresholded activation. Default\n to 0.\n ", "desc": "Rectified Linear Unit activation function.", "type": "API"}, {"name": "tf.keras.layers.RepeatVector", "docs": "Repeats the input n times.\n\n Example:\n\n ```python\n model = Sequential()\n model.add(Dense(32, input_dim=32))\n # now: model.output_shape == (None, 32)\n # note: `None` is the batch dimension\n\n model.add(RepeatVector(3))\n # now: model.output_shape == (None, 3, 32)\n ```\n\n Args:\n n: Integer, repetition factor.\n Input shape: 2D tensor of shape `(num_samples, features)`.\n Output shape: 3D tensor of shape `(num_samples, n, features)`.\n ", "desc": "Repeats the input n times.", "type": "API"}, {"name": "tf.keras.layers.Reshape", "docs": "Layer that reshapes inputs into the given shape.\n\n Input shape:\n Arbitrary, although all dimensions in the input shape must be known/fixed.\n Use the keyword argument `input_shape` (tuple of integers, does not include\n the samples/batch size axis) when using this layer as the first layer\n in a model.\n\n Output shape:\n `(batch_size,) + target_shape`\n\n Example:\n\n >>> # as first layer in a Sequential model\n >>> model = tf.keras.Sequential()\n >>> model.add(tf.keras.layers.Reshape((3, 4), input_shape=(12,)))\n >>> # model.output_shape == (None, 3, 4), `None` is the batch size.\n >>> model.output_shape\n (None, 3, 4)\n\n >>> # as intermediate layer in a Sequential model\n >>> model.add(tf.keras.layers.Reshape((6, 2)))\n >>> model.output_shape\n (None, 6, 2)\n\n >>> # also supports shape inference using `-1` as dimension\n >>> model.add(tf.keras.layers.Reshape((-1, 2, 2)))\n >>> model.output_shape\n (None, 3, 2, 2)\n ", "desc": "Layer that reshapes inputs into the given shape.", "type": "API"}, {"name": "tf.keras.layers.RNN", "docs": "Base class for recurrent layers.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Args:\n cell: A RNN cell instance or a list of RNN cell instances.\n A RNN cell is a class that has:\n - A `call(input_at_t, states_at_t)` method, returning\n `(output_at_t, states_at_t_plus_1)`. The call method of the\n cell can also take the optional argument `constants`, see\n section \"Note on passing external constants\" below.\n - A `state_size` attribute. This can be a single integer\n (single state) in which case it is the size of the recurrent\n state. This can also be a list/tuple of integers (one size per state).\n The `state_size` can also be TensorShape or tuple/list of\n TensorShape, to represent high dimension state.\n - A `output_size` attribute. This can be a single integer or a\n TensorShape, which represent the shape of the output. For backward\n compatible reason, if this attribute is not available for the\n cell, the value will be inferred by the first element of the\n `state_size`.\n - A `get_initial_state(inputs=None, batch_size=None, dtype=None)`\n method that creates a tensor meant to be fed to `call()` as the\n initial state, if the user didn't specify any initial state via other\n means. The returned initial state should have a shape of\n [batch_size, cell.state_size]. The cell might choose to create a\n tensor full of zeros, or full of other values based on the cell's\n implementation.\n `inputs` is the input tensor to the RNN layer, which should\n contain the batch size as its shape[0], and also dtype. Note that\n the shape[0] might be `None` during the graph construction. Either\n the `inputs` or the pair of `batch_size` and `dtype` are provided.\n `batch_size` is a scalar tensor that represents the batch size\n of the inputs. `dtype` is `tf.DType` that represents the dtype of\n the inputs.\n For backward compatibility, if this method is not implemented\n by the cell, the RNN layer will create a zero filled tensor with the\n size of [batch_size, cell.state_size].\n In the case that `cell` is a list of RNN cell instances, the cells\n will be stacked on top of each other in the RNN, resulting in an\n efficient stacked RNN.\n return_sequences: Boolean (default `False`). Whether to return the last\n output in the output sequence, or the full sequence.\n return_state: Boolean (default `False`). Whether to return the last state\n in addition to the output.\n go_backwards: Boolean (default `False`).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default `False`). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default `False`).\n If True, the network will be unrolled, else a symbolic loop will be used.\n Unrolling can speed-up a RNN, although it tends to be more\n memory-intensive. Unrolling is only suitable for short sequences.\n time_major: The shape format of the `inputs` and `outputs` tensors.\n If True, the inputs and outputs will be in shape\n `(timesteps, batch, ...)`, whereas in the False case, it will be\n `(batch, timesteps, ...)`. Using `time_major = True` is a bit more\n efficient because it avoids transposes at the beginning and end of the\n RNN calculation. However, most TensorFlow data is batch-major, so by\n default this function accepts input and emits output in batch-major\n form.\n zero_output_for_mask: Boolean (default `False`).\n Whether the output should use zeros for the masked timesteps. Note that\n this field is only used when `return_sequences` is True and mask is\n provided. It can useful if you want to reuse the raw output sequence of\n the RNN without interference from the masked timesteps, eg, merging\n bidirectional RNNs.\n\n Call arguments:\n inputs: Input tensor.\n mask: Binary tensor of shape `[batch_size, timesteps]` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False`\n entry indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is for use with cells that use dropout.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n constants: List of constant tensors to be passed to the cell at each\n timestep.\n\n Input shape:\n N-D tensor with shape `[batch_size, timesteps, ...]` or\n `[timesteps, batch_size, ...]` when time_major is True.\n\n Output shape:\n - If `return_state`: a list of tensors. The first tensor is\n the output. The remaining tensors are the last states,\n each with shape `[batch_size, state_size]`, where `state_size` could\n be a high dimension tensor shape.\n - If `return_sequences`: N-D tensor with shape\n `[batch_size, timesteps, output_size]`, where `output_size` could\n be a high dimension tensor shape, or\n `[timesteps, batch_size, output_size]` when `time_major` is True.\n - Else, N-D tensor with shape `[batch_size, output_size]`, where\n `output_size` could be a high dimension tensor shape.\n\n Masking:\n This layer supports masking for input data with a variable number\n of timesteps. To introduce masks to your data,\n use an [tf.keras.layers.Embedding] layer with the `mask_zero` parameter\n set to `True`.\n\n Note on using statefulness in RNNs:\n You can set RNN layers to be 'stateful', which means that the states\n computed for the samples in one batch will be reused as initial states\n for the samples in the next batch. This assumes a one-to-one mapping\n between samples in different successive batches.\n\n To enable statefulness:\n - Specify `stateful=True` in the layer constructor.\n - Specify a fixed batch size for your model, by passing\n If sequential model:\n `batch_input_shape=(...)` to the first layer in your model.\n Else for functional model with 1 or more Input layers:\n `batch_shape=(...)` to all the first layers in your model.\n This is the expected shape of your inputs\n *including the batch size*.\n It should be a tuple of integers, e.g. `(32, 10, 100)`.\n - Specify `shuffle=False` when calling `fit()`.\n\n To reset the states of your model, call `.reset_states()` on either\n a specific layer, or on your entire model.\n\n Note on specifying the initial state of RNNs:\n You can specify the initial state of RNN layers symbolically by\n calling them with the keyword argument `initial_state`. The value of\n `initial_state` should be a tensor or list of tensors representing\n the initial state of the RNN layer.\n\n You can specify the initial state of RNN layers numerically by\n calling `reset_states` with the keyword argument `states`. The value of\n `states` should be a numpy array or list of numpy arrays representing\n the initial state of the RNN layer.\n\n Note on passing external constants to RNNs:\n You can pass \"external\" constants to the cell using the `constants`\n keyword argument of `RNN.__call__` (as well as `RNN.call`) method. This\n requires that the `cell.call` method accepts the same keyword argument\n `constants`. Such constants can be used to condition the cell\n transformation on additional static inputs (not changing over time),\n a.k.a. an attention mechanism.\n\n Examples:\n\n ```python\n # First, let's define a RNN Cell, as a layer subclass.\n\n class MinimalRNNCell(keras.layers.Layer):\n\n def __init__(self, units, **kwargs):\n self.units = units\n self.state_size = units\n super(MinimalRNNCell, self).__init__(**kwargs)\n\n def build(self, input_shape):\n self.kernel = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='uniform',\n name='kernel')\n self.recurrent_kernel = self.add_weight(\n shape=(self.units, self.units),\n initializer='uniform',\n name='recurrent_kernel')\n self.built = True\n\n def call(self, inputs, states):\n prev_output = states[0]\n h = backend.dot(inputs, self.kernel)\n output = h + backend.dot(prev_output, self.recurrent_kernel)\n return output, [output]\n\n # Let's use this cell in a RNN layer:\n\n cell = MinimalRNNCell(32)\n x = keras.Input((None, 5))\n layer = RNN(cell)\n y = layer(x)\n\n # Here's how to use the cell to build a stacked RNN:\n\n cells = [MinimalRNNCell(32), MinimalRNNCell(64)]\n x = keras.Input((None, 5))\n layer = RNN(cells)\n y = layer(x)\n ```\n ", "desc": "Base class for recurrent layers.", "type": "API"}, {"name": "tf.keras.layers.SeparableConv1D", "docs": "Depthwise separable 1D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"`, or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input. `\"causal\"` results in causal\n (dilated) convolutions, e.g. `output[t]` does not depend on `input[t+1:]`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel (see `keras.regularizers`).\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel (see `keras.regularizers`).\n bias_regularizer: Optional regularizer for the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Optional regularizer function for the output\n (see `keras.regularizers`).\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training\n (see `keras.constraints`).\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`\n (see `keras.constraints`).\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`\n (see `keras.constraints`).\n trainable: Boolean, if `True` the weights of this layer will be marked as\n trainable (and listed in `layer.trainable_weights`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, channels, steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, steps, channels)` if data_format='channels_last'.\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, filters, new_steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, new_steps, filters)` if data_format='channels_last'.\n `new_steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(separableconv1d(inputs, kernel) + bias)`.\n ", "desc": "Depthwise separable 1D convolution.", "type": "API"}, {"name": "tf.keras.layers.SeparableConv2D", "docs": "Depthwise separable 2D convolution.\n\n Separable convolutions consist of first performing\n a depthwise spatial convolution\n (which acts on each input channel separately)\n followed by a pointwise convolution which mixes the resulting\n output channels. The `depth_multiplier` argument controls how many\n output channels are generated per input channel in the depthwise step.\n\n Intuitively, separable convolutions can be understood as\n a way to factorize a convolution kernel into two smaller kernels,\n or as an extreme version of an Inception block.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions. Current implementation only supports equal\n length strides in the row and column dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels\n for each input channel.\n The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Regularizer function applied to\n the depthwise kernel matrix (see `keras.regularizers`).\n pointwise_regularizer: Regularizer function applied to\n the pointwise kernel matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to\n the depthwise kernel matrix\n (see `keras.constraints`).\n pointwise_constraint: Constraint function applied to\n the pointwise kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(separableconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ", "desc": "Depthwise separable 2D convolution.", "type": "API"}, {"name": "tf.keras.layers.SeparableConvolution1D", "docs": "Depthwise separable 1D convolution.\n\n This layer performs a depthwise convolution that acts separately on\n channels, followed by a pointwise convolution that mixes channels.\n If `use_bias` is True and a bias initializer is provided,\n it adds a bias vector to the output.\n It then optionally applies an activation function to produce the final output.\n\n Args:\n filters: Integer, the dimensionality of the output space (i.e. the number\n of filters in the convolution).\n kernel_size: A single integer specifying the spatial\n dimensions of the filters.\n strides: A single integer specifying the strides\n of the convolution.\n Specifying any `stride` value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: One of `\"valid\"`, `\"same\"`, or `\"causal\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input. `\"causal\"` results in causal\n (dilated) convolutions, e.g. `output[t]` does not depend on `input[t+1:]`.\n data_format: A string, one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, length, channels)` while `channels_first` corresponds to\n inputs with shape `(batch_size, channels, length)`.\n dilation_rate: A single integer, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels for\n each input channel. The total number of depthwise convolution output\n channels will be equal to `num_filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Optional regularizer for the depthwise\n convolution kernel (see `keras.regularizers`).\n pointwise_regularizer: Optional regularizer for the pointwise\n convolution kernel (see `keras.regularizers`).\n bias_regularizer: Optional regularizer for the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Optional regularizer function for the output\n (see `keras.regularizers`).\n depthwise_constraint: Optional projection function to be applied to the\n depthwise kernel after being updated by an `Optimizer` (e.g. used for\n norm constraints or value constraints for layer weights). The function\n must take as input the unprojected variable and must return the\n projected variable (which must have the same shape). Constraints are\n not safe to use when doing asynchronous distributed training\n (see `keras.constraints`).\n pointwise_constraint: Optional projection function to be applied to the\n pointwise kernel after being updated by an `Optimizer`\n (see `keras.constraints`).\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`\n (see `keras.constraints`).\n trainable: Boolean, if `True` the weights of this layer will be marked as\n trainable (and listed in `layer.trainable_weights`).\n\n Input shape:\n 3D tensor with shape:\n `(batch_size, channels, steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, steps, channels)` if data_format='channels_last'.\n\n Output shape:\n 3D tensor with shape:\n `(batch_size, filters, new_steps)` if data_format='channels_first'\n or 3D tensor with shape:\n `(batch_size, new_steps, filters)` if data_format='channels_last'.\n `new_steps` value might have changed due to padding or strides.\n\n Returns:\n A tensor of rank 3 representing\n `activation(separableconv1d(inputs, kernel) + bias)`.\n ", "desc": "Depthwise separable 1D convolution.", "type": "API"}, {"name": "tf.keras.layers.SeparableConvolution2D", "docs": "Depthwise separable 2D convolution.\n\n Separable convolutions consist of first performing\n a depthwise spatial convolution\n (which acts on each input channel separately)\n followed by a pointwise convolution which mixes the resulting\n output channels. The `depth_multiplier` argument controls how many\n output channels are generated per input channel in the depthwise step.\n\n Intuitively, separable convolutions can be understood as\n a way to factorize a convolution kernel into two smaller kernels,\n or as an extreme version of an Inception block.\n\n Args:\n filters: Integer, the dimensionality of the output space\n (i.e. the number of output filters in the convolution).\n kernel_size: An integer or tuple/list of 2 integers, specifying the\n height and width of the 2D convolution window.\n Can be a single integer to specify the same value for\n all spatial dimensions.\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n all spatial dimensions. Current implementation only supports equal\n length strides in the row and column dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n padding: one of `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding with zeros evenly\n to the left/right or up/down of the input such that output has the same\n height/width dimension as the input.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n dilation_rate: An integer or tuple/list of 2 integers, specifying\n the dilation rate to use for dilated convolution.\n depth_multiplier: The number of depthwise convolution output channels\n for each input channel.\n The total number of depthwise convolution output\n channels will be equal to `filters_in * depth_multiplier`.\n activation: Activation function to use.\n If you don't specify anything, no activation is applied\n (see `keras.activations`).\n use_bias: Boolean, whether the layer uses a bias vector.\n depthwise_initializer: An initializer for the depthwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel\n (see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n depthwise_regularizer: Regularizer function applied to\n the depthwise kernel matrix (see `keras.regularizers`).\n pointwise_regularizer: Regularizer function applied to\n the pointwise kernel matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector\n (see `keras.regularizers`).\n activity_regularizer: Regularizer function applied to\n the output of the layer (its \"activation\")\n (see `keras.regularizers`).\n depthwise_constraint: Constraint function applied to\n the depthwise kernel matrix\n (see `keras.constraints`).\n pointwise_constraint: Constraint function applied to\n the pointwise kernel matrix\n (see `keras.constraints`).\n bias_constraint: Constraint function applied to the bias vector\n (see `keras.constraints`).\n\n Input shape:\n 4D tensor with shape:\n `(batch_size, channels, rows, cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, rows, cols, channels)` if data_format='channels_last'.\n\n Output shape:\n 4D tensor with shape:\n `(batch_size, filters, new_rows, new_cols)` if data_format='channels_first'\n or 4D tensor with shape:\n `(batch_size, new_rows, new_cols, filters)` if data_format='channels_last'.\n `rows` and `cols` values might have changed due to padding.\n\n Returns:\n A tensor of rank 4 representing\n `activation(separableconv2d(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: if `padding` is \"causal\".\n ", "desc": "Depthwise separable 2D convolution.", "type": "API"}, {"name": "tf.keras.layers.serialize", "docs": "Serializes a `Layer` object into a JSON-compatible representation.\n\n Args:\n layer: The `Layer` object to serialize.\n\n Returns:\n A JSON-serializable dict representing the object's config.\n\n Example:\n\n ```python\n from pprint import pprint\n model = tf.keras.models.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(32, activation='relu'))\n\n pprint(tf.keras.layers.serialize(model))\n # prints the configuration of the model, as a dict.\n ", "desc": "Serializes a `Layer` object into a JSON-compatible representation.", "type": "API"}, {"name": "tf.keras.layers.SimpleRNN", "docs": "Fully-connected RNN where the output is to be fed back to input.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass None, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\"). Default: `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1.\n Fraction of the units to drop for the linear transformation of the inputs.\n Default: 0.\n recurrent_dropout: Float between 0 and 1.\n Fraction of the units to drop for the linear transformation of the\n recurrent state. Default: 0.\n return_sequences: Boolean. Whether to return the last output\n in the output sequence, or the full sequence. Default: `False`.\n return_state: Boolean. Whether to return the last state\n in addition to the output. Default: `False`\n go_backwards: Boolean (default False).\n If True, process the input sequence backwards and return the\n reversed sequence.\n stateful: Boolean (default False). If True, the last state\n for each sample at index i in a batch will be used as initial\n state for the sample of index i in the following batch.\n unroll: Boolean (default False).\n If True, the network will be unrolled,\n else a symbolic loop will be used.\n Unrolling can speed-up a RNN,\n although it tends to be more memory-intensive.\n Unrolling is only suitable for short sequences.\n\n Call arguments:\n inputs: A 3D tensor, with shape `[batch, timesteps, feature]`.\n mask: Binary tensor of shape `[batch, timesteps]` indicating whether\n a given timestep should be masked. An individual `True` entry indicates\n that the corresponding timestep should be utilized, while a `False` entry\n indicates that the corresponding timestep should be ignored.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the cell\n when calling it. This is only relevant if `dropout` or\n `recurrent_dropout` is used.\n initial_state: List of initial state tensors to be passed to the first\n call of the cell.\n\n Examples:\n\n ```python\n inputs = np.random.random([32, 10, 8]).astype(np.float32)\n simple_rnn = tf.keras.layers.SimpleRNN(4)\n\n output = simple_rnn(inputs) # The output has shape `[32, 4]`.\n\n simple_rnn = tf.keras.layers.SimpleRNN(\n 4, return_sequences=True, return_state=True)\n\n # whole_sequence_output has shape `[32, 10, 4]`.\n # final_state has shape `[32, 4]`.\n whole_sequence_output, final_state = simple_rnn(inputs)\n ```\n ", "desc": "Fully-connected RNN where the output is to be fed back to input.", "type": "API"}, {"name": "tf.keras.layers.SimpleRNNCell", "docs": "Cell class for SimpleRNN.\n\n See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)\n for details about the usage of RNN API.\n\n This class processes one step within the whole time sequence input, whereas\n `tf.keras.layer.SimpleRNN` processes the whole sequence.\n\n Args:\n units: Positive integer, dimensionality of the output space.\n activation: Activation function to use.\n Default: hyperbolic tangent (`tanh`).\n If you pass `None`, no activation is applied\n (ie. \"linear\" activation: `a(x) = x`).\n use_bias: Boolean, (default `True`), whether the layer uses a bias vector.\n kernel_initializer: Initializer for the `kernel` weights matrix,\n used for the linear transformation of the inputs. Default:\n `glorot_uniform`.\n recurrent_initializer: Initializer for the `recurrent_kernel`\n weights matrix, used for the linear transformation of the recurrent state.\n Default: `orthogonal`.\n bias_initializer: Initializer for the bias vector. Default: `zeros`.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_regularizer: Regularizer function applied to the\n `recurrent_kernel` weights matrix. Default: `None`.\n bias_regularizer: Regularizer function applied to the bias vector. Default:\n `None`.\n kernel_constraint: Constraint function applied to the `kernel` weights\n matrix. Default: `None`.\n recurrent_constraint: Constraint function applied to the `recurrent_kernel`\n weights matrix. Default: `None`.\n bias_constraint: Constraint function applied to the bias vector. Default:\n `None`.\n dropout: Float between 0 and 1. Fraction of the units to drop for the linear\n transformation of the inputs. Default: 0.\n recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for\n the linear transformation of the recurrent state. Default: 0.\n\n Call arguments:\n inputs: A 2D tensor, with shape of `[batch, feature]`.\n states: A 2D tensor with shape of `[batch, units]`, which is the state from\n the previous time step. For timestep 0, the initial state provided by user\n will be feed to cell.\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. Only relevant when `dropout` or\n `recurrent_dropout` is used.\n\n Examples:\n\n ```python\n inputs = np.random.random([32, 10, 8]).astype(np.float32)\n rnn = tf.keras.layers.RNN(tf.keras.layers.SimpleRNNCell(4))\n\n output = rnn(inputs) # The output has shape `[32, 4]`.\n\n rnn = tf.keras.layers.RNN(\n tf.keras.layers.SimpleRNNCell(4),\n return_sequences=True,\n return_state=True)\n\n # whole_sequence_output has shape `[32, 10, 4]`.\n # final_state has shape `[32, 4]`.\n whole_sequence_output, final_state = rnn(inputs)\n ```\n ", "desc": "Cell class for SimpleRNN.", "type": "API"}, {"name": "tf.keras.layers.Softmax", "docs": "Softmax activation function.\n\n Example without mask:\n\n >>> inp = np.asarray([1., 2., 1.])\n >>> layer = tf.keras.layers.Softmax()\n >>> layer(inp).numpy()\n array([0.21194157, 0.5761169 , 0.21194157], dtype=float32)\n >>> mask = np.asarray([True, False, True], dtype=bool)\n >>> layer(inp, mask).numpy()\n array([0.5, 0. , 0.5], dtype=float32)\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n axis: Integer, or list of Integers, axis along which the softmax\n normalization is applied.\n Call arguments:\n inputs: The inputs, or logits to the softmax layer.\n mask: A boolean mask of the same shape as `inputs`. Defaults to `None`. The\n mask specifies 1 to keep and 0 to mask.\n\n Returns:\n softmaxed output with the same shape as `inputs`.\n ", "desc": "Softmax activation function.", "type": "API"}, {"name": "tf.keras.layers.SpatialDropout1D", "docs": "Spatial 1D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 1D feature maps instead of individual elements. If adjacent frames\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout1D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n Call arguments:\n inputs: A 3D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 3D tensor with shape: `(samples, timesteps, channels)`\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 1D version of Dropout.", "type": "API"}, {"name": "tf.keras.layers.SpatialDropout2D", "docs": "Spatial 2D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 2D feature maps instead of individual elements. If adjacent pixels\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout2D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode,\n the channels dimension (the depth) is at index 1, in 'channels_last' mode\n is it at index 3. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be \"channels_last\".\n Call arguments:\n inputs: A 4D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 4D tensor with shape: `(samples, channels, rows, cols)` if\n data_format='channels_first'\n or 4D tensor with shape: `(samples, rows, cols, channels)` if\n data_format='channels_last'.\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 2D version of Dropout.", "type": "API"}, {"name": "tf.keras.layers.SpatialDropout3D", "docs": "Spatial 3D version of Dropout.\n\n This version performs the same function as Dropout, however, it drops\n entire 3D feature maps instead of individual elements. If adjacent voxels\n within feature maps are strongly correlated (as is normally the case in\n early convolution layers) then regular dropout will not regularize the\n activations and will otherwise just result in an effective learning rate\n decrease. In this case, SpatialDropout3D will help promote independence\n between feature maps and should be used instead.\n\n Args:\n rate: Float between 0 and 1. Fraction of the input units to drop.\n data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode,\n the channels dimension (the depth) is at index 1, in 'channels_last' mode\n is it at index 4. It defaults to the `image_data_format` value found in\n your Keras config file at `~/.keras/keras.json`. If you never set it, then\n it will be \"channels_last\".\n Call arguments:\n inputs: A 5D tensor.\n training: Python boolean indicating whether the layer should behave in\n training mode (adding dropout) or in inference mode (doing nothing).\n Input shape:\n 5D tensor with shape: `(samples, channels, dim1, dim2, dim3)` if\n data_format='channels_first'\n or 5D tensor with shape: `(samples, dim1, dim2, dim3, channels)` if\n data_format='channels_last'.\n Output shape: Same as input.\n References: - [Efficient Object Localization Using Convolutional\n Networks](https://arxiv.org/abs/1411.4280)\n ", "desc": "Spatial 3D version of Dropout.", "type": "API"}, {"name": "tf.keras.layers.StackedRNNCells", "docs": "Wrapper allowing a stack of RNN cells to behave as a single cell.\n\n Used to implement efficient stacked RNNs.\n\n Args:\n cells: List of RNN cell instances.\n\n Examples:\n\n ```python\n batch_size = 3\n sentence_max_length = 5\n n_features = 2\n new_shape = (batch_size, sentence_max_length, n_features)\n x = tf.constant(np.reshape(np.arange(30), new_shape), dtype = tf.float32)\n\n rnn_cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)]\n stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells)\n lstm_layer = tf.keras.layers.RNN(stacked_lstm)\n\n result = lstm_layer(x)\n ```\n ", "desc": "Wrapper allowing a stack of RNN cells to behave as a single cell.", "type": "API"}, {"name": "tf.keras.layers.Subtract", "docs": "Layer that subtracts two inputs.\n\n It takes as input a list of tensors of size 2,\n both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]),\n also of the same shape.\n\n Examples:\n\n ```python\n import keras\n\n input1 = keras.layers.Input(shape=(16,))\n x1 = keras.layers.Dense(8, activation='relu')(input1)\n input2 = keras.layers.Input(shape=(32,))\n x2 = keras.layers.Dense(8, activation='relu')(input2)\n # Equivalent to subtracted = keras.layers.subtract([x1, x2])\n subtracted = keras.layers.Subtract()([x1, x2])\n\n out = keras.layers.Dense(4)(subtracted)\n model = keras.models.Model(inputs=[input1, input2], outputs=out)\n ```\n ", "desc": "Layer that subtracts two inputs.", "type": "API"}, {"name": "tf.keras.layers.ThresholdedReLU", "docs": "Thresholded Rectified Linear Unit.\n\n It follows:\n\n ```\n f(x) = x for x > theta\n f(x) = 0 otherwise`\n ```\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n theta: Float >= 0. Threshold location of activation.\n ", "desc": "Thresholded Rectified Linear Unit.", "type": "API"}, {"name": "tf.keras.layers.TimeDistributed", "docs": "This wrapper allows to apply a layer to every temporal slice of an input.\n\n Every input should be at least 3D, and the dimension of index one of the\n first input will be considered to be the temporal dimension.\n\n Consider a batch of 32 video samples, where each sample is a 128x128 RGB image\n with `channels_last` data format, across 10 timesteps.\n The batch input shape is `(32, 10, 128, 128, 3)`.\n\n You can then use `TimeDistributed` to apply the same `Conv2D` layer to each\n of the 10 timesteps, independently:\n\n >>> inputs = tf.keras.Input(shape=(10, 128, 128, 3))\n >>> conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3))\n >>> outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs)\n >>> outputs.shape\n TensorShape([None, 10, 126, 126, 64])\n\n Because `TimeDistributed` applies the same instance of `Conv2D` to each of the\n timestamps, the same set of weights are used at each timestamp.\n\n Args:\n layer: a `tf.keras.layers.Layer` instance.\n\n Call arguments:\n inputs: Input tensor of shape (batch, time, ...) or nested tensors,\n and each of which has shape (batch, time, ...).\n training: Python boolean indicating whether the layer should behave in\n training mode or in inference mode. This argument is passed to the\n wrapped layer (only if the layer supports this argument).\n mask: Binary tensor of shape `(samples, timesteps)` indicating whether\n a given timestep should be masked. This argument is passed to the\n wrapped layer (only if the layer supports this argument).\n\n Raises:\n ValueError: If not initialized with a `tf.keras.layers.Layer` instance.\n ", "desc": "This wrapper allows to apply a layer to every temporal slice of an input.", "type": "API"}, {"name": "tf.keras.layers.UpSampling1D", "docs": "Upsampling layer for 1D inputs.\n\n Repeats each temporal step `size` times along the time axis.\n\n Examples:\n\n >>> input_shape = (2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1 2]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 9 10 11]]]\n >>> y = tf.keras.layers.UpSampling1D(size=2)(x)\n >>> print(y)\n tf.Tensor(\n [[[ 0 1 2]\n [ 0 1 2]\n [ 3 4 5]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 6 7 8]\n [ 9 10 11]\n [ 9 10 11]]], shape=(2, 4, 3), dtype=int64)\n\n Args:\n size: Integer. Upsampling factor.\n\n Input shape:\n 3D tensor with shape: `(batch_size, steps, features)`.\n\n Output shape:\n 3D tensor with shape: `(batch_size, upsampled_steps, features)`.\n ", "desc": "Upsampling layer for 1D inputs.", "type": "API"}, {"name": "tf.keras.layers.UpSampling2D", "docs": "Upsampling layer for 2D inputs.\n\n Repeats the rows and columns of the data\n by `size[0]` and `size[1]` respectively.\n\n Examples:\n\n >>> input_shape = (2, 2, 1, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[[ 0 1 2]]\n [[ 3 4 5]]]\n [[[ 6 7 8]]\n [[ 9 10 11]]]]\n >>> y = tf.keras.layers.UpSampling2D(size=(1, 2))(x)\n >>> print(y)\n tf.Tensor(\n [[[[ 0 1 2]\n [ 0 1 2]]\n [[ 3 4 5]\n [ 3 4 5]]]\n [[[ 6 7 8]\n [ 6 7 8]]\n [[ 9 10 11]\n [ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64)\n\n Args:\n size: Int, or tuple of 2 integers.\n The upsampling factors for rows and columns.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n interpolation: A string, one of `\"area\"`, `\"bicubic\"`, `\"bilinear\"`,\n `\"gaussian\"`, `\"lanczos3\"`, `\"lanczos5\"`, `\"mitchellcubic\"`, `\"nearest\"`.\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, upsampled_rows, upsampled_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, upsampled_rows, upsampled_cols)`\n ", "desc": "Upsampling layer for 2D inputs.", "type": "API"}, {"name": "tf.keras.layers.UpSampling3D", "docs": "Upsampling layer for 3D inputs.\n\n Repeats the 1st, 2nd and 3rd dimensions\n of the data by `size[0]`, `size[1]` and `size[2]` respectively.\n\n Examples:\n\n >>> input_shape = (2, 1, 2, 1, 3)\n >>> x = tf.constant(1, shape=input_shape)\n >>> y = tf.keras.layers.UpSampling3D(size=2)(x)\n >>> print(y.shape)\n (2, 2, 4, 2, 3)\n\n Args:\n size: Int, or tuple of 3 integers.\n The upsampling factors for dim1, dim2 and dim3.\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, dim1, dim2, dim3, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, dim1, dim2, dim3)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)`\n ", "desc": "Upsampling layer for 3D inputs.", "type": "API"}, {"name": "tf.keras.layers.Wrapper", "docs": "Abstract wrapper base class.\n\n Wrappers take another layer and augment it in various ways.\n Do not use this class as a layer, it is only an abstract base class.\n Two usable wrappers are the `TimeDistributed` and `Bidirectional` wrappers.\n\n Args:\n layer: The layer to be wrapped.\n ", "desc": "Abstract wrapper base class.", "type": "API"}, {"name": "tf.keras.layers.ZeroPadding1D", "docs": "Zero-padding layer for 1D input (e.g. temporal sequence).\n\n Examples:\n\n >>> input_shape = (2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[ 0 1 2]\n [ 3 4 5]]\n [[ 6 7 8]\n [ 9 10 11]]]\n >>> y = tf.keras.layers.ZeroPadding1D(padding=2)(x)\n >>> print(y)\n tf.Tensor(\n [[[ 0 0 0]\n [ 0 0 0]\n [ 0 1 2]\n [ 3 4 5]\n [ 0 0 0]\n [ 0 0 0]]\n [[ 0 0 0]\n [ 0 0 0]\n [ 6 7 8]\n [ 9 10 11]\n [ 0 0 0]\n [ 0 0 0]]], shape=(2, 6, 3), dtype=int64)\n\n Args:\n padding: Int, or tuple of int (length 2), or dictionary.\n - If int:\n How many zeros to add at the beginning and end of\n the padding dimension (axis 1).\n - If tuple of int (length 2):\n How many zeros to add at the beginning and the end of\n the padding dimension (`(left_pad, right_pad)`).\n\n Input shape:\n 3D tensor with shape `(batch_size, axis_to_pad, features)`\n\n Output shape:\n 3D tensor with shape `(batch_size, padded_axis, features)`\n ", "desc": "Zero-padding layer for 1D input (e.g. temporal sequence).", "type": "API"}, {"name": "tf.keras.layers.ZeroPadding2D", "docs": "Zero-padding layer for 2D input (e.g. picture).\n\n This layer can add rows and columns of zeros\n at the top, bottom, left and right side of an image tensor.\n\n Examples:\n\n >>> input_shape = (1, 1, 2, 2)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> print(x)\n [[[[0 1]\n [2 3]]]]\n >>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x)\n >>> print(y)\n tf.Tensor(\n [[[[0 0]\n [0 0]\n [0 0]\n [0 0]]\n [[0 0]\n [0 1]\n [2 3]\n [0 0]]\n [[0 0]\n [0 0]\n [0 0]\n [0 0]]]], shape=(1, 3, 4, 2), dtype=int64)\n\n Args:\n padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.\n - If int: the same symmetric padding\n is applied to height and width.\n - If tuple of 2 ints:\n interpreted as two different\n symmetric padding values for height and width:\n `(symmetric_height_pad, symmetric_width_pad)`.\n - If tuple of 2 tuples of 2 ints:\n interpreted as\n `((top_pad, bottom_pad), (left_pad, right_pad))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, height, width, channels)` while `channels_first`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, rows, cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, rows, cols)`\n\n Output shape:\n 4D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, padded_rows, padded_cols, channels)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, channels, padded_rows, padded_cols)`\n ", "desc": "Zero-padding layer for 2D input (e.g. picture).", "type": "API"}, {"name": "tf.keras.layers.ZeroPadding3D", "docs": "Zero-padding layer for 3D data (spatial or spatio-temporal).\n\n Examples:\n\n >>> input_shape = (1, 1, 2, 2, 3)\n >>> x = np.arange(np.prod(input_shape)).reshape(input_shape)\n >>> y = tf.keras.layers.ZeroPadding3D(padding=2)(x)\n >>> print(y.shape)\n (1, 5, 6, 6, 3)\n\n Args:\n padding: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.\n - If int: the same symmetric padding\n is applied to height and width.\n - If tuple of 3 ints:\n interpreted as two different\n symmetric padding values for height and width:\n `(symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad)`.\n - If tuple of 3 tuples of 2 ints:\n interpreted as\n `((left_dim1_pad, right_dim1_pad), (left_dim2_pad,\n right_dim2_pad), (left_dim3_pad, right_dim3_pad))`\n data_format: A string,\n one of `channels_last` (default) or `channels_first`.\n The ordering of the dimensions in the inputs.\n `channels_last` corresponds to inputs with shape\n `(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`\n while `channels_first` corresponds to inputs with shape\n `(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.\n It defaults to the `image_data_format` value found in your\n Keras config file at `~/.keras/keras.json`.\n If you never set it, then it will be \"channels_last\".\n\n Input shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_axis_to_pad, second_axis_to_pad,\n third_axis_to_pad)`\n\n Output shape:\n 5D tensor with shape:\n - If `data_format` is `\"channels_last\"`:\n `(batch_size, first_padded_axis, second_padded_axis, third_axis_to_pad,\n depth)`\n - If `data_format` is `\"channels_first\"`:\n `(batch_size, depth, first_padded_axis, second_padded_axis,\n third_axis_to_pad)`\n ", "desc": "Zero-padding layer for 3D data (spatial or spatio-temporal).", "type": "API"}, {"name": "tf.keras.losses", "docs": "Built-in loss functions.\n", "desc": "Built-in loss functions.", "type": "API"}, {"name": "tf.keras.losses.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.keras.losses.BinaryCrossentropy", "docs": "Computes the cross-entropy loss between true labels and predicted labels.\n\n Use this cross-entropy loss for binary (0 or 1) classification applications.\n The loss function requires the following inputs:\n\n - `y_true` (true label): This is either 0 or 1.\n - `y_pred` (predicted value): This is the model's prediction, i.e, a single\n floating-point value which either represents a\n [logit](https://en.wikipedia.org/wiki/Logit), (i.e, value in [-inf, inf]\n when `from_logits=True`) or a probability (i.e, value in [0., 1.] when\n `from_logits=False`).\n\n **Recommended Usage:** (set `from_logits=True`)\n\n With `tf.keras` API:\n\n ```python\n model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n ....\n )\n ```\n\n As a standalone function:\n\n >>> # Example 1: (batch_size = 1, number of samples = 4)\n >>> y_true = [0, 1, 0, 0]\n >>> y_pred = [-18.6, 0.51, 2.94, -12.8]\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n\n >>> # Example 2: (batch_size = 2, number of samples = 4)\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[-18.6, 0.51], [2.94, -12.8]]\n >>> # Using default 'auto'/'sum_over_batch_size' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n >>> # Using 'sample_weight' attribute\n >>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.243\n >>> # Using 'sum' reduction` type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> bce(y_true, y_pred).numpy()\n 1.730\n >>> # Using 'none' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> bce(y_true, y_pred).numpy()\n array([0.235, 1.496], dtype=float32)\n\n **Default Usage:** (set `from_logits=False`)\n\n >>> # Make the following updates to the above \"Recommended Usage\" section\n >>> # 1. Set `from_logits=False`\n >>> tf.keras.losses.BinaryCrossentropy() # OR ...('from_logits=False')\n >>> # 2. Update `y_pred` to use probabilities instead of logits\n >>> y_pred = [0.6, 0.3, 0.2, 0.8] # OR [[0.6, 0.3], [0.2, 0.8]]\n ", "desc": "Computes the cross-entropy loss between true labels and predicted labels.", "type": "API"}, {"name": "tf.keras.losses.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.keras.losses.categorical_hinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 3, size=(2,))\n >>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> pos = np.sum(y_true * y_pred, axis=-1)\n >>> neg = np.amax((1. - y_true) * y_pred, axis=-1)\n >>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be\n either `{-1, +1}` or `{0, 1}` (i.e. a one-hot-encoded tensor).\n y_pred: The predicted values.\n\n Returns:\n Categorical hinge loss values.\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.CategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided in a `one_hot` representation. If you want to\n provide labels as integers, please use `SparseCategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature.\n\n In the snippet below, there is `# classes` floating pointing values per\n example. The shape of both `y_pred` and `y_true` are\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy()\n >>> cce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.CategoricalHinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge()\n >>> h(y_true, y_pred).numpy()\n 1.4\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.6\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.8\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.2, 1.6], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge())\n ```\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.cosine_similarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.CosineSimilarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and targets.\n If either `y_true` or `y_pred` is a zero vector, cosine similarity will be 0\n regardless of the proximity between predictions and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = -((0. + 0.) + (0.5 + 0.5)) / 2\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.5\n\n >>> # Calling with 'sample_weight'.\n >>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n -0.0999\n\n >>> # Using 'sum' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.999\n\n >>> # Using 'none' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cosine_loss(y_true, y_pred).numpy()\n array([-0., -0.999], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))\n ```\n\n Args:\n axis: The axis along which the cosine similarity is computed\n (the features axis). Defaults to -1.\n reduction: Type of `tf.keras.losses.Reduction` to apply to loss.\n Default value is `AUTO`. `AUTO` indicates that the reduction option will\n be determined by the usage context. For almost all cases this defaults to\n `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of\n built-in training loops such as `tf.keras` `compile` and `fit`, using\n `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. Please see this\n custom training [tutorial]\n (https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details.\n name: Optional name for the instance.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.deserialize", "docs": "Deserializes a serialized loss class/function instance.\n\n Args:\n name: Loss configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Loss` instance or a loss function.\n ", "desc": "Deserializes a serialized loss class/function instance.", "type": "API"}, {"name": "tf.keras.losses.get", "docs": "Retrieves a Keras loss as a `function`/`Loss` class instance.\n\n The `identifier` may be the string name of a loss function or `Loss` class.\n\n >>> loss = tf.keras.losses.get(\"categorical_crossentropy\")\n >>> type(loss)\n \n >>> loss = tf.keras.losses.get(\"CategoricalCrossentropy\")\n >>> type(loss)\n \n\n You can also specify `config` of the loss to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Loss` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> loss = tf.keras.losses.get(identifier)\n >>> type(loss)\n \n\n Args:\n identifier: A loss identifier. One of None or string name of a loss\n function/class or loss configuration dictionary or a loss function or a\n loss class instance.\n\n Returns:\n A Keras loss as a `function`/ `Loss` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras loss as a `function`/`Loss` class instance.", "type": "API"}, {"name": "tf.keras.losses.Hinge", "docs": "Computes the hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(1 - y_true * y_pred, 0)`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Hinge()\n >>> h(y_true, y_pred).numpy()\n 1.3\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.55\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.6\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.1, 1.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge())\n ```\n ", "desc": "Computes the hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.Huber", "docs": "Computes the Huber loss between `y_true` and `y_pred`.\n\n For each value x in `error = y_true - y_pred`:\n\n ```\n loss = 0.5 * x^2 if |x| <= d\n loss = 0.5 * d^2 + d * (|x| - d) if |x| > d\n ```\n where d is `delta`. See: https://en.wikipedia.org/wiki/Huber_loss\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Huber()\n >>> h(y_true, y_pred).numpy()\n 0.155\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.09\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 0.31\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([0.18, 0.13], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Huber())\n ```\n ", "desc": "Computes the Huber loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.KLDivergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> kl = tf.keras.losses.KLDivergence()\n >>> kl(y_true, y_pred).numpy()\n 0.458\n\n >>> # Calling with 'sample_weight'.\n >>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.366\n\n >>> # Using 'sum' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> kl(y_true, y_pred).numpy()\n 0.916\n\n >>> # Using 'none' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> kl(y_true, y_pred).numpy()\n array([0.916, -3.08e-06], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())\n ```\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.keras.losses.LogCosh", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`,\n where x is the error `y_pred - y_true`.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> l = tf.keras.losses.LogCosh()\n >>> l(y_true, y_pred).numpy()\n 0.108\n\n >>> # Calling with 'sample_weight'.\n >>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.087\n\n >>> # Using 'sum' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> l(y_true, y_pred).numpy()\n 0.217\n\n >>> # Using 'none' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> l(y_true, y_pred).numpy()\n array([0.217, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.keras.losses.Loss", "docs": "Loss base class.\n\n To be implemented by subclasses:\n * `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.\n\n Example subclass implementation:\n\n ```python\n class MeanSquaredError(Loss):\n\n def call(self, y_true, y_pred):\n return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)\n ```\n\n When used with `tf.distribute.Strategy`, outside of built-in training loops\n such as `tf.keras` `compile` and `fit`, please use 'SUM' or 'NONE' reduction\n types, and reduce losses explicitly in your training loop. Using 'AUTO' or\n 'SUM_OVER_BATCH_SIZE' will raise an error.\n\n Please see this custom training [tutorial](\n https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details on this.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:\n\n ```python\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = (tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size))\n ```\n ", "desc": "Loss base class.", "type": "API"}, {"name": "tf.keras.losses.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.MeanAbsoluteError", "docs": "Computes the mean of absolute difference between labels and predictions.\n\n `loss = abs(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError()\n >>> mae(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mae(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mae(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError())\n ```\n ", "desc": "Computes the mean of absolute difference between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Formula:\n\n `loss = 100 * abs((y_true - y_pred) / y_true)`\n\n Note that to avoid dividing by zero, a small epsilon value\n is added to the denominator.\n\n Standalone usage:\n\n >>> y_true = [[2., 1.], [2., 3.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError()\n >>> mape(y_true, y_pred).numpy()\n 50.\n\n >>> # Calling with 'sample_weight'.\n >>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 20.\n\n >>> # Using 'sum' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mape(y_true, y_pred).numpy()\n 100.\n\n >>> # Using 'none' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mape(y_true, y_pred).numpy()\n array([25., 75.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanAbsolutePercentageError())\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.MeanSquaredError", "docs": "Computes the mean of squares of errors between labels and predictions.\n\n `loss = square(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError()\n >>> mse(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mse(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mse(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())\n ```\n ", "desc": "Computes the mean of squares of errors between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = square(log(y_true + 1.) - log(y_pred + 1.))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError()\n >>> msle(y_true, y_pred).numpy()\n 0.240\n\n >>> # Calling with 'sample_weight'.\n >>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.120\n\n >>> # Using 'sum' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> msle(y_true, y_pred).numpy()\n 0.480\n\n >>> # Using 'none' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> msle(y_true, y_pred).numpy()\n array([0.240, 0.240], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanSquaredLogarithmicError())\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.Poisson", "docs": "Computes the Poisson loss between `y_true` and `y_pred`.\n\n `loss = y_pred - y_true * log(y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> p = tf.keras.losses.Poisson()\n >>> p(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.4\n\n >>> # Using 'sum' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> p(y_true, y_pred).numpy()\n 0.999\n\n >>> # Using 'none' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> p(y_true, y_pred).numpy()\n array([0.999, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())\n ```\n ", "desc": "Computes the Poisson loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.Reduction", "docs": "Types of loss reduction.\n\n Contains the following values:\n\n * `AUTO`: Indicates that the reduction option will be determined by the usage\n context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When\n used with `tf.distribute.Strategy`, outside of built-in training loops such\n as `tf.keras` `compile` and `fit`, we expect reduction value to be\n `SUM` or `NONE`. Using `AUTO` in that case will raise an error.\n * `NONE`: No **additional** reduction is applied to the output of the wrapped\n loss function. When non-scalar losses are returned to Keras functions like\n `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer\n but the reported loss will be a scalar value.\n\n Caution: **Verify the shape of the outputs when using** `Reduction.NONE`.\n The builtin loss functions wrapped by the loss classes reduce\n one dimension (`axis=-1`, or `axis` if specified by loss function).\n `Reduction.NONE` just means that no **additional** reduction is applied by\n the class wrapper. For categorical losses with an example input shape of\n `[batch, W, H, n_classes]` the `n_classes` dimension is reduced. For\n pointwise losses you must include a dummy axis so that `[batch, W, H, 1]`\n is reduced to `[batch, W, H]`. Without the dummy axis `[batch, W, H]`\n will be incorrectly reduced to `[batch, W]`.\n\n * `SUM`: Scalar sum of weighted losses.\n * `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses.\n This reduction type is not supported when used with\n `tf.distribute.Strategy` outside of built-in training loops like `tf.keras`\n `compile`/`fit`.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:\n ```\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size)\n ```\n\n Please see the [custom training guide](\n https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details on this.\n ", "desc": "Types of loss reduction.", "type": "API"}, {"name": "tf.keras.losses.serialize", "docs": "Serializes loss function or `Loss` instance.\n\n Args:\n loss: A Keras `Loss` instance or a loss function.\n\n Returns:\n Loss configuration dictionary.\n ", "desc": "Serializes loss function or `Loss` instance.", "type": "API"}, {"name": "tf.keras.losses.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.keras.losses.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy()\n >>> scce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> scce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> scce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.SparseCategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.keras.losses.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.losses.SquaredHinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = square(maximum(1 - y_true * y_pred, 0))`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.SquaredHinge()\n >>> h(y_true, y_pred).numpy()\n 1.86\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.73\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 3.72\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.46, 2.26], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge())\n ```\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics", "docs": "All Keras metrics.\n", "desc": "All Keras metrics.", "type": "API"}, {"name": "tf.keras.metrics.Accuracy", "docs": "Calculates how often predictions equal labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Accuracy()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],\n ... sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Accuracy()])\n ```\n ", "desc": "Calculates how often predictions equal labels.", "type": "API"}, {"name": "tf.keras.metrics.AUC", "docs": "Approximates the AUC (Area under the curve) of the ROC or PR curves.\n\n The AUC (Area under the curve) of the ROC (Receiver operating\n characteristic; default) or PR (Precision Recall) curves are quality measures\n of binary classifiers. Unlike the accuracy, and like cross-entropy\n losses, ROC-AUC and PR-AUC evaluate all the operational points of a model.\n\n This class approximates AUCs using a Riemann sum. During the metric\n accumulation phrase, predictions are accumulated within predefined buckets\n by value. The AUC is then computed by interpolating per-bucket averages. These\n buckets define the evaluated operational points.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the AUC.\n To discretize the AUC curve, a linearly spaced set of thresholds is used to\n compute pairs of recall and precision values. The area under the ROC-curve is\n therefore computed using the height of the recall values by the false positive\n rate, while the area under the PR-curve is the computed using the height of\n the precision values by the recall.\n\n This value is ultimately returned as `auc`, an idempotent operation that\n computes the area under a discretized curve of precision versus recall values\n (computed using the aforementioned variables). The `num_thresholds` variable\n controls the degree of discretization with larger numbers of thresholds more\n closely approximating the true AUC. The quality of the approximation may vary\n dramatically depending on `num_thresholds`. The `thresholds` parameter can be\n used to manually specify thresholds which split the predictions more evenly.\n\n For a best approximation of the real AUC, `predictions` should be distributed\n approximately uniformly in the range [0, 1] (if `from_logits=False`). The\n quality of the AUC approximation may be poor if this is not the case. Setting\n `summation_method` to 'minoring' or 'majoring' can help quantify the error in\n the approximation by providing lower or upper bound estimate of the AUC.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use when discretizing the roc curve. Values must be > 1.\n curve: (Optional) Specifies the name of the curve to be computed, 'ROC'\n [default] or 'PR' for the Precision-Recall-curve.\n summation_method: (Optional) Specifies the [Riemann summation method](\n https://en.wikipedia.org/wiki/Riemann_sum) used.\n 'interpolation' (default) applies mid-point summation scheme for `ROC`.\n For PR-AUC, interpolates (true/false) positives but not the ratio that\n is precision (see Davis & Goadrich 2006 for details);\n 'minoring' applies left summation\n for increasing intervals and right summation for decreasing intervals;\n 'majoring' does the opposite.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n thresholds: (Optional) A list of floating point values to use as the\n thresholds for discretizing the curve. If set, the `num_thresholds`\n parameter is ignored. Values should be in [0, 1]. Endpoint thresholds\n equal to {-epsilon, 1+epsilon} for a small positive epsilon value will\n be automatically included with these to correctly handle predictions\n equal to exactly 0 or 1.\n multi_label: boolean indicating whether multilabel data should be\n treated as such, wherein AUC is computed separately for each label and\n then averaged across labels, or (when False) if the data should be\n flattened into a single label before AUC computation. In the latter\n case, when multilabel data is passed to AUC, each label-prediction pair\n is treated as an individual data point. Should be set to False for\n multi-class data.\n num_labels: (Optional) The number of labels, used when `multi_label` is\n True. If `num_labels` is not specified, then state variables get created\n on the first call to `update_state`.\n label_weights: (Optional) list, array, or tensor of non-negative weights\n used to compute AUCs for multilabel data. When `multi_label` is True,\n the weights are applied to the individual label AUCs when they are\n averaged to produce the multi-label AUC. When it's False, they are used\n to weight the individual label predictions in computing the confusion\n matrix on the flattened data. Note that this is unlike class_weights in\n that class_weights weights the example depending on the value of its\n label, whereas label_weights depends only on the index of that label\n before flattening; therefore `label_weights` should not be used for\n multi-class data.\n from_logits: boolean indicating whether the predictions (`y_pred` in\n `update_state`) are probabilities or sigmoid logits. As a rule of thumb,\n when using a keras loss, the `from_logits` constructor argument of the\n loss should match the AUC `from_logits` constructor argument.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.AUC(num_thresholds=3)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> # threshold values are [0 - 1e-7, 0.5, 1 + 1e-7]\n >>> # tp = [2, 1, 0], fp = [2, 0, 0], fn = [0, 1, 2], tn = [0, 2, 2]\n >>> # tp_rate = recall = [1, 0.5, 0], fp_rate = [1, 0, 0]\n >>> # auc = ((((1+0.5)/2)*(1-0)) + (((0.5+0)/2)*(0-0))) = 0.75\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n # Reports the AUC of a model outputting a probability.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(),\n metrics=[tf.keras.metrics.AUC()])\n\n # Reports the AUC of a model outputting a logit.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)])\n ```\n ", "desc": "Approximates the AUC (Area under the curve) of the ROC or PR curves.", "type": "API"}, {"name": "tf.keras.metrics.binary_accuracy", "docs": "Calculates how often predictions match binary labels.\n\n Standalone usage:\n >>> y_true = [[1], [1], [0], [0]]\n >>> y_pred = [[1], [1], [0], [0]]\n >>> m = tf.keras.metrics.binary_accuracy(y_true, y_pred)\n >>> assert m.shape == (4,)\n >>> m.numpy()\n array([1., 1., 1., 1.], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n threshold: (Optional) Float representing the threshold for deciding whether\n prediction values are 1 or 0.\n\n Returns:\n Binary accuracy values. shape = `[batch_size, d0, .. dN-1]`\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.keras.metrics.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.keras.metrics.BinaryAccuracy", "docs": "Calculates how often predictions match binary labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n threshold: (Optional) Float representing the threshold for deciding\n whether prediction values are 1 or 0.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryAccuracy()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.keras.metrics.BinaryCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are only two\n label classes (0 and 1).\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional )Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed.\n e.g. `label_smoothing=0.2` means that we will use a value of `0.1` for\n label `0` and `0.9` for label `1`\".\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryCrossentropy()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.81492424\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162905\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.categorical_accuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: One-hot ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Categorical accuracy values.\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.keras.metrics.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.keras.metrics.CategoricalAccuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `categorical accuracy`: an idempotent operation that\n simply divides `total` by `count`.\n\n `y_pred` and `y_true` should be passed in as vectors of probabilities, rather\n than as labels. If necessary, use `tf.one_hot` to expand `y_true` as a vector.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalAccuracy()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.keras.metrics.CategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are multiple\n label classes (2 or more). Here we assume that labels are given as a `one_hot`\n representation. eg., When labels values are [2, 0, 1],\n `y_true` = [[0, 0, 1], [1, 0, 0], [0, 1, 0]].\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed. e.g.\n `label_smoothing=0.2` means that we will use a value of `0.1` for label\n `0` and `0.9` for label `1`\"\n\n Standalone usage:\n\n >>> # EPSILON = 1e-7, y = y_true, y` = y_pred\n >>> # y` = clip_ops.clip_by_value(output, EPSILON, 1. - EPSILON)\n >>> # y` = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(y'), axis = -1)\n >>> # = -((log 0.95), (log 0.1))\n >>> # = [0.051, 2.302]\n >>> # Reduced xent = (0.051 + 2.302) / 2\n >>> m = tf.keras.metrics.CategoricalCrossentropy()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.CategoricalHinge", "docs": "Computes the categorical hinge metric between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.4000001\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.2\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalHinge()])\n ```\n ", "desc": "Computes the categorical hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.CosineSimilarity", "docs": "Computes the cosine similarity between the labels and predictions.\n\n `cosine similarity = (a . b) / ||a|| ||b||`\n\n See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).\n\n This metric keeps the average cosine similarity between `predictions` and\n `labels` over a stream of data.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n axis: (Optional) Defaults to -1. The dimension along which the cosine\n similarity is computed.\n\n Standalone usage:\n\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = ((0. + 0.) + (0.5 + 0.5)) / 2\n >>> m = tf.keras.metrics.CosineSimilarity(axis=1)\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]],\n ... sample_weight=[0.3, 0.7])\n >>> m.result().numpy()\n 0.6999999\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CosineSimilarity(axis=1)])\n ```\n ", "desc": "Computes the cosine similarity between the labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.deserialize", "docs": "Deserializes a serialized metric class/function instance.\n\n Args:\n config: Metric configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Metric` instance or a metric function.\n ", "desc": "Deserializes a serialized metric class/function instance.", "type": "API"}, {"name": "tf.keras.metrics.FalseNegatives", "docs": "Calculates the number of false negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalseNegatives()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalseNegatives()])\n ```\n ", "desc": "Calculates the number of false negatives.", "type": "API"}, {"name": "tf.keras.metrics.FalsePositives", "docs": "Calculates the number of false positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false positives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalsePositives()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalsePositives()])\n ```\n ", "desc": "Calculates the number of false positives.", "type": "API"}, {"name": "tf.keras.metrics.get", "docs": "Retrieves a Keras metric as a `function`/`Metric` class instance.\n\n The `identifier` may be the string name of a metric function or class.\n\n >>> metric = tf.keras.metrics.get(\"categorical_crossentropy\")\n >>> type(metric)\n \n >>> metric = tf.keras.metrics.get(\"CategoricalCrossentropy\")\n >>> type(metric)\n \n\n You can also specify `config` of the metric to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Metric` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> metric = tf.keras.metrics.get(identifier)\n >>> type(metric)\n \n\n Args:\n identifier: A metric identifier. One of None or string name of a metric\n function/class or metric configuration dictionary or a metric function or\n a metric class instance\n\n Returns:\n A Keras metric as a `function`/ `Metric` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras metric as a `function`/`Metric` class instance.", "type": "API"}, {"name": "tf.keras.metrics.Hinge", "docs": "Computes the hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Hinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.3\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.1\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Hinge()])\n ```\n ", "desc": "Computes the hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.KLDivergence", "docs": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.\n\n `metric = y_true * log(y_true / y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.KLDivergence()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.45814306\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162892\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.KLDivergence()])\n ```\n ", "desc": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.keras.metrics.logcosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.keras.metrics.LogCoshError", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`, where x is the error (y_pred - y_true)\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.LogCoshError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.10844523\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.21689045\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.LogCoshError()])\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.keras.metrics.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.Mean", "docs": "Computes the (weighted) mean of the given values.\n\n For example, if values is [1, 3, 5, 7] then the mean is 4.\n If the weights were specified as [1, 1, 0, 0] then the mean would be 2.\n\n This metric creates two variables, `total` and `count` that are used to\n compute the average of `values`. This average is ultimately returned as `mean`\n which is an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Mean()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 4.0\n >>> m.reset_state()\n >>> m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Mean(name='mean_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) mean of the given values.", "type": "API"}, {"name": "tf.keras.metrics.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.MeanAbsoluteError", "docs": "Computes the mean absolute error between the labels and predictions.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsoluteError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsoluteError()])\n ```\n ", "desc": "Computes the mean absolute error between the labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsolutePercentageError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 250000000.0\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 500000000.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.MeanIoU", "docs": "Computes the mean Intersection-Over-Union metric.\n\n General definition and computation:\n\n Intersection-Over-Union is a common evaluation metric for semantic image\n segmentation.\n\n For an individual class, the IoU metric is defined as follows:\n\n ```\n iou = true_positives / (true_positives + false_positives + false_negatives)\n ```\n\n To compute IoUs, the predictions are accumulated in a confusion matrix,\n weighted by `sample_weight` and the metric is then calculated from it.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Note that this class first computes IoUs for all individual classes, then\n returns the mean of these values.\n\n Args:\n num_classes: The possible number of labels the prediction task can have.\n This value must be provided, since a confusion matrix of dimension =\n [num_classes, num_classes] will be allocated.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> # cm = [[1, 1],\n >>> # [1, 1]]\n >>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]\n >>> # iou = true_positives / (sum_row + sum_col - true_positives))\n >>> # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33\n >>> m = tf.keras.metrics.MeanIoU(num_classes=2)\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1])\n >>> m.result().numpy()\n 0.33333334\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1],\n ... sample_weight=[0.3, 0.3, 0.3, 0.1])\n >>> m.result().numpy()\n 0.23809525\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])\n ```\n ", "desc": "Computes the mean Intersection-Over-Union metric.", "type": "API"}, {"name": "tf.keras.metrics.MeanRelativeError", "docs": "Computes the mean relative error by normalizing with the given values.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the mean relative error. This is weighted by `sample_weight`, and\n it is ultimately returned as `mean_relative_error`:\n an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n normalizer: The normalizer values with same shape as predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanRelativeError(normalizer=[1, 3, 2, 3])\n >>> m.update_state([1, 3, 2, 3], [2, 4, 6, 8])\n\n >>> # metric = mean(|y_pred - y_true| / normalizer)\n >>> # = mean([1, 1, 4, 5] / [1, 3, 2, 3]) = mean([1, 1/3, 2, 5/3])\n >>> # = 5/4 = 1.25\n >>> m.result().numpy()\n 1.25\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanRelativeError(normalizer=[1, 3])])\n ```\n ", "desc": "Computes the mean relative error by normalizing with the given values.", "type": "API"}, {"name": "tf.keras.metrics.MeanSquaredError", "docs": "Computes the mean squared error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredError()])\n ```\n ", "desc": "Computes the mean squared error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredLogarithmicError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.12011322\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.24022643\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()])\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.MeanTensor", "docs": "Computes the element-wise (weighted) mean of the given tensors.\n\n `MeanTensor` returns a tensor with the same shape of the input tensors. The\n mean value is updated by keeping local variables `total` and `count`. The\n `total` tracks the sum of the weighted values, and `count` stores the sum of\n the weighted counts.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n shape: (Optional) A list of integers, a tuple of integers, or a 1-D Tensor\n of type int32. If not specified, the shape is inferred from the values at\n the first call of update_state.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanTensor()\n >>> m.update_state([0, 1, 2, 3])\n >>> m.update_state([4, 5, 6, 7])\n >>> m.result().numpy()\n array([2., 3., 4., 5.], dtype=float32)\n\n >>> m.update_state([12, 10, 8, 6], sample_weight= [0, 0.2, 0.5, 1])\n >>> m.result().numpy()\n array([2. , 3.6363635, 4.8 , 5.3333335], dtype=float32)\n\n >>> m = tf.keras.metrics.MeanTensor(dtype=tf.float64, shape=(1, 4))\n >>> m.result().numpy()\n array([[0., 0., 0., 0.]])\n >>> m.update_state([[0, 1, 2, 3]])\n >>> m.update_state([[4, 5, 6, 7]])\n >>> m.result().numpy()\n array([[2., 3., 4., 5.]])\n ", "desc": "Computes the element-wise (weighted) mean of the given tensors.", "type": "API"}, {"name": "tf.keras.metrics.Metric", "docs": "Encapsulates metric logic and state.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n **kwargs: Additional layer keywords arguments.\n\n Standalone usage:\n\n ```python\n m = SomeMetric(...)\n for input in ...:\n m.update_state(input)\n print('Final result: ', m.result().numpy())\n ```\n\n Usage with `compile()` API:\n\n ```python\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(10, activation='softmax'))\n\n model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),\n loss=tf.keras.losses.CategoricalCrossentropy(),\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n\n data = np.random.random((1000, 32))\n labels = np.random.random((1000, 10))\n\n dataset = tf.data.Dataset.from_tensor_slices((data, labels))\n dataset = dataset.batch(32)\n\n model.fit(dataset, epochs=10)\n ```\n\n To be implemented by subclasses:\n * `__init__()`: All state variables should be created in this method by\n calling `self.add_weight()` like: `self.var = self.add_weight(...)`\n * `update_state()`: Has all updates to the state variables like:\n self.var.assign_add(...).\n * `result()`: Computes and returns a scalar value or a dict of scalar values\n for the metric from the state variables.\n\n Example subclass implementation:\n\n ```python\n class BinaryTruePositives(tf.keras.metrics.Metric):\n\n def __init__(self, name='binary_true_positives', **kwargs):\n super(BinaryTruePositives, self).__init__(name=name, **kwargs)\n self.true_positives = self.add_weight(name='tp', initializer='zeros')\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, tf.bool)\n y_pred = tf.cast(y_pred, tf.bool)\n\n values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))\n values = tf.cast(values, self.dtype)\n if sample_weight is not None:\n sample_weight = tf.cast(sample_weight, self.dtype)\n sample_weight = tf.broadcast_to(sample_weight, values.shape)\n values = tf.multiply(values, sample_weight)\n self.true_positives.assign_add(tf.reduce_sum(values))\n\n def result(self):\n return self.true_positives\n ```\n ", "desc": "Encapsulates metric logic and state.", "type": "API"}, {"name": "tf.keras.metrics.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.Poisson", "docs": "Computes the Poisson metric between `y_true` and `y_pred`.\n\n `metric = y_pred - y_true * log(y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Poisson()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.99999994\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Poisson()])\n ```\n ", "desc": "Computes the Poisson metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.Precision", "docs": "Computes the precision of the predictions with respect to the labels.\n\n The metric creates two local variables, `true_positives` and `false_positives`\n that are used to compute the precision. This value is ultimately returned as\n `precision`, an idempotent operation that simply divides `true_positives`\n by the sum of `true_positives` and `false_positives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, we'll calculate precision as how often on average a class\n among the top-k classes with the highest predicted values of a batch entry is\n correct and can be found in the label for that entry.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold and/or in the\n top-k highest predictions, and computing the fraction of them for which\n `class_id` is indeed a correct label.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate precision with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Precision()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n >>> # With top_k=2, it will calculate precision over y_true[:2] and y_pred[:2]\n >>> m = tf.keras.metrics.Precision(top_k=2)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.0\n\n >>> # With top_k=4, it will calculate precision over y_true[:4] and y_pred[:4]\n >>> m = tf.keras.metrics.Precision(top_k=4)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Precision()])\n ```\n ", "desc": "Computes the precision of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.keras.metrics.PrecisionAtRecall", "docs": "Computes best precision where recall is >= specified value.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n precision at the given recall. The threshold for the given recall\n value is computed and used to evaluate the corresponding precision.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n recall: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.PrecisionAtRecall(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[2, 2, 2, 1, 1])\n >>> m.result().numpy()\n 0.33333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)])\n ```\n ", "desc": "Computes best precision where recall is >= specified value.", "type": "API"}, {"name": "tf.keras.metrics.Recall", "docs": "Computes the recall of the predictions with respect to the labels.\n\n This metric creates two local variables, `true_positives` and\n `false_negatives`, that are used to compute the recall. This value is\n ultimately returned as `recall`, an idempotent operation that simply divides\n `true_positives` by the sum of `true_positives` and `false_negatives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, recall will be computed as how often on average a class\n among the labels of a batch entry is in the top-k predictions.\n\n If `class_id` is specified, we calculate recall by considering only the\n entries in the batch for which `class_id` is in the label, and computing the\n fraction of them for which `class_id` is above the threshold and/or in the\n top-k predictions.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate recall with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Recall()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Recall()])\n ```\n ", "desc": "Computes the recall of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.keras.metrics.RecallAtPrecision", "docs": "Computes best recall where precision is >= specified value.\n\n For a given score-label-distribution the required precision might not\n be achievable, in this case 0.0 is returned as recall.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n recall at the given precision. The threshold for the given precision\n value is computed and used to evaluate the corresponding recall.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n precision: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RecallAtPrecision(0.8)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RecallAtPrecision(precision=0.8)])\n ```\n ", "desc": "Computes best recall where precision is >= specified value.", "type": "API"}, {"name": "tf.keras.metrics.RootMeanSquaredError", "docs": "Computes root mean squared error metric between `y_true` and `y_pred`.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RootMeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.70710677\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RootMeanSquaredError()])\n ```\n ", "desc": "Computes root mean squared error metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.SensitivityAtSpecificity", "docs": "Computes best sensitivity where specificity is >= specified value.\n\n the sensitivity at a given specificity.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n sensitivity at the given specificity. The threshold for the given specificity\n value is computed and used to evaluate the corresponding sensitivity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n specificity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given specificity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SensitivityAtSpecificity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 1])\n >>> m.result().numpy()\n 0.333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SensitivityAtSpecificity()])\n ```\n ", "desc": "Computes best sensitivity where specificity is >= specified value.", "type": "API"}, {"name": "tf.keras.metrics.serialize", "docs": "Serializes metric function or `Metric` instance.\n\n Args:\n metric: A Keras `Metric` instance or a metric function.\n\n Returns:\n Metric configuration dictionary.\n ", "desc": "Serializes metric function or `Metric` instance.", "type": "API"}, {"name": "tf.keras.metrics.sparse_categorical_accuracy", "docs": "Calculates how often predictions match integer labels.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: Integer ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Sparse categorical accuracy values.\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.keras.metrics.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.keras.metrics.sparse_top_k_categorical_accuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_top_k_categorical_accuracy(\n ... y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: tensor of true targets.\n y_pred: tensor of predicted targets.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Sparse top K categorical accuracy value.\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.keras.metrics.SparseCategoricalAccuracy", "docs": "Calculates how often predictions match integer labels.\n\n ```python\n acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1))\n ```\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `sparse categorical accuracy`: an idempotent operation\n that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseCategoricalAccuracy()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.keras.metrics.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n Use this crossentropy metric when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` metric.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n axis: (Optional) Defaults to -1. The dimension along which the metric is\n computed.\n\n Standalone usage:\n\n >>> # y_true = one_hot(y_true) = [[0, 1, 0], [0, 0, 1]]\n >>> # logits = log(y_pred)\n >>> # softmax = exp(logits) / sum(exp(logits), axis=-1)\n >>> # softmax = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(softmax), 1)\n >>> # log(softmax) = [[-2.9957, -0.0513, -16.1181],\n >>> # [-2.3026, -0.2231, -2.3026]]\n >>> # y_true * log(softmax) = [[0, -0.0513, 0], [0, 0, -2.3026]]\n >>> # xent = [0.0513, 2.3026]\n >>> # Reduced xent = (0.0513 + 2.3026) / 2\n >>> m = tf.keras.metrics.SparseCategoricalCrossentropy()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.keras.metrics.SparseTopKCategoricalAccuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.keras.metrics.SpecificityAtSensitivity", "docs": "Computes best specificity where sensitivity is >= specified value.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n specificity at the given sensitivity. The threshold for the given sensitivity\n value is computed and used to evaluate the corresponding specificity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n sensitivity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given sensitivity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SpecificityAtSensitivity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.66666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 2])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SpecificityAtSensitivity()])\n ```\n ", "desc": "Computes best specificity where sensitivity is >= specified value.", "type": "API"}, {"name": "tf.keras.metrics.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.SquaredHinge", "docs": "Computes the squared hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SquaredHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.86\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.46\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SquaredHinge()])\n ```\n ", "desc": "Computes the squared hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.keras.metrics.Sum", "docs": "Computes the (weighted) sum of the given values.\n\n For example, if values is [1, 3, 5, 7] then the sum is 16.\n If the weights were specified as [1, 1, 0, 0] then the sum would be 4.\n\n This metric creates one variable, `total`, that is used to compute the sum of\n `values`. This is ultimately returned as `sum`.\n\n If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0\n to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Sum()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 16.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Sum(name='sum_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) sum of the given values.", "type": "API"}, {"name": "tf.keras.metrics.top_k_categorical_accuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: The ground truth values.\n y_pred: The prediction values.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Top K categorical accuracy value.\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.keras.metrics.TopKCategoricalAccuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1)\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.keras.metrics.TrueNegatives", "docs": "Calculates the number of true negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of true negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TrueNegatives()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TrueNegatives()])\n ```\n ", "desc": "Calculates the number of true negatives.", "type": "API"}, {"name": "tf.keras.metrics.TruePositives", "docs": "Calculates the number of true positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true positives. This metric creates one local variable, `true_positives`\n that is used to keep track of the number of true positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TruePositives()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TruePositives()])\n ```\n ", "desc": "Calculates the number of true positives.", "type": "API"}, {"name": "tf.keras.mixed_precision", "docs": "Keras mixed precision API.\n\nSee [the mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) to learn how to\nuse the API.\n\n", "desc": "Keras mixed precision API.", "type": "API"}, {"name": "tf.keras.mixed_precision.global_policy", "docs": "Returns the global dtype policy.\n\n The global policy is the default `tf.keras.mixed_precision.Policy` used for\n layers, if no policy is passed to the layer constructor. If no policy has been\n set with `keras.mixed_precision.set_global_policy`, this will return a policy\n constructed from `tf.keras.backend.floatx()` (floatx defaults to float32).\n\n >>> tf.keras.mixed_precision.global_policy()\n \n >>> tf.keras.layers.Dense(10).dtype_policy # Defaults to the global policy\n \n\n If TensorFlow 2 behavior has been disabled with\n `tf.compat.v1.disable_v2_behavior()`, this will instead return a special\n \"_infer\" policy which infers the dtype from the dtype of the first input the\n first time the layer is called. This behavior matches the behavior that\n existed in TensorFlow 1.\n\n See `tf.keras.mixed_precision.Policy` for more information on policies.\n\n Returns:\n The global Policy.\n ", "desc": "Returns the global dtype policy.", "type": "API"}, {"name": "tf.keras.mixed_precision.LossScaleOptimizer", "docs": "An optimizer that applies loss scaling to prevent numeric underflow.\n\n Loss scaling is a technique to prevent numeric underflow in intermediate\n gradients when float16 is used. To prevent underflow, the loss is multiplied\n (or \"scaled\") by a certain factor called the \"loss scale\", which causes\n intermediate gradients to be scaled by the loss scale as well. The final\n gradients are divided (or \"unscaled\") by the loss scale to bring them back to\n their original value.\n\n `LossScaleOptimizer` wraps another optimizer and applies loss scaling to it.\n By default, the loss scale is dynamically updated over time so you do not have\n to choose the loss scale. The `minimize` method automatically scales the loss,\n unscales the gradients, and updates the loss scale so all you have to do is\n wrap your optimizer with a `LossScaleOptimizer` if you use `minimize`. For\n example:\n\n >>> opt = tf.keras.optimizers.SGD(0.25)\n >>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)\n >>> var = tf.Variable(1.)\n >>> loss_fn = lambda: var ** 2\n >>> # 'minimize' applies loss scaling and updates the loss sale.\n >>> opt.minimize(loss_fn, var_list=var)\n >>> var.numpy()\n 0.5\n\n If a `tf.GradientTape` is used to compute gradients instead of `minimize`, you\n must scale the loss and gradients manually. This can be done with the\n `LossScaleOptimizer.get_scaled_loss` and\n `LossScaleOptimizer.get_unscaled_gradients` methods. For example:\n\n >>> with tf.GradientTape() as tape:\n ... loss = loss_fn()\n ... scaled_loss = opt.get_scaled_loss(loss)\n >>> scaled_grad = tape.gradient(scaled_loss, var)\n >>> (grad,) = opt.get_unscaled_gradients([scaled_grad])\n >>> opt.apply_gradients([(grad, var)]) # Loss scale is updated here\n >>> var.numpy()\n 0.25\n\n Warning: If you forget to call `get_scaled_loss` or `get_unscaled_gradients`\n (or both) when using a `tf.GradientTape`, the model will likely converge to a\n worse quality. Please make sure you call each function exactly once.\n\n When mixed precision with float16 is used, there is typically no risk of\n underflow affecting model quality if loss scaling is properly used. See\n [the mixed precision guide](\n https://www.tensorflow.org/guide/keras/mixed_precision) for more information\n on how to use mixed precision.\n\n Args:\n inner_optimizer: The `tf.keras.optimizers.Optimizer` or\n `tf.keras.optimizers.experimental.Optimizer` instance to wrap.\n dynamic: Bool indicating whether dynamic loss scaling is used. Defaults to\n True. If True, the loss scale will be dynamically updated over time using\n an algorithm that keeps the loss scale at approximately its optimal value.\n If False, a single fixed loss scale is used and `initial_scale` must be\n specified, which is used as the loss scale. Recommended to keep as True,\n as choosing a fixed loss scale can be tricky. Currently, there is a small\n performance overhead to dynamic loss scaling compared to fixed loss\n scaling.\n initial_scale: The initial loss scale. If `dynamic` is True, this defaults\n to `2 ** 15`. If `dynamic` is False, this must be specified and acts as\n the sole loss scale, as the loss scale does not change over time. When\n dynamic loss scaling is used, is better for this to be a very high number,\n because a loss scale that is too high gets lowered far more quickly than a\n loss scale that is too low gets raised.\n dynamic_growth_steps: With dynamic loss scaling, every\n `dynamic_growth_steps` steps with finite gradients, the loss scale is\n doubled. Defaults to 2000. If a nonfinite gradient is encountered, the\n count is reset back to zero, gradients are skipped that step, and the loss\n scale is halved. The count can be queried with\n `LossScaleOptimizer.dynamic_counter`. This argument can only be specified\n if `dynamic` is True.\n\n `LossScaleOptimizer` will occasionally skip applying gradients to the\n variables, in which case the trainable variables will not change that step.\n This is done because the dynamic loss scale will sometimes be raised too\n high, causing overflow in the gradients. Typically, the first 2 to 15 steps of\n the model are skipped as the initial loss scale is very high, but afterwards\n steps will only be skipped on average 0.05% of the time (the fraction of steps\n skipped is `1 / dynamic_growth_steps`).\n\n `LossScaleOptimizer` delegates all public `Optimizer` methods to the inner\n optimizer. Additionally, in methods `minimize` and `get_gradients`, it scales\n the loss and unscales the gradients. In methods `minimize` and\n `apply_gradients`, it additionally updates the loss scale and skips applying\n gradients if any gradient has a nonfinite value.\n\n ### Hyperparameters\n\n If wrapping a `tf.keras.optimizers.Optimizer`, hyperparameters can be accessed\n and set on the LossScaleOptimizer, which will be delegated to the wrapped\n optimizer.\n\n >>> opt = tf.keras.optimizers.Adam(beta_1=0.8, epsilon=1e-5)\n >>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)\n >>> opt.beta_1 # Equivalent to `opt.inner_optimizer.beta_1`\n 0.8\n >>> opt.beta_1 = 0.7 # Equivalent to `opt.inner_optimizer.beta_1 = 0.7`\n >>> opt.beta_1\n 0.7\n >>> opt.inner_optimizer.beta_1\n 0.7\n\n However, accessing or setting non-hyperparameters is not delegated to the\n LossScaleOptimizer. In an Adam optimizer, `beta_1` is a hyperparameter but\n `epsilon` is not, as the Adam optimizer only calls `Optimizer._set_hyper` on\n `beta_1`.\n\n >>> opt.inner_optimizer.epsilon\n 1e-5\n >>> opt.epsilon\n Traceback (most recent call last):\n ...\n AttributeError: 'LossScaleOptimizer' object has no attribute 'epsilon'\n >>> opt.epsilon = 1e-4 # This does NOT set epsilon on `opt.inner_optimizer`\n >>> opt.inner_optimizer.epsilon\n >>> 1e-5\n\n In the above example, despite epsilon being set on the LossScaleOptimizer, the\n old epsilon value will still be used when training as epsilon was not set on\n the inner optimizer.\n ", "desc": "An optimizer that applies loss scaling to prevent numeric underflow.", "type": "API"}, {"name": "tf.keras.mixed_precision.Policy", "docs": "A dtype policy for a Keras layer.\n\n A dtype policy determines a layer's computation and variable dtypes. Each\n layer has a policy. Policies can be passed to the `dtype` argument of layer\n constructors, or a global policy can be set with\n `tf.keras.mixed_precision.set_global_policy`.\n\n Args:\n name: The policy name, which determines the compute and variable dtypes. Can\n be any dtype name, such as `'float32'` or `'float64'`, which causes both\n the compute and variable dtypes will be that dtype. Can also be the string\n `'mixed_float16'` or `'mixed_bfloat16'`, which causes the compute dtype to\n be float16 or bfloat16 and the variable dtype to be float32.\n\n Typically you only need to interact with dtype policies when using mixed\n precision, which is the use of float16 or bfloat16 for computations and\n float32 for variables. This is why the term `mixed_precision` appears in the\n API name. Mixed precision can be enabled by passing `'mixed_float16'` or\n `'mixed_bfloat16'` to `tf.keras.mixed_precision.set_global_policy`. See [the\n mixed precision guide](https://www.tensorflow.org/guide/keras/mixed_precision)\n for more information on how to use mixed precision.\n\n >>> tf.keras.mixed_precision.set_global_policy('mixed_float16')\n >>> layer1 = tf.keras.layers.Dense(10)\n >>> layer1.dtype_policy # `layer1` will automatically use mixed precision\n \n >>> # Can optionally override layer to use float32 instead of mixed precision.\n >>> layer2 = tf.keras.layers.Dense(10, dtype='float32')\n >>> layer2.dtype_policy\n \n >>> # Set policy back to initial float32 for future examples.\n >>> tf.keras.mixed_precision.set_global_policy('float32')\n\n In the example above, passing `dtype='float32'` to the layer is equivalent to\n passing `dtype=tf.keras.mixed_precision.Policy('float32')`. In general,\n passing a dtype policy name to a layer is equivalent to passing the\n corresponding policy, so it is never necessary to explicitly construct a\n `Policy` object.\n\n Note: `Model.compile` will automatically wrap an optimizer with a\n `tf.keras.mixed_precision.LossScaleOptimizer` if you use the `'mixed_float16'`\n policy. If you use a custom training loop instead of calling `Model.compile`,\n you should explicitly use a `tf.keras.mixed_precision.LossScaleOptimizer` to\n avoid numeric underflow with float16.\n\n ### How a layer uses its policy's compute dtype\n\n A layer casts its inputs to its compute dtype. This causes the layer's\n computations and output to also be in the compute dtype. For example:\n\n >>> x = tf.ones((4, 4, 4, 4), dtype='float64')\n >>> # `layer`'s policy defaults to float32.\n >>> layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2)\n >>> layer.compute_dtype # Equivalent to layer.dtype_policy.compute_dtype\n 'float32'\n >>> # `layer` casts its inputs to its compute dtype and does computations in\n >>> # that dtype.\n >>> y = layer(x)\n >>> y.dtype\n tf.float32\n\n Note that the base `tf.keras.layers.Layer` class inserts the casts. If\n subclassing your own layer, you do not have to insert any casts.\n\n Currently, only tensors in the first argument to the layer's `call` method are\n casted (although this will likely be changed in a future minor release). For\n example:\n\n >>> class MyLayer(tf.keras.layers.Layer):\n ... # Bug! `b` will not be casted.\n ... def call(self, a, b):\n ... return a + 1., b + 1.\n >>> a = tf.constant(1., dtype=\"float32\")\n >>> b = tf.constant(1., dtype=\"float32\")\n >>> layer = MyLayer(dtype=\"float64\")\n >>> x, y = layer(a, b)\n >>> x.dtype\n tf.float64\n >>> y.dtype\n tf.float32\n\n If writing your own layer with multiple inputs, you should either explicitly\n cast other tensors to `self.compute_dtype` in `call` or accept all tensors in\n the first argument as a list.\n\n The casting only occurs in TensorFlow 2. If\n `tf.compat.v1.disable_v2_behavior()` has been called, you can enable the\n casting behavior with `tf.compat.v1.keras.layers.enable_v2_dtype_behavior()`.\n\n ### How a layer uses its policy's variable dtype\n\n The default dtype of variables created by `tf.keras.layers.Layer.add_weight`\n is the layer's policy's variable dtype.\n\n If a layer's compute and variable dtypes differ, `add_weight` will wrap\n floating-point variables with a special wrapper called an `AutoCastVariable`.\n `AutoCastVariable` is identical to the original variable except it casts\n itself to the layer's compute dtype when used within `Layer.call`. This means\n if you are writing a layer, you do not have to explicitly cast the variables\n to the layer's compute dtype. For example:\n\n >>> class SimpleDense(tf.keras.layers.Layer):\n ...\n ... def build(self, input_shape):\n ... # With mixed precision, self.kernel is a float32 AutoCastVariable\n ... self.kernel = self.add_weight('kernel', (input_shape[-1], 10))\n ...\n ... def call(self, inputs):\n ... # With mixed precision, self.kernel will be casted to float16\n ... return tf.linalg.matmul(inputs, self.kernel)\n ...\n >>> layer = SimpleDense(dtype='mixed_float16')\n >>> y = layer(tf.ones((10, 10)))\n >>> y.dtype\n tf.float16\n >>> layer.kernel.dtype\n tf.float32\n\n A layer author can prevent a variable from being wrapped with an\n `AutoCastVariable` by passing `experimental_autocast=False` to `add_weight`,\n which is useful if the float32 value of the variable must be accessed within\n the layer.\n\n ### How to write a layer that supports mixed precision and float64.\n\n For the most part, layers will automatically support mixed precision and\n float64 without any additional work, due to the fact the base layer\n automatically casts inputs, creates variables of the correct type, and in the\n case of mixed precision, wraps variables with `AutoCastVariables`.\n\n The primary case where you need extra work to support mixed precision or\n float64 is when you create a new tensor, such as with `tf.ones` or\n `tf.random.normal`, In such cases, you must create the tensor of the correct\n dtype. For example, if you call `tf.random.normal`, you must pass the compute\n dtype, which is the dtype the inputs have been casted to:\n\n >>> class AddRandom(tf.keras.layers.Layer):\n ...\n ... def call(self, inputs):\n ... # We must pass `dtype=inputs.dtype`, otherwise a TypeError may\n ... # occur when adding `inputs` to `rand`.\n ... rand = tf.random.normal(shape=inputs.shape, dtype=inputs.dtype)\n ... return inputs + rand\n >>> layer = AddRandom(dtype='mixed_float16')\n >>> y = layer(x)\n >>> y.dtype\n tf.float16\n\n If you did not pass `dtype=inputs.dtype` to `tf.random.normal`, a\n `TypeError` would have occurred. This is because the `tf.random.normal`'s\n dtype defaults to `\"float32\"`, but the input dtype is float16. You cannot add\n a float32 tensor with a float16 tensor.\n ", "desc": "A dtype policy for a Keras layer.", "type": "API"}, {"name": "tf.keras.mixed_precision.set_global_policy", "docs": "Sets the global dtype policy.\n\n The global policy is the default `tf.keras.mixed_precision.Policy` used for\n layers, if no policy is passed to the layer constructor.\n\n >>> tf.keras.mixed_precision.set_global_policy('mixed_float16')\n >>> tf.keras.mixed_precision.global_policy()\n \n >>> tf.keras.layers.Dense(10).dtype_policy\n \n >>> # Global policy is not used if a policy is directly passed to constructor\n >>> tf.keras.layers.Dense(10, dtype='float64').dtype_policy\n \n >>> tf.keras.mixed_precision.set_global_policy('float32')\n\n If no global policy is set, layers will instead default to a Policy\n constructed from `tf.keras.backend.floatx()`.\n\n To use mixed precision, the global policy should be set to `'mixed_float16'`\n or `'mixed_bfloat16'`, so that every layer uses a 16-bit compute dtype and\n float32 variable dtype by default.\n\n Only floating point policies can be set as the global policy, such as\n `'float32'` and `'mixed_float16'`. Non-floating point policies such as\n `'int32'` and `'complex64'` cannot be set as the global policy because most\n layers do not support such policies.\n\n See `tf.keras.mixed_precision.Policy` for more information.\n\n Args:\n policy: A Policy, or a string that will be converted to a Policy. Can also\n be None, in which case the global policy will be constructed from\n `tf.keras.backend.floatx()`\n ", "desc": "Sets the global dtype policy.", "type": "API"}, {"name": "tf.keras.Model", "docs": "`Model` groups layers into an object with training and inference features.\n\n Args:\n inputs: The input(s) of the model: a `keras.Input` object or list of\n `keras.Input` objects.\n outputs: The output(s) of the model. See Functional API example below.\n name: String, the name of the model.\n\n There are two ways to instantiate a `Model`:\n\n 1 - With the \"Functional API\", where you start from `Input`,\n you chain layer calls to specify the model's forward pass,\n and finally you create your model from inputs and outputs:\n\n ```python\n import tensorflow as tf\n\n inputs = tf.keras.Input(shape=(3,))\n x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)\n outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)\n model = tf.keras.Model(inputs=inputs, outputs=outputs)\n ```\n\n Note: Only dicts, lists, and tuples of input tensors are supported. Nested\n inputs are not supported (e.g. lists of list or dicts of dict).\n\n A new Functional API model can also be created by using the\n intermediate tensors. This enables you to quickly extract sub-components\n of the model.\n\n Example:\n\n ```python\n inputs = keras.Input(shape=(None, None, 3))\n processed = keras.layers.RandomCrop(width=32, height=32)(inputs)\n conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)\n pooling = keras.layers.GlobalAveragePooling2D()(conv)\n feature = keras.layers.Dense(10)(pooling)\n\n full_model = keras.Model(inputs, feature)\n backbone = keras.Model(processed, conv)\n activations = keras.Model(conv, feature)\n ```\n\n Note that the `backbone` and `activations` models are not\n created with `keras.Input` objects, but with the tensors that are originated\n from `keras.Inputs` objects. Under the hood, the layers and weights will\n be shared across these models, so that user can train the `full_model`, and\n use `backbone` or `activations` to do feature extraction.\n The inputs and outputs of the model can be nested structures of tensors as\n well, and the created models are standard Functional API models that support\n all the existing APIs.\n\n 2 - By subclassing the `Model` class: in that case, you should define your\n layers in `__init__()` and you should implement the model's forward pass\n in `call()`.\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n\n def call(self, inputs):\n x = self.dense1(inputs)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n If you subclass `Model`, you can optionally have\n a `training` argument (boolean) in `call()`, which you can use to specify\n a different behavior in training and inference:\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n self.dropout = tf.keras.layers.Dropout(0.5)\n\n def call(self, inputs, training=False):\n x = self.dense1(inputs)\n if training:\n x = self.dropout(x, training=training)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n Once the model is created, you can config the model with losses and metrics\n with `model.compile()`, train the model with `model.fit()`, or use the model\n to do prediction with `model.predict()`.\n ", "desc": "`Model` groups layers into an object with training and inference features.", "type": "API"}, {"name": "tf.keras.models", "docs": "Keras models API.\n", "desc": "Keras models API.", "type": "API"}, {"name": "tf.keras.models.clone_model", "docs": "Clone a Functional or Sequential `Model` instance.\n\n Model cloning is similar to calling a model on new inputs,\n except that it creates new layers (and thus new weights) instead\n of sharing the weights of the existing layers.\n\n Note that\n `clone_model` will not preserve the uniqueness of shared objects within the\n model (e.g. a single variable attached to two distinct layers will be\n restored as two separate variables).\n\n Args:\n model: Instance of `Model`\n (could be a Functional model or a Sequential model).\n input_tensors: optional list of input tensors or InputLayer objects\n to build the model upon. If not provided,\n new `Input` objects will be created.\n clone_function: Callable to be used to clone each layer in the target\n model (except `InputLayer` instances). It takes as argument the layer\n instance to be cloned, and returns the corresponding layer instance to\n be used in the model copy. If unspecified, this callable defaults to\n the following serialization/deserialization function:\n `lambda layer: layer.__class__.from_config(layer.get_config())`.\n By passing a custom callable, you can customize your copy of the\n model, e.g. by wrapping certain layers of interest (you might want to\n replace all `LSTM` instances with equivalent\n `Bidirectional(LSTM(...))` instances, for example).\n\n Returns:\n An instance of `Model` reproducing the behavior\n of the original model, on top of new inputs tensors,\n using newly instantiated weights. The cloned model may behave\n differently from the original model if a custom `clone_function`\n modifies the layer.\n\n Example:\n\n ```python\n # Create a test Sequential model.\n model = keras.Sequential([\n keras.Input(shape=(728,)),\n keras.layers.Dense(32, activation='relu'),\n keras.layers.Dense(1, activation='sigmoid'),\n ])\n # Create a copy of the test model (with freshly initialized weights).\n new_model = clone_model(model)\n ```\n\n Note that subclassed models cannot be cloned, since their internal\n layer structure is not known. To achieve equivalent functionality\n as `clone_model` in the case of a subclassed model, simply make sure\n that the model class implements `get_config()`\n (and optionally `from_config()`), and call:\n\n ```python\n new_model = model.__class__.from_config(model.get_config())\n ```\n ", "desc": "Clone a Functional or Sequential `Model` instance.", "type": "API"}, {"name": "tf.keras.models.load_model", "docs": "Loads a model saved via `model.save()`.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> model.save('/tmp/model')\n >>> loaded_model = tf.keras.models.load_model('/tmp/model')\n >>> x = tf.random.uniform((10, 3))\n >>> assert np.allclose(model.predict(x), loaded_model.predict(x))\n\n Note that the model weights may have different scoped names after being\n loaded. Scoped names include the model/layer names, such as\n `\"dense_1/kernel:0\"`. It is recommended that you use the layer properties to\n access specific variables, e.g. `model.get_layer(\"dense_1\").kernel`.\n\n Args:\n filepath: One of the following:\n - String or `pathlib.Path` object, path to the saved model\n - `h5py.File` object from which to load the model\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n compile: Boolean, whether to compile the model\n after loading.\n options: Optional `tf.saved_model.LoadOptions` object that specifies\n options for loading from SavedModel.\n\n Returns:\n A Keras model instance. If the original model was compiled, and saved with\n the optimizer, then the returned model will be compiled. Otherwise, the\n model will be left uncompiled. In the case that an uncompiled model is\n returned, a warning is displayed if the `compile` argument is set to\n `True`.\n\n Raises:\n ImportError: if loading from an hdf5 file and h5py is not available.\n IOError: In case of an invalid savefile.\n ", "desc": "Loads a model saved via `model.save()`.", "type": "API"}, {"name": "tf.keras.models.Model", "docs": "`Model` groups layers into an object with training and inference features.\n\n Args:\n inputs: The input(s) of the model: a `keras.Input` object or list of\n `keras.Input` objects.\n outputs: The output(s) of the model. See Functional API example below.\n name: String, the name of the model.\n\n There are two ways to instantiate a `Model`:\n\n 1 - With the \"Functional API\", where you start from `Input`,\n you chain layer calls to specify the model's forward pass,\n and finally you create your model from inputs and outputs:\n\n ```python\n import tensorflow as tf\n\n inputs = tf.keras.Input(shape=(3,))\n x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)\n outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)\n model = tf.keras.Model(inputs=inputs, outputs=outputs)\n ```\n\n Note: Only dicts, lists, and tuples of input tensors are supported. Nested\n inputs are not supported (e.g. lists of list or dicts of dict).\n\n A new Functional API model can also be created by using the\n intermediate tensors. This enables you to quickly extract sub-components\n of the model.\n\n Example:\n\n ```python\n inputs = keras.Input(shape=(None, None, 3))\n processed = keras.layers.RandomCrop(width=32, height=32)(inputs)\n conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)\n pooling = keras.layers.GlobalAveragePooling2D()(conv)\n feature = keras.layers.Dense(10)(pooling)\n\n full_model = keras.Model(inputs, feature)\n backbone = keras.Model(processed, conv)\n activations = keras.Model(conv, feature)\n ```\n\n Note that the `backbone` and `activations` models are not\n created with `keras.Input` objects, but with the tensors that are originated\n from `keras.Inputs` objects. Under the hood, the layers and weights will\n be shared across these models, so that user can train the `full_model`, and\n use `backbone` or `activations` to do feature extraction.\n The inputs and outputs of the model can be nested structures of tensors as\n well, and the created models are standard Functional API models that support\n all the existing APIs.\n\n 2 - By subclassing the `Model` class: in that case, you should define your\n layers in `__init__()` and you should implement the model's forward pass\n in `call()`.\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n\n def call(self, inputs):\n x = self.dense1(inputs)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n If you subclass `Model`, you can optionally have\n a `training` argument (boolean) in `call()`, which you can use to specify\n a different behavior in training and inference:\n\n ```python\n import tensorflow as tf\n\n class MyModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)\n self.dropout = tf.keras.layers.Dropout(0.5)\n\n def call(self, inputs, training=False):\n x = self.dense1(inputs)\n if training:\n x = self.dropout(x, training=training)\n return self.dense2(x)\n\n model = MyModel()\n ```\n\n Once the model is created, you can config the model with losses and metrics\n with `model.compile()`, train the model with `model.fit()`, or use the model\n to do prediction with `model.predict()`.\n ", "desc": "`Model` groups layers into an object with training and inference features.", "type": "API"}, {"name": "tf.keras.models.model_from_config", "docs": "Instantiates a Keras model from its config.\n\n Usage:\n ```\n # for a Functional API model\n tf.keras.Model().from_config(model.get_config())\n\n # for a Sequential model\n tf.keras.Sequential().from_config(model.get_config())\n ```\n\n Args:\n config: Configuration dictionary.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n\n Raises:\n TypeError: if `config` is not a dictionary.\n ", "desc": "Instantiates a Keras model from its config.", "type": "API"}, {"name": "tf.keras.models.model_from_json", "docs": "Parses a JSON model configuration string and returns a model instance.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> config = model.to_json()\n >>> loaded_model = tf.keras.models.model_from_json(config)\n\n Args:\n json_string: JSON string encoding a model configuration.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n ", "desc": "Parses a JSON model configuration string and returns a model instance.", "type": "API"}, {"name": "tf.keras.models.model_from_yaml", "docs": "Parses a yaml model configuration file and returns a model instance.\n\n Note: Since TF 2.6, this method is no longer supported and will raise a\n RuntimeError.\n\n Args:\n yaml_string: YAML string or open file encoding a model configuration.\n custom_objects: Optional dictionary mapping names\n (strings) to custom classes or functions to be\n considered during deserialization.\n\n Returns:\n A Keras model instance (uncompiled).\n\n Raises:\n RuntimeError: announces that the method poses a security risk\n ", "desc": "Parses a yaml model configuration file and returns a model instance.", "type": "API"}, {"name": "tf.keras.models.save_model", "docs": "Saves a model as a TensorFlow SavedModel or HDF5 file.\n\n See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/)\n for details.\n\n Usage:\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Dense(5, input_shape=(3,)),\n ... tf.keras.layers.Softmax()])\n >>> model.save('/tmp/model')\n >>> loaded_model = tf.keras.models.load_model('/tmp/model')\n >>> x = tf.random.uniform((10, 3))\n >>> assert np.allclose(model.predict(x), loaded_model.predict(x))\n\n Note that `model.save()` is an alias for `tf.keras.models.save_model()`.\n\n The SavedModel and HDF5 file contains:\n\n - the model's configuration (topology)\n - the model's weights\n - the model's optimizer's state (if any)\n\n Thus models can be reinstantiated in the exact same state, without any of the\n code used for model definition or training.\n\n Note that the model weights may have different scoped names after being\n loaded. Scoped names include the model/layer names, such as\n `\"dense_1/kernel:0\"`. It is recommended that you use the layer properties to\n access specific variables, e.g. `model.get_layer(\"dense_1\").kernel`.\n\n __SavedModel serialization format__\n\n Keras SavedModel uses `tf.saved_model.save` to save the model and all\n trackable objects attached to the model (e.g. layers and variables). The model\n config, weights, and optimizer are saved in the SavedModel. Additionally, for\n every Keras layer attached to the model, the SavedModel stores:\n\n * the config and metadata -- e.g. name, dtype, trainable status\n * traced call and loss functions, which are stored as TensorFlow subgraphs.\n\n The traced functions allow the SavedModel format to save and load custom\n layers without the original class definition.\n\n You can choose to not save the traced functions by disabling the `save_traces`\n option. This will decrease the time it takes to save the model and the\n amount of disk space occupied by the output SavedModel. If you enable this\n option, then you _must_ provide all custom class definitions when loading\n the model. See the `custom_objects` argument in `tf.keras.models.load_model`.\n\n Args:\n model: Keras model instance to be saved.\n filepath: One of the following:\n - String or `pathlib.Path` object, path where to save the model\n - `h5py.File` object where to save the model\n overwrite: Whether we should overwrite any existing model at the target\n location, or instead ask the user with a manual prompt.\n include_optimizer: If True, save optimizer's state together.\n save_format: Either 'tf' or 'h5', indicating whether to save the model\n to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5'\n in TF 1.X.\n signatures: Signatures to save with the SavedModel. Applicable to the 'tf'\n format only. Please see the `signatures` argument in\n `tf.saved_model.save` for details.\n options: (only applies to SavedModel format) `tf.saved_model.SaveOptions`\n object that specifies options for saving to SavedModel.\n save_traces: (only applies to SavedModel format) When enabled, the\n SavedModel will store the function traces for each layer. This\n can be disabled, so that only the configs of each layer are stored.\n Defaults to `True`. Disabling this will decrease serialization time and\n reduce file size, but it requires that all custom layers/models\n implement a `get_config()` method.\n\n Raises:\n ImportError: If save format is hdf5, and h5py is not available.\n ", "desc": "Saves a model as a TensorFlow SavedModel or HDF5 file.", "type": "API"}, {"name": "tf.keras.models.Sequential", "docs": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.\n\n `Sequential` provides training and inference features on this model.\n\n Examples:\n\n ```python\n # Optionally, the first layer can receive an `input_shape` argument:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n # Afterwards, we do automatic shape inference:\n model.add(tf.keras.layers.Dense(4))\n\n # This is identical to the following:\n model = tf.keras.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(8))\n\n # Note that you can also omit the `input_shape` argument.\n # In that case the model doesn't have any weights until the first call\n # to a training/evaluation method (since it isn't yet built):\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n # model.weights not created yet\n\n # Whereas if you specify the input shape, the model gets built\n # continuously as you are adding layers:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n model.add(tf.keras.layers.Dense(4))\n len(model.weights)\n # Returns \"4\"\n\n # When using the delayed-build pattern (no input shape specified), you can\n # choose to manually build your model by calling\n # `build(batch_input_shape)`:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n model.build((None, 16))\n len(model.weights)\n # Returns \"4\"\n\n # Note that when using the delayed-build pattern (no input shape specified),\n # the model gets built the first time you call `fit`, `eval`, or `predict`,\n # or the first time you call the model on some input data.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(1))\n model.compile(optimizer='sgd', loss='mse')\n # This builds the model for the first time:\n model.fit(x, y, batch_size=32, epochs=10)\n ```\n ", "desc": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.", "type": "API"}, {"name": "tf.keras.optimizers", "docs": "Built-in optimizer classes.\n\nFor more examples see the base class `tf.keras.optimizers.Optimizer`.\n\n", "desc": "Built-in optimizer classes.", "type": "API"}, {"name": "tf.keras.optimizers.Adadelta", "docs": "Optimizer that implements the Adadelta algorithm.\n\n Adadelta optimization is a stochastic gradient descent method that is based on\n adaptive learning rate per dimension to address two drawbacks:\n\n - The continual decay of learning rates throughout training.\n - The need for a manually selected global learning rate.\n\n Adadelta is a more robust extension of Adagrad that adapts learning rates\n based on a moving window of gradient updates, instead of accumulating all\n past gradients. This way, Adadelta continues learning even when many updates\n have been done. Compared to Adagrad, in the original version of Adadelta you\n don't have to set an initial learning rate. In this version, the initial\n learning rate can be set, as in most other Keras optimizers.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adadelta` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n rho: A `Tensor` or a floating point value. The decay rate.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adadelta\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Zeiler, 2012](http://arxiv.org/abs/1212.5701)\n ", "desc": "Optimizer that implements the Adadelta algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.Adagrad", "docs": "Optimizer that implements the Adagrad algorithm.\n\n Adagrad is an optimizer with parameter-specific learning rates,\n which are adapted relative to how frequently a parameter gets\n updated during training. The more updates a parameter receives,\n the smaller the updates.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adagrad` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n initial_accumulator_value: Floating point value.\n Starting value for the accumulators (per-parameter momentum values).\n Must be non-negative.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adagrad\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value..\n\n Reference:\n - [Duchi et al., 2011](\n http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).\n ", "desc": "Optimizer that implements the Adagrad algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.Adam", "docs": "Optimizer that implements the Adam algorithm.\n\n Adam optimization is a stochastic gradient descent method that is based on\n adaptive estimation of first-order and second-order moments.\n\n According to\n [Kingma et al., 2014](http://arxiv.org/abs/1412.6980),\n the method is \"*computationally\n efficient, has little memory requirement, invariant to diagonal rescaling of\n gradients, and is well suited for problems that are large in terms of\n data/parameters*\".\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use, The\n learning rate. Defaults to 0.001.\n beta_1: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use. The\n exponential decay rate for the 1st moment estimates. Defaults to 0.9.\n beta_2: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use, The\n exponential decay rate for the 2nd moment estimates. Defaults to 0.999.\n epsilon: A small constant for numerical stability. This epsilon is\n \"epsilon hat\" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to\n 1e-7.\n amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper \"On the Convergence of Adam and beyond\". Defaults to `False`.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.Adam(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> # The first step is `-learning_rate*sign(grad)`\n >>> var1.numpy()\n 9.9\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n - [Reddi et al., 2018](\n https://openreview.net/pdf?id=ryQu7f-RZ) for `amsgrad`.\n\n Notes:\n\n The default value of 1e-7 for epsilon might not be a good default in\n general. For example, when training an Inception network on ImageNet a\n current good choice is 1.0 or 0.1. Note that since Adam uses the\n formulation just before Section 2.1 of the Kingma and Ba paper rather than\n the formulation in Algorithm 1, the \"epsilon\" referred to here is \"epsilon\n hat\" in the paper.\n\n The sparse implementation of this algorithm (used when the gradient is an\n IndexedSlices object, typically because of `tf.gather` or an embedding\n lookup in the forward pass) does apply momentum to variable slices even if\n they were not used in the forward pass (meaning they have a gradient equal\n to zero). Momentum decay (beta1) is also applied to the entire momentum\n accumulator. This means that the sparse behavior is equivalent to the dense\n behavior (in contrast to some momentum implementations which ignore momentum\n unless a variable slice was actually used).\n ", "desc": "Optimizer that implements the Adam algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.Adamax", "docs": "Optimizer that implements the Adamax algorithm.\n\n It is a variant of Adam based on the infinity norm.\n Default parameters follow those provided in the paper.\n Adamax is sometimes superior to adam, specially in models with embeddings.\n\n Initialization:\n\n ```python\n m = 0 # Initialize initial 1st moment vector\n v = 0 # Initialize the exponentially weighted infinity norm\n t = 0 # Initialize timestep\n ```\n\n The update rule for parameter `w` with gradient `g` is\n described at the end of section 7.1 of the paper:\n\n ```python\n t += 1\n m = beta1 * m + (1 - beta) * g\n v = max(beta2 * v, abs(g))\n current_lr = learning_rate / (1 - beta1 ** t)\n w = w - current_lr * m / (v + epsilon)\n ```\n\n Similarly to `Adam`, the epsilon is added for numerical stability\n (especially to get rid of division by zero when `v_t == 0`).\n\n In contrast to `Adam`, the sparse implementation of this algorithm\n (used when the gradient is an IndexedSlices object, typically because of\n `tf.gather` or an embedding lookup in the forward pass) only updates\n variable slices and corresponding `m_t`, `v_t` terms when that part of\n the variable was used in the forward pass. This means that the sparse\n behavior is contrast to the dense behavior (similar to some momentum\n implementations which ignore momentum unless a variable slice was actually\n used).\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adamax\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n ", "desc": "Optimizer that implements the Adamax algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.deserialize", "docs": "Inverse of the `serialize` function.\n\n Args:\n config: Optimizer configuration dictionary.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras Optimizer instance.\n ", "desc": "Inverse of the `serialize` function.", "type": "API"}, {"name": "tf.keras.optimizers.Ftrl", "docs": "Optimizer that implements the FTRL algorithm.\n\n \"Follow The Regularized Leader\" (FTRL) is an optimization algorithm developed\n at Google for click-through rate prediction in the early 2010s. It is most\n suitable for shallow models with large and sparse feature spaces.\n The algorithm is described by\n [McMahan et al., 2013](https://research.google.com/pubs/archive/41159.pdf).\n The Keras version has support for both online L2 regularization\n (the L2 regularization described in the paper\n above) and shrinkage-type L2 regularization\n (which is the addition of an L2 penalty to the loss function).\n\n Initialization:\n\n ```python\n n = 0\n sigma = 0\n z = 0\n ```\n\n Update rule for one variable `w`:\n\n ```python\n prev_n = n\n n = n + g ** 2\n sigma = (sqrt(n) - sqrt(prev_n)) / lr\n z = z + g - sigma * w\n if abs(z) < lambda_1:\n w = 0\n else:\n w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / alpha + lambda_2)\n ```\n\n Notation:\n\n - `lr` is the learning rate\n - `g` is the gradient for the variable\n - `lambda_1` is the L1 regularization strength\n - `lambda_2` is the L2 regularization strength\n\n Check the documentation for the `l2_shrinkage_regularization_strength`\n parameter for more details when shrinkage is enabled, in which case gradient\n is replaced with a gradient with shrinkage.\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n learning_rate_power: A float value, must be less or equal to zero.\n Controls how the learning rate decreases during training. Use zero for\n a fixed learning rate.\n initial_accumulator_value: The starting value for accumulators.\n Only zero or positive values are allowed.\n l1_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n l2_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Ftrl\"`.\n l2_shrinkage_regularization_strength: A float value, must be greater than\n or equal to zero. This differs from L2 above in that the L2 above is a\n stabilization penalty, whereas this L2 shrinkage is a magnitude penalty.\n When input is sparse shrinkage will only happen on the active weights.\n beta: A float value, representing the beta value from the paper.\n Defaults to 0.0.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [McMahan et al., 2013](\n https://research.google.com/pubs/archive/41159.pdf)\n ", "desc": "Optimizer that implements the FTRL algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.get", "docs": "Retrieves a Keras Optimizer instance.\n\n Args:\n identifier: Optimizer identifier, one of\n - String: name of an optimizer\n - Dictionary: configuration dictionary. - Keras Optimizer instance (it\n will be returned unchanged). - TensorFlow Optimizer instance (it\n will be wrapped as a Keras Optimizer).\n\n Returns:\n A Keras Optimizer instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras Optimizer instance.", "type": "API"}, {"name": "tf.keras.optimizers.Nadam", "docs": "Optimizer that implements the NAdam algorithm.\n Much like Adam is essentially RMSprop with momentum, Nadam is Adam with\n Nesterov momentum.\n\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Nadam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage Example:\n >>> opt = tf.keras.optimizers.Nadam(learning_rate=0.2)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> \"{:.1f}\".format(var1.numpy())\n 9.8\n\n Reference:\n - [Dozat, 2015](http://cs229.stanford.edu/proj2015/054_report.pdf).\n ", "desc": "Optimizer that implements the NAdam algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.Optimizer", "docs": "Base class for Keras optimizers.\n\n You should not use this class directly, but instead instantiate one of its\n subclasses such as `tf.keras.optimizers.SGD`, `tf.keras.optimizers.Adam`, etc.\n\n ### Usage\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 * var1 + 2 * var2 * var2\n # In graph mode, returns op that minimizes the loss by updating the listed\n # variables.\n opt_op = opt.minimize(loss, var_list=[var1, var2])\n opt_op.run()\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Usage in custom training loops\n\n In Keras models, sometimes variables are created when the model is first\n called, instead of construction time. Examples include 1) sequential models\n without input shape pre-defined, or 2) subclassed models. Pass var_list as\n callable in these cases.\n\n Example:\n\n ```python\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))\n model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))\n loss_fn = lambda: tf.keras.losses.mse(model(input), output)\n var_list_fn = lambda: model.trainable_weights\n for input, output in data:\n opt.minimize(loss_fn, var_list_fn)\n ```\n\n ### Processing gradients before applying them\n\n Calling `minimize()` takes care of both computing the gradients and\n applying them to the variables. If you want to process the gradients\n before applying them you can instead use the optimizer in three steps:\n\n 1. Compute the gradients with `tf.GradientTape`.\n 2. Process the gradients as you wish.\n 3. Apply the processed gradients with `apply_gradients()`.\n\n Example:\n\n ```python\n # Create an optimizer.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n # Compute the gradients for a list of variables.\n with tf.GradientTape() as tape:\n loss = \n vars = \n grads = tape.gradient(loss, vars)\n\n # Process the gradients, for example cap them, etc.\n # capped_grads = [MyCapper(g) for g in grads]\n processed_grads = [process_gradient(g) for g in grads]\n\n # Ask the optimizer to apply the processed gradients.\n opt.apply_gradients(zip(processed_grads, var_list))\n ```\n\n ### Use with `tf.distribute.Strategy`\n\n This optimizer class is `tf.distribute.Strategy` aware, which means it\n automatically sums gradients across all replicas. To average gradients,\n you divide your loss by the global batch size, which is done\n automatically if you use `tf.keras` built-in training or evaluation loops.\n See the `reduction` argument of your loss which should be set to\n `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` for averaging or\n `tf.keras.losses.Reduction.SUM` for not.\n\n To aggregate gradients yourself, call `apply_gradients` with\n `experimental_aggregate_gradients` set to False. This is useful if you need to\n process aggregated gradients.\n\n If you are not using these and you want to average gradients, you should use\n `tf.math.reduce_sum` to add up your per-example losses and then divide by the\n global batch size. Note that when using `tf.distribute.Strategy`, the first\n component of a tensor's shape is the *replica-local* batch size, which is off\n by a factor equal to the number of replicas being used to compute a single\n step. As a result, using `tf.math.reduce_mean` will give the wrong answer,\n resulting in gradients that can be many times too big.\n\n ### Variable Constraints\n\n All Keras optimizers respect variable constraints. If constraint function is\n passed to any variable, the constraint will be applied to the variable after\n the gradient has been applied to the variable.\n Important: If gradient is sparse tensor, variable constraint is not supported.\n\n ### Thread Compatibility\n\n The entire optimizer is currently thread compatible, not thread-safe. The user\n needs to perform synchronization if necessary.\n\n ### Slots\n\n Many optimizer subclasses, such as `Adam` and `Adagrad` allocate and manage\n additional variables associated with the variables to train. These are called\n Slots. Slots have names and you can ask the optimizer for the names of\n the slots that it uses. Once you have a slot name you can ask the optimizer\n for the variable it created to hold the slot value.\n\n This can be useful if you want to log debug a training algorithm, report stats\n about the slots, etc.\n\n ### Hyperparameters\n\n These are arguments passed to the optimizer subclass constructor\n (the `__init__` method), and then passed to `self._set_hyper()`.\n They can be either regular Python values (like 1.0), tensors, or\n callables. If they are callable, the callable will be called during\n `apply_gradients()` to get the value for the hyper parameter.\n\n Hyperparameters can be overwritten through user code:\n\n Example:\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 + 2 * var2\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n # update learning rate\n opt.learning_rate = 0.05\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Callable learning rate\n\n Optimizer accepts a callable learning rate in two ways. The first way is\n through built-in or customized\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The schedule will be\n called on each iteration with `schedule(iteration)`, a `tf.Variable`\n owned by the optimizer.\n\n Example:\n\n >>> var = tf.Variable(np.random.random(size=(1,)))\n >>> learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(\n ... initial_learning_rate=.01, decay_steps=20, decay_rate=.1)\n >>> opt = tf.keras.optimizers.SGD(learning_rate=learning_rate)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> var = tf.Variable(np.random.random(size=(1,)))\n >>> def lr_callable():\n ... return .1\n >>> opt = tf.keras.optimizers.SGD(learning_rate=lr_callable)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0 # d(loss) / d(var1) = var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> var1.numpy()\n 9.683772\n\n Reference:\n - [Hinton, 2012](\n http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)\n ", "desc": "Optimizer that implements the RMSprop algorithm.", "type": "API"}, {"name": "tf.keras.optimizers.schedules", "docs": "Public API for tf.keras.optimizers.schedules namespace.\n", "desc": "Public API for tf.keras.optimizers.schedules namespace.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.CosineDecay", "docs": "A LearningRateSchedule that uses a cosine decay schedule.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n return initial_learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(\n initial_learning_rate, decay_steps)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.CosineDecayRestarts", "docs": "A LearningRateSchedule that uses a cosine decay schedule with restarts.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function with\n restarts to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n\n The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more\n steps and with `m_mul` times initial learning rate as the new learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed_fn = (\n tf.keras.optimizers.schedules.CosineDecayRestarts(\n initial_learning_rate,\n first_decay_steps))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule with restarts.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.deserialize", "docs": "Instantiates a `LearningRateSchedule` object from a serialized form.\n\n Args:\n config: The serialized form of the `LearningRateSchedule`.\n Dictionary of the form {'class_name': str, 'config': dict}.\n custom_objects: A dictionary mapping class names (or function names) of\n custom (non-Keras) objects to class/functions.\n\n Returns:\n A `LearningRateSchedule` object.\n\n Example:\n\n ```python\n # Configuration for PolynomialDecay\n config = {\n 'class_name': 'PolynomialDecay',\n 'config': {'cycle': False,\n 'decay_steps': 10000,\n 'end_learning_rate': 0.01,\n 'initial_learning_rate': 0.1,\n 'name': None,\n 'power': 0.5}}\n lr_schedule = tf.keras.optimizers.schedules.deserialize(config)\n ```\n ", "desc": "Instantiates a `LearningRateSchedule` object from a serialized form.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.ExponentialDecay", "docs": "A LearningRateSchedule that uses an exponential decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies an exponential decay function\n to an optimizer step, given a provided initial learning rate.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate * decay_rate ^ (step / decay_steps)\n ```\n\n If the argument `staircase` is `True`, then `step / decay_steps` is\n an integer division and the decayed learning rate follows a\n staircase function.\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: When fitting a Keras model, decay every 100000 steps with a base\n of 0.96:\n\n ```python\n initial_learning_rate = 0.1\n lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate,\n decay_steps=100000,\n decay_rate=0.96,\n staircase=True)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an exponential decay schedule.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.InverseTimeDecay", "docs": "A LearningRateSchedule that uses an inverse time decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies the inverse decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * step / decay_step)\n ```\n\n or, if `staircase` is `True`, as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * floor(step / decay_step))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a Keras model when decaying 1/t with a rate of 0.5:\n\n ```python\n ...\n initial_learning_rate = 0.1\n decay_steps = 1.0\n decay_rate = 0.5\n learning_rate_fn = keras.optimizers.schedules.InverseTimeDecay(\n initial_learning_rate, decay_steps, decay_rate)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an inverse time decay schedule.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.LearningRateSchedule", "docs": "The learning rate schedule base class.\n\n You can use a learning rate schedule to modulate how the learning rate\n of your optimizer changes over time.\n\n Several built-in learning rate schedules are available, such as\n `tf.keras.optimizers.schedules.ExponentialDecay` or\n `tf.keras.optimizers.schedules.PiecewiseConstantDecay`:\n\n ```python\n lr_schedule = keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate=1e-2,\n decay_steps=10000,\n decay_rate=0.9)\n optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)\n ```\n\n A `LearningRateSchedule` instance can be passed in as the `learning_rate`\n argument of any optimizer.\n\n To implement your own schedule object, you should implement the `__call__`\n method, which takes a `step` argument (scalar integer tensor, the\n current training step count).\n Like for any other Keras object, you can also optionally\n make your object serializable by implementing the `get_config`\n and `from_config` methods.\n\n Example:\n\n ```python\n class MyLRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):\n\n def __init__(self, initial_learning_rate):\n self.initial_learning_rate = initial_learning_rate\n\n def __call__(self, step):\n return self.initial_learning_rate / (step + 1)\n\n optimizer = tf.keras.optimizers.SGD(learning_rate=MyLRSchedule(0.1))\n ```\n ", "desc": "The learning rate schedule base class.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.PiecewiseConstantDecay", "docs": "A LearningRateSchedule that uses a piecewise constant decay schedule.\n\n The function returns a 1-arg callable to compute the piecewise constant\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n\n Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5\n for the next 10000 steps, and 0.1 for any additional steps.\n\n ```python\n step = tf.Variable(0, trainable=False)\n boundaries = [100000, 110000]\n values = [1.0, 0.5, 0.1]\n learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(\n boundaries, values)\n\n # Later, whenever we perform an optimization step, we pass in the step.\n learning_rate = learning_rate_fn(step)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as the boundary tensors.\n\n The output of the 1-arg function that takes the `step`\n is `values[0]` when `step <= boundaries[0]`,\n `values[1]` when `step > boundaries[0]` and `step <= boundaries[1]`, ...,\n and values[-1] when `step > boundaries[-1]`.\n ", "desc": "A LearningRateSchedule that uses a piecewise constant decay schedule.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.PolynomialDecay", "docs": "A LearningRateSchedule that uses a polynomial decay schedule.\n\n It is commonly observed that a monotonically decreasing learning rate, whose\n degree of change is carefully chosen, results in a better performing model.\n This schedule applies a polynomial decay function to an optimizer step,\n given a provided `initial_learning_rate`, to reach an `end_learning_rate`\n in the given `decay_steps`.\n\n It requires a `step` value to compute the decayed learning rate. You\n can just pass a TensorFlow variable that you increment at each training\n step.\n\n The schedule is a 1-arg callable that produces a decayed learning rate\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n If `cycle` is True then a multiple of `decay_steps` is used, the first one\n that is bigger than `step`.\n\n ```python\n def decayed_learning_rate(step):\n decay_steps = decay_steps * ceil(step / decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a model while decaying from 0.1 to 0.01 in 10000 steps using\n sqrt (i.e. power=0.5):\n\n ```python\n ...\n starter_learning_rate = 0.1\n end_learning_rate = 0.01\n decay_steps = 10000\n learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(\n starter_learning_rate,\n decay_steps,\n end_learning_rate,\n power=0.5)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a polynomial decay schedule.", "type": "API"}, {"name": "tf.keras.optimizers.schedules.serialize", "docs": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.\n\n Args:\n learning_rate_schedule: The `LearningRateSchedule` object to serialize.\n\n Returns:\n A JSON-serializable dict representing the object's config.\n\n Example:\n\n >>> lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n ... 0.1, decay_steps=100000, decay_rate=0.96, staircase=True)\n >>> tf.keras.optimizers.schedules.serialize(lr_schedule)\n {'class_name': 'ExponentialDecay', 'config': {...}}\n ", "desc": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.", "type": "API"}, {"name": "tf.keras.optimizers.serialize", "docs": "Serialize the optimizer configuration to JSON compatible python dict.\n\n The configuration can be used for persistence and reconstruct the `Optimizer`\n instance again.\n\n >>> tf.keras.optimizers.serialize(tf.keras.optimizers.SGD())\n {'class_name': 'SGD', 'config': {'name': 'SGD', 'learning_rate': 0.01,\n 'decay': 0.0, 'momentum': 0.0,\n 'nesterov': False}}\n\n Args:\n optimizer: An `Optimizer` instance to serialize.\n\n Returns:\n Python dict which contains the configuration of the input optimizer.\n ", "desc": "Serialize the optimizer configuration to JSON compatible python dict.", "type": "API"}, {"name": "tf.keras.optimizers.SGD", "docs": "Gradient descent (with momentum) optimizer.\n\n Update rule for parameter `w` with gradient `g` when `momentum` is 0:\n\n ```python\n w = w - learning_rate * g\n ```\n\n Update rule when `momentum` is larger than 0:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + velocity\n ```\n\n When `nesterov=True`, this rule becomes:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + momentum * velocity - learning_rate * g\n ```\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use. The\n learning rate. Defaults to 0.01.\n momentum: float hyperparameter >= 0 that accelerates gradient descent\n in the relevant\n direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient\n descent.\n nesterov: boolean. Whether to apply Nesterov momentum.\n Defaults to `False`.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"SGD\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n >>> var = tf.Variable(1.0)\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> # Step is `- learning_rate * grad`\n >>> var.numpy()\n 0.9\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)\n >>> var = tf.Variable(1.0)\n >>> val0 = var.value()\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> # First step is `- learning_rate * grad`\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val1 = var.value()\n >>> (val0 - val1).numpy()\n 0.1\n >>> # On later steps, step-size increases because of momentum\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val2 = var.value()\n >>> (val1 - val2).numpy()\n 0.18\n\n Reference:\n - For `nesterov=True`, See [Sutskever et al., 2013](\n http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).\n ", "desc": "Gradient descent (with momentum) optimizer.", "type": "API"}, {"name": "tf.keras.preprocessing", "docs": "Utilities to preprocess data before training.\n\nDeprecated: `tf.keras.preprocessing` APIs do not operate on tensors and are\nnot recommended for new code. Prefer loading data with either\n`tf.keras.utils.text_dataset_from_directory` or\n`tf.keras.utils.image_dataset_from_directory`, and then transforming the output\n`tf.data.Dataset` with preprocessing layers. These approaches will offer\nbetter performance and intergration with the broader Tensorflow ecosystem. For\nmore information, see the tutorials for [loading text](\nhttps://www.tensorflow.org/tutorials/load_data/text), [loading images](\nhttps://www.tensorflow.org/tutorials/load_data/images), and [augmenting images](\nhttps://www.tensorflow.org/tutorials/images/data_augmentation), as well as the\n[preprocessing layer guide](\nhttps://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilities to preprocess data before training.", "type": "API"}, {"name": "tf.keras.preprocessing.image", "docs": "Utilies for image preprocessing and augmentation.\n\nDeprecated: `tf.keras.preprocessing.image` APIs do not operate on tensors and\nare not recommended for new code. Prefer loading data with\n`tf.keras.utils.image_dataset_from_directory`, and then transforming the output\n`tf.data.Dataset` with preprocessing layers. For more information, see the\ntutorials for [loading images](\nhttps://www.tensorflow.org/tutorials/load_data/images) and [augmenting images](\nhttps://www.tensorflow.org/tutorials/images/data_augmentation), as well as the\n[preprocessing layer guide](\nhttps://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilies for image preprocessing and augmentation.", "type": "API"}, {"name": "tf.keras.preprocessing.image.apply_affine_transform", "docs": "Applies an affine transformation specified by the parameters given.\n\n Args:\n x: 3D numpy array - a 2D image with one or more channels.\n theta: Rotation angle in degrees.\n tx: Width shift.\n ty: Heigh shift.\n shear: Shear angle in degrees.\n zx: Zoom in x direction.\n zy: Zoom in y direction\n row_axis: Index of axis for rows (aka Y axis) in the input\n image. Direction: left to right.\n col_axis: Index of axis for columns (aka X axis) in the input\n image. Direction: top to bottom.\n channel_axis: Index of axis for channels in the input image.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n order: int, order of interpolation\n\n Returns:\n The transformed version of the input.\n\n Raises:\n ImportError: if SciPy is not available.\n ", "desc": "Applies an affine transformation specified by the parameters given.", "type": "API"}, {"name": "tf.keras.preprocessing.image.apply_brightness_shift", "docs": "Performs a brightness shift.\n\n Args:\n x: Input tensor. Must be 3D.\n brightness: Float. The new brightness value.\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Default: True.\n\n Returns:\n Numpy image tensor.\n\n Raises:\n ImportError: if PIL is not available.\n ", "desc": "Performs a brightness shift.", "type": "API"}, {"name": "tf.keras.preprocessing.image.apply_channel_shift", "docs": "Performs a channel shift.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity: Transformation intensity.\n channel_axis: Index of axis for channels in the input tensor.\n\n Returns:\n Numpy image tensor.\n ", "desc": "Performs a channel shift.", "type": "API"}, {"name": "tf.keras.preprocessing.image.array_to_img", "docs": "Converts a 3D Numpy array to a PIL Image instance.\n\n Usage:\n\n ```python\n from PIL import Image\n img = np.random.random(size=(100, 100, 3))\n pil_img = tf.keras.preprocessing.image.array_to_img(img)\n ```\n\n\n Args:\n x: Input data, in any form that can be converted to a Numpy array.\n data_format: Image data format, can be either `\"channels_first\"` or\n `\"channels_last\"`. Defaults to `None`, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to `\"channels_last\"`).\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Defaults to `True`.\n dtype: Dtype to use. Default to `None`, in which case the global setting\n `tf.keras.backend.floatx()` is used (unless you changed it, it defaults\n to `\"float32\"`)\n\n Returns:\n A PIL Image instance.\n\n Raises:\n ImportError: if PIL is not available.\n ValueError: if invalid `x` or `data_format` is passed.\n ", "desc": "Converts a 3D Numpy array to a PIL Image instance.", "type": "API"}, {"name": "tf.keras.preprocessing.image.DirectoryIterator", "docs": "Iterator capable of reading images from a directory on disk.\n\n Deprecated: `tf.keras.preprocessing.image.DirectoryIterator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n directory: Path to the directory to read images from. Each subdirectory in\n this directory will be considered to contain images from one class, or\n alternatively you could specify class subdirectories via the `classes`\n argument.\n image_data_generator: Instance of `ImageDataGenerator` to use for random\n transformations and normalization.\n target_size: tuple of integers, dimensions to resize input images to.\n color_mode: One of `\"rgb\"`, `\"rgba\"`, `\"grayscale\"`. Color mode to read\n images.\n classes: Optional list of strings, names of subdirectories containing\n images from each class (e.g. `[\"dogs\", \"cats\"]`). It will be computed\n automatically if not set.\n class_mode: Mode for yielding the targets:\n - `\"binary\"`: binary targets (if there are only two classes),\n - `\"categorical\"`: categorical targets,\n - `\"sparse\"`: integer targets,\n - `\"input\"`: targets are images identical to input images (mainly used\n to work with autoencoders),\n - `None`: no targets get yielded (only input images are yielded).\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n seed: Random seed for data shuffling.\n data_format: String, one of `channels_first`, `channels_last`.\n save_to_dir: Optional directory where to save the pictures being yielded,\n in a viewable format. This is useful for visualizing the random\n transformations being applied, for debugging purposes.\n save_prefix: String prefix to use for saving sample images (if\n `save_to_dir` is set).\n save_format: Format to use for saving sample images (if `save_to_dir` is\n set).\n subset: Subset of data (`\"training\"` or `\"validation\"`) if\n validation_split is set in ImageDataGenerator.\n interpolation: Interpolation method used to resample the image if the\n target size is different from that of the loaded image. Supported\n methods are \"nearest\", \"bilinear\", and \"bicubic\". If PIL version 1.1.3\n or newer is installed, \"lanczos\" is also supported. If PIL version 3.4.0\n or newer is installed, \"box\" and \"hamming\" are also supported. By\n default, \"nearest\" is used.\n keep_aspect_ratio: Boolean, whether to resize images to a target size\n without aspect ratio distortion. The image is cropped in the center\n with target aspect ratio before resizing.\n dtype: Dtype to use for generated arrays.\n ", "desc": "Iterator capable of reading images from a directory on disk.", "type": "API"}, {"name": "tf.keras.preprocessing.image.ImageDataGenerator", "docs": "Generate batches of tensor image data with real-time data augmentation.\n\n Deprecated: `tf.keras.preprocessing.image.ImageDataGenerator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n The data will be looped over (in batches).\n\n Args:\n featurewise_center: Boolean. Set input mean to 0 over the dataset,\n feature-wise.\n samplewise_center: Boolean. Set each sample mean to 0.\n featurewise_std_normalization: Boolean. Divide inputs by std of the\n dataset, feature-wise.\n samplewise_std_normalization: Boolean. Divide each input by its std.\n zca_epsilon: epsilon for ZCA whitening. Default is 1e-6.\n zca_whitening: Boolean. Apply ZCA whitening.\n rotation_range: Int. Degree range for random rotations.\n width_shift_range: Float, 1-D array-like or int\n - float: fraction of total width, if < 1, or pixels if >= 1.\n - 1-D array-like: random elements from the array.\n - int: integer number of pixels from interval `(-width_shift_range,\n +width_shift_range)` - With `width_shift_range=2` possible values\n are integers `[-1, 0, +1]`, same as with `width_shift_range=[-1, 0,\n +1]`, while with `width_shift_range=1.0` possible values are floats\n in the interval [-1.0, +1.0).\n height_shift_range: Float, 1-D array-like or int\n - float: fraction of total height, if < 1, or pixels if >= 1.\n - 1-D array-like: random elements from the array.\n - int: integer number of pixels from interval `(-height_shift_range,\n +height_shift_range)` - With `height_shift_range=2` possible values\n are integers `[-1, 0, +1]`, same as with `height_shift_range=[-1, 0,\n +1]`, while with `height_shift_range=1.0` possible values are floats\n in the interval [-1.0, +1.0).\n brightness_range: Tuple or list of two floats. Range for picking a\n brightness shift value from.\n shear_range: Float. Shear Intensity (Shear angle in counter-clockwise\n direction in degrees)\n zoom_range: Float or [lower, upper]. Range for random zoom. If a float,\n `[lower, upper] = [1-zoom_range, 1+zoom_range]`.\n channel_shift_range: Float. Range for random channel shifts.\n fill_mode: One of {\"constant\", \"nearest\", \"reflect\" or \"wrap\"}. Default is\n 'nearest'. Points outside the boundaries of the input are filled\n according to the given mode:\n - 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k)\n - 'nearest': aaaaaaaa|abcd|dddddddd\n - 'reflect': abcddcba|abcd|dcbaabcd\n - 'wrap': abcdabcd|abcd|abcdabcd\n cval: Float or Int. Value used for points outside the boundaries when\n `fill_mode = \"constant\"`.\n horizontal_flip: Boolean. Randomly flip inputs horizontally.\n vertical_flip: Boolean. Randomly flip inputs vertically.\n rescale: rescaling factor. Defaults to None. If None or 0, no rescaling is\n applied, otherwise we multiply the data by the value provided (after\n applying all other transformations).\n preprocessing_function: function that will be applied on each input. The\n function will run after the image is resized and augmented.\n The function should take one argument: one image (Numpy tensor with\n rank 3), and should output a Numpy tensor with the same shape.\n data_format: Image data format, either \"channels_first\" or\n \"channels_last\". \"channels_last\" mode means that the images should have\n shape `(samples, height, width, channels)`, \"channels_first\" mode means\n that the images should have shape `(samples, channels, height, width)`.\n It defaults to the `image_data_format` value found in your Keras config\n file at `~/.keras/keras.json`. If you never set it, then it will be\n \"channels_last\".\n validation_split: Float. Fraction of images reserved for validation\n (strictly between 0 and 1).\n dtype: Dtype to use for the generated arrays.\n\n Raises:\n ValueError: If the value of the argument, `data_format` is other than\n `\"channels_last\"` or `\"channels_first\"`.\n ValueError: If the value of the argument, `validation_split` > 1\n or `validation_split` < 0.\n\n Examples:\n\n Example of using `.flow(x, y)`:\n\n ```python\n (x_train, y_train), (x_test, y_test) = cifar10.load_data()\n y_train = utils.to_categorical(y_train, num_classes)\n y_test = utils.to_categorical(y_test, num_classes)\n datagen = ImageDataGenerator(\n featurewise_center=True,\n featurewise_std_normalization=True,\n rotation_range=20,\n width_shift_range=0.2,\n height_shift_range=0.2,\n horizontal_flip=True,\n validation_split=0.2)\n # compute quantities required for featurewise normalization\n # (std, mean, and principal components if ZCA whitening is applied)\n datagen.fit(x_train)\n # fits the model on batches with real-time data augmentation:\n model.fit(datagen.flow(x_train, y_train, batch_size=32,\n subset='training'),\n validation_data=datagen.flow(x_train, y_train,\n batch_size=8, subset='validation'),\n steps_per_epoch=len(x_train) / 32, epochs=epochs)\n # here's a more \"manual\" example\n for e in range(epochs):\n print('Epoch', e)\n batches = 0\n for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):\n model.fit(x_batch, y_batch)\n batches += 1\n if batches >= len(x_train) / 32:\n # we need to break the loop by hand because\n # the generator loops indefinitely\n break\n ```\n\n Example of using `.flow_from_directory(directory)`:\n\n ```python\n train_datagen = ImageDataGenerator(\n rescale=1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)\n test_datagen = ImageDataGenerator(rescale=1./255)\n train_generator = train_datagen.flow_from_directory(\n 'data/train',\n target_size=(150, 150),\n batch_size=32,\n class_mode='binary')\n validation_generator = test_datagen.flow_from_directory(\n 'data/validation',\n target_size=(150, 150),\n batch_size=32,\n class_mode='binary')\n model.fit(\n train_generator,\n steps_per_epoch=2000,\n epochs=50,\n validation_data=validation_generator,\n validation_steps=800)\n ```\n\n Example of transforming images and masks together.\n\n ```python\n # we create two instances with the same arguments\n data_gen_args = dict(featurewise_center=True,\n featurewise_std_normalization=True,\n rotation_range=90,\n width_shift_range=0.1,\n height_shift_range=0.1,\n zoom_range=0.2)\n image_datagen = ImageDataGenerator(**data_gen_args)\n mask_datagen = ImageDataGenerator(**data_gen_args)\n # Provide the same seed and keyword arguments to the fit and flow methods\n seed = 1\n image_datagen.fit(images, augment=True, seed=seed)\n mask_datagen.fit(masks, augment=True, seed=seed)\n image_generator = image_datagen.flow_from_directory(\n 'data/images',\n class_mode=None,\n seed=seed)\n mask_generator = mask_datagen.flow_from_directory(\n 'data/masks',\n class_mode=None,\n seed=seed)\n # combine generators into one which yields image and masks\n train_generator = zip(image_generator, mask_generator)\n model.fit(\n train_generator,\n steps_per_epoch=2000,\n epochs=50)\n ```\n ", "desc": "Generate batches of tensor image data with real-time data augmentation.", "type": "API"}, {"name": "tf.keras.preprocessing.image.img_to_array", "docs": "Converts a PIL Image instance to a Numpy array.\n\n Usage:\n\n ```python\n from PIL import Image\n img_data = np.random.random(size=(100, 100, 3))\n img = tf.keras.preprocessing.image.array_to_img(img_data)\n array = tf.keras.preprocessing.image.img_to_array(img)\n ```\n\n\n Args:\n img: Input PIL Image instance.\n data_format: Image data format, can be either `\"channels_first\"` or\n `\"channels_last\"`. Defaults to `None`, in which case the global setting\n `tf.keras.backend.image_data_format()` is used (unless you changed it,\n it defaults to `\"channels_last\"`).\n dtype: Dtype to use. Default to `None`, in which case the global setting\n `tf.keras.backend.floatx()` is used (unless you changed it, it defaults\n to `\"float32\"`).\n\n Returns:\n A 3D Numpy array.\n\n Raises:\n ValueError: if invalid `img` or `data_format` is passed.\n ", "desc": "Converts a PIL Image instance to a Numpy array.", "type": "API"}, {"name": "tf.keras.preprocessing.image.Iterator", "docs": "Base class for image data iterators.\n\n Deprecated: `tf.keras.preprocessing.image.Iterator` is not recommended for\n new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Every `Iterator` must implement the `_get_batches_of_transformed_samples`\n method.\n\n Args:\n n: Integer, total number of samples in the dataset to loop over.\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n seed: Random seeding for data shuffling.\n ", "desc": "Base class for image data iterators.", "type": "API"}, {"name": "tf.keras.preprocessing.image.load_img", "docs": "Loads an image into PIL format.\n\n Usage:\n\n ```\n image = tf.keras.preprocessing.image.load_img(image_path)\n input_arr = tf.keras.preprocessing.image.img_to_array(image)\n input_arr = np.array([input_arr]) # Convert single image to a batch.\n predictions = model.predict(input_arr)\n ```\n\n Args:\n path: Path to image file.\n grayscale: DEPRECATED use `color_mode=\"grayscale\"`.\n color_mode: One of `\"grayscale\"`, `\"rgb\"`, `\"rgba\"`. Default: `\"rgb\"`.\n The desired image format.\n target_size: Either `None` (default to original size) or tuple of ints\n `(img_height, img_width)`.\n interpolation: Interpolation method used to resample the image if the\n target size is different from that of the loaded image. Supported\n methods are `\"nearest\"`, `\"bilinear\"`, and `\"bicubic\"`. If PIL version\n 1.1.3 or newer is installed, `\"lanczos\"` is also supported. If PIL\n version 3.4.0 or newer is installed, `\"box\"` and `\"hamming\"` are also\n supported. By default, `\"nearest\"` is used.\n keep_aspect_ratio: Boolean, whether to resize images to a target\n size without aspect ratio distortion. The image is cropped in\n the center with target aspect ratio before resizing.\n\n Returns:\n A PIL Image instance.\n\n Raises:\n ImportError: if PIL is not available.\n ValueError: if interpolation method is not supported.\n ", "desc": "Loads an image into PIL format.", "type": "API"}, {"name": "tf.keras.preprocessing.image.NumpyArrayIterator", "docs": "Iterator yielding data from a Numpy array.\n\n Deprecated: `tf.keras.preprocessing.image.NumpyArrayIterator` is not\n recommended for new code. Prefer loading images with\n `tf.keras.utils.image_dataset_from_directory` and transforming the output\n `tf.data.Dataset` with preprocessing layers. For more information, see the\n tutorials for [loading images](\n https://www.tensorflow.org/tutorials/load_data/images) and\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Numpy array of input data or tuple. If tuple, the second elements is\n either another numpy array or a list of numpy arrays, each of which gets\n passed through as an output without any modifications.\n y: Numpy array of targets data.\n image_data_generator: Instance of `ImageDataGenerator` to use for random\n transformations and normalization.\n batch_size: Integer, size of a batch.\n shuffle: Boolean, whether to shuffle the data between epochs.\n sample_weight: Numpy array of sample weights.\n seed: Random seed for data shuffling.\n data_format: String, one of `channels_first`, `channels_last`.\n save_to_dir: Optional directory where to save the pictures being yielded,\n in a viewable format. This is useful for visualizing the random\n transformations being applied, for debugging purposes.\n save_prefix: String prefix to use for saving sample images (if\n `save_to_dir` is set).\n save_format: Format to use for saving sample images (if `save_to_dir` is\n set).\n subset: Subset of data (`\"training\"` or `\"validation\"`) if\n validation_split is set in ImageDataGenerator.\n ignore_class_split: Boolean (default: False), ignore difference\n in number of classes in labels across train and validation\n split (useful for non-classification tasks)\n dtype: Dtype to use for the generated arrays.\n ", "desc": "Iterator yielding data from a Numpy array.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_brightness", "docs": "Performs a random brightness shift.\n\n Deprecated: `tf.keras.preprocessing.image.random_brightness` does not operate\n on tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomBrightness` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n brightness_range: Tuple of floats; brightness range.\n scale: Whether to rescale the image such that minimum and maximum values\n are 0 and 255 respectively. Default: True.\n\n Returns:\n Numpy image tensor.\n\n Raises:\n ValueError if `brightness_range` isn't a tuple.\n ", "desc": "Performs a random brightness shift.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_channel_shift", "docs": "Performs a random channel shift.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity_range: Transformation intensity.\n channel_axis: Index of axis for channels in the input tensor.\n\n Returns:\n Numpy image tensor.\n ", "desc": "Performs a random channel shift.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_rotation", "docs": "Performs a random rotation of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_rotation` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomRotation` which provides equivalent functionality as a\n preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n rg: Rotation range, in degrees.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Rotated Numpy image tensor.\n ", "desc": "Performs a random rotation of a Numpy image tensor.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_shear", "docs": "Performs a random spatial shear of a Numpy image tensor.\n\n Args:\n x: Input tensor. Must be 3D.\n intensity: Transformation intensity in degrees.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Sheared Numpy image tensor.\n ", "desc": "Performs a random spatial shear of a Numpy image tensor.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_shift", "docs": "Performs a random spatial shift of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_shift` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomTranslation` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n wrg: Width shift range, as a float fraction of the width.\n hrg: Height shift range, as a float fraction of the height.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Shifted Numpy image tensor.\n ", "desc": "Performs a random spatial shift of a Numpy image tensor.", "type": "API"}, {"name": "tf.keras.preprocessing.image.random_zoom", "docs": "Performs a random spatial zoom of a Numpy image tensor.\n\n Deprecated: `tf.keras.preprocessing.image.random_zoom` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.RandomZoom` which provides equivalent functionality as\n a preprocessing layer. For more information, see the tutorial for\n [augmenting images](\n https://www.tensorflow.org/tutorials/images/data_augmentation), as well as\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n Args:\n x: Input tensor. Must be 3D.\n zoom_range: Tuple of floats; zoom range for width and height.\n row_axis: Index of axis for rows in the input tensor.\n col_axis: Index of axis for columns in the input tensor.\n channel_axis: Index of axis for channels in the input tensor.\n fill_mode: Points outside the boundaries of the input\n are filled according to the given mode\n (one of `{'constant', 'nearest', 'reflect', 'wrap'}`).\n cval: Value used for points outside the boundaries\n of the input if `mode='constant'`.\n interpolation_order: int, order of spline interpolation.\n see `ndimage.interpolation.affine_transform`\n\n Returns:\n Zoomed Numpy image tensor.\n\n Raises:\n ValueError: if `zoom_range` isn't a tuple.\n ", "desc": "Performs a random spatial zoom of a Numpy image tensor.", "type": "API"}, {"name": "tf.keras.preprocessing.image.save_img", "docs": "Saves an image stored as a Numpy array to a path or file object.\n\n Args:\n path: Path or file object.\n x: Numpy array.\n data_format: Image data format, either `\"channels_first\"` or\n `\"channels_last\"`.\n file_format: Optional file format override. If omitted, the format to use\n is determined from the filename extension. If a file object was used\n instead of a filename, this parameter should always be used.\n scale: Whether to rescale image values to be within `[0, 255]`.\n **kwargs: Additional keyword arguments passed to `PIL.Image.save()`.\n ", "desc": "Saves an image stored as a Numpy array to a path or file object.", "type": "API"}, {"name": "tf.keras.preprocessing.image.smart_resize", "docs": "Resize images to a target size without aspect ratio distortion.\n\n Warning: `tf.keras.preprocessing.image.smart_resize` is not recommended for\n new code. Prefer `tf.keras.layers.Resizing`, which provides the same\n functionality as a preprocessing layer and adds `tf.RaggedTensor` support. See\n the [preprocessing layer guide](\n https://www.tensorflow.org/guide/keras/preprocessing_layers)\n for an overview of preprocessing layers.\n\n TensorFlow image datasets typically yield images that have each a different\n size. However, these images need to be batched before they can be\n processed by Keras layers. To be batched, images need to share the same height\n and width.\n\n You could simply do:\n\n ```python\n size = (200, 200)\n ds = ds.map(lambda img: tf.image.resize(img, size))\n ```\n\n However, if you do this, you distort the aspect ratio of your images, since\n in general they do not all have the same aspect ratio as `size`. This is\n fine in many cases, but not always (e.g. for GANs this can be a problem).\n\n Note that passing the argument `preserve_aspect_ratio=True` to `resize`\n will preserve the aspect ratio, but at the cost of no longer respecting the\n provided target size. Because `tf.image.resize` doesn't crop images,\n your output images will still have different sizes.\n\n This calls for:\n\n ```python\n size = (200, 200)\n ds = ds.map(lambda img: smart_resize(img, size))\n ```\n\n Your output images will actually be `(200, 200)`, and will not be distorted.\n Instead, the parts of the image that do not fit within the target size\n get cropped out.\n\n The resizing process is:\n\n 1. Take the largest centered crop of the image that has the same aspect ratio\n as the target size. For instance, if `size=(200, 200)` and the input image has\n size `(340, 500)`, we take a crop of `(340, 340)` centered along the width.\n 2. Resize the cropped image to the target size. In the example above,\n we resize the `(340, 340)` crop to `(200, 200)`.\n\n Args:\n x: Input image or batch of images (as a tensor or NumPy array). Must be in\n format `(height, width, channels)` or `(batch_size, height, width,\n channels)`.\n size: Tuple of `(height, width)` integer. Target size.\n interpolation: String, interpolation to use for resizing. Defaults to\n `'bilinear'`. Supports `bilinear`, `nearest`, `bicubic`, `area`,\n `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.\n\n Returns:\n Array with shape `(size[0], size[1], channels)`. If the input image was a\n NumPy array, the output is a NumPy array, and if it was a TF tensor,\n the output is a TF tensor.\n ", "desc": "Resize images to a target size without aspect ratio distortion.", "type": "API"}, {"name": "tf.keras.preprocessing.image_dataset_from_directory", "docs": "Generates a `tf.data.Dataset` from image files in a directory.\n\n If your directory structure is:\n\n ```\n main_directory/\n ...class_a/\n ......a_image_1.jpg\n ......a_image_2.jpg\n ...class_b/\n ......b_image_1.jpg\n ......b_image_2.jpg\n ```\n\n Then calling `image_dataset_from_directory(main_directory, labels='inferred')`\n will return a `tf.data.Dataset` that yields batches of images from\n the subdirectories `class_a` and `class_b`, together with labels\n 0 and 1 (0 corresponding to `class_a` and 1 corresponding to `class_b`).\n\n Supported image formats: jpeg, png, bmp, gif.\n Animated gifs are truncated to the first frame.\n\n Args:\n directory: Directory where the data is located.\n If `labels` is \"inferred\", it should contain\n subdirectories, each containing images for a class.\n Otherwise, the directory structure is ignored.\n labels: Either \"inferred\"\n (labels are generated from the directory structure),\n None (no labels),\n or a list/tuple of integer labels of the same size as the number of\n image files found in the directory. Labels should be sorted according\n to the alphanumeric order of the image file paths\n (obtained via `os.walk(directory)` in Python).\n label_mode: String describing the encoding of `labels`. Options are:\n - 'int': means that the labels are encoded as integers\n (e.g. for `sparse_categorical_crossentropy` loss).\n - 'categorical' means that the labels are\n encoded as a categorical vector\n (e.g. for `categorical_crossentropy` loss).\n - 'binary' means that the labels (there can be only 2)\n are encoded as `float32` scalars with values 0 or 1\n (e.g. for `binary_crossentropy`).\n - None (no labels).\n class_names: Only valid if \"labels\" is \"inferred\". This is the explicit\n list of class names (must match names of subdirectories). Used\n to control the order of the classes\n (otherwise alphanumerical order is used).\n color_mode: One of \"grayscale\", \"rgb\", \"rgba\". Default: \"rgb\".\n Whether the images will be converted to\n have 1, 3, or 4 channels.\n batch_size: Size of the batches of data. Default: 32.\n If `None`, the data will not be batched\n (the dataset will yield individual samples).\n image_size: Size to resize images to after they are read from disk,\n specified as `(height, width)`. Defaults to `(256, 256)`.\n Since the pipeline processes batches of images that must all have\n the same size, this must be provided.\n shuffle: Whether to shuffle the data. Default: True.\n If set to False, sorts the data in alphanumeric order.\n seed: Optional random seed for shuffling and transformations.\n validation_split: Optional float between 0 and 1,\n fraction of data to reserve for validation.\n subset: Subset of the data to return.\n One of \"training\" or \"validation\".\n Only used if `validation_split` is set.\n interpolation: String, the interpolation method used when resizing images.\n Defaults to `bilinear`. Supports `bilinear`, `nearest`, `bicubic`,\n `area`, `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.\n follow_links: Whether to visits subdirectories pointed to by symlinks.\n Defaults to False.\n crop_to_aspect_ratio: If True, resize the images without aspect\n ratio distortion. When the original aspect ratio differs from the target\n aspect ratio, the output image will be cropped so as to return the largest\n possible window in the image (of size `image_size`) that matches\n the target aspect ratio. By default (`crop_to_aspect_ratio=False`),\n aspect ratio may not be preserved.\n **kwargs: Legacy keyword arguments.\n\n Returns:\n A `tf.data.Dataset` object.\n - If `label_mode` is None, it yields `float32` tensors of shape\n `(batch_size, image_size[0], image_size[1], num_channels)`,\n encoding images (see below for rules regarding `num_channels`).\n - Otherwise, it yields a tuple `(images, labels)`, where `images`\n has shape `(batch_size, image_size[0], image_size[1], num_channels)`,\n and `labels` follows the format described below.\n\n Rules regarding labels format:\n - if `label_mode` is `int`, the labels are an `int32` tensor of shape\n `(batch_size,)`.\n - if `label_mode` is `binary`, the labels are a `float32` tensor of\n 1s and 0s of shape `(batch_size, 1)`.\n - if `label_mode` is `categorical`, the labels are a `float32` tensor\n of shape `(batch_size, num_classes)`, representing a one-hot\n encoding of the class index.\n\n Rules regarding number of channels in the yielded images:\n - if `color_mode` is `grayscale`,\n there's 1 channel in the image tensors.\n - if `color_mode` is `rgb`,\n there are 3 channel in the image tensors.\n - if `color_mode` is `rgba`,\n there are 4 channel in the image tensors.\n ", "desc": "Generates a `tf.data.Dataset` from image files in a directory.", "type": "API"}, {"name": "tf.keras.preprocessing.sequence", "docs": "Utilities for preprocessing sequence data.\n\nDeprecated: `tf.keras.preprocessing.sequence` APIs are not recommended for new\ncode. Prefer `tf.keras.utils.timeseries_dataset_from_array` and\nthe `tf.data` APIs which provide a much more flexible mechanisms for dealing\nwith sequences. See the [tf.data guide](https://www.tensorflow.org/guide/data)\nfor more details.\n\n", "desc": "Utilities for preprocessing sequence data.", "type": "API"}, {"name": "tf.keras.preprocessing.sequence.make_sampling_table", "docs": "Generates a word rank-based probabilistic sampling table.\n\n Used for generating the `sampling_table` argument for `skipgrams`.\n `sampling_table[i]` is the probability of sampling\n the word i-th most common word in a dataset\n (more common words should be sampled less frequently, for balance).\n\n The sampling probabilities are generated according\n to the sampling distribution used in word2vec:\n\n ```\n p(word) = (min(1, sqrt(word_frequency / sampling_factor) /\n (word_frequency / sampling_factor)))\n ```\n\n We assume that the word frequencies follow Zipf's law (s=1) to derive\n a numerical approximation of frequency(rank):\n\n `frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`\n where `gamma` is the Euler-Mascheroni constant.\n\n Args:\n size: Int, number of possible words to sample.\n sampling_factor: The sampling factor in the word2vec formula.\n\n Returns:\n A 1D Numpy array of length `size` where the ith entry\n is the probability that a word of rank i should be sampled.\n ", "desc": "Generates a word rank-based probabilistic sampling table.", "type": "API"}, {"name": "tf.keras.preprocessing.sequence.pad_sequences", "docs": "Pads sequences to the same length.\n\n This function transforms a list (of length `num_samples`)\n of sequences (lists of integers)\n into a 2D Numpy array of shape `(num_samples, num_timesteps)`.\n `num_timesteps` is either the `maxlen` argument if provided,\n or the length of the longest sequence in the list.\n\n Sequences that are shorter than `num_timesteps`\n are padded with `value` until they are `num_timesteps` long.\n\n Sequences longer than `num_timesteps` are truncated\n so that they fit the desired length.\n\n The position where padding or truncation happens is determined by\n the arguments `padding` and `truncating`, respectively.\n Pre-padding or removing values from the beginning of the sequence is the\n default.\n\n >>> sequence = [[1], [2, 3], [4, 5, 6]]\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence)\n array([[0, 0, 1],\n [0, 2, 3],\n [4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1)\n array([[-1, -1, 1],\n [-1, 2, 3],\n [ 4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post')\n array([[1, 0, 0],\n [2, 3, 0],\n [4, 5, 6]], dtype=int32)\n\n >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2)\n array([[0, 1],\n [2, 3],\n [5, 6]], dtype=int32)\n\n Args:\n sequences: List of sequences (each sequence is a list of integers).\n maxlen: Optional Int, maximum length of all sequences. If not provided,\n sequences will be padded to the length of the longest individual\n sequence.\n dtype: (Optional, defaults to `\"int32\"`). Type of the output sequences.\n To pad sequences with variable length strings, you can use `object`.\n padding: String, \"pre\" or \"post\" (optional, defaults to `\"pre\"`):\n pad either before or after each sequence.\n truncating: String, \"pre\" or \"post\" (optional, defaults to `\"pre\"`):\n remove values from sequences larger than\n `maxlen`, either at the beginning or at the end of the sequences.\n value: Float or String, padding value. (Optional, defaults to 0.)\n\n Returns:\n Numpy array with shape `(len(sequences), maxlen)`\n\n Raises:\n ValueError: In case of invalid values for `truncating` or `padding`,\n or in case of invalid shape for a `sequences` entry.\n ", "desc": "Pads sequences to the same length.", "type": "API"}, {"name": "tf.keras.preprocessing.sequence.skipgrams", "docs": "Generates skipgram word pairs.\n\n This function transforms a sequence of word indexes (list of integers)\n into tuples of words of the form:\n\n - (word, word in the same window), with label 1 (positive samples).\n - (word, random word from the vocabulary), with label 0 (negative samples).\n\n Read more about Skipgram in this gnomic paper by Mikolov et al.:\n [Efficient Estimation of Word Representations in\n Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)\n\n Args:\n sequence: A word sequence (sentence), encoded as a list\n of word indices (integers). If using a `sampling_table`,\n word indices are expected to match the rank\n of the words in a reference dataset (e.g. 10 would encode\n the 10-th most frequently occurring token).\n Note that index 0 is expected to be a non-word and will be skipped.\n vocabulary_size: Int, maximum possible word index + 1\n window_size: Int, size of sampling windows (technically half-window).\n The window of a word `w_i` will be\n `[i - window_size, i + window_size+1]`.\n negative_samples: Float >= 0. 0 for no negative (i.e. random) samples.\n 1 for same number as positive samples.\n shuffle: Whether to shuffle the word couples before returning them.\n categorical: bool. if False, labels will be\n integers (eg. `[0, 1, 1 .. ]`),\n if `True`, labels will be categorical, e.g.\n `[[1,0],[0,1],[0,1] .. ]`.\n sampling_table: 1D array of size `vocabulary_size` where the entry i\n encodes the probability to sample a word of rank i.\n seed: Random seed.\n\n Returns:\n couples, labels: where `couples` are int pairs and\n `labels` are either 0 or 1.\n\n Note:\n By convention, index 0 in the vocabulary is\n a non-word and will be skipped.\n ", "desc": "Generates skipgram word pairs.", "type": "API"}, {"name": "tf.keras.preprocessing.sequence.TimeseriesGenerator", "docs": "Utility class for generating batches of temporal data.\n\n Deprecated: `tf.keras.preprocessing.sequence.TimeseriesGenerator` does not\n operate on tensors and is not recommended for new code. Prefer using a\n `tf.data.Dataset` which provides a more efficient and flexible mechanism for\n batching, shuffling, and windowing input. See the\n [tf.data guide](https://www.tensorflow.org/guide/data) for more details.\n\n This class takes in a sequence of data-points gathered at\n equal intervals, along with time series parameters such as\n stride, length of history, etc., to produce batches for\n training/validation.\n\n Arguments:\n data: Indexable generator (such as list or Numpy array)\n containing consecutive data points (timesteps).\n The data should be at 2D, and axis 0 is expected\n to be the time dimension.\n targets: Targets corresponding to timesteps in `data`.\n It should have same length as `data`.\n length: Length of the output sequences (in number of timesteps).\n sampling_rate: Period between successive individual timesteps\n within sequences. For rate `r`, timesteps\n `data[i]`, `data[i-r]`, ... `data[i - length]`\n are used for create a sample sequence.\n stride: Period between successive output sequences.\n For stride `s`, consecutive output samples would\n be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc.\n start_index: Data points earlier than `start_index` will not be used\n in the output sequences. This is useful to reserve part of the\n data for test or validation.\n end_index: Data points later than `end_index` will not be used\n in the output sequences. This is useful to reserve part of the\n data for test or validation.\n shuffle: Whether to shuffle output samples,\n or instead draw them in chronological order.\n reverse: Boolean: if `true`, timesteps in each output sample will be\n in reverse chronological order.\n batch_size: Number of timeseries samples in each batch\n (except maybe the last one).\n\n Returns:\n A [Sequence](\n https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence)\n instance.\n\n Examples:\n ```python\n from keras.preprocessing.sequence import TimeseriesGenerator\n import numpy as np\n data = np.array([[i] for i in range(50)])\n targets = np.array([[i] for i in range(50)])\n data_gen = TimeseriesGenerator(data, targets,\n length=10, sampling_rate=2,\n batch_size=2)\n assert len(data_gen) == 20\n batch_0 = data_gen[0]\n x, y = batch_0\n assert np.array_equal(x,\n np.array([[[0], [2], [4], [6], [8]],\n [[1], [3], [5], [7], [9]]]))\n assert np.array_equal(y,\n np.array([[10], [11]]))\n ```\n ", "desc": "Utility class for generating batches of temporal data.", "type": "API"}, {"name": "tf.keras.preprocessing.text", "docs": "Utilities for text input preprocessing.\n\nDeprecated: `tf.keras.preprocessing.text` APIs are not recommended for new code.\nPrefer `tf.keras.utils.text_dataset_from_directory` and\n`tf.keras.layers.TextVectorization` which provide a more efficient approach\nfor preprocessing text input. For an introduction to these APIs, see\nthe [text loading tutorial]\n(https://www.tensorflow.org/tutorials/load_data/text)\nand [preprocessing layer guide]\n(https://www.tensorflow.org/guide/keras/preprocessing_layers).\n\n", "desc": "Utilities for text input preprocessing.", "type": "API"}, {"name": "tf.keras.preprocessing.text.hashing_trick", "docs": "Converts a text to a sequence of indexes in a fixed-size hashing space.\n\n Deprecated: `tf.keras.text.preprocessing.hashing_trick` does not operate on\n tensors and is not recommended for new code. Prefer `tf.keras.layers.Hashing`\n which provides equivalent functionality through a layer which accepts\n `tf.Tensor` input. See the [preprocessing layer guide]\n (https://www.tensorflow.org/guide/keras/preprocessing_layers)\n for an overview of preprocessing layers.\n\n Args:\n text: Input text (string).\n n: Dimension of the hashing space.\n hash_function: defaults to python `hash` function, can be 'md5' or\n any function that takes in input a string and returns a int.\n Note that 'hash' is not a stable hashing function, so\n it is not consistent across different runs, while 'md5'\n is a stable hashing function.\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default: ``!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\\\t\\\\n``,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to set the text to lowercase.\n split: str. Separator for word splitting.\n analyzer: function. Custom analyzer to split the text\n\n Returns:\n A list of integer word indices (unicity non-guaranteed).\n `0` is a reserved index that won't be assigned to any word.\n Two or more words may be assigned to the same index, due to possible\n collisions by the hashing function.\n The [probability](\n https://en.wikipedia.org/wiki/Birthday_problem#Probability_table)\n of a collision is in relation to the dimension of the hashing space and\n the number of distinct objects.\n ", "desc": "Converts a text to a sequence of indexes in a fixed-size hashing space.", "type": "API"}, {"name": "tf.keras.preprocessing.text.one_hot", "docs": "One-hot encodes a text into a list of word indexes of size `n`.\n\n Deprecated: `tf.keras.text.preprocessing.one_hot` does not operate on tensors\n and is not recommended for new code. Prefer `tf.keras.layers.Hashing` with\n `output_mode='one_hot'` which provides equivalent functionality through a\n layer which accepts `tf.Tensor` input. See the [preprocessing layer guide]\n (https://www.tensorflow.org/guide/keras/preprocessing_layers)\n for an overview of preprocessing layers.\n\n This function receives as input a string of text and returns a\n list of encoded integers each corresponding to a word (or token)\n in the given input string.\n\n Args:\n input_text: Input text (string).\n n: int. Size of vocabulary.\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default:\n ```\n '!\"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\\t\\n\n ```,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to set the text to lowercase.\n split: str. Separator for word splitting.\n analyzer: function. Custom analyzer to split the text\n\n Returns:\n List of integers in `[1, n]`. Each integer encodes a word\n (unicity non-guaranteed).\n ", "desc": "One-hot encodes a text into a list of word indexes of size `n`.", "type": "API"}, {"name": "tf.keras.preprocessing.text.text_to_word_sequence", "docs": "Converts a text to a sequence of words (or tokens).\n\n Deprecated: `tf.keras.preprocessing.text.text_to_word_sequence` does not\n operate on tensors and is not recommended for new code. Prefer\n `tf.strings.regex_replace` and `tf.strings.split` which provide equivalent\n functionality and accept `tf.Tensor` input. For an overview of text handling\n in Tensorflow, see the [text loading tutorial]\n (https://www.tensorflow.org/tutorials/load_data/text).\n\n This function transforms a string of text into a list of words\n while ignoring `filters` which include punctuations by default.\n\n >>> sample_text = 'This is a sample sentence.'\n >>> tf.keras.preprocessing.text.text_to_word_sequence(sample_text)\n ['this', 'is', 'a', 'sample', 'sentence']\n\n Args:\n input_text: Input text (string).\n filters: list (or concatenation) of characters to filter out, such as\n punctuation. Default: ``'!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\\\t\\\\n'``,\n includes basic punctuation, tabs, and newlines.\n lower: boolean. Whether to convert the input to lowercase.\n split: str. Separator for word splitting.\n\n Returns:\n A list of words (or tokens).\n ", "desc": "Converts a text to a sequence of words (or tokens).", "type": "API"}, {"name": "tf.keras.preprocessing.text.Tokenizer", "docs": "Text tokenization utility class.\n\n Deprecated: `tf.keras.preprocessing.text.Tokenizer` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.TextVectorization` which provides equivalent functionality\n through a layer which accepts `tf.Tensor` input. See the\n [text loading tutorial](https://www.tensorflow.org/tutorials/load_data/text)\n for an overview of the layer and text handling in tensorflow.\n\n This class allows to vectorize a text corpus, by turning each\n text into either a sequence of integers (each integer being the index\n of a token in a dictionary) or into a vector where the coefficient\n for each token could be binary, based on word count, based on tf-idf...\n\n By default, all punctuation is removed, turning the texts into\n space-separated sequences of words\n (words maybe include the `'` character). These sequences are then\n split into lists of tokens. They will then be indexed or vectorized.\n\n `0` is a reserved index that won't be assigned to any word.\n\n Args:\n num_words: the maximum number of words to keep, based\n on word frequency. Only the most common `num_words-1` words will\n be kept.\n filters: a string where each element is a character that will be\n filtered from the texts. The default is all punctuation, plus\n tabs and line breaks, minus the `'` character.\n lower: boolean. Whether to convert the texts to lowercase.\n split: str. Separator for word splitting.\n char_level: if True, every character will be treated as a token.\n oov_token: if given, it will be added to word_index and used to\n replace out-of-vocabulary words during text_to_sequence calls\n analyzer: function. Custom analyzer to split the text.\n The default analyzer is text_to_word_sequence\n ", "desc": "Text tokenization utility class.", "type": "API"}, {"name": "tf.keras.preprocessing.text.tokenizer_from_json", "docs": "Parses a JSON tokenizer configuration and returns a tokenizer instance.\n\n Deprecated: `tf.keras.preprocessing.text.Tokenizer` does not operate on\n tensors and is not recommended for new code. Prefer\n `tf.keras.layers.TextVectorization` which provides equivalent functionality\n through a layer which accepts `tf.Tensor` input. See the\n [text loading tutorial](https://www.tensorflow.org/tutorials/load_data/text)\n for an overview of the layer and text handling in tensorflow.\n\n Args:\n json_string: JSON string encoding a tokenizer configuration.\n\n Returns:\n A Keras Tokenizer instance\n ", "desc": "Parses a JSON tokenizer configuration and returns a tokenizer instance.", "type": "API"}, {"name": "tf.keras.preprocessing.text_dataset_from_directory", "docs": "Generates a `tf.data.Dataset` from text files in a directory.\n\n If your directory structure is:\n\n ```\n main_directory/\n ...class_a/\n ......a_text_1.txt\n ......a_text_2.txt\n ...class_b/\n ......b_text_1.txt\n ......b_text_2.txt\n ```\n\n Then calling `text_dataset_from_directory(main_directory, labels='inferred')`\n will return a `tf.data.Dataset` that yields batches of texts from\n the subdirectories `class_a` and `class_b`, together with labels\n 0 and 1 (0 corresponding to `class_a` and 1 corresponding to `class_b`).\n\n Only `.txt` files are supported at this time.\n\n Args:\n directory: Directory where the data is located.\n If `labels` is \"inferred\", it should contain\n subdirectories, each containing text files for a class.\n Otherwise, the directory structure is ignored.\n labels: Either \"inferred\"\n (labels are generated from the directory structure),\n None (no labels),\n or a list/tuple of integer labels of the same size as the number of\n text files found in the directory. Labels should be sorted according\n to the alphanumeric order of the text file paths\n (obtained via `os.walk(directory)` in Python).\n label_mode: String describing the encoding of `labels`. Options are:\n - 'int': means that the labels are encoded as integers\n (e.g. for `sparse_categorical_crossentropy` loss).\n - 'categorical' means that the labels are\n encoded as a categorical vector\n (e.g. for `categorical_crossentropy` loss).\n - 'binary' means that the labels (there can be only 2)\n are encoded as `float32` scalars with values 0 or 1\n (e.g. for `binary_crossentropy`).\n - None (no labels).\n class_names: Only valid if \"labels\" is \"inferred\". This is the explicit\n list of class names (must match names of subdirectories). Used\n to control the order of the classes\n (otherwise alphanumerical order is used).\n batch_size: Size of the batches of data. Default: 32.\n If `None`, the data will not be batched\n (the dataset will yield individual samples).\n max_length: Maximum size of a text string. Texts longer than this will\n be truncated to `max_length`.\n shuffle: Whether to shuffle the data. Default: True.\n If set to False, sorts the data in alphanumeric order.\n seed: Optional random seed for shuffling and transformations.\n validation_split: Optional float between 0 and 1,\n fraction of data to reserve for validation.\n subset: Subset of the data to return.\n One of \"training\" or \"validation\".\n Only used if `validation_split` is set.\n follow_links: Whether to visits subdirectories pointed to by symlinks.\n Defaults to False.\n\n Returns:\n A `tf.data.Dataset` object.\n - If `label_mode` is None, it yields `string` tensors of shape\n `(batch_size,)`, containing the contents of a batch of text files.\n - Otherwise, it yields a tuple `(texts, labels)`, where `texts`\n has shape `(batch_size,)` and `labels` follows the format described\n below.\n\n Rules regarding labels format:\n - if `label_mode` is `int`, the labels are an `int32` tensor of shape\n `(batch_size,)`.\n - if `label_mode` is `binary`, the labels are a `float32` tensor of\n 1s and 0s of shape `(batch_size, 1)`.\n - if `label_mode` is `categorical`, the labels are a `float32` tensor\n of shape `(batch_size, num_classes)`, representing a one-hot\n encoding of the class index.\n ", "desc": "Generates a `tf.data.Dataset` from text files in a directory.", "type": "API"}, {"name": "tf.keras.preprocessing.timeseries_dataset_from_array", "docs": "Creates a dataset of sliding windows over a timeseries provided as array.\n\n This function takes in a sequence of data-points gathered at\n equal intervals, along with time series parameters such as\n length of the sequences/windows, spacing between two sequence/windows, etc.,\n to produce batches of timeseries inputs and targets.\n\n Args:\n data: Numpy array or eager tensor\n containing consecutive data points (timesteps).\n Axis 0 is expected to be the time dimension.\n targets: Targets corresponding to timesteps in `data`.\n `targets[i]` should be the target\n corresponding to the window that starts at index `i`\n (see example 2 below).\n Pass None if you don't have target data (in this case the dataset will\n only yield the input data).\n sequence_length: Length of the output sequences (in number of timesteps).\n sequence_stride: Period between successive output sequences.\n For stride `s`, output samples would\n start at index `data[i]`, `data[i + s]`, `data[i + 2 * s]`, etc.\n sampling_rate: Period between successive individual timesteps\n within sequences. For rate `r`, timesteps\n `data[i], data[i + r], ... data[i + sequence_length]`\n are used for creating a sample sequence.\n batch_size: Number of timeseries samples in each batch\n (except maybe the last one). If `None`, the data will not be batched\n (the dataset will yield individual samples).\n shuffle: Whether to shuffle output samples,\n or instead draw them in chronological order.\n seed: Optional int; random seed for shuffling.\n start_index: Optional int; data points earlier (exclusive)\n than `start_index` will not be used\n in the output sequences. This is useful to reserve part of the\n data for test or validation.\n end_index: Optional int; data points later (exclusive) than `end_index`\n will not be used in the output sequences.\n This is useful to reserve part of the data for test or validation.\n\n Returns:\n A tf.data.Dataset instance. If `targets` was passed, the dataset yields\n tuple `(batch_of_sequences, batch_of_targets)`. If not, the dataset yields\n only `batch_of_sequences`.\n\n Example 1:\n\n Consider indices `[0, 1, ... 99]`.\n With `sequence_length=10, sampling_rate=2, sequence_stride=3`,\n `shuffle=False`, the dataset will yield batches of sequences\n composed of the following indices:\n\n ```\n First sequence: [0 2 4 6 8 10 12 14 16 18]\n Second sequence: [3 5 7 9 11 13 15 17 19 21]\n Third sequence: [6 8 10 12 14 16 18 20 22 24]\n ...\n Last sequence: [78 80 82 84 86 88 90 92 94 96]\n ```\n\n In this case the last 3 data points are discarded since no full sequence\n can be generated to include them (the next sequence would have started\n at index 81, and thus its last step would have gone over 99).\n\n Example 2: Temporal regression.\n\n Consider an array `data` of scalar values, of shape `(steps,)`.\n To generate a dataset that uses the past 10\n timesteps to predict the next timestep, you would use:\n\n ```python\n input_data = data[:-10]\n targets = data[10:]\n dataset = tf.keras.preprocessing.timeseries_dataset_from_array(\n input_data, targets, sequence_length=10)\n for batch in dataset:\n inputs, targets = batch\n assert np.array_equal(inputs[0], data[:10]) # First sequence: steps [0-9]\n assert np.array_equal(targets[0], data[10]) # Corresponding target: step 10\n break\n ```\n\n Example 3: Temporal regression for many-to-many architectures.\n\n Consider two arrays of scalar values `X` and `Y`,\n both of shape `(100,)`. The resulting dataset should consist samples with\n 20 timestamps each. The samples should not overlap.\n To generate a dataset that uses the current timestamp\n to predict the corresponding target timestep, you would use:\n\n ```python\n X = np.arange(100)\n Y = X*2\n\n sample_length = 20\n input_dataset = tf.keras.preprocessing.timeseries_dataset_from_array(\n X, None, sequence_length=sample_length, sequence_stride=sample_length)\n target_dataset = tf.keras.preprocessing.timeseries_dataset_from_array(\n Y, None, sequence_length=sample_length, sequence_stride=sample_length)\n\n for batch in zip(input_dataset, target_dataset):\n inputs, targets = batch\n assert np.array_equal(inputs[0], X[:sample_length])\n\n # second sample equals output timestamps 20-40\n assert np.array_equal(targets[1], Y[sample_length:2*sample_length])\n break\n ```\n ", "desc": "Creates a dataset of sliding windows over a timeseries provided as array.", "type": "API"}, {"name": "tf.keras.regularizers", "docs": "Built-in regularizers.\n", "desc": "Built-in regularizers.", "type": "API"}, {"name": "tf.keras.regularizers.deserialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.keras.regularizers.get", "docs": "Retrieve a regularizer instance from a config or identifier.", "desc": "Retrieve a regularizer instance from a config or identifier.", "type": "API"}, {"name": "tf.keras.regularizers.L1", "docs": "A regularizer that applies a L1 regularization penalty.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n L1 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l1')\n\n In this case, the default value used is `l1=0.01`.\n\n Arguments:\n l1: Float; L1 regularization factor.\n ", "desc": "A regularizer that applies a L1 regularization penalty.", "type": "API"}, {"name": "tf.keras.regularizers.l1_l2", "docs": "Create a regularizer that applies both L1 and L2 penalties.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n The L2 regularization penalty is computed as:\n `loss = l2 * reduce_sum(square(x))`\n\n Args:\n l1: Float; L1 regularization factor.\n l2: Float; L2 regularization factor.\n\n Returns:\n An L1L2 Regularizer with the given regularization factors.\n ", "desc": "Create a regularizer that applies both L1 and L2 penalties.", "type": "API"}, {"name": "tf.keras.regularizers.L1L2", "docs": "A regularizer that applies both L1 and L2 regularization penalties.\n\n The L1 regularization penalty is computed as:\n `loss = l1 * reduce_sum(abs(x))`\n\n The L2 regularization penalty is computed as\n `loss = l2 * reduce_sum(square(x))`\n\n L1L2 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l1_l2')\n\n In this case, the default values used are `l1=0.01` and `l2=0.01`.\n\n Arguments:\n l1: Float; L1 regularization factor.\n l2: Float; L2 regularization factor.\n ", "desc": "A regularizer that applies both L1 and L2 regularization penalties.", "type": "API"}, {"name": "tf.keras.regularizers.L2", "docs": "A regularizer that applies a L2 regularization penalty.\n\n The L2 regularization penalty is computed as:\n `loss = l2 * reduce_sum(square(x))`\n\n L2 may be passed to a layer as a string identifier:\n\n >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l2')\n\n In this case, the default value used is `l2=0.01`.\n\n Arguments:\n l2: Float; L2 regularization factor.\n ", "desc": "A regularizer that applies a L2 regularization penalty.", "type": "API"}, {"name": "tf.keras.regularizers.Regularizer", "docs": "Regularizer base class.\n\n Regularizers allow you to apply penalties on layer parameters or layer\n activity during optimization. These penalties are summed into the loss\n function that the network optimizes.\n\n Regularization penalties are applied on a per-layer basis. The exact API will\n depend on the layer, but many layers (e.g. `Dense`, `Conv1D`, `Conv2D` and\n `Conv3D`) have a unified API.\n\n These layers expose 3 keyword arguments:\n\n - `kernel_regularizer`: Regularizer to apply a penalty on the layer's kernel\n - `bias_regularizer`: Regularizer to apply a penalty on the layer's bias\n - `activity_regularizer`: Regularizer to apply a penalty on the layer's output\n\n All layers (including custom layers) expose `activity_regularizer` as a\n settable property, whether or not it is in the constructor arguments.\n\n The value returned by the `activity_regularizer` is divided by the input\n batch size so that the relative weighting between the weight regularizers and\n the activity regularizers does not change with the batch size.\n\n You can access a layer's regularization penalties by calling `layer.losses`\n after calling the layer on inputs.\n\n ## Example\n\n >>> layer = tf.keras.layers.Dense(\n ... 5, input_dim=5,\n ... kernel_initializer='ones',\n ... kernel_regularizer=tf.keras.regularizers.L1(0.01),\n ... activity_regularizer=tf.keras.regularizers.L2(0.01))\n >>> tensor = tf.ones(shape=(5, 5)) * 2.0\n >>> out = layer(tensor)\n\n >>> # The kernel regularization term is 0.25\n >>> # The activity regularization term (after dividing by the batch size) is 5\n >>> tf.math.reduce_sum(layer.losses)\n \n\n ## Available penalties\n\n ```python\n tf.keras.regularizers.L1(0.3) # L1 Regularization Penalty\n tf.keras.regularizers.L2(0.1) # L2 Regularization Penalty\n tf.keras.regularizers.L1L2(l1=0.01, l2=0.01) # L1 + L2 penalties\n ```\n\n ## Directly calling a regularizer\n\n Compute a regularization loss on a tensor by directly calling a regularizer\n as if it is a one-argument function.\n\n E.g.\n >>> regularizer = tf.keras.regularizers.L2(2.)\n >>> tensor = tf.ones(shape=(5, 5))\n >>> regularizer(tensor)\n \n\n\n ## Developing new regularizers\n\n Any function that takes in a weight matrix and returns a scalar\n tensor can be used as a regularizer, e.g.:\n\n >>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l1')\n ... def l1_reg(weight_matrix):\n ... return 0.01 * tf.math.reduce_sum(tf.math.abs(weight_matrix))\n ...\n >>> layer = tf.keras.layers.Dense(5, input_dim=5,\n ... kernel_initializer='ones', kernel_regularizer=l1_reg)\n >>> tensor = tf.ones(shape=(5, 5))\n >>> out = layer(tensor)\n >>> layer.losses\n []\n\n Alternatively, you can write your custom regularizers in an\n object-oriented way by extending this regularizer base class, e.g.:\n\n >>> @tf.keras.utils.register_keras_serializable(package='Custom', name='l2')\n ... class L2Regularizer(tf.keras.regularizers.Regularizer):\n ... def __init__(self, l2=0.): # pylint: disable=redefined-outer-name\n ... self.l2 = l2\n ...\n ... def __call__(self, x):\n ... return self.l2 * tf.math.reduce_sum(tf.math.square(x))\n ...\n ... def get_config(self):\n ... return {'l2': float(self.l2)}\n ...\n >>> layer = tf.keras.layers.Dense(\n ... 5, input_dim=5, kernel_initializer='ones',\n ... kernel_regularizer=L2Regularizer(l2=0.5))\n\n >>> tensor = tf.ones(shape=(5, 5))\n >>> out = layer(tensor)\n >>> layer.losses\n []\n\n ### A note on serialization and deserialization:\n\n Registering the regularizers as serializable is optional if you are just\n training and executing models, exporting to and from SavedModels, or saving\n and loading weight checkpoints.\n\n Registration is required for saving and\n loading models to HDF5 format, Keras model cloning, some visualization\n utilities, and exporting models to and from JSON. If using this functionality,\n you must make sure any python process running your model has also defined\n and registered your custom regularizer.\n ", "desc": "Regularizer base class.", "type": "API"}, {"name": "tf.keras.regularizers.serialize", "docs": "", "desc": "", "type": "API"}, {"name": "tf.keras.Sequential", "docs": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.\n\n `Sequential` provides training and inference features on this model.\n\n Examples:\n\n ```python\n # Optionally, the first layer can receive an `input_shape` argument:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n # Afterwards, we do automatic shape inference:\n model.add(tf.keras.layers.Dense(4))\n\n # This is identical to the following:\n model = tf.keras.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(8))\n\n # Note that you can also omit the `input_shape` argument.\n # In that case the model doesn't have any weights until the first call\n # to a training/evaluation method (since it isn't yet built):\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n # model.weights not created yet\n\n # Whereas if you specify the input shape, the model gets built\n # continuously as you are adding layers:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8, input_shape=(16,)))\n model.add(tf.keras.layers.Dense(4))\n len(model.weights)\n # Returns \"4\"\n\n # When using the delayed-build pattern (no input shape specified), you can\n # choose to manually build your model by calling\n # `build(batch_input_shape)`:\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(4))\n model.build((None, 16))\n len(model.weights)\n # Returns \"4\"\n\n # Note that when using the delayed-build pattern (no input shape specified),\n # the model gets built the first time you call `fit`, `eval`, or `predict`,\n # or the first time you call the model on some input data.\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(8))\n model.add(tf.keras.layers.Dense(1))\n model.compile(optimizer='sgd', loss='mse')\n # This builds the model for the first time:\n model.fit(x, y, batch_size=32, epochs=10)\n ```\n ", "desc": "`Sequential` groups a linear stack of layers into a `tf.keras.Model`.", "type": "API"}, {"name": "tf.keras.utils", "docs": "Public Keras utilities.\n", "desc": "Public Keras utilities.", "type": "API"}, {"name": "tf.keras.utils.custom_object_scope", "docs": "Exposes custom classes/functions to Keras deserialization internals.\n\n Under a scope `with custom_object_scope(objects_dict)`, Keras methods such\n as `tf.keras.models.load_model` or `tf.keras.models.model_from_config`\n will be able to deserialize any custom object referenced by a\n saved config (e.g. a custom layer or metric).\n\n Example:\n\n Consider a custom regularizer `my_regularizer`:\n\n ```python\n layer = Dense(3, kernel_regularizer=my_regularizer)\n config = layer.get_config() # Config contains a reference to `my_regularizer`\n ...\n # Later:\n with custom_object_scope({'my_regularizer': my_regularizer}):\n layer = Dense.from_config(config)\n ```\n\n Args:\n *args: Dictionary or dictionaries of `{name: object}` pairs.\n ", "desc": "Exposes custom classes/functions to Keras deserialization internals.", "type": "API"}, {"name": "tf.keras.utils.CustomObjectScope", "docs": "Exposes custom classes/functions to Keras deserialization internals.\n\n Under a scope `with custom_object_scope(objects_dict)`, Keras methods such\n as `tf.keras.models.load_model` or `tf.keras.models.model_from_config`\n will be able to deserialize any custom object referenced by a\n saved config (e.g. a custom layer or metric).\n\n Example:\n\n Consider a custom regularizer `my_regularizer`:\n\n ```python\n layer = Dense(3, kernel_regularizer=my_regularizer)\n config = layer.get_config() # Config contains a reference to `my_regularizer`\n ...\n # Later:\n with custom_object_scope({'my_regularizer': my_regularizer}):\n layer = Dense.from_config(config)\n ```\n\n Args:\n *args: Dictionary or dictionaries of `{name: object}` pairs.\n ", "desc": "Exposes custom classes/functions to Keras deserialization internals.", "type": "API"}, {"name": "tf.keras.utils.deserialize_keras_object", "docs": "Turns the serialized form of a Keras object back into an actual object.\n\n This function is for mid-level library implementers rather than end users.\n\n Importantly, this utility requires you to provide the dict of `module_objects`\n to use for looking up the object config; this is not populated by default.\n If you need a deserialization utility that has preexisting knowledge of\n built-in Keras objects, use e.g. `keras.layers.deserialize(config)`,\n `keras.metrics.deserialize(config)`, etc.\n\n Calling `deserialize_keras_object` while underneath the\n `SharedObjectLoadingScope` context manager will cause any already-seen shared\n objects to be returned as-is rather than creating a new object.\n\n Args:\n identifier: the serialized form of the object.\n module_objects: A dictionary of built-in objects to look the name up in.\n Generally, `module_objects` is provided by midlevel library implementers.\n custom_objects: A dictionary of custom objects to look the name up in.\n Generally, `custom_objects` is provided by the end user.\n printable_module_name: A human-readable string representing the type of the\n object. Printed in case of exception.\n\n Returns:\n The deserialized object.\n\n Example:\n\n A mid-level library implementer might want to implement a utility for\n retrieving an object from its config, as such:\n\n ```python\n def deserialize(config, custom_objects=None):\n return deserialize_keras_object(\n identifier,\n module_objects=globals(),\n custom_objects=custom_objects,\n name=\"MyObjectType\",\n )\n ```\n\n This is how e.g. `keras.layers.deserialize()` is implemented.\n ", "desc": "Turns the serialized form of a Keras object back into an actual object.", "type": "API"}, {"name": "tf.keras.utils.experimental", "docs": "Public API for tf.keras.utils.experimental namespace.\n", "desc": "Public API for tf.keras.utils.experimental namespace.", "type": "API"}, {"name": "tf.keras.utils.experimental.DatasetCreator", "docs": "Object that returns a `tf.data.Dataset` upon invoking.\n\n `tf.keras.utils.experimental.DatasetCreator` is designated as a supported type\n for `x`, or the input, in `tf.keras.Model.fit`. Pass an instance of this class\n to `fit` when using a callable (with a `input_context` argument) that returns\n a `tf.data.Dataset`.\n\n ```python\n model = tf.keras.Sequential([tf.keras.layers.Dense(10)])\n model.compile(tf.keras.optimizers.SGD(), loss=\"mse\")\n\n def dataset_fn(input_context):\n global_batch_size = 64\n batch_size = input_context.get_per_replica_batch_size(global_batch_size)\n dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat()\n dataset = dataset.shard(\n input_context.num_input_pipelines, input_context.input_pipeline_id)\n dataset = dataset.batch(batch_size)\n dataset = dataset.prefetch(2)\n return dataset\n\n input_options = tf.distribute.InputOptions(\n experimental_fetch_to_device=True,\n experimental_per_replica_buffer_size=2)\n model.fit(tf.keras.utils.experimental.DatasetCreator(\n dataset_fn, input_options=input_options), epochs=10, steps_per_epoch=10)\n ```\n\n `Model.fit` usage with `DatasetCreator` is intended to work across all\n `tf.distribute.Strategy`s, as long as `Strategy.scope` is used at model\n creation:\n\n ```python\n strategy = tf.distribute.experimental.ParameterServerStrategy(\n cluster_resolver)\n with strategy.scope():\n model = tf.keras.Sequential([tf.keras.layers.Dense(10)])\n model.compile(tf.keras.optimizers.SGD(), loss=\"mse\")\n\n def dataset_fn(input_context):\n ...\n\n input_options = ...\n model.fit(tf.keras.utils.experimental.DatasetCreator(\n dataset_fn, input_options=input_options), epochs=10, steps_per_epoch=10)\n ```\n\n Note: When using `DatasetCreator`, `steps_per_epoch` argument in `Model.fit`\n must be provided as the cardinality of such input cannot be inferred.\n\n Args:\n dataset_fn: A callable that takes a single argument of type\n `tf.distribute.InputContext`, which is used for batch size calculation and\n cross-worker input pipeline sharding (if neither is needed, the\n `InputContext` parameter can be ignored in the `dataset_fn`), and returns\n a `tf.data.Dataset`.\n input_options: Optional `tf.distribute.InputOptions`, used for specific\n options when used with distribution, for example, whether to prefetch\n dataset elements to accelerator device memory or host device memory, and\n prefetch buffer size in the replica device memory. No effect if not used\n with distributed training. See `tf.distribute.InputOptions` for more\n information.\n ", "desc": "Object that returns a `tf.data.Dataset` upon invoking.", "type": "API"}, {"name": "tf.keras.utils.GeneratorEnqueuer", "docs": "Builds a queue out of a data generator.\n\n The provided generator can be finite in which case the class will throw\n a `StopIteration` exception.\n\n Args:\n generator: a generator function which yields data\n use_multiprocessing: use multiprocessing if True, otherwise threading\n random_seed: Initial seed for workers,\n will be incremented by one for each worker.\n ", "desc": "Builds a queue out of a data generator.", "type": "API"}, {"name": "tf.keras.utils.get_custom_objects", "docs": "Retrieves a live reference to the global dictionary of custom objects.\n\n Updating and clearing custom objects using `custom_object_scope`\n is preferred, but `get_custom_objects` can\n be used to directly access the current collection of custom objects.\n\n Example:\n\n ```python\n get_custom_objects().clear()\n get_custom_objects()['MyObject'] = MyObject\n ```\n\n Returns:\n Global dictionary of names to classes (`_GLOBAL_CUSTOM_OBJECTS`).\n ", "desc": "Retrieves a live reference to the global dictionary of custom objects.", "type": "API"}, {"name": "tf.keras.utils.get_file", "docs": "Downloads a file from a URL if it not already in the cache.\n\n By default the file at the url `origin` is downloaded to the\n cache_dir `~/.keras`, placed in the cache_subdir `datasets`,\n and given the filename `fname`. The final location of a file\n `example.txt` would therefore be `~/.keras/datasets/example.txt`.\n\n Files in tar, tar.gz, tar.bz, and zip formats can also be extracted.\n Passing a hash will verify the file after download. The command line\n programs `shasum` and `sha256sum` can compute the hash.\n\n Example:\n\n ```python\n path_to_downloaded_file = tf.keras.utils.get_file(\n \"flower_photos\",\n \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\",\n untar=True)\n ```\n\n Args:\n fname: Name of the file. If an absolute path `/path/to/file.txt` is\n specified the file will be saved at that location. If `None`, the\n name of the file at `origin` will be used.\n origin: Original URL of the file.\n untar: Deprecated in favor of `extract` argument.\n boolean, whether the file should be decompressed\n md5_hash: Deprecated in favor of `file_hash` argument.\n md5 hash of the file for verification\n file_hash: The expected hash string of the file after download.\n The sha256 and md5 hash algorithms are both supported.\n cache_subdir: Subdirectory under the Keras cache dir where the file is\n saved. If an absolute path `/path/to/folder` is\n specified the file will be saved at that location.\n hash_algorithm: Select the hash algorithm to verify the file.\n options are `'md5'`, `'sha256'`, and `'auto'`.\n The default 'auto' detects the hash algorithm in use.\n extract: True tries extracting the file as an Archive, like tar or zip.\n archive_format: Archive format to try for extracting the file.\n Options are `'auto'`, `'tar'`, `'zip'`, and `None`.\n `'tar'` includes tar, tar.gz, and tar.bz files.\n The default `'auto'` corresponds to `['tar', 'zip']`.\n None or an empty list will return no matches found.\n cache_dir: Location to store cached files, when None it\n defaults to the default directory `~/.keras/`.\n\n Returns:\n Path to the downloaded file\n ", "desc": "Downloads a file from a URL if it not already in the cache.", "type": "API"}, {"name": "tf.keras.utils.get_registered_name", "docs": "Returns the name registered to an object within the Keras framework.\n\n This function is part of the Keras serialization and deserialization\n framework. It maps objects to the string names associated with those objects\n for serialization/deserialization.\n\n Args:\n obj: The object to look up.\n\n Returns:\n The name associated with the object, or the default Python name if the\n object is not registered.\n ", "desc": "Returns the name registered to an object within the Keras framework.", "type": "API"}, {"name": "tf.keras.utils.get_registered_object", "docs": "Returns the class associated with `name` if it is registered with Keras.\n\n This function is part of the Keras serialization and deserialization\n framework. It maps strings to the objects associated with them for\n serialization/deserialization.\n\n Example:\n ```\n def from_config(cls, config, custom_objects=None):\n if 'my_custom_object_name' in config:\n config['hidden_cls'] = tf.keras.utils.get_registered_object(\n config['my_custom_object_name'], custom_objects=custom_objects)\n ```\n\n Args:\n name: The name to look up.\n custom_objects: A dictionary of custom objects to look the name up in.\n Generally, custom_objects is provided by the user.\n module_objects: A dictionary of custom objects to look the name up in.\n Generally, module_objects is provided by midlevel library implementers.\n\n Returns:\n An instantiable class associated with 'name', or None if no such class\n exists.\n ", "desc": "Returns the class associated with `name` if it is registered with Keras.", "type": "API"}, {"name": "tf.keras.utils.get_source_inputs", "docs": "Returns the list of input tensors necessary to compute `tensor`.\n\n Output will always be a list of tensors\n (potentially with 1 element).\n\n Args:\n tensor: The tensor to start from.\n layer: Origin layer of the tensor. Will be\n determined via tensor._keras_history if not provided.\n node_index: Origin node index of the tensor.\n\n Returns:\n List of input tensors.\n ", "desc": "Returns the list of input tensors necessary to compute `tensor`.", "type": "API"}, {"name": "tf.keras.utils.model_to_dot", "docs": "Convert a Keras model to dot format.\n\n Args:\n model: A Keras model instance.\n show_shapes: whether to display shape information.\n show_dtype: whether to display layer dtypes.\n show_layer_names: whether to display layer names.\n rankdir: `rankdir` argument passed to PyDot,\n a string specifying the format of the plot:\n 'TB' creates a vertical plot;\n 'LR' creates a horizontal plot.\n expand_nested: whether to expand nested models into clusters.\n dpi: Dots per inch.\n subgraph: whether to return a `pydot.Cluster` instance.\n layer_range: input of `list` containing two `str` items, which is the\n starting layer name and ending layer name (both inclusive) indicating\n the range of layers for which the `pydot.Dot` will be generated. It\n also accepts regex patterns instead of exact name. In such case, start\n predicate will be the first element it matches to `layer_range[0]`\n and the end predicate will be the last element it matches to\n `layer_range[1]`. By default `None` which considers all layers of\n model. Note that you must pass range such that the resultant subgraph\n must be complete.\n show_layer_activations: Display layer activations (only for layers that\n have an `activation` property).\n\n Returns:\n A `pydot.Dot` instance representing the Keras model or\n a `pydot.Cluster` instance representing nested model if\n `subgraph=True`.\n\n Raises:\n ValueError: if `model_to_dot` is called before the model is built.\n ImportError: if graphviz or pydot are not available.\n ", "desc": "Convert a Keras model to dot format.", "type": "API"}, {"name": "tf.keras.utils.normalize", "docs": "Normalizes a Numpy array.\n\n Args:\n x: Numpy array to normalize.\n axis: axis along which to normalize.\n order: Normalization order (e.g. `order=2` for L2 norm).\n\n Returns:\n A normalized copy of the array.\n ", "desc": "Normalizes a Numpy array.", "type": "API"}, {"name": "tf.keras.utils.OrderedEnqueuer", "docs": "Builds a Enqueuer from a Sequence.\n\n Args:\n sequence: A `tf.keras.utils.data_utils.Sequence` object.\n use_multiprocessing: use multiprocessing if True, otherwise threading\n shuffle: whether to shuffle the data at the beginning of each epoch\n ", "desc": "Builds a Enqueuer from a Sequence.", "type": "API"}, {"name": "tf.keras.utils.pack_x_y_sample_weight", "docs": "Packs user-provided data into a tuple.\n\n This is a convenience utility for packing data into the tuple formats\n that `Model.fit` uses.\n\n Standalone usage:\n\n >>> x = tf.ones((10, 1))\n >>> data = tf.keras.utils.pack_x_y_sample_weight(x)\n >>> isinstance(data, tf.Tensor)\n True\n >>> y = tf.ones((10, 1))\n >>> data = tf.keras.utils.pack_x_y_sample_weight(x, y)\n >>> isinstance(data, tuple)\n True\n >>> x, y = data\n\n Args:\n x: Features to pass to `Model`.\n y: Ground-truth targets to pass to `Model`.\n sample_weight: Sample weight for each element.\n\n Returns:\n Tuple in the format used in `Model.fit`.\n ", "desc": "Packs user-provided data into a tuple.", "type": "API"}, {"name": "tf.keras.utils.plot_model", "docs": "Converts a Keras model to dot format and save to a file.\n\n Example:\n\n ```python\n input = tf.keras.Input(shape=(100,), dtype='int32', name='input')\n x = tf.keras.layers.Embedding(\n output_dim=512, input_dim=10000, input_length=100)(input)\n x = tf.keras.layers.LSTM(32)(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n x = tf.keras.layers.Dense(64, activation='relu')(x)\n output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x)\n model = tf.keras.Model(inputs=[input], outputs=[output])\n dot_img_file = '/tmp/model_1.png'\n tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)\n ```\n\n Args:\n model: A Keras model instance\n to_file: File name of the plot image.\n show_shapes: whether to display shape information.\n show_dtype: whether to display layer dtypes.\n show_layer_names: whether to display layer names.\n rankdir: `rankdir` argument passed to PyDot,\n a string specifying the format of the plot: 'TB' creates a vertical\n plot; 'LR' creates a horizontal plot.\n expand_nested: Whether to expand nested models into clusters.\n dpi: Dots per inch.\n layer_range: input of `list` containing two `str` items, which is the\n starting layer name and ending layer name (both inclusive) indicating the\n range of layers for which the plot will be generated. It also accepts\n regex patterns instead of exact name. In such case, start predicate will\n be the first element it matches to `layer_range[0]` and the end predicate\n will be the last element it matches to `layer_range[1]`. By default `None`\n which considers all layers of model. Note that you must pass range such\n that the resultant subgraph must be complete.\n show_layer_activations: Display layer activations (only for layers that\n have an `activation` property).\n\n Raises:\n ValueError: if `plot_model` is called before the model is built.\n\n Returns:\n A Jupyter notebook Image object if Jupyter is installed.\n This enables in-line display of the model plots in notebooks.\n ", "desc": "Converts a Keras model to dot format and save to a file.", "type": "API"}, {"name": "tf.keras.utils.Progbar", "docs": "Displays a progress bar.\n\n Args:\n target: Total number of steps expected, None if unknown.\n width: Progress bar width on screen.\n verbose: Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose)\n stateful_metrics: Iterable of string names of metrics that should *not* be\n averaged over time. Metrics in this list will be displayed as-is. All\n others will be averaged by the progbar before display.\n interval: Minimum visual progress update interval (in seconds).\n unit_name: Display name for step counts (usually \"step\" or \"sample\").\n ", "desc": "Displays a progress bar.", "type": "API"}, {"name": "tf.keras.utils.register_keras_serializable", "docs": "Registers an object with the Keras serialization framework.\n\n This decorator injects the decorated class or function into the Keras custom\n object dictionary, so that it can be serialized and deserialized without\n needing an entry in the user-provided custom object dict. It also injects a\n function that Keras will call to get the object's serializable string key.\n\n Note that to be serialized and deserialized, classes must implement the\n `get_config()` method. Functions do not have this requirement.\n\n The object will be registered under the key 'package>name' where `name`,\n defaults to the object name if not passed.\n\n Args:\n package: The package that this class belongs to.\n name: The name to serialize this class under in this package. If None, the\n class' name will be used.\n\n Returns:\n A decorator that registers the decorated class with the passed names.\n ", "desc": "Registers an object with the Keras serialization framework.", "type": "API"}, {"name": "tf.keras.utils.Sequence", "docs": "Base object for fitting to a sequence of data, such as a dataset.\n\n Every `Sequence` must implement the `__getitem__` and the `__len__` methods.\n If you want to modify your dataset between epochs you may implement\n `on_epoch_end`.\n The method `__getitem__` should return a complete batch.\n\n Notes:\n\n `Sequence` are a safer way to do multiprocessing. This structure guarantees\n that the network will only train once\n on each sample per epoch which is not the case with generators.\n\n Examples:\n\n ```python\n from skimage.io import imread\n from skimage.transform import resize\n import numpy as np\n import math\n\n # Here, `x_set` is list of path to the images\n # and `y_set` are the associated classes.\n\n class CIFAR10Sequence(Sequence):\n\n def __init__(self, x_set, y_set, batch_size):\n self.x, self.y = x_set, y_set\n self.batch_size = batch_size\n\n def __len__(self):\n return math.ceil(len(self.x) / self.batch_size)\n\n def __getitem__(self, idx):\n batch_x = self.x[idx * self.batch_size:(idx + 1) *\n self.batch_size]\n batch_y = self.y[idx * self.batch_size:(idx + 1) *\n self.batch_size]\n\n return np.array([\n resize(imread(file_name), (200, 200))\n for file_name in batch_x]), np.array(batch_y)\n ```\n ", "desc": "Base object for fitting to a sequence of data, such as a dataset.", "type": "API"}, {"name": "tf.keras.utils.SequenceEnqueuer", "docs": "Base class to enqueue inputs.\n\n The task of an Enqueuer is to use parallelism to speed up preprocessing.\n This is done with processes or threads.\n\n Example:\n\n ```python\n enqueuer = SequenceEnqueuer(...)\n enqueuer.start()\n datas = enqueuer.get()\n for data in datas:\n # Use the inputs; training, evaluating, predicting.\n # ... stop sometime.\n enqueuer.stop()\n ```\n\n The `enqueuer.get()` should be an infinite stream of data.\n ", "desc": "Base class to enqueue inputs.", "type": "API"}, {"name": "tf.keras.utils.serialize_keras_object", "docs": "Serialize a Keras object into a JSON-compatible representation.\n\n Calls to `serialize_keras_object` while underneath the\n `SharedObjectSavingScope` context manager will cause any objects re-used\n across multiple layers to be saved with a special shared object ID. This\n allows the network to be re-created properly during deserialization.\n\n Args:\n instance: The object to serialize.\n\n Returns:\n A dict-like, JSON-compatible representation of the object's config.\n ", "desc": "Serialize a Keras object into a JSON-compatible representation.", "type": "API"}, {"name": "tf.keras.utils.to_categorical", "docs": "Converts a class vector (integers) to binary class matrix.\n\n E.g. for use with `categorical_crossentropy`.\n\n Args:\n y: Array-like with class values to be converted into a matrix\n (integers from 0 to `num_classes - 1`).\n num_classes: Total number of classes. If `None`, this would be inferred\n as `max(y) + 1`.\n dtype: The data type expected by the input. Default: `'float32'`.\n\n Returns:\n A binary matrix representation of the input. The class axis is placed\n last.\n\n Example:\n\n >>> a = tf.keras.utils.to_categorical([0, 1, 2, 3], num_classes=4)\n >>> a = tf.constant(a, shape=[4, 4])\n >>> print(a)\n tf.Tensor(\n [[1. 0. 0. 0.]\n [0. 1. 0. 0.]\n [0. 0. 1. 0.]\n [0. 0. 0. 1.]], shape=(4, 4), dtype=float32)\n\n >>> b = tf.constant([.9, .04, .03, .03,\n ... .3, .45, .15, .13,\n ... .04, .01, .94, .05,\n ... .12, .21, .5, .17],\n ... shape=[4, 4])\n >>> loss = tf.keras.backend.categorical_crossentropy(a, b)\n >>> print(np.around(loss, 5))\n [0.10536 0.82807 0.1011 1.77196]\n\n >>> loss = tf.keras.backend.categorical_crossentropy(a, a)\n >>> print(np.around(loss, 5))\n [0. 0. 0. 0.]\n ", "desc": "Converts a class vector (integers) to binary class matrix.", "type": "API"}, {"name": "tf.keras.utils.unpack_x_y_sample_weight", "docs": "Unpacks user-provided data tuple.\n\n This is a convenience utility to be used when overriding\n `Model.train_step`, `Model.test_step`, or `Model.predict_step`.\n This utility makes it easy to support data of the form `(x,)`,\n `(x, y)`, or `(x, y, sample_weight)`.\n\n Standalone usage:\n\n >>> features_batch = tf.ones((10, 5))\n >>> labels_batch = tf.zeros((10, 5))\n >>> data = (features_batch, labels_batch)\n >>> # `y` and `sample_weight` will default to `None` if not provided.\n >>> x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)\n >>> sample_weight is None\n True\n\n Example in overridden `Model.train_step`:\n\n ```python\n class MyModel(tf.keras.Model):\n\n def train_step(self, data):\n # If `sample_weight` is not provided, all samples will be weighted\n # equally.\n x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True)\n loss = self.compiled_loss(\n y, y_pred, sample_weight, regularization_losses=self.losses)\n trainable_variables = self.trainable_variables\n gradients = tape.gradient(loss, trainable_variables)\n self.optimizer.apply_gradients(zip(gradients, trainable_variables))\n\n self.compiled_metrics.update_state(y, y_pred, sample_weight)\n return {m.name: m.result() for m in self.metrics}\n ```\n\n Args:\n data: A tuple of the form `(x,)`, `(x, y)`, or `(x, y, sample_weight)`.\n\n Returns:\n The unpacked tuple, with `None`s for `y` and `sample_weight` if they are not\n provided.\n ", "desc": "Unpacks user-provided data tuple.", "type": "API"}, {"name": "tf.keras.wrappers", "docs": "Public API for tf.keras.wrappers namespace.\n", "desc": "Public API for tf.keras.wrappers namespace.", "type": "API"}, {"name": "tf.keras.wrappers.scikit_learn", "docs": "Wrapper for using the Scikit-Learn API with Keras models.\n", "desc": "Wrapper for using the Scikit-Learn API with Keras models.", "type": "API"}, {"name": "tf.keras.wrappers.scikit_learn.KerasClassifier", "docs": "Implementation of the scikit-learn classifier API for Keras.\n\n DEPRECATED. Use [Sci-Keras](https://github.com/adriangb/scikeras) instead.\n See https://www.adriangb.com/scikeras/stable/migration.html\n for help migrating.\n ", "desc": "Implementation of the scikit-learn classifier API for Keras.", "type": "API"}, {"name": "tf.keras.wrappers.scikit_learn.KerasRegressor", "docs": "Implementation of the scikit-learn regressor API for Keras.\n\n DEPRECATED. Use [Sci-Keras](https://github.com/adriangb/scikeras) instead.\n See https://www.adriangb.com/scikeras/stable/migration.html\n for help migrating.\n ", "desc": "Implementation of the scikit-learn regressor API for Keras.", "type": "API"}, {"name": "tf.less", "docs": "Returns the truth value of (x < y) element-wise.\n\n *NOTE*: `math.less` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less(x, y) ==> [False, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 7])\n tf.math.less(x, y) ==> [False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x < y) element-wise.", "type": "API"}, {"name": "tf.less_equal", "docs": "Returns the truth value of (x <= y) element-wise.\n\n *NOTE*: `math.less_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less_equal(x, y) ==> [True, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 6])\n tf.math.less_equal(x, y) ==> [True, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x <= y) element-wise.", "type": "API"}, {"name": "tf.linalg", "docs": "Operations for linear algebra.\n", "desc": "Operations for linear algebra.", "type": "API"}, {"name": "tf.linalg.adjoint", "docs": "Transposes the last two dimensions of and conjugates tensor `matrix`.\n\n For example:\n\n ```python\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.adjoint(x) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n ```\n\n Args:\n matrix: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`,\n or `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op` (optional).\n\n Returns:\n The adjoint (a.k.a. Hermitian transpose a.k.a. conjugate transpose) of\n matrix.\n ", "desc": "Transposes the last two dimensions of and conjugates tensor `matrix`.", "type": "API"}, {"name": "tf.linalg.band_part", "docs": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.\n\n The `band` part is computed as follows:\n Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a\n tensor with the same shape where\n\n `band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.\n\n The indicator function\n\n `in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) &&\n (num_upper < 0 || (n-m) <= num_upper)`.\n\n For example:\n\n ```\n # if 'input' is [[ 0, 1, 2, 3]\n # [-1, 0, 1, 2]\n # [-2, -1, 0, 1]\n # [-3, -2, -1, 0]],\n\n tf.linalg.band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]\n [-1, 0, 1, 2]\n [ 0, -1, 0, 1]\n [ 0, 0, -1, 0]],\n\n tf.linalg.band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]\n [-1, 0, 1, 0]\n [-2, -1, 0, 1]\n [ 0, -2, -1, 0]]\n ```\n\n Useful special cases:\n\n ```\n tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.\n tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.\n tf.linalg.band_part(input, 0, 0) ==> Diagonal.\n ```\n\n Args:\n input: A `Tensor`. Rank `k` tensor.\n num_lower: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D tensor. Number of subdiagonals to keep. If negative, keep entire\n lower triangle.\n num_upper: A `Tensor`. Must have the same type as `num_lower`.\n 0-D tensor. Number of superdiagonals to keep. If negative, keep\n entire upper triangle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.", "type": "API"}, {"name": "tf.linalg.banded_triangular_solve", "docs": "Solve triangular systems of equations with a banded solver.\n\n `bands` is a tensor of shape `[..., K, M]`, where `K` represents the number\n of bands stored. This corresponds to a batch of `M` by `M` matrices, whose\n `K` subdiagonals (when `lower` is `True`) are stored.\n\n This operator broadcasts the batch dimensions of `bands` and the batch\n dimensions of `rhs`.\n\n\n Examples:\n\n Storing 2 bands of a 3x3 matrix.\n Note that first element in the second row is ignored due to\n the 'LEFT_RIGHT' padding.\n\n >>> x = [[2., 3., 4.], [1., 2., 3.]]\n >>> x2 = [[2., 3., 4.], [10000., 2., 3.]]\n >>> y = tf.zeros([3, 3])\n >>> z = tf.linalg.set_diag(y, x, align='LEFT_RIGHT', k=(-1, 0))\n >>> z\n \n >>> soln = tf.linalg.banded_triangular_solve(x, tf.ones([3, 1]))\n >>> soln\n \n >>> are_equal = soln == tf.linalg.banded_triangular_solve(x2, tf.ones([3, 1]))\n >>> tf.reduce_all(are_equal).numpy()\n True\n >>> are_equal = soln == tf.linalg.triangular_solve(z, tf.ones([3, 1]))\n >>> tf.reduce_all(are_equal).numpy()\n True\n\n Storing 2 superdiagonals of a 4x4 matrix. Because of the 'LEFT_RIGHT' padding\n the last element of the first row is ignored.\n\n >>> x = [[2., 3., 4., 5.], [-1., -2., -3., -4.]]\n >>> y = tf.zeros([4, 4])\n >>> z = tf.linalg.set_diag(y, x, align='LEFT_RIGHT', k=(0, 1))\n >>> z\n \n >>> soln = tf.linalg.banded_triangular_solve(x, tf.ones([4, 1]), lower=False)\n >>> soln\n \n >>> are_equal = (soln == tf.linalg.triangular_solve(\n ... z, tf.ones([4, 1]), lower=False))\n >>> tf.reduce_all(are_equal).numpy()\n True\n\n\n Args:\n bands: A `Tensor` describing the bands of the left hand side, with shape\n `[..., K, M]`. The `K` rows correspond to the diagonal to the `K - 1`-th\n diagonal (the diagonal is the top row) when `lower` is `True` and\n otherwise the `K - 1`-th superdiagonal to the diagonal (the diagonal is\n the bottom row) when `lower` is `False`. The bands are stored with\n 'LEFT_RIGHT' alignment, where the superdiagonals are padded on the right\n and subdiagonals are padded on the left. This is the alignment cuSPARSE\n uses. See `tf.linalg.set_diag` for more details.\n rhs: A `Tensor` of shape [..., M] or [..., M, N] and with the same dtype as\n `diagonals`. Note that if the shape of `rhs` and/or `diags` isn't known\n statically, `rhs` will be treated as a matrix rather than a vector.\n lower: An optional `bool`. Defaults to `True`. Boolean indicating whether\n `bands` represents a lower or upper triangular matrix.\n adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether\n to solve with the matrix's block-wise adjoint.\n name: A name to give this `Op` (optional).\n\n Returns:\n A `Tensor` of shape [..., M] or [..., M, N] containing the solutions.\n ", "desc": "Solve triangular systems of equations with a banded solver.", "type": "API"}, {"name": "tf.linalg.cholesky", "docs": "Computes the Cholesky decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be symmetric and positive definite. Only the lower-triangular\n part of the input will be used for this operation. The upper-triangular part\n will not be read.\n\n The output is a tensor of the same shape as the input\n containing the Cholesky decompositions for all input submatrices `[..., :, :]`.\n\n **Note**: The gradient computation on GPU is faster for large matrices but\n not for large batch dimensions when the submatrices are small. In this\n case it might be faster to use the CPU.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the Cholesky decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.linalg.cholesky_solve", "docs": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.\n\n Specifically, returns `X` from `A X = RHS`, where `A = L L^T`, `L` is the\n `chol` arg and `RHS` is the `rhs` arg.\n\n ```python\n # Solve 10 separate 2x2 linear systems:\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 1\n chol = tf.linalg.cholesky(A) # shape 10 x 2 x 2\n X = tf.linalg.cholesky_solve(chol, RHS) # shape 10 x 2 x 1\n # tf.matmul(A, X) ~ RHS\n X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]\n\n # Solve five linear systems (K = 5) for every member of the length 10 batch.\n A = ... # shape 10 x 2 x 2\n RHS = ... # shape 10 x 2 x 5\n ...\n X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]\n ```\n\n Args:\n chol: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.\n Cholesky factorization of `A`, e.g. `chol = tf.linalg.cholesky(A)`.\n For that reason, only the lower triangular parts (including the diagonal)\n of the last two dimensions of `chol` are used. The strictly upper part is\n assumed to be zero and not accessed.\n rhs: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.\n name: A name to give this `Op`. Defaults to `cholesky_solve`.\n\n Returns:\n Solution to `A x = rhs`, shape `[..., M, K]`.\n ", "desc": "Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.", "type": "API"}, {"name": "tf.linalg.cross", "docs": "Compute the pairwise cross product.\n\n `a` and `b` must be the same shape; they can either be simple 3-element vectors,\n or any shape where the innermost dimension is 3. In the latter case, each pair\n of corresponding 3-element vectors is cross-multiplied independently.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n A tensor containing 3-element vectors.\n b: A `Tensor`. Must have the same type as `a`.\n Another tensor, of same type and shape as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the pairwise cross product.", "type": "API"}, {"name": "tf.linalg.det", "docs": "Computes the determinant of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor containing the determinants\n for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the determinant of one or more square matrices.", "type": "API"}, {"name": "tf.linalg.diag", "docs": "Returns a batched diagonal tensor with given batched diagonal values.\n\n Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th\n diagonals of a matrix, with everything else padded with `padding`. `num_rows`\n and `num_cols` specify the dimension of the innermost matrix of the output. If\n both are not specified, the op assumes the innermost matrix is square and\n infers its size from `k` and the innermost dimension of `diagonal`. If only\n one of them is specified, the op assumes the unspecified value is the smallest\n possible based on other criteria.\n\n Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor\n has rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only\n one diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has\n rank `r` with shape `[I, J, ..., L, num_rows, num_cols]`.\n\n The second innermost dimension of `diagonal` has double meaning. When `k` is\n scalar or `k[0] == k[1]`, `M` is part of the batch size [I, J, ..., M], and\n the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper\n padding_value ; otherwise\n ```\n\n Otherwise, `M` is treated as the number of diagonals for the matrix in the\n same batch (`M = k[1]-k[0]+1`), and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n padding_value ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4)\n [5, 6, 7, 8]])\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4)\n [0, 2, 0, 0],\n [0, 0, 3, 0],\n [0, 0, 0, 4]],\n [[5, 0, 0, 0],\n [0, 6, 0, 0],\n [0, 0, 7, 0],\n [0, 0, 0, 8]]]\n\n # A superdiagonal (per batch).\n diagonal = np.array([[1, 2, 3], # Input shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_diag(diagonal, k = 1)\n ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4)\n [0, 0, 2, 0],\n [0, 0, 0, 3],\n [0, 0, 0, 0]],\n [[0, 4, 0, 0],\n [0, 0, 5, 0],\n [0, 0, 0, 6],\n [0, 0, 0, 0]]]\n\n # A tridiagonal band (per batch).\n diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [0, 4, 5]],\n [[2, 3, 0],\n [6, 7, 9],\n [0, 9, 1]]])\n tf.matrix_diag(diagonals, k = (-1, 1))\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 2, 3],\n [6, 7, 9],\n [9, 1, 0]]])\n tf.matrix_diag(diagonals, k = (-1, 1), align=\"RIGHT_LEFT\")\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # Rectangular matrix.\n diagonal = np.array([1, 2]) # Input shape: (2)\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)\n ==> [[0, 0, 0, 0], # Output shape: (3, 4)\n [1, 0, 0, 0],\n [0, 2, 0, 0]]\n\n # Rectangular matrix with inferred num_cols and padding_value = 9.\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)\n ==> [[9, 9], # Output shape: (3, 2)\n [1, 9],\n [9, 2]]\n ```\n\n Args:\n diagonal: A `Tensor` with `rank k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n num_rows: The number of rows of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n num_cols: The number of columns of the output matrix. If it is not provided,\n the op assumes the output matrix is a square matrix and infers the matrix\n size from `d_lower`, `d_upper`, and the innermost dimension of `diagonal`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with given batched diagonal values.", "type": "API"}, {"name": "tf.linalg.diag_part", "docs": "Returns the batched diagonal part of a batched tensor.\n\n Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched\n `input`.\n\n Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`.\n Let `max_diag_len` be the maximum length among all diagonals to be extracted,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n Let `num_diags` be the number of diagonals to extract,\n `num_diags = k[1] - k[0] + 1`.\n\n If `num_diags == 1`, the output tensor is of rank `r - 1` with shape\n `[I, J, ..., L, max_diag_len]` and values:\n\n ```\n diagonal[i, j, ..., l, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.\n\n Otherwise, the output tensor has rank `r` with dimensions\n `[I, J, ..., L, num_diags, max_diag_len]` with values:\n\n ```\n diagonal[i, j, ..., l, m, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)\n [5, 6, 7, 8],\n [9, 8, 7, 6]],\n [[5, 4, 3, 2],\n [1, 2, 3, 4],\n [5, 6, 7, 8]]])\n\n # A main diagonal from each batch.\n tf.linalg.diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)\n [5, 2, 7]]\n\n # A superdiagonal from each batch.\n tf.linalg.diag_part(input, k = 1)\n ==> [[2, 7, 6], # Output shape: (2, 3)\n [4, 3, 8]]\n\n # A band from each batch.\n tf.linalg.diag_part(input, k = (-1, 2))\n ==> [[[3, 8, 0], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [0, 5, 8]],\n [[3, 4, 0],\n [4, 3, 8],\n [5, 2, 7],\n [0, 1, 6]]]\n\n # RIGHT_LEFT alignment.\n tf.linalg.diag_part(input, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[0, 3, 8], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [5, 8, 0]],\n [[0, 3, 4],\n [4, 3, 8],\n [5, 2, 7],\n [1, 6, 0]]]\n\n # max_diag_len can be shorter than the main diagonal.\n tf.linalg.diag_part(input, k = (-2, -1))\n ==> [[[5, 8],\n [0, 9]],\n [[1, 6],\n [0, 5]]]\n\n # padding_value = 9\n tf.linalg.diag_part(input, k = (1, 3), padding_value = 9)\n ==> [[[4, 9, 9], # Output shape: (2, 3, 3)\n [3, 8, 9],\n [2, 7, 6]],\n [[2, 9, 9],\n [3, 4, 9],\n [4, 3, 8]]]\n\n ```\n\n Args:\n input: A `Tensor` with `rank k >= 2`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n padding_value: The value to fill the area outside the specified diagonal\n band with. Default is 0.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`.\n\n Raises:\n InvalidArgumentError: When `k` is out of bound or when `k[0]>k[1:]`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.linalg.eig", "docs": "Computes the eigen decomposition of a batch of matrices.\n\n The eigenvalues\n and eigenvectors for a non-Hermitian matrix in general are complex. The\n eigenvectors are not guaranteed to be linearly independent.\n\n Computes the eigenvalues and right eigenvectors of the innermost\n N-by-N matrices in `tensor` such that\n `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of\n each inner inner matrix is referenced.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order.\n v: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most\n matrices contain eigenvectors of the corresponding matrices in `tensor`\n ", "desc": "Computes the eigen decomposition of a batch of matrices.", "type": "API"}, {"name": "tf.linalg.eigh", "docs": "Computes the eigen decomposition of a batch of self-adjoint matrices.\n\n Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices\n in `tensor` such that\n `tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of\n each inner inner matrix is referenced.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. Sorted in non-decreasing order.\n v: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most\n matrices contain eigenvectors of the corresponding matrices in `tensor`\n ", "desc": "Computes the eigen decomposition of a batch of self-adjoint matrices.", "type": "API"}, {"name": "tf.linalg.eigvals", "docs": "Computes the eigenvalues of one or more matrices.\n\n Note: If your program backpropagates through this function, you should replace\n it with a call to tf.linalg.eig (possibly ignoring the second output) to\n avoid computing the eigen decomposition twice. This is because the\n eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See\n _SelfAdjointEigV2Grad in linalg_grad.py.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`\n eigenvalues of `tensor[..., :, :]`.\n ", "desc": "Computes the eigenvalues of one or more matrices.", "type": "API"}, {"name": "tf.linalg.eigvalsh", "docs": "Computes the eigenvalues of one or more self-adjoint matrices.\n\n Note: If your program backpropagates through this function, you should replace\n it with a call to tf.linalg.eigh (possibly ignoring the second output) to\n avoid computing the eigen decomposition twice. This is because the\n eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See\n _SelfAdjointEigV2Grad in linalg_grad.py.\n\n Args:\n tensor: `Tensor` of shape `[..., N, N]`.\n name: string, optional name of the operation.\n\n Returns:\n e: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`\n eigenvalues of `tensor[..., :, :]`.\n ", "desc": "Computes the eigenvalues of one or more self-adjoint matrices.", "type": "API"}, {"name": "tf.linalg.einsum", "docs": "Tensor contraction over specified indices and outer product.\n\n Einsum allows defining Tensors by defining their element-wise computation.\n This computation is defined by `equation`, a shorthand form based on Einstein\n summation. As an example, consider multiplying two matrices A and B to form a\n matrix C. The elements of C are given by:\n\n $$ C_{i,k} = \\sum_j A_{i,j} B_{j,k} $$\n\n or\n\n ```\n C[i,k] = sum_j A[i,j] * B[j,k]\n ```\n\n The corresponding einsum `equation` is:\n\n ```\n ij,jk->ik\n ```\n\n In general, to convert the element-wise equation into the `equation` string,\n use the following procedure (intermediate strings for matrix multiplication\n example provided in parentheses):\n\n 1. remove variable names, brackets, and commas, (`ik = sum_j ij * jk`)\n 2. replace \"*\" with \",\", (`ik = sum_j ij , jk`)\n 3. drop summation signs, and (`ik = ij, jk`)\n 4. move the output to the right, while replacing \"=\" with \"->\". (`ij,jk->ik`)\n\n Note: If the output indices are not specified repeated indices are summed.\n So `ij,jk->ik` can be simplified to `ij,jk`.\n\n Many common operations can be expressed in this way. For example:\n\n **Matrix multiplication**\n\n >>> m0 = tf.random.normal(shape=[2, 3])\n >>> m1 = tf.random.normal(shape=[3, 5])\n >>> e = tf.einsum('ij,jk->ik', m0, m1)\n >>> # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n Repeated indices are summed if the output indices are not specified.\n\n >>> e = tf.einsum('ij,jk', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]\n >>> print(e.shape)\n (2, 5)\n\n\n **Dot product**\n\n >>> u = tf.random.normal(shape=[5])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,i->', u, v) # output = sum_i u[i]*v[i]\n >>> print(e.shape)\n ()\n\n **Outer product**\n\n >>> u = tf.random.normal(shape=[3])\n >>> v = tf.random.normal(shape=[5])\n >>> e = tf.einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]\n >>> print(e.shape)\n (3, 5)\n\n **Transpose**\n\n >>> m = tf.ones(2,3)\n >>> e = tf.einsum('ij->ji', m0) # output[j,i] = m0[i,j]\n >>> print(e.shape)\n (3, 2)\n\n **Diag**\n\n >>> m = tf.reshape(tf.range(9), [3,3])\n >>> diag = tf.einsum('ii->i', m)\n >>> print(diag.shape)\n (3,)\n\n **Trace**\n\n >>> # Repeated indices are summed.\n >>> trace = tf.einsum('ii', m) # output[j,i] = trace(m) = sum_i m[i, i]\n >>> assert trace == sum(diag)\n >>> print(trace.shape)\n ()\n\n **Batch matrix multiplication**\n\n >>> s = tf.random.normal(shape=[7,5,3])\n >>> t = tf.random.normal(shape=[7,3,2])\n >>> e = tf.einsum('bij,bjk->bik', s, t)\n >>> # output[a,i,k] = sum_j s[a,i,j] * t[a, j, k]\n >>> print(e.shape)\n (7, 5, 2)\n\n This method does not support broadcasting on named-axes. All axes with\n matching labels should have the same length. If you have length-1 axes,\n use `tf.squeeze` or `tf.reshape` to eliminate them.\n\n To write code that is agnostic to the number of indices in the input\n use an ellipsis. The ellipsis is a placeholder for \"whatever other indices\n fit here\".\n\n For example, to perform a NumPy-style broadcasting-batch-matrix multiplication\n where the matrix multiply acts on the last two axes of the input, use:\n\n >>> s = tf.random.normal(shape=[11, 7, 5, 3])\n >>> t = tf.random.normal(shape=[11, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Einsum **will** broadcast over axes covered by the ellipsis.\n\n >>> s = tf.random.normal(shape=[11, 1, 5, 3])\n >>> t = tf.random.normal(shape=[1, 7, 3, 2])\n >>> e = tf.einsum('...ij,...jk->...ik', s, t)\n >>> print(e.shape)\n (11, 7, 5, 2)\n\n Args:\n equation: a `str` describing the contraction, in the same format as\n `numpy.einsum`.\n *inputs: the inputs to contract (each one a `Tensor`), whose shapes should\n be consistent with `equation`.\n **kwargs:\n - optimize: Optimization strategy to use to find contraction path using\n opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or\n 'auto'. (optional, default: 'greedy').\n - name: A name for the operation (optional).\n\n Returns:\n The contracted `Tensor`, with shape determined by `equation`.\n\n Raises:\n ValueError: If\n - the format of `equation` is incorrect,\n - number of inputs or their shapes are inconsistent with `equation`.\n ", "desc": "Tensor contraction over specified indices and outer product.", "type": "API"}, {"name": "tf.linalg.experimental", "docs": "Public API for tf.linalg.experimental namespace.\n", "desc": "Public API for tf.linalg.experimental namespace.", "type": "API"}, {"name": "tf.linalg.experimental.conjugate_gradient", "docs": "Conjugate gradient solver.\n\n Solves a linear system of equations `A*x = rhs` for self-adjoint, positive\n definite matrix `A` and right-hand side vector `rhs`, using an iterative,\n matrix-free algorithm where the action of the matrix A is represented by\n `operator`. The iteration terminates when either the number of iterations\n exceeds `max_iter` or when the residual norm has been reduced to `tol`\n times its initial value, i.e. \\\\(||rhs - A x_k|| <= tol ||rhs||\\\\).\n\n Args:\n operator: A `LinearOperator` that is self-adjoint and positive definite.\n rhs: A possibly batched vector of shape `[..., N]` containing the right-hand\n size vector.\n preconditioner: A `LinearOperator` that approximates the inverse of `A`.\n An efficient preconditioner could dramatically improve the rate of\n convergence. If `preconditioner` represents matrix `M`(`M` approximates\n `A^{-1}`), the algorithm uses `preconditioner.apply(x)` to estimate\n `A^{-1}x`. For this to be useful, the cost of applying `M` should be\n much lower than computing `A^{-1}` directly.\n x: A possibly batched vector of shape `[..., N]` containing the initial\n guess for the solution.\n tol: A float scalar convergence tolerance.\n max_iter: An integer giving the maximum number of iterations.\n name: A name scope for the operation.\n\n Returns:\n output: A namedtuple representing the final state with fields:\n - i: A scalar `int32` `Tensor`. Number of iterations executed.\n - x: A rank-1 `Tensor` of shape `[..., N]` containing the computed\n solution.\n - r: A rank-1 `Tensor` of shape `[.., M]` containing the residual vector.\n - p: A rank-1 `Tensor` of shape `[..., N]`. `A`-conjugate basis vector.\n - gamma: \\\\(r \\dot M \\dot r\\\\), equivalent to \\\\(||r||_2^2\\\\) when\n `preconditioner=None`.\n ", "desc": "Conjugate gradient solver.", "type": "API"}, {"name": "tf.linalg.expm", "docs": "Computes the matrix exponential of one or more square matrices.\n\n $$exp(A) = \\sum_{n=0}^\\infty A^n/n!$$\n\n The exponential is computed using a combination of the scaling and squaring\n method and the Pade approximation. Details can be found in:\n Nicholas J. Higham, \"The scaling and squaring method for the matrix\n exponential revisited,\" SIAM J. Matrix Anal. Applic., 26:1179-1193, 2005.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the exponential for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`, or\n `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op` (optional).\n\n Returns:\n the matrix exponential of the input.\n\n Raises:\n ValueError: An unsupported type is provided as input.\n\n @compatibility(scipy)\n Equivalent to scipy.linalg.expm\n @end_compatibility\n ", "desc": "Computes the matrix exponential of one or more square matrices.", "type": "API"}, {"name": "tf.linalg.eye", "docs": "Construct an identity matrix, or a batch of matrices.\n\n See also `tf.ones`, `tf.zeros`, `tf.fill`, `tf.one_hot`.\n\n ```python\n # Construct one identity matrix.\n tf.eye(2)\n ==> [[1., 0.],\n [0., 1.]]\n\n # Construct a batch of 3 identity matrices, each 2 x 2.\n # batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.\n batch_identity = tf.eye(2, batch_shape=[3])\n\n # Construct one 2 x 3 \"identity\" matrix\n tf.eye(2, num_columns=3)\n ==> [[ 1., 0., 0.],\n [ 0., 1., 0.]]\n ```\n\n Args:\n num_rows: Non-negative `int32` scalar `Tensor` giving the number of rows\n in each batch matrix.\n num_columns: Optional non-negative `int32` scalar `Tensor` giving the number\n of columns in each batch matrix. Defaults to `num_rows`.\n batch_shape: A list or tuple of Python integers or a 1-D `int32` `Tensor`.\n If provided, the returned `Tensor` will have leading batch dimensions of\n this shape.\n dtype: The type of an element in the resulting `Tensor`\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `Tensor` of shape `batch_shape + [num_rows, num_columns]`\n ", "desc": "Construct an identity matrix, or a batch of matrices.", "type": "API"}, {"name": "tf.linalg.global_norm", "docs": "Computes the global norm of multiple tensors.\n\n Given a tuple or list of tensors `t_list`, this operation returns the\n global norm of the elements in all tensors in `t_list`. The global norm is\n computed as:\n\n `global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`\n\n Any entries in `t_list` that are of type None are ignored.\n\n Args:\n t_list: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.\n name: A name for the operation (optional).\n\n Returns:\n A 0-D (scalar) `Tensor` of type `float`.\n\n Raises:\n TypeError: If `t_list` is not a sequence.\n ", "desc": "Computes the global norm of multiple tensors.", "type": "API"}, {"name": "tf.linalg.inv", "docs": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).\n\n \n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the inverse for all input submatrices `[..., :, :]`.\n\n The op uses LU decomposition with partial pivoting to compute the inverses.\n\n If a matrix is not invertible there is no guarantee what the op does. It\n may detect the condition and raise an exception or it may simply return a\n garbage result.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).", "type": "API"}, {"name": "tf.linalg.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.linalg.LinearOperator", "docs": "Base class defining a [batch of] linear operator[s].\n\n Subclasses of `LinearOperator` provide access to common methods on a\n (batch) matrix, without the need to materialize the matrix. This allows:\n\n * Matrix free computations\n * Operators that take advantage of special structure, while providing a\n consistent API to users.\n\n #### Subclassing\n\n To enable a public method, subclasses should implement the leading-underscore\n version of the method. The argument signature should be identical except for\n the omission of `name=\"...\"`. For example, to enable\n `matmul(x, adjoint=False, name=\"matmul\")` a subclass should implement\n `_matmul(x, adjoint=False)`.\n\n #### Performance contract\n\n Subclasses should only implement the assert methods\n (e.g. `assert_non_singular`) if they can be done in less than `O(N^3)`\n time.\n\n Class docstrings should contain an explanation of computational complexity.\n Since this is a high-performance library, attention should be paid to detail,\n and explanations can include constants as well as Big-O notation.\n\n #### Shape compatibility\n\n `LinearOperator` subclasses should operate on a [batch] matrix with\n compatible shape. Class docstrings should define what is meant by compatible\n shape. Some subclasses may not support batching.\n\n Examples:\n\n `x` is a batch matrix with compatible shape for `matmul` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], b >= 0,\n x.shape = [B1,...,Bb] + [N, R]\n ```\n\n `rhs` is a batch matrix with compatible shape for `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], b >= 0,\n rhs.shape = [B1,...,Bb] + [M, R]\n ```\n\n #### Example docstring for subclasses.\n\n This operator acts like a (batch) matrix `A` with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `m x n` matrix. Again, this matrix `A` may not be materialized, but for\n purposes of identifying and working with compatible arguments the shape is\n relevant.\n\n Examples:\n\n ```python\n some_tensor = ... shape = ????\n operator = MyLinOp(some_tensor)\n\n operator.shape()\n ==> [2, 4, 4]\n\n operator.log_abs_determinant()\n ==> Shape [2] Tensor\n\n x = ... Shape [2, 4, 5] Tensor\n\n operator.matmul(x)\n ==> Shape [2, 4, 5] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on batch matrices with compatible shape.\n FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE\n\n #### Performance\n\n FILL THIS IN\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n\n #### Initialization parameters\n\n All subclasses of `LinearOperator` are expected to pass a `parameters`\n argument to `super().__init__()`. This should be a `dict` containing\n the unadulterated arguments passed to the subclass `__init__`. For example,\n `MyLinearOperator` with an initializer should look like:\n\n ```python\n def __init__(self, operator, is_square=False, name=None):\n parameters = dict(\n operator=operator,\n is_square=is_square,\n name=name\n )\n ...\n super().__init__(..., parameters=parameters)\n ```\n\n Users can then access `my_linear_operator.parameters` to see all arguments\n passed to its initializer.\n ", "desc": "Base class defining a [batch of] linear operator[s].", "type": "API"}, {"name": "tf.linalg.LinearOperatorAdjoint", "docs": "`LinearOperator` representing the adjoint of another operator.\n\n This operator represents the adjoint of another operator.\n\n ```python\n # Create a 2 x 2 linear operator.\n operator = LinearOperatorFullMatrix([[1 - i., 3.], [0., 1. + i]])\n operator_adjoint = LinearOperatorAdjoint(operator)\n\n operator_adjoint.to_dense()\n ==> [[1. + i, 0.]\n [3., 1 - i]]\n\n operator_adjoint.shape\n ==> [2, 2]\n\n operator_adjoint.log_abs_determinant()\n ==> - log(2)\n\n x = ... Shape [2, 4] Tensor\n operator_adjoint.matmul(x)\n ==> Shape [2, 4] Tensor, equal to operator.matmul(x, adjoint=True)\n ```\n\n #### Performance\n\n The performance of `LinearOperatorAdjoint` depends on the underlying\n operators performance.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` representing the adjoint of another operator.", "type": "API"}, {"name": "tf.linalg.LinearOperatorBlockDiag", "docs": "Combines one or more `LinearOperators` in to a Block Diagonal matrix.\n\n This operator combines one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator`, whose underlying matrix representation\n has each operator `opi` on the main diagonal, and zero's elsewhere.\n\n #### Shape compatibility\n\n If `opj` acts like a [batch] matrix `Aj`, then `op_combined` acts like\n the [batch] matrix formed by having each matrix `Aj` on the main\n diagonal.\n\n Each `opj` is required to represent a matrix, and hence will have\n shape `batch_shape_j + [M_j, N_j]`.\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the combined operator\n has shape `broadcast_batch_shape + [sum M_j, sum N_j]`, where\n `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`,\n `j = 1,...,J`, assuming the intermediate batch shapes broadcast.\n\n Arguments to `matmul`, `matvec`, `solve`, and `solvevec` may either be single\n `Tensor`s or lists of `Tensor`s that are interpreted as blocks. The `j`th\n element of a blockwise list of `Tensor`s must have dimensions that match\n `opj` for the given method. If a list of blocks is input, then a list of\n blocks is returned as well.\n\n When the `opj` are not guaranteed to be square, this operator's methods might\n fail due to the combined operator not being square and/or lack of efficient\n methods.\n\n ```python\n # Create a 4 x 4 linear operator combined of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n operator = LinearOperatorBlockDiag([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 2., 0., 0.],\n [3., 4., 0., 0.],\n [0., 0., 1., 0.],\n [0., 0., 0., 1.]]\n\n operator.shape\n ==> [4, 4]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x1 = ... # Shape [2, 2] Tensor\n x2 = ... # Shape [2, 2] Tensor\n x = tf.concat([x1, x2], 0) # Shape [2, 4] Tensor\n operator.matmul(x)\n ==> tf.concat([operator_1.matmul(x1), operator_2.matmul(x2)])\n\n # Create a 5 x 4 linear operator combining three blocks.\n operator_1 = LinearOperatorFullMatrix([[1.], [3.]])\n operator_2 = LinearOperatorFullMatrix([[1., 6.]])\n operator_3 = LinearOperatorFullMatrix([[2.], [7.]])\n operator = LinearOperatorBlockDiag([operator_1, operator_2, operator_3])\n\n operator.to_dense()\n ==> [[1., 0., 0., 0.],\n [3., 0., 0., 0.],\n [0., 1., 6., 0.],\n [0., 0., 0., 2.]]\n [0., 0., 0., 7.]]\n\n operator.shape\n ==> [5, 4]\n\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])\n operator_44 = LinearOperatorFullMatrix(matrix)\n\n # Create a [1, 3] batch of 5 x 5 linear operators.\n matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])\n operator_55 = LinearOperatorFullMatrix(matrix_55)\n\n # Combine to create a [2, 3] batch of 9 x 9 operators.\n operator_99 = LinearOperatorBlockDiag([operator_44, operator_55])\n\n # Create a shape [2, 3, 9] vector.\n x = tf.random.normal(shape=[2, 3, 9])\n operator_99.matmul(x)\n ==> Shape [2, 3, 9] Tensor\n\n # Create a blockwise list of vectors.\n x = [tf.random.normal(shape=[2, 3, 4]), tf.random.normal(shape=[2, 3, 5])]\n operator_99.matmul(x)\n ==> [Shape [2, 3, 4] Tensor, Shape [2, 3, 5] Tensor]\n ```\n\n #### Performance\n\n The performance of `LinearOperatorBlockDiag` on any operation is equal to\n the sum of the individual operators' operations.\n\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Combines one or more `LinearOperators` in to a Block Diagonal matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorBlockLowerTriangular", "docs": "Combines `LinearOperators` into a blockwise lower-triangular matrix.\n\n This operator is initialized with a nested list of linear operators, which\n are combined into a new `LinearOperator` whose underlying matrix\n representation is square and has each operator on or below the main diagonal,\n and zero's elsewhere. Each element of the outer list is a list of\n `LinearOperators` corresponding to a row-partition of the blockwise structure.\n The number of `LinearOperator`s in row-partion `i` must be equal to `i`.\n\n For example, a blockwise `3 x 3` `LinearOperatorBlockLowerTriangular` is\n initialized with the list `[[op_00], [op_10, op_11], [op_20, op_21, op_22]]`,\n where the `op_ij`, `i < 3, j <= i`, are `LinearOperator` instances. The\n `LinearOperatorBlockLowerTriangular` behaves as the following blockwise\n matrix, where `0` represents appropriately-sized [batch] matrices of zeros:\n\n ```none\n [[op_00, 0, 0],\n [op_10, op_11, 0],\n [op_20, op_21, op_22]]\n ```\n\n Each `op_jj` on the diagonal is required to represent a square matrix, and\n hence will have shape `batch_shape_j + [M_j, M_j]`. `LinearOperator`s in row\n `j` of the blockwise structure must have `range_dimension` equal to that of\n `op_jj`, and `LinearOperators` in column `j` must have `domain_dimension`\n equal to that of `op_jj`.\n\n If each `op_jj` on the diagonal has shape `batch_shape_j + [M_j, M_j]`, then\n the combined operator has shape `broadcast_batch_shape + [sum M_j, sum M_j]`,\n where `broadcast_batch_shape` is the mutual broadcast of `batch_shape_j`,\n `j = 0, 1, ..., J`, assuming the intermediate batch shapes broadcast.\n Even if the combined shape is well defined, the combined operator's\n methods may fail due to lack of broadcasting ability in the defining\n operators' methods.\n\n For example, to create a 4 x 4 linear operator combined of three 2 x 2\n operators:\n >>> operator_0 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n >>> operator_1 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n >>> operator_2 = tf.linalg.LinearOperatorLowerTriangular([[5., 6.], [7., 8]])\n >>> operator = LinearOperatorBlockLowerTriangular(\n ... [[operator_0], [operator_1, operator_2]])\n\n >>> operator.to_dense()\n \n\n >>> operator.shape\n TensorShape([4, 4])\n\n >>> operator.log_abs_determinant()\n \n\n >>> x0 = [[1., 6.], [-3., 4.]]\n >>> x1 = [[0., 2.], [4., 0.]]\n >>> x = tf.concat([x0, x1], 0) # Shape [2, 4] Tensor\n >>> operator.matmul(x)\n \n\n The above `matmul` is equivalent to:\n >>> tf.concat([operator_0.matmul(x0),\n ... operator_1.matmul(x0) + operator_2.matmul(x1)], axis=0)\n \n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n For example:\n\n Create a [2, 3] batch of 4 x 4 linear operators:\n >>> matrix_44 = tf.random.normal(shape=[2, 3, 4, 4])\n >>> operator_44 = tf.linalg.LinearOperatorFullMatrix(matrix_44)\n\n Create a [1, 3] batch of 5 x 4 linear operators:\n >>> matrix_54 = tf.random.normal(shape=[1, 3, 5, 4])\n >>> operator_54 = tf.linalg.LinearOperatorFullMatrix(matrix_54)\n\n Create a [1, 3] batch of 5 x 5 linear operators:\n >>> matrix_55 = tf.random.normal(shape=[1, 3, 5, 5])\n >>> operator_55 = tf.linalg.LinearOperatorFullMatrix(matrix_55)\n\n Combine to create a [2, 3] batch of 9 x 9 operators:\n >>> operator_99 = LinearOperatorBlockLowerTriangular(\n ... [[operator_44], [operator_54, operator_55]])\n >>> operator_99.shape\n TensorShape([2, 3, 9, 9])\n\n Create a shape [2, 1, 9] batch of vectors and apply the operator to it.\n >>> x = tf.random.normal(shape=[2, 1, 9])\n >>> y = operator_99.matvec(x)\n >>> y.shape\n TensorShape([2, 3, 9])\n\n Create a blockwise list of vectors and apply the operator to it. A blockwise\n list is returned.\n >>> x4 = tf.random.normal(shape=[2, 1, 4])\n >>> x5 = tf.random.normal(shape=[2, 3, 5])\n >>> y_blockwise = operator_99.matvec([x4, x5])\n >>> y_blockwise[0].shape\n TensorShape([2, 3, 4])\n >>> y_blockwise[1].shape\n TensorShape([2, 3, 5])\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorBlockLowerTriangular` consisting of `D`\n row-partitions and `D` column-partitions, such that the total number of\n operators is `N = D * (D + 1) // 2`.\n\n * `operator.matmul` has complexity equal to the sum of the `matmul`\n complexities of the individual operators.\n * `operator.solve` has complexity equal to the sum of the `solve` complexities\n of the operators on the diagonal and the `matmul` complexities of the\n operators off the diagonal.\n * `operator.determinant` has complexity equal to the sum of the `determinant`\n complexities of the operators on the diagonal.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Combines `LinearOperators` into a blockwise lower-triangular matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorCirculant", "docs": "`LinearOperator` acting like a circulant matrix.\n\n This operator acts like a circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of circulant matrices\n\n Circulant means the entries of `A` are generated by a single vector, the\n convolution kernel `h`: `A_{mn} := h_{m-n mod N}`. With `h = [w, x, y, z]`,\n\n ```\n A = |w z y x|\n |x w z y|\n |y x w z|\n |z y x w|\n ```\n\n This means that the result of matrix multiplication `v = Au` has `Lth` column\n given circular convolution between `h` with the `Lth` column of `u`.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions. Define the discrete Fourier transform (DFT) and its inverse by\n\n ```\n DFT[ h[n] ] = H[k] := sum_{n = 0}^{N - 1} h_n e^{-i 2pi k n / N}\n IDFT[ H[k] ] = h[n] = N^{-1} sum_{k = 0}^{N - 1} H_k e^{i 2pi k n / N}\n ```\n\n From these definitions, we see that\n\n ```\n H[0] = sum_{n = 0}^{N - 1} h_n\n H[1] = \"the first positive frequency\"\n H[N - 1] = \"the first negative frequency\"\n ```\n\n Loosely speaking, with `*` element-wise multiplication, matrix multiplication\n is equal to the action of a Fourier multiplier: `A u = IDFT[ H * DFT[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT[u]` be the `[N, R]`\n matrix with `rth` column equal to the DFT of the `rth` column of `u`.\n Define the `IDFT` similarly.\n Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT[ H * (DFT[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n Letting `U` be the `kth` Euclidean basis vector, and `U = IDFT[u]`.\n The above formulas show that`A U = H_k * U`. We conclude that the elements\n of `H` are the eigenvalues of this operator. Therefore\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N]`. We say that `H` is a Hermitian spectrum\n if, with `%` meaning modulus division,\n\n ```H[..., n % N] = ComplexConjugate[ H[..., (-n) % N] ]```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n #### Example of a self-adjoint positive definite operator\n\n ```python\n # spectrum is real ==> operator is self-adjoint\n # spectrum is positive ==> operator is positive definite\n spectrum = [6., 4, 2]\n\n operator = LinearOperatorCirculant(spectrum)\n\n # IFFT[spectrum]\n operator.convolution_kernel()\n ==> [4 + 0j, 1 + 0.58j, 1 - 0.58j]\n\n operator.to_dense()\n ==> [[4 + 0.0j, 1 - 0.6j, 1 + 0.6j],\n [1 + 0.6j, 4 + 0.0j, 1 - 0.6j],\n [1 - 0.6j, 1 + 0.6j, 4 + 0.0j]]\n ```\n\n #### Example of defining in terms of a real convolution kernel\n\n ```python\n # convolution_kernel is real ==> spectrum is Hermitian.\n convolution_kernel = [1., 2., 1.]]\n spectrum = tf.signal.fft(tf.cast(convolution_kernel, tf.complex64))\n\n # spectrum is Hermitian ==> operator is real.\n # spectrum is shape [3] ==> operator is shape [3, 3]\n # We force the input/output type to be real, which allows this to operate\n # like a real matrix.\n operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)\n\n operator.to_dense()\n ==> [[ 1, 1, 2],\n [ 2, 1, 1],\n [ 1, 2, 1]]\n ```\n\n #### Example of Hermitian spectrum\n\n ```python\n # spectrum is shape [3] ==> operator is shape [3, 3]\n # spectrum is Hermitian ==> operator is real.\n spectrum = [1, 1j, -1j]\n\n operator = LinearOperatorCirculant(spectrum)\n\n operator.to_dense()\n ==> [[ 0.33 + 0j, 0.91 + 0j, -0.24 + 0j],\n [-0.24 + 0j, 0.33 + 0j, 0.91 + 0j],\n [ 0.91 + 0j, -0.24 + 0j, 0.33 + 0j]\n ```\n\n #### Example of forcing real `dtype` when spectrum is Hermitian\n\n ```python\n # spectrum is shape [4] ==> operator is shape [4, 4]\n # spectrum is real ==> operator is self-adjoint\n # spectrum is Hermitian ==> operator is real\n # spectrum has positive real part ==> operator is positive-definite.\n spectrum = [6., 4, 2, 4]\n\n # Force the input dtype to be float32.\n # Cast the output to float32. This is fine because the operator will be\n # real due to Hermitian spectrum.\n operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)\n\n operator.shape\n ==> [4, 4]\n\n operator.to_dense()\n ==> [[4, 1, 0, 1],\n [1, 4, 1, 0],\n [0, 1, 4, 1],\n [1, 0, 1, 4]]\n\n # convolution_kernel = tf.signal.ifft(spectrum)\n operator.convolution_kernel()\n ==> [4, 1, 0, 1]\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n\n References:\n Toeplitz and Circulant Matrices - A Review:\n [Gray, 2006](https://www.nowpublishers.com/article/Details/CIT-006)\n ([pdf](https://ee.stanford.edu/~gray/toeplitz.pdf))\n ", "desc": "`LinearOperator` acting like a circulant matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorCirculant2D", "docs": "`LinearOperator` acting like a block circulant matrix.\n\n This operator acts like a block circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of block circulant matrices\n\n If `A` is block circulant, with block sizes `N0, N1` (`N0 * N1 = N`):\n `A` has a block circulant structure, composed of `N0 x N0` blocks, with each\n block an `N1 x N1` circulant matrix.\n\n For example, with `W`, `X`, `Y`, `Z` each circulant,\n\n ```\n A = |W Z Y X|\n |X W Z Y|\n |Y X W Z|\n |Z Y X W|\n ```\n\n Note that `A` itself will not in general be circulant.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions.\n\n If `H.shape = [N0, N1]`, (`N0 * N1 = N`):\n Loosely speaking, matrix multiplication is equal to the action of a\n Fourier multiplier: `A u = IDFT2[ H DFT2[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT2[u]` be the\n `[N0, N1, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, R]` and taking\n a two dimensional DFT across the first two dimensions. Let `IDFT2` be the\n inverse of `DFT2`. Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT2[ H * (DFT2[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N0, N1]`, we say that `H` is a Hermitian\n spectrum if, with `%` indicating modulus division,\n\n ```\n H[..., n0 % N0, n1 % N1] = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1 ].\n ```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n ### Example of a self-adjoint positive definite operator\n\n ```python\n # spectrum is real ==> operator is self-adjoint\n # spectrum is positive ==> operator is positive definite\n spectrum = [[1., 2., 3.],\n [4., 5., 6.],\n [7., 8., 9.]]\n\n operator = LinearOperatorCirculant2D(spectrum)\n\n # IFFT[spectrum]\n operator.convolution_kernel()\n ==> [[5.0+0.0j, -0.5-.3j, -0.5+.3j],\n [-1.5-.9j, 0, 0],\n [-1.5+.9j, 0, 0]]\n\n operator.to_dense()\n ==> Complex self adjoint 9 x 9 matrix.\n ```\n\n #### Example of defining in terms of a real convolution kernel,\n\n ```python\n # convolution_kernel is real ==> spectrum is Hermitian.\n convolution_kernel = [[1., 2., 1.], [5., -1., 1.]]\n spectrum = tf.signal.fft2d(tf.cast(convolution_kernel, tf.complex64))\n\n # spectrum is shape [2, 3] ==> operator is shape [6, 6]\n # spectrum is Hermitian ==> operator is real.\n operator = LinearOperatorCirculant2D(spectrum, input_output_dtype=tf.float32)\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a block circulant matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorCirculant3D", "docs": "`LinearOperator` acting like a nested block circulant matrix.\n\n This operator acts like a block circulant matrix `A` with\n shape `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of block circulant matrices\n\n If `A` is nested block circulant, with block sizes `N0, N1, N2`\n (`N0 * N1 * N2 = N`):\n `A` has a block structure, composed of `N0 x N0` blocks, with each\n block an `N1 x N1` block circulant matrix.\n\n For example, with `W`, `X`, `Y`, `Z` each block circulant,\n\n ```\n A = |W Z Y X|\n |X W Z Y|\n |Y X W Z|\n |Z Y X W|\n ```\n\n Note that `A` itself will not in general be circulant.\n\n #### Description in terms of the frequency spectrum\n\n There is an equivalent description in terms of the [batch] spectrum `H` and\n Fourier transforms. Here we consider `A.shape = [N, N]` and ignore batch\n dimensions.\n\n If `H.shape = [N0, N1, N2]`, (`N0 * N1 * N2 = N`):\n Loosely speaking, matrix multiplication is equal to the action of a\n Fourier multiplier: `A u = IDFT3[ H DFT3[u] ]`.\n Precisely speaking, given `[N, R]` matrix `u`, let `DFT3[u]` be the\n `[N0, N1, N2, R]` `Tensor` defined by re-shaping `u` to `[N0, N1, N2, R]` and\n taking a three dimensional DFT across the first three dimensions. Let `IDFT3`\n be the inverse of `DFT3`. Matrix multiplication may be expressed columnwise:\n\n ```(A u)_r = IDFT3[ H * (DFT3[u])_r ]```\n\n #### Operator properties deduced from the spectrum.\n\n * This operator is positive definite if and only if `Real{H} > 0`.\n\n A general property of Fourier transforms is the correspondence between\n Hermitian functions and real valued transforms.\n\n Suppose `H.shape = [B1,...,Bb, N0, N1, N2]`, we say that `H` is a Hermitian\n spectrum if, with `%` meaning modulus division,\n\n ```\n H[..., n0 % N0, n1 % N1, n2 % N2]\n = ComplexConjugate[ H[..., (-n0) % N0, (-n1) % N1, (-n2) % N2] ].\n ```\n\n * This operator corresponds to a real matrix if and only if `H` is Hermitian.\n * This operator is self-adjoint if and only if `H` is real.\n\n See e.g. \"Discrete-Time Signal Processing\", Oppenheim and Schafer.\n\n ### Examples\n\n See `LinearOperatorCirculant` and `LinearOperatorCirculant2D` for examples.\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorCirculant` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(R*N*Log[N])`\n * `operator.solve(x)` is `O(R*N*Log[N])`\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a nested block circulant matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorComposition", "docs": "Composes one or more `LinearOperators`.\n\n This operator composes one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator` with action defined by:\n\n ```\n op_composed(x) := op1(op2(...(opJ(x)...))\n ```\n\n If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the\n [batch] matrix formed with the multiplication `A1 A2...AJ`.\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have\n `N_j = M_{j+1}`, in which case the composed operator has shape equal to\n `broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the\n mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate\n batch shapes broadcast. Even if the composed shape is well defined, the\n composed operator's methods may fail due to lack of broadcasting ability in\n the defining operators' methods.\n\n ```python\n # Create a 2 x 2 linear operator composed of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [0., 1.]])\n operator = LinearOperatorComposition([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 2.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 5 linear operators.\n matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])\n operator_45 = LinearOperatorFullMatrix(matrix)\n\n # Create a [2, 3] batch of 5 x 6 linear operators.\n matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])\n operator_56 = LinearOperatorFullMatrix(matrix_56)\n\n # Compose to create a [2, 3] batch of 4 x 6 operators.\n operator_46 = LinearOperatorComposition([operator_45, operator_56])\n\n # Create a shape [2, 3, 6, 2] vector.\n x = tf.random.normal(shape=[2, 3, 6, 2])\n operator.matmul(x)\n ==> Shape [2, 3, 4, 2] Tensor\n ```\n\n #### Performance\n\n The performance of `LinearOperatorComposition` on any operation is equal to\n the sum of the individual operators' operations.\n\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Composes one or more `LinearOperators`.", "type": "API"}, {"name": "tf.linalg.LinearOperatorDiag", "docs": "`LinearOperator` acting like a [batch] square diagonal matrix.\n\n This operator acts like a [batch] diagonal matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorDiag` is initialized with a (batch) vector.\n\n ```python\n # Create a 2 x 2 diagonal linear operator.\n diag = [1., -1.]\n operator = LinearOperatorDiag(diag)\n\n operator.to_dense()\n ==> [[1., 0.]\n [0., -1.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n diag = tf.random.normal(shape=[2, 3, 4])\n operator = LinearOperatorDiag(diag)\n\n # Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible\n # since the batch dimensions, [2, 1], are broadcast to\n # operator.batch_shape = [2, 3].\n y = tf.random.normal(shape=[2, 1, 4, 2])\n x = operator.solve(y)\n ==> operator.matmul(x) = y\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` involves `N * R` multiplications.\n * `operator.solve(x)` involves `N` divisions and `N * R` multiplications.\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square diagonal matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorFullMatrix", "docs": "`LinearOperator` that wraps a [batch] matrix.\n\n This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `M x N` matrix.\n\n ```python\n # Create a 2 x 2 linear operator.\n matrix = [[1., 2.], [3., 4.]]\n operator = LinearOperatorFullMatrix(matrix)\n\n operator.to_dense()\n ==> [[1., 2.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n matrix = tf.random.normal(shape=[2, 3, 4, 4])\n operator = LinearOperatorFullMatrix(matrix)\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n #### Performance\n\n `LinearOperatorFullMatrix` has exactly the same performance as would be\n achieved by using standard `TensorFlow` matrix ops. Intelligent choices are\n made based on the following initialization hints.\n\n * If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a\n Cholesky factorization is used for the determinant and solve.\n\n In all cases, suppose `operator` is a `LinearOperatorFullMatrix` of shape\n `[M, N]`, and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` is `O(M * N * R)`.\n * If `M=N`, `operator.solve(x)` is `O(N^3 * R)`.\n * If `M=N`, `operator.determinant()` is `O(N^3)`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` that wraps a [batch] matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorHouseholder", "docs": "`LinearOperator` acting like a [batch] of Householder transformations.\n\n This operator acts like a [batch] of householder reflections with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorHouseholder` is initialized with a (batch) vector.\n\n A Householder reflection, defined via a vector `v`, which reflects points\n in `R^n` about the hyperplane orthogonal to `v` and through the origin.\n\n ```python\n # Create a 2 x 2 householder transform.\n vec = [1 / np.sqrt(2), 1. / np.sqrt(2)]\n operator = LinearOperatorHouseholder(vec)\n\n operator.to_dense()\n ==> [[0., -1.]\n [-1., -0.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of Householder transformations.", "type": "API"}, {"name": "tf.linalg.LinearOperatorIdentity", "docs": "`LinearOperator` acting like a [batch] square identity matrix.\n\n This operator acts like a [batch] identity matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorIdentity` is initialized with `num_rows`, and optionally\n `batch_shape`, and `dtype` arguments. If `batch_shape` is `None`, this\n operator efficiently passes through all arguments. If `batch_shape` is\n provided, broadcasting may occur, which will require making copies.\n\n ```python\n # Create a 2 x 2 identity matrix.\n operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32)\n\n operator.to_dense()\n ==> [[1., 0.]\n [0., 1.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> 0.\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor, same as x.\n\n y = tf.random.normal(shape=[3, 2, 4])\n # Note that y.shape is compatible with operator.shape because operator.shape\n # is broadcast to [3, 2, 2].\n # This broadcast does NOT require copying data, since we can infer that y\n # will be passed through without changing shape. We are always able to infer\n # this if the operator has no batch_shape.\n x = operator.solve(y)\n ==> Shape [3, 2, 4] Tensor, same as y.\n\n # Create a 2-batch of 2x2 identity matrices\n operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2])\n operator.to_dense()\n ==> [[[1., 0.]\n [0., 1.]],\n [[1., 0.]\n [0., 1.]]]\n\n # Here, even though the operator has a batch shape, the input is the same as\n # the output, so x can be passed through without a copy. The operator is able\n # to detect that no broadcast is necessary because both x and the operator\n # have statically defined shape.\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, same as x\n\n # Here the operator and x have different batch_shape, and are broadcast.\n # This requires a copy, since the output is different size than the input.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, equal to [x, x]\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n ### Performance\n\n If `batch_shape` initialization arg is `None`:\n\n * `operator.matmul(x)` is `O(1)`\n * `operator.solve(x)` is `O(1)`\n * `operator.determinant()` is `O(1)`\n\n If `batch_shape` initialization arg is provided, and static checks cannot\n rule out the need to broadcast:\n\n * `operator.matmul(x)` is `O(D1*...*Dd*N*R)`\n * `operator.solve(x)` is `O(D1*...*Dd*N*R)`\n * `operator.determinant()` is `O(B1*...*Bb)`\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square identity matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorInversion", "docs": "`LinearOperator` representing the inverse of another operator.\n\n This operator represents the inverse of another operator.\n\n ```python\n # Create a 2 x 2 linear operator.\n operator = LinearOperatorFullMatrix([[1., 0.], [0., 2.]])\n operator_inv = LinearOperatorInversion(operator)\n\n operator_inv.to_dense()\n ==> [[1., 0.]\n [0., 0.5]]\n\n operator_inv.shape\n ==> [2, 2]\n\n operator_inv.log_abs_determinant()\n ==> - log(2)\n\n x = ... Shape [2, 4] Tensor\n operator_inv.matmul(x)\n ==> Shape [2, 4] Tensor, equal to operator.solve(x)\n ```\n\n #### Performance\n\n The performance of `LinearOperatorInversion` depends on the underlying\n operators performance: `solve` and `matmul` are swapped, and determinant is\n inverted.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` representing the inverse of another operator.", "type": "API"}, {"name": "tf.linalg.LinearOperatorKronecker", "docs": "Kronecker product between two `LinearOperators`.\n\n This operator composes one or more linear operators `[op1,...,opJ]`,\n building a new `LinearOperator` representing the Kronecker product:\n `op1 x op2 x .. opJ` (we omit parentheses as the Kronecker product is\n associative).\n\n If `opj` has shape `batch_shape_j + [M_j, N_j]`, then the composed operator\n will have shape equal to `broadcast_batch_shape + [prod M_j, prod N_j]`,\n where the product is over all operators.\n\n ```python\n # Create a 4 x 4 linear operator composed of two 2 x 2 operators.\n operator_1 = LinearOperatorFullMatrix([[1., 2.], [3., 4.]])\n operator_2 = LinearOperatorFullMatrix([[1., 0.], [2., 1.]])\n operator = LinearOperatorKronecker([operator_1, operator_2])\n\n operator.to_dense()\n ==> [[1., 0., 2., 0.],\n [2., 1., 4., 2.],\n [3., 0., 4., 0.],\n [6., 3., 8., 4.]]\n\n operator.shape\n ==> [4, 4]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [4, 2] Tensor\n operator.matmul(x)\n ==> Shape [4, 2] Tensor\n\n # Create a [2, 3] batch of 4 x 5 linear operators.\n matrix_45 = tf.random.normal(shape=[2, 3, 4, 5])\n operator_45 = LinearOperatorFullMatrix(matrix)\n\n # Create a [2, 3] batch of 5 x 6 linear operators.\n matrix_56 = tf.random.normal(shape=[2, 3, 5, 6])\n operator_56 = LinearOperatorFullMatrix(matrix_56)\n\n # Compose to create a [2, 3] batch of 20 x 30 operators.\n operator_large = LinearOperatorKronecker([operator_45, operator_56])\n\n # Create a shape [2, 3, 20, 2] vector.\n x = tf.random.normal(shape=[2, 3, 6, 2])\n operator_large.matmul(x)\n ==> Shape [2, 3, 30, 2] Tensor\n ```\n\n #### Performance\n\n The performance of `LinearOperatorKronecker` on any operation is equal to\n the sum of the individual operators' operations.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Kronecker product between two `LinearOperators`.", "type": "API"}, {"name": "tf.linalg.LinearOperatorLowerTriangular", "docs": "`LinearOperator` acting like a [batch] square lower triangular matrix.\n\n This operator acts like a [batch] lower triangular matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix.\n\n `LinearOperatorLowerTriangular` is initialized with a `Tensor` having\n dimensions `[B1,...,Bb, N, N]`. The upper triangle of the last two\n dimensions is ignored.\n\n ```python\n # Create a 2 x 2 lower-triangular linear operator.\n tril = [[1., 2.], [3., 4.]]\n operator = LinearOperatorLowerTriangular(tril)\n\n # The upper triangle is ignored.\n operator.to_dense()\n ==> [[1., 0.]\n [3., 4.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor\n\n # Create a [2, 3] batch of 4 x 4 linear operators.\n tril = tf.random.normal(shape=[2, 3, 4, 4])\n operator = LinearOperatorLowerTriangular(tril)\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorLowerTriangular` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` involves `N^2 * R` multiplications.\n * `operator.solve(x)` involves `N * R` size `N` back-substitutions.\n * `operator.determinant()` involves a size `N` `reduce_prod`.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square lower triangular matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorLowRankUpdate", "docs": "Perturb a `LinearOperator` with a rank `K` update.\n\n This operator acts like a [batch] matrix `A` with shape\n `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `M x N` matrix.\n\n `LinearOperatorLowRankUpdate` represents `A = L + U D V^H`, where\n\n ```\n L, is a LinearOperator representing [batch] M x N matrices\n U, is a [batch] M x K matrix. Typically K << M.\n D, is a [batch] K x K matrix.\n V, is a [batch] N x K matrix. Typically K << N.\n V^H is the Hermitian transpose (adjoint) of V.\n ```\n\n If `M = N`, determinants and solves are done using the matrix determinant\n lemma and Woodbury identities, and thus require L and D to be non-singular.\n\n Solves and determinants will be attempted unless the \"is_non_singular\"\n property of L and D is False.\n\n In the event that L and D are positive-definite, and U = V, solves and\n determinants can be done using a Cholesky factorization.\n\n ```python\n # Create a 3 x 3 diagonal linear operator.\n diag_operator = LinearOperatorDiag(\n diag_update=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True,\n is_positive_definite=True)\n\n # Perturb with a rank 2 perturbation\n operator = LinearOperatorLowRankUpdate(\n operator=diag_operator,\n u=[[1., 2.], [-1., 3.], [0., 0.]],\n diag_update=[11., 12.],\n v=[[1., 2.], [-1., 3.], [10., 10.]])\n\n operator.shape\n ==> [3, 3]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [M, N], with b >= 0\n x.shape = [B1,...,Bb] + [N, R], with R >= 0.\n ```\n\n ### Performance\n\n Suppose `operator` is a `LinearOperatorLowRankUpdate` of shape `[M, N]`,\n made from a rank `K` update of `base_operator` which performs `.matmul(x)` on\n `x` having `x.shape = [N, R]` with `O(L_matmul*N*R)` complexity (and similarly\n for `solve`, `determinant`. Then, if `x.shape = [N, R]`,\n\n * `operator.matmul(x)` is `O(L_matmul*N*R + K*N*R)`\n\n and if `M = N`,\n\n * `operator.solve(x)` is `O(L_matmul*N*R + N*K*R + K^2*R + K^3)`\n * `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)`\n\n If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular`, `self_adjoint`, `positive_definite`,\n `diag_update_positive` and `square`. These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "Perturb a `LinearOperator` with a rank `K` update.", "type": "API"}, {"name": "tf.linalg.LinearOperatorPermutation", "docs": "`LinearOperator` acting like a [batch] of permutation matrices.\n\n This operator acts like a [batch] of permutations with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorPermutation` is initialized with a (batch) vector.\n\n A permutation, is defined by an integer vector `v` whose values are unique\n and are in the range `[0, ... n]`. Applying the permutation on an input\n matrix has the folllowing meaning: the value of `v` at index `i`\n says to move the `v[i]`-th row of the input matrix to the `i`-th row.\n Because all values are unique, this will result in a permutation of the\n rows the input matrix. Note, that the permutation vector `v` has the same\n semantics as `tf.transpose`.\n\n ```python\n # Create a 3 x 3 permutation matrix that swaps the last two columns.\n vec = [0, 2, 1]\n operator = LinearOperatorPermutation(vec)\n\n operator.to_dense()\n ==> [[1., 0., 0.]\n [0., 0., 1.]\n [0., 1., 0.]]\n\n operator.shape\n ==> [3, 3]\n\n # This will be zero.\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of permutation matrices.", "type": "API"}, {"name": "tf.linalg.LinearOperatorScaledIdentity", "docs": "`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.\n\n This operator acts like a scaled [batch] identity matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n a scaled version of the `N x N` identity matrix.\n\n `LinearOperatorIdentity` is initialized with `num_rows`, and a `multiplier`\n (a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the\n `multiplier` determines the scale for each batch member.\n\n ```python\n # Create a 2 x 2 scaled identity matrix.\n operator = LinearOperatorIdentity(num_rows=2, multiplier=3.)\n\n operator.to_dense()\n ==> [[3., 0.]\n [0., 3.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.log_abs_determinant()\n ==> 2 * Log[3]\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> 3 * x\n\n y = tf.random.normal(shape=[3, 2, 4])\n # Note that y.shape is compatible with operator.shape because operator.shape\n # is broadcast to [3, 2, 2].\n x = operator.solve(y)\n ==> 3 * x\n\n # Create a 2-batch of 2x2 identity matrices\n operator = LinearOperatorIdentity(num_rows=2, multiplier=5.)\n operator.to_dense()\n ==> [[[5., 0.]\n [0., 5.]],\n [[5., 0.]\n [0., 5.]]]\n\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> 5 * x\n\n # Here the operator and x have different batch_shape, and are broadcast.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> 5 * x\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n ### Performance\n\n * `operator.matmul(x)` is `O(D1*...*Dd*N*R)`\n * `operator.solve(x)` is `O(D1*...*Dd*N*R)`\n * `operator.determinant()` is `O(D1*...*Dd)`\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.", "type": "API"}, {"name": "tf.linalg.LinearOperatorToeplitz", "docs": "`LinearOperator` acting like a [batch] of toeplitz matrices.\n\n This operator acts like a [batch] Toeplitz matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x N` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n #### Description in terms of toeplitz matrices\n\n Toeplitz means that `A` has constant diagonals. Hence, `A` can be generated\n with two vectors. One represents the first column of the matrix, and the\n other represents the first row.\n\n Below is a 4 x 4 example:\n\n ```\n A = |a b c d|\n |e a b c|\n |f e a b|\n |g f e a|\n ```\n\n #### Example of a Toeplitz operator.\n\n ```python\n # Create a 3 x 3 Toeplitz operator.\n col = [1., 2., 3.]\n row = [1., 4., -9.]\n operator = LinearOperatorToeplitz(col, row)\n\n operator.to_dense()\n ==> [[1., 4., -9.],\n [2., 1., 4.],\n [3., 2., 1.]]\n\n operator.shape\n ==> [3, 3]\n\n operator.log_abs_determinant()\n ==> scalar Tensor\n\n x = ... Shape [3, 4] Tensor\n operator.matmul(x)\n ==> Shape [3, 4] Tensor\n ```\n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] of toeplitz matrices.", "type": "API"}, {"name": "tf.linalg.LinearOperatorTridiag", "docs": "`LinearOperator` acting like a [batch] square tridiagonal matrix.\n\n This operator acts like a [batch] square tridiagonal matrix `A` with shape\n `[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x M` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n Example usage:\n\n Create a 3 x 3 tridiagonal linear operator.\n\n >>> superdiag = [3., 4., 5.]\n >>> diag = [1., -1., 2.]\n >>> subdiag = [6., 7., 8]\n >>> operator = tf.linalg.LinearOperatorTridiag(\n ... [superdiag, diag, subdiag],\n ... diagonals_format='sequence')\n >>> operator.to_dense()\n \n >>> operator.shape\n TensorShape([3, 3])\n\n Scalar Tensor output.\n\n >>> operator.log_abs_determinant()\n \n\n Create a [2, 3] batch of 4 x 4 linear operators.\n\n >>> diagonals = tf.random.normal(shape=[2, 3, 3, 4])\n >>> operator = tf.linalg.LinearOperatorTridiag(\n ... diagonals,\n ... diagonals_format='compact')\n\n Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible\n since the batch dimensions, [2, 1], are broadcast to\n operator.batch_shape = [2, 3].\n\n >>> y = tf.random.normal(shape=[2, 1, 4, 2])\n >>> x = operator.solve(y)\n >>> x\n \n\n #### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, N], with b >= 0\n x.shape = [C1,...,Cc] + [N, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb].\n ```\n\n #### Performance\n\n Suppose `operator` is a `LinearOperatorTridiag` of shape `[N, N]`,\n and `x.shape = [N, R]`. Then\n\n * `operator.matmul(x)` will take O(N * R) time.\n * `operator.solve(x)` will take O(N * R) time.\n\n If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and\n `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] square tridiagonal matrix.", "type": "API"}, {"name": "tf.linalg.LinearOperatorZeros", "docs": "`LinearOperator` acting like a [batch] zero matrix.\n\n This operator acts like a [batch] zero matrix `A` with shape\n `[B1,...,Bb, N, M]` for some `b >= 0`. The first `b` indices index a\n batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is\n an `N x M` matrix. This matrix `A` is not materialized, but for\n purposes of broadcasting this shape will be relevant.\n\n `LinearOperatorZeros` is initialized with `num_rows`, and optionally\n `num_columns, `batch_shape`, and `dtype` arguments. If `num_columns` is\n `None`, then this operator will be initialized as a square matrix. If\n `batch_shape` is `None`, this operator efficiently passes through all\n arguments. If `batch_shape` is provided, broadcasting may occur, which will\n require making copies.\n\n ```python\n # Create a 2 x 2 zero matrix.\n operator = LinearOperatorZero(num_rows=2, dtype=tf.float32)\n\n operator.to_dense()\n ==> [[0., 0.]\n [0., 0.]]\n\n operator.shape\n ==> [2, 2]\n\n operator.determinant()\n ==> 0.\n\n x = ... Shape [2, 4] Tensor\n operator.matmul(x)\n ==> Shape [2, 4] Tensor, same as x.\n\n # Create a 2-batch of 2x2 zero matrices\n operator = LinearOperatorZeros(num_rows=2, batch_shape=[2])\n operator.to_dense()\n ==> [[[0., 0.]\n [0., 0.]],\n [[0., 0.]\n [0., 0.]]]\n\n # Here, even though the operator has a batch shape, the input is the same as\n # the output, so x can be passed through without a copy. The operator is able\n # to detect that no broadcast is necessary because both x and the operator\n # have statically defined shape.\n x = ... Shape [2, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, same as tf.zeros_like(x)\n\n # Here the operator and x have different batch_shape, and are broadcast.\n # This requires a copy, since the output is different size than the input.\n x = ... Shape [1, 2, 3]\n operator.matmul(x)\n ==> Shape [2, 2, 3] Tensor, equal to tf.zeros_like([x, x])\n ```\n\n ### Shape compatibility\n\n This operator acts on [batch] matrix with compatible shape.\n `x` is a batch matrix with compatible shape for `matmul` and `solve` if\n\n ```\n operator.shape = [B1,...,Bb] + [N, M], with b >= 0\n x.shape = [C1,...,Cc] + [M, R],\n and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]\n ```\n\n #### Matrix property hints\n\n This `LinearOperator` is initialized with boolean flags of the form `is_X`,\n for `X = non_singular, self_adjoint, positive_definite, square`.\n These have the following meaning:\n\n * If `is_X == True`, callers should expect the operator to have the\n property `X`. This is a promise that should be fulfilled, but is *not* a\n runtime assert. For example, finite floating point precision may result\n in these promises being violated.\n * If `is_X == False`, callers should expect the operator to not have `X`.\n * If `is_X == None` (the default), callers should have no expectation either\n way.\n ", "desc": "`LinearOperator` acting like a [batch] zero matrix.", "type": "API"}, {"name": "tf.linalg.logdet", "docs": "Computes log of the determinant of a hermitian positive definite matrix.\n\n ```python\n # Compute the determinant of a matrix while reducing the chance of over- or\n underflow:\n A = ... # shape 10 x 10\n det = tf.exp(tf.linalg.logdet(A)) # scalar\n ```\n\n Args:\n matrix: A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`,\n or `complex128` with shape `[..., M, M]`.\n name: A name to give this `Op`. Defaults to `logdet`.\n\n Returns:\n The natural log of the determinant of `matrix`.\n\n @compatibility(numpy)\n Equivalent to numpy.linalg.slogdet, although no sign is returned since only\n hermitian positive definite matrices are supported.\n @end_compatibility\n ", "desc": "Computes log of the determinant of a hermitian positive definite matrix.", "type": "API"}, {"name": "tf.linalg.logm", "docs": "Computes the matrix logarithm of one or more square matrices:\n\n \n \\\\(log(exp(A)) = A\\\\)\n\n This op is only defined for complex matrices. If A is positive-definite and\n real, then casting to a complex matrix, taking the logarithm and casting back\n to a real matrix will give the correct result.\n\n This function computes the matrix logarithm using the Schur-Parlett algorithm.\n Details of the algorithm can be found in Section 11.6.2 of:\n Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008.\n ISBN 978-0-898716-46-7.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the exponential for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix logarithm of one or more square matrices:", "type": "API"}, {"name": "tf.linalg.lstsq", "docs": "Solves one or more linear least-squares problems.\n\n `matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose\n inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a\n `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `M`-by-`K`\n matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares\n sense.\n\n Below we will use the following notation for each pair of matrix and\n right-hand sides in the batch:\n\n `matrix`=\\\\(A \\in \\Re^{m \\times n}\\\\),\n `rhs`=\\\\(B \\in \\Re^{m \\times k}\\\\),\n `output`=\\\\(X \\in \\Re^{n \\times k}\\\\),\n `l2_regularizer`=\\\\(\\lambda\\\\).\n\n If `fast` is `True`, then the solution is computed by solving the normal\n equations using Cholesky decomposition. Specifically, if \\\\(m \\ge n\\\\) then\n \\\\(X = (A^T A + \\lambda I)^{-1} A^T B\\\\), which solves the least-squares\n problem \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||A Z - B||_F^2 +\n \\lambda ||Z||_F^2\\\\). If \\\\(m \\lt n\\\\) then `output` is computed as\n \\\\(X = A^T (A A^T + \\lambda I)^{-1} B\\\\), which (for \\\\(\\lambda = 0\\\\)) is\n the minimum-norm solution to the under-determined linear system, i.e.\n \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k}} ||Z||_F^2 \\\\), subject to\n \\\\(A Z = B\\\\). Notice that the fast path is only numerically stable when\n \\\\(A\\\\) is numerically full rank and has a condition number\n \\\\(\\mathrm{cond}(A) \\lt \\frac{1}{\\sqrt{\\epsilon_{mach}}}\\\\) or\\\\(\\lambda\\\\)\n is sufficiently large.\n\n If `fast` is `False` an algorithm based on the numerically robust complete\n orthogonal decomposition is used. This computes the minimum-norm\n least-squares solution, even when \\\\(A\\\\) is rank deficient. This path is\n typically 6-7 times slower than the fast path. If `fast` is `False` then\n `l2_regularizer` is ignored.\n\n Args:\n matrix: `Tensor` of shape `[..., M, N]`.\n rhs: `Tensor` of shape `[..., M, K]`.\n l2_regularizer: 0-D `double` `Tensor`. Ignored if `fast=False`.\n fast: bool. Defaults to `True`.\n name: string, optional name of the operation.\n\n Returns:\n output: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form\n `M`-by-`K` matrices that solve the equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least\n squares sense.\n\n Raises:\n NotImplementedError: linalg.lstsq is currently disabled for complex128\n and l2_regularizer != 0 due to poor accuracy.\n ", "desc": "Solves one or more linear least-squares problems.", "type": "API"}, {"name": "tf.linalg.lu", "docs": "Computes the LU decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be invertible.\n\n The output consists of two tensors LU and P containing the LU decomposition\n of all input submatrices `[..., :, :]`. LU encodes the lower triangular and\n upper triangular factors.\n\n For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of\n shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower\n triangular part of LU. U is a upper triangular matrix of shape `[M, M]` whose\n entries correspond to the upper triangular part, including the diagonal, of LU.\n\n P represents a permutation matrix encoded as a list of indices each between `0`\n and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to\n P, then the L, U and P satisfies P_mat * input = L * U.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of\n size `[M, M]`.\n output_idx_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (lu, p).\n\n lu: A `Tensor`. Has the same type as `input`.\n p: A `Tensor` of type `output_idx_type`.\n ", "desc": "Computes the LU decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.linalg.lu_matrix_inverse", "docs": "Computes the inverse given the LU decomposition(s) of one or more matrices.\n\n This op is conceptually identical to,\n\n ```python\n inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))\n tf.assert_near(tf.matrix_inverse(X), inv_X)\n # ==> True\n ```\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_matrix_inverse').\n\n Returns:\n inv_x: The matrix_inv, i.e.,\n `tf.matrix_inverse(tf.linalg.lu_reconstruct(lu, perm))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n inv_x = tf.linalg.lu_matrix_inverse(*tf.linalg.lu(x))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```\n\n ", "desc": "Computes the inverse given the LU decomposition(s) of one or more matrices.", "type": "API"}, {"name": "tf.linalg.lu_reconstruct", "docs": "The reconstruct one or more matrices from their LU decomposition(s).\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_reconstruct').\n\n Returns:\n x: The original input to `tf.linalg.lu`, i.e., `x` as in,\n `lu_reconstruct(*tf.linalg.lu(x))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n x_reconstructed = tf.linalg.lu_reconstruct(*tf.linalg.lu(x))\n tf.assert_near(x, x_reconstructed)\n # ==> True\n ```\n\n ", "desc": "The reconstruct one or more matrices from their LU decomposition(s).", "type": "API"}, {"name": "tf.linalg.lu_solve", "docs": "Solves systems of linear eqns `A X = RHS`, given LU factorizations.\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if `matmul(P,\n matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linag.lu`, i.e., if `matmul(P, matmul(L, U)) =\n X` then `perm = argmax(P)`.\n rhs: Matrix-shaped float `Tensor` representing targets for which to solve;\n `A X = RHS`. To handle vector cases, use: `lu_solve(..., rhs[...,\n tf.newaxis])[..., 0]`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., 'lu_solve').\n\n Returns:\n x: The `X` in `A @ X = RHS`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[1., 2],\n [3, 4]],\n [[7, 8],\n [3, 4]]]\n inv_x = tf.linalg.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```\n\n ", "desc": "Solves systems of linear eqns `A X = RHS`, given LU factorizations.", "type": "API"}, {"name": "tf.linalg.matmul", "docs": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n where the inner 2 dimensions specify valid matrix multiplication dimensions,\n and any further outer dimensions specify matching batch size.\n\n Both matrices must be of the same type. The supported types are:\n `bfloat16`, `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, `complex128`.\n\n Either matrix can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the matrices contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices (rank-2 tensors) with\n datatypes `bfloat16` or `float32`.\n\n A simple 2-D tensor matrix multiplication:\n\n >>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n >>> a # 2-D tensor\n \n >>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])\n >>> b # 2-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n A batch matrix multiplication with batch shape [2]:\n\n >>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> a # 3-D tensor\n \n >>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])\n >>> b # 3-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n Since python >= 3.5 the @ operator is supported\n (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow,\n it simply calls the `tf.matmul()` function, so the following lines are\n equivalent:\n\n >>> d = a @ b @ [[10], [11]]\n >>> d = tf.matmul(tf.matmul(a, b), [[10], [11]])\n\n Args:\n a: `tf.Tensor` of type `float16`, `float32`, `float64`, `int32`,\n `complex64`, `complex128` and rank > 1.\n b: `tf.Tensor` with same type and rank as `a`.\n transpose_a: If `True`, `a` is transposed before multiplication.\n transpose_b: If `True`, `b` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n adjoint_b: If `True`, `b` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n output_type: The output datatype if needed. Defaults to None in which case\n the output_type is the same as input type. Currently only works when input\n tensors are type (u)int8 and output_type can be int32.\n name: Name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same type as `a` and `b` where each inner-most matrix\n is the product of the corresponding matrices in `a` and `b`, e.g. if all\n transpose or adjoint attributes are `False`:\n\n `output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`,\n for all indices `i`, `j`.\n\n Note: This is matrix product, not element-wise product.\n\n\n Raises:\n ValueError: If `transpose_a` and `adjoint_a`, or `transpose_b` and\n `adjoint_b` are both set to `True`.\n TypeError: If output_type is specified but the types of `a`, `b` and\n `output_type` is not (u)int8, (u)int8 and int32.\n ", "desc": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.linalg.matrix_rank", "docs": "Compute the matrix rank of one or more matrices.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n tol: Threshold below which the singular value is counted as 'zero'.\n Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`).\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: 'matrix_rank'.\n\n Returns:\n matrix_rank: (Batch of) `int32` scalars representing the number of non-zero\n singular values.\n ", "desc": "Compute the matrix rank of one or more matrices.", "type": "API"}, {"name": "tf.linalg.matrix_transpose", "docs": "Transposes last two dimensions of tensor `a`.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [4, 5, 6]])\n tf.linalg.matrix_transpose(x) # [[1, 4],\n # [2, 5],\n # [3, 6]]\n\n x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n [4 + 4j, 5 + 5j, 6 + 6j]])\n tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j],\n # [2 - 2j, 5 - 5j],\n # [3 - 3j, 6 - 6j]]\n\n # Matrix with two batch dimensions.\n # x.shape is [1, 2, 3, 4]\n # tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3]\n ```\n\n Note that `tf.matmul` provides kwargs allowing for transpose of arguments.\n This is done with minimal cost, and is preferable to using this function. E.g.\n\n ```python\n # Good! Transpose is taken at minimal additional cost.\n tf.matmul(matrix, b, transpose_b=True)\n\n # Inefficient!\n tf.matmul(matrix, tf.linalg.matrix_transpose(b))\n ```\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, `linalg.matrix_transpose` returns a new\n tensor with the items permuted.\n @end_compatibility\n\n Args:\n a: A `Tensor` with `rank >= 2`.\n name: A name for the operation (optional).\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.linalg.matrix_transpose(input)).\n\n Returns:\n A transposed batch matrix `Tensor`.\n\n Raises:\n ValueError: If `a` is determined statically to have `rank < 2`.\n ", "desc": "Transposes last two dimensions of tensor `a`.", "type": "API"}, {"name": "tf.linalg.matvec", "docs": "Multiplies matrix `a` by vector `b`, producing `a` * `b`.\n\n The matrix `a` must, following any transpositions, be a tensor of rank >= 2,\n with `shape(a)[-1] == shape(b)[-1]`, and `shape(a)[:-2]` able to broadcast\n with `shape(b)[:-1]`.\n\n Both `a` and `b` must be of the same type. The supported types are:\n `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.\n\n Matrix `a` can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the inputs contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices/vectors (rank-2/1\n tensors) with datatypes `bfloat16` or `float32`.\n\n For example:\n\n ```python\n # 2-D tensor `a`\n # [[1, 2, 3],\n # [4, 5, 6]]\n a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n\n # 1-D tensor `b`\n # [7, 9, 11]\n b = tf.constant([7, 9, 11], shape=[3])\n\n # `a` * `b`\n # [ 58, 64]\n c = tf.linalg.matvec(a, b)\n\n\n # 3-D tensor `a`\n # [[[ 1, 2, 3],\n # [ 4, 5, 6]],\n # [[ 7, 8, 9],\n # [10, 11, 12]]]\n a = tf.constant(np.arange(1, 13, dtype=np.int32),\n shape=[2, 2, 3])\n\n # 2-D tensor `b`\n # [[13, 14, 15],\n # [16, 17, 18]]\n b = tf.constant(np.arange(13, 19, dtype=np.int32),\n shape=[2, 3])\n\n # `a` * `b`\n # [[ 86, 212],\n # [410, 563]]\n c = tf.linalg.matvec(a, b)\n ```\n\n Args:\n a: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`,\n `complex128` and rank > 1.\n b: `Tensor` with same type as `a` and compatible dimensions.\n transpose_a: If `True`, `a` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix.\n name: Name for the operation (optional).\n\n Returns:\n A `Tensor` of the same type as `a` and `b` where each inner-most vector is\n the product of the corresponding matrices in `a` and vectors in `b`, e.g. if\n all transpose or adjoint attributes are `False`:\n\n `output`[..., i] = sum_k (`a`[..., i, k] * `b`[..., k]), for all indices i.\n\n Note: This is matrix-vector product, not element-wise product.\n\n\n Raises:\n ValueError: If transpose_a and adjoint_a are both set to True.\n ", "desc": "Multiplies matrix `a` by vector `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.linalg.norm", "docs": "Computes the norm of vectors, matrices, and tensors.\n\n This function can compute several different vector norms (the 1-norm, the\n Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\n matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\n Args:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are `'fro'`, `'euclidean'`,\n `1`, `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply:\n a) The Frobenius norm `'fro'` is not defined for vectors,\n b) If axis is a 2-tuple (matrix norm), only `'euclidean'`, '`fro'`, `1`,\n `2`, `np.inf` are supported.\n See the description of `axis` on how to compute norms for a batch of\n vectors or matrices stored in a tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`.\n If `axis` is a Python integer, the input is considered a batch of vectors,\n and `axis` determines the axis in `tensor` over which to compute vector\n norms.\n If `axis` is a 2-tuple of Python integers it is considered a batch of\n matrices and `axis` determines the axes in `tensor` over which to compute\n a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n keepdims: If True, the axis indicated in `axis` are kept with size 1.\n Otherwise, the dimensions in `axis` are removed from the output shape.\n name: The name of the op.\n\n Returns:\n output: A `Tensor` of the same type as tensor, containing the vector or\n matrix norms. If `keepdims` is True then the rank of output is equal to\n the rank of `tensor`. Otherwise, if `axis` is none the output is a scalar,\n if `axis` is an integer, the rank of `output` is one less than the rank\n of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less\n than the rank of `tensor`.\n\n Raises:\n ValueError: If `ord` or `axis` is invalid.\n\n @compatibility(numpy)\n Mostly equivalent to numpy.linalg.norm.\n Not supported: ord <= 0, 2-norm for matrices, nuclear norm.\n Other differences:\n a) If axis is `None`, treats the flattened `tensor` as a vector\n regardless of rank.\n b) Explicitly supports 'euclidean' norm as the default, including for\n higher order tensors.\n @end_compatibility\n ", "desc": "Computes the norm of vectors, matrices, and tensors.", "type": "API"}, {"name": "tf.linalg.normalize", "docs": "Normalizes `tensor` along dimension `axis` using specified norm.\n\n This uses `tf.linalg.norm` to compute the norm along `axis`.\n\n This function can compute several different vector norms (the 1-norm, the\n Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\n matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\n Args:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are `'fro'`, `'euclidean'`, `1`,\n `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply: a) The Frobenius norm `'fro'` is not defined for\n vectors, b) If axis is a 2-tuple (matrix norm), only `'euclidean'`,\n '`fro'`, `1`, `2`, `np.inf` are supported. See the description of `axis`\n on how to compute norms for a batch of vectors or matrices stored in a\n tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`. If `axis` is a Python integer, the\n input is considered a batch of vectors, and `axis` determines the axis in\n `tensor` over which to compute vector norms. If `axis` is a 2-tuple of\n Python integers it is considered a batch of matrices and `axis` determines\n the axes in `tensor` over which to compute a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n name: The name of the op.\n\n Returns:\n normalized: A normalized `Tensor` with the same shape as `tensor`.\n norm: The computed norms with the same shape and dtype `tensor` but the\n final axis is 1 instead. Same as running\n `tf.cast(tf.linalg.norm(tensor, ord, axis keepdims=True), tensor.dtype)`.\n\n Raises:\n ValueError: If `ord` or `axis` is invalid.\n ", "desc": "Normalizes `tensor` along dimension `axis` using specified norm.", "type": "API"}, {"name": "tf.linalg.pinv", "docs": "Compute the Moore-Penrose pseudo-inverse of one or more matrices.\n\n Calculate the [generalized inverse of a matrix](\n https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its\n singular-value decomposition (SVD) and including all large singular values.\n\n The pseudo-inverse of a matrix `A`, is defined as: 'the matrix that 'solves'\n [the least-squares problem] `A @ x = b`,' i.e., if `x_hat` is a solution, then\n `A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if\n `U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then\n `A_pinv = V @ inv(Sigma) U^T`. [(Strang, 1980)][1]\n\n This function is analogous to [`numpy.linalg.pinv`](\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html).\n It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the\n default `rcond` is `1e-15`. Here the default is\n `10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n rcond: `Tensor` of small singular value cutoffs. Singular values smaller\n (in modulus) than `rcond` * largest_singular_value (again, in modulus) are\n set to zero. Must broadcast against `tf.shape(a)[:-2]`.\n Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`.\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: 'pinv'.\n\n Returns:\n a_pinv: (Batch of) pseudo-inverse of input `a`. Has same shape as `a` except\n rightmost two dimensions are transposed.\n\n Raises:\n TypeError: if input `a` does not have `float`-like `dtype`.\n ValueError: if input `a` has fewer than 2 dimensions.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n a = tf.constant([[1., 0.4, 0.5],\n [0.4, 0.2, 0.25],\n [0.5, 0.25, 0.35]])\n tf.matmul(tf.linalg.pinv(a), a)\n # ==> array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]], dtype=float32)\n\n a = tf.constant([[1., 0.4, 0.5, 1.],\n [0.4, 0.2, 0.25, 2.],\n [0.5, 0.25, 0.35, 3.]])\n tf.matmul(tf.linalg.pinv(a), a)\n # ==> array([[ 0.76, 0.37, 0.21, -0.02],\n [ 0.37, 0.43, -0.33, 0.02],\n [ 0.21, -0.33, 0.81, 0.01],\n [-0.02, 0.02, 0.01, 1. ]], dtype=float32)\n ```\n\n #### References\n\n [1]: G. Strang. 'Linear Algebra and Its Applications, 2nd Ed.' Academic Press,\n Inc., 1980, pp. 139-142.\n ", "desc": "Compute the Moore-Penrose pseudo-inverse of one or more matrices.", "type": "API"}, {"name": "tf.linalg.qr", "docs": "Computes the QR decompositions of one or more matrices.\n\n Computes the QR decomposition of each inner matrix in `tensor` such that\n `tensor[..., :, :] = q[..., :, :] * r[..., :,:])`\n\n Currently, the gradient for the QR decomposition is well-defined only when\n the first `P` columns of the inner matrix are linearly independent, where\n `P` is the minimum of `M` and `N`, the 2 inner-most dimmensions of `tensor`.\n\n ```python\n # a is a tensor.\n # q is a tensor of orthonormal matrices.\n # r is a tensor of upper triangular matrices.\n q, r = qr(a)\n q_full, r_full = qr(a, full_matrices=True)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.\n full_matrices: An optional `bool`. Defaults to `False`.\n If true, compute full-sized `q` and `r`. If false\n (the default), compute only the leading `P` columns of `q`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (q, r).\n\n q: A `Tensor`. Has the same type as `input`.\n r: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the QR decompositions of one or more matrices.", "type": "API"}, {"name": "tf.linalg.set_diag", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the specified diagonals of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n `input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or\n `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`.\n Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`.\n `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`.\n `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n\n The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`.\n If `k` is scalar or `k[0] == k[1]`:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n\n Otherwise,\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)\n [7, 7, 7, 7],\n [7, 7, 7, 7]],\n [[7, 7, 7, 7],\n [7, 7, 7, 7],\n [7, 7, 7, 7]]])\n diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_set_diag(input, diagonal)\n ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [7, 2, 7, 7],\n [7, 7, 3, 7]],\n [[4, 7, 7, 7],\n [7, 5, 7, 7],\n [7, 7, 6, 7]]]\n\n # A superdiagonal (per batch).\n tf.matrix_set_diag(input, diagonal, k = 1)\n ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)\n [7, 7, 2, 7],\n [7, 7, 7, 3]],\n [[7, 4, 7, 7],\n [7, 7, 5, 7],\n [7, 7, 7, 6]]]\n\n # A band of diagonals.\n diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [0, 4, 5]],\n [[1, 2, 0],\n [5, 6, 4],\n [6, 1, 2],\n [0, 3, 4]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2))\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n # RIGHT_LEFT alignment.\n diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 1, 2],\n [5, 6, 4],\n [6, 1, 2],\n [3, 4, 0]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2), align=\"RIGHT_LEFT\")\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n ```\n\n Args:\n input: A `Tensor` with rank `k + 1`, where `k >= 1`.\n diagonal: A `Tensor` with rank `k`, when `d_lower == d_upper`, or `k + 1`,\n otherwise. `k >= 1`.\n name: A name for the operation (optional).\n k: Diagonal offset(s). Positive value means superdiagonal, 0 refers to the\n main diagonal, and negative value means subdiagonals. `k` can be a single\n integer (for a single diagonal) or a pair of integers specifying the low\n and high ends of a matrix band. `k[0]` must not be larger than `k[1]`.\n align: Some diagonals are shorter than `max_diag_len` and need to be padded.\n `align` is a string specifying how superdiagonals and subdiagonals should\n be aligned, respectively. There are four possible alignments: \"RIGHT_LEFT\"\n (default), \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\"\n aligns superdiagonals to the right (left-pads the row) and subdiagonals to\n the left (right-pads the row). It is the packing format LAPACK uses.\n cuSPARSE uses \"LEFT_RIGHT\", which is the opposite alignment.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.linalg.slogdet", "docs": "Computes the sign and the log of the absolute value of the determinant of\n\n one or more square matrices.\n\n The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions\n form square matrices. The outputs are two tensors containing the signs and\n absolute values of the log determinants for all N input submatrices\n `[..., :, :]` such that `determinant = sign*exp(log_abs_determinant)`.\n The `log_abs_determinant` is computed as `det(P)*sum(log(diag(LU)))` where `LU`\n is the `LU` decomposition of the input and `P` is the corresponding\n permutation matrix.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[N, M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sign, log_abs_determinant).\n\n sign: A `Tensor`. Has the same type as `input`.\n log_abs_determinant: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the sign and the log of the absolute value of the determinant of", "type": "API"}, {"name": "tf.linalg.solve", "docs": "Solves systems of linear equations.\n\n `Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is\n a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix\n satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.\n If `adjoint` is `True` then each output matrix satisfies\n `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n adjoint: An optional `bool`. Defaults to `False`.\n Boolean indicating whether to solve with `matrix` or its (block-wise)\n adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves systems of linear equations.", "type": "API"}, {"name": "tf.linalg.sqrtm", "docs": "Computes the matrix square root of one or more square matrices:\n\n matmul(sqrtm(A), sqrtm(A)) = A\n\n The input matrix should be invertible. If the input matrix is real, it should\n have no eigenvalues which are real and negative (pairs of complex conjugate\n eigenvalues are allowed).\n\n The matrix square root is computed by first reducing the matrix to\n quasi-triangular form with the real Schur decomposition. The square root\n of the quasi-triangular matrix is then computed directly. Details of\n the algorithm can be found in: Nicholas J. Higham, \"Computing real\n square roots of a real matrix\", Linear Algebra Appl., 1987.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the matrix square root for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix square root of one or more square matrices:", "type": "API"}, {"name": "tf.linalg.svd", "docs": "Computes the singular value decompositions of one or more matrices.\n\n Computes the SVD of each inner matrix in `tensor` such that\n `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) *\n transpose(conj(v[..., :, :]))`\n\n ```python\n # a is a tensor.\n # s is a tensor of singular values.\n # u is a tensor of left singular vectors.\n # v is a tensor of right singular vectors.\n s, u, v = svd(a)\n s = svd(a, compute_uv=False)\n ```\n\n Args:\n tensor: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and\n `N`.\n full_matrices: If true, compute full-sized `u` and `v`. If false\n (the default), compute only the leading `P` singular vectors.\n Ignored if `compute_uv` is `False`.\n compute_uv: If `True` then left and right singular vectors will be\n computed and returned in `u` and `v`, respectively. Otherwise, only the\n singular values will be computed, which can be significantly faster.\n name: string, optional name of the operation.\n\n Returns:\n s: Singular values. Shape is `[..., P]`. The values are sorted in reverse\n order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the\n second largest, etc.\n u: Left singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., M, P]`; if `full_matrices` is `True` then shape is\n `[..., M, M]`. Not returned if `compute_uv` is `False`.\n v: Right singular vectors. If `full_matrices` is `False` (default) then\n shape is `[..., N, P]`. If `full_matrices` is `True` then shape is\n `[..., N, N]`. Not returned if `compute_uv` is `False`.\n\n @compatibility(numpy)\n Mostly equivalent to numpy.linalg.svd, except that\n * The order of output arguments here is `s`, `u`, `v` when `compute_uv` is\n `True`, as opposed to `u`, `s`, `v` for numpy.linalg.svd.\n * full_matrices is `False` by default as opposed to `True` for\n numpy.linalg.svd.\n * tf.linalg.svd uses the standard definition of the SVD\n \\\\(A = U \\Sigma V^H\\\\), such that the left singular vectors of `a` are\n the columns of `u`, while the right singular vectors of `a` are the\n columns of `v`. On the other hand, numpy.linalg.svd returns the adjoint\n \\\\(V^H\\\\) as the third output argument.\n ```python\n import tensorflow as tf\n import numpy as np\n s, u, v = tf.linalg.svd(a)\n tf_a_approx = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True))\n u, s, v_adj = np.linalg.svd(a, full_matrices=False)\n np_a_approx = np.dot(u, np.dot(np.diag(s), v_adj))\n # tf_a_approx and np_a_approx should be numerically close.\n ```\n @end_compatibility\n ", "desc": "Computes the singular value decompositions of one or more matrices.", "type": "API"}, {"name": "tf.linalg.tensor_diag", "docs": "Returns a diagonal tensor with a given diagonal values.\n\n Given a `diagonal`, this operation returns a tensor with the `diagonal` and\n everything else padded with zeros. The diagonal is computed as follows:\n\n Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of\n rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:\n\n `output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.\n\n For example:\n\n ```\n # 'diagonal' is [1, 2, 3, 4]\n tf.diag(diagonal) ==> [[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]]\n ```\n\n Args:\n diagonal: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n Rank k tensor where k is at most 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a diagonal tensor with a given diagonal values.", "type": "API"}, {"name": "tf.linalg.tensor_diag_part", "docs": "Returns the diagonal part of the tensor.\n\n This operation returns a tensor with the `diagonal` part\n of the `input`. The `diagonal` part is computed as follows:\n\n Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a\n tensor of rank `k` with dimensions `[D1,..., Dk]` where:\n\n `diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.\n\n For a rank 2 tensor, `linalg.diag_part` and `linalg.tensor_diag_part`\n produce the same result. For rank 3 and higher, linalg.diag_part extracts\n the diagonal of each inner-most matrix in the tensor. An example where\n they differ is given below.\n\n >>> x = [[[[1111,1112],[1121,1122]],\n ... [[1211,1212],[1221,1222]]],\n ... [[[2111, 2112], [2121, 2122]],\n ... [[2211, 2212], [2221, 2222]]]\n ... ]\n >>> tf.linalg.tensor_diag_part(x)\n \n >>> tf.linalg.diag_part(x).shape\n TensorShape([2, 2, 2])\n\n Args:\n input: A `Tensor` with rank `2k`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor containing diagonals of `input`. Has the same type as `input`, and\n rank `k`.\n ", "desc": "Returns the diagonal part of the tensor.", "type": "API"}, {"name": "tf.linalg.tensordot", "docs": "Tensor contraction of a and b along specified axes and outer product.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `axes`.\n\n This operation corresponds to `numpy.tensordot(a, b, axes)`.\n\n Example 1: When `a` and `b` are matrices (order 2), the case `axes=1`\n is equivalent to matrix multiplication.\n\n Example 2: When `a` and `b` are matrices (order 2), the case\n `axes = [[1], [0]]` is equivalent to matrix multiplication.\n\n Example 3: When `a` and `b` are matrices (order 2), the case `axes=0` gives\n the outer product, a tensor of order 4.\n\n Example 4: Suppose that \\\\(a_{ijk}\\\\) and \\\\(b_{lmn}\\\\) represent two\n tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor\n \\\\(c_{jklm}\\\\) whose entry\n corresponding to the indices \\\\((j,k,l,m)\\\\) is given by:\n\n \\\\( c_{jklm} = \\sum_i a_{ijk} b_{lmi} \\\\).\n\n In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.\n\n Args:\n a: `Tensor` of type `float32` or `float64`.\n b: `Tensor` with the same type as `a`.\n axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].\n If axes is a scalar, sum over the last N axes of a and the first N axes of\n b in order. If axes is a list or `Tensor` the first and second row contain\n the set of unique integers specifying axes along which the contraction is\n computed, for `a` and `b`, respectively. The number of axes for `a` and\n `b` must be equal. If `axes=0`, computes the outer product between `a` and\n `b`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `a`.\n\n Raises:\n ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.\n IndexError: If the values in axes exceed the rank of the corresponding\n tensor.\n ", "desc": "Tensor contraction of a and b along specified axes and outer product.", "type": "API"}, {"name": "tf.linalg.trace", "docs": "Compute the trace of a tensor `x`.\n\n `trace(x)` returns the sum along the main diagonal of each inner-most matrix\n in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output\n is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where\n\n `output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`\n\n For example:\n\n ```python\n x = tf.constant([[1, 2], [3, 4]])\n tf.linalg.trace(x) # 5\n\n x = tf.constant([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n tf.linalg.trace(x) # 15\n\n x = tf.constant([[[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]],\n [[-1, -2, -3],\n [-4, -5, -6],\n [-7, -8, -9]]])\n tf.linalg.trace(x) # [15, -15]\n ```\n\n Args:\n x: tensor.\n name: A name for the operation (optional).\n\n Returns:\n The trace of input tensor.\n ", "desc": "Compute the trace of a tensor `x`.", "type": "API"}, {"name": "tf.linalg.triangular_solve", "docs": "Solve systems of linear equations with upper or lower triangular matrices.\n\n `matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form\n square matrices. If `lower` is `True` then the strictly upper triangular part\n of each inner-most matrix is assumed to be zero and not accessed. If `lower`\n is `False` then the strictly lower triangular part of each inner-most matrix\n is assumed to be zero and not accessed. `rhs` is a tensor of shape\n `[..., M, N]`.\n\n The output is a tensor of shape `[..., M, N]`. If `adjoint` is `True` then the\n innermost matrices in output satisfy matrix equations `\n sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`.\n If `adjoint` is `False` then the\n innermost matrices in output satisfy matrix equations\n `sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.\n\n Example:\n\n >>> a = tf.constant([[3, 0, 0, 0],\n ... [2, 1, 0, 0],\n ... [1, 0, 1, 0],\n ... [1, 1, 1, 1]], dtype=tf.float32)\n\n >>> b = tf.constant([[4], [2], [4], [2]], dtype=tf.float32)\n >>> x = tf.linalg.triangular_solve(a, b, lower=True)\n >>> x\n \n >>> tf.matmul(a, x)\n \n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`,\n `float32`, `half`, `complex64`, `complex128`. Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M,\n N]`.\n lower: An optional `bool`. Defaults to `True`. Boolean indicating whether\n the innermost matrices in matrix are lower or upper triangular.\n adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether\n to solve with matrix or its (block-wise) adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as matrix, and shape is `[..., M, N]`.\n\n ", "desc": "Solve systems of linear equations with upper or lower triangular matrices.", "type": "API"}, {"name": "tf.linalg.tridiagonal_matmul", "docs": "Multiplies tridiagonal matrix by matrix.\n\n `diagonals` is representation of 3-diagonal NxN matrix, which depends on\n `diagonals_format`.\n\n In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with\n two inner-most dimensions representing the square tridiagonal matrices.\n Elements outside of the three diagonals will be ignored.\n\n If `sequence` format, `diagonals` is list or tuple of three tensors:\n `[superdiag, maindiag, subdiag]`, each having shape [..., M]. Last element\n of `superdiag` first element of `subdiag` are ignored.\n\n In `compact` format the three diagonals are brought together into one tensor\n of shape `[..., 3, M]`, with last two dimensions containing superdiagonals,\n diagonals, and subdiagonals, in order. Similarly to `sequence` format,\n elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.\n\n The `sequence` format is recommended as the one with the best performance.\n\n `rhs` is matrix to the right of multiplication. It has shape `[..., M, N]`.\n\n Example:\n\n ```python\n superdiag = tf.constant([-1, -1, 0], dtype=tf.float64)\n maindiag = tf.constant([2, 2, 2], dtype=tf.float64)\n subdiag = tf.constant([0, -1, -1], dtype=tf.float64)\n diagonals = [superdiag, maindiag, subdiag]\n rhs = tf.constant([[1, 1], [1, 1], [1, 1]], dtype=tf.float64)\n x = tf.linalg.tridiagonal_matmul(diagonals, rhs, diagonals_format='sequence')\n ```\n\n Args:\n diagonals: A `Tensor` or tuple of `Tensor`s describing left-hand sides. The\n shape depends of `diagonals_format`, see description above. Must be\n `float32`, `float64`, `complex64`, or `complex128`.\n rhs: A `Tensor` of shape [..., M, N] and with the same dtype as `diagonals`.\n diagonals_format: one of `sequence`, or `compact`. Default is `compact`.\n name: A name to give this `Op` (optional).\n\n Returns:\n A `Tensor` of shape [..., M, N] containing the result of multiplication.\n\n Raises:\n ValueError: An unsupported type is provided as input, or when the input\n tensors have incorrect shapes.\n ", "desc": "Multiplies tridiagonal matrix by matrix.", "type": "API"}, {"name": "tf.linalg.tridiagonal_solve", "docs": "Solves tridiagonal systems of equations.\n\n The input can be supplied in various formats: `matrix`, `sequence` and\n `compact`, specified by the `diagonals_format` arg.\n\n In `matrix` format, `diagonals` must be a tensor of shape `[..., M, M]`, with\n two inner-most dimensions representing the square tridiagonal matrices.\n Elements outside of the three diagonals will be ignored.\n\n In `sequence` format, `diagonals` are supplied as a tuple or list of three\n tensors of shapes `[..., N]`, `[..., M]`, `[..., N]` representing\n superdiagonals, diagonals, and subdiagonals, respectively. `N` can be either\n `M-1` or `M`; in the latter case, the last element of superdiagonal and the\n first element of subdiagonal will be ignored.\n\n In `compact` format the three diagonals are brought together into one tensor\n of shape `[..., 3, M]`, with last two dimensions containing superdiagonals,\n diagonals, and subdiagonals, in order. Similarly to `sequence` format,\n elements `diagonals[..., 0, M-1]` and `diagonals[..., 2, 0]` are ignored.\n\n The `compact` format is recommended as the one with best performance. In case\n you need to cast a tensor into a compact format manually, use `tf.gather_nd`.\n An example for a tensor of shape [m, m]:\n\n ```python\n rhs = tf.constant([...])\n matrix = tf.constant([[...]])\n m = matrix.shape[0]\n dummy_idx = [0, 0] # An arbitrary element to use as a dummy\n indices = [[[i, i + 1] for i in range(m - 1)] + [dummy_idx], # Superdiagonal\n [[i, i] for i in range(m)], # Diagonal\n [dummy_idx] + [[i + 1, i] for i in range(m - 1)]] # Subdiagonal\n diagonals=tf.gather_nd(matrix, indices)\n x = tf.linalg.tridiagonal_solve(diagonals, rhs)\n ```\n\n Regardless of the `diagonals_format`, `rhs` is a tensor of shape `[..., M]` or\n `[..., M, K]`. The latter allows to simultaneously solve K systems with the\n same left-hand sides and K different right-hand sides. If `transpose_rhs`\n is set to `True` the expected shape is `[..., M]` or `[..., K, M]`.\n\n The batch dimensions, denoted as `...`, must be the same in `diagonals` and\n `rhs`.\n\n The output is a tensor of the same shape as `rhs`: either `[..., M]` or\n `[..., M, K]`.\n\n The op isn't guaranteed to raise an error if the input matrix is not\n invertible. `tf.debugging.check_numerics` can be applied to the output to\n detect invertibility problems.\n\n **Note**: with large batch sizes, the computation on the GPU may be slow, if\n either `partial_pivoting=True` or there are multiple right-hand sides\n (`K > 1`). If this issue arises, consider if it's possible to disable pivoting\n and have `K = 1`, or, alternatively, consider using CPU.\n\n On CPU, solution is computed via Gaussian elimination with or without partial\n pivoting, depending on `partial_pivoting` parameter. On GPU, Nvidia's cuSPARSE\n library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv\n\n Args:\n diagonals: A `Tensor` or tuple of `Tensor`s describing left-hand sides. The\n shape depends of `diagonals_format`, see description above. Must be\n `float32`, `float64`, `complex64`, or `complex128`.\n rhs: A `Tensor` of shape [..., M] or [..., M, K] and with the same dtype as\n `diagonals`. Note that if the shape of `rhs` and/or `diags` isn't known\n statically, `rhs` will be treated as a matrix rather than a vector.\n diagonals_format: one of `matrix`, `sequence`, or `compact`. Default is\n `compact`.\n transpose_rhs: If `True`, `rhs` is transposed before solving (has no effect\n if the shape of rhs is [..., M]).\n conjugate_rhs: If `True`, `rhs` is conjugated before solving.\n name: A name to give this `Op` (optional).\n partial_pivoting: whether to perform partial pivoting. `True` by default.\n Partial pivoting makes the procedure more stable, but slower. Partial\n pivoting is unnecessary in some cases, including diagonally dominant and\n symmetric positive definite matrices (see e.g. theorem 9.12 in [1]).\n perturb_singular: whether to perturb singular matrices to return a finite\n result. `False` by default. If true, solutions to systems involving\n a singular matrix will be computed by perturbing near-zero pivots in\n the partially pivoted LU decomposition. Specifically, tiny pivots are\n perturbed by an amount of order `eps * max_{ij} |U(i,j)|` to avoid\n overflow. Here `U` is the upper triangular part of the LU decomposition,\n and `eps` is the machine precision. This is useful for solving\n numerically singular systems when computing eigenvectors by inverse\n iteration.\n If `partial_pivoting` is `False`, `perturb_singular` must be `False` as\n well.\n\n Returns:\n A `Tensor` of shape [..., M] or [..., M, K] containing the solutions.\n If the input matrix is singular, the result is undefined.\n\n Raises:\n ValueError: Is raised if any of the following conditions hold:\n 1. An unsupported type is provided as input,\n 2. the input tensors have incorrect shapes,\n 3. `perturb_singular` is `True` but `partial_pivoting` is not.\n UnimplementedError: Whenever `partial_pivoting` is true and the backend is\n XLA, or whenever `perturb_singular` is true and the backend is\n XLA or GPU.\n\n [1] Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms:\n Second Edition. SIAM. p. 175. ISBN 978-0-89871-802-7.\n\n ", "desc": "Solves tridiagonal systems of equations.", "type": "API"}, {"name": "tf.linspace", "docs": "Generates evenly-spaced values in an interval along a given axis.\n\n A sequence of `num` evenly-spaced values are generated beginning at `start`\n along a given `axis`.\n If `num > 1`, the values in the sequence increase by\n `(stop - start) / (num - 1)`, so that the last one is exactly `stop`.\n If `num <= 0`, `ValueError` is raised.\n\n Matches\n [np.linspace](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)'s\n behaviour\n except when `num == 0`.\n\n For example:\n\n ```\n tf.linspace(10.0, 12.0, 3, name=\"linspace\") => [ 10.0 11.0 12.0]\n ```\n\n `Start` and `stop` can be tensors of arbitrary size:\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=0)\n \n\n `Axis` is where the values will be generated (the dimension in the\n returned tensor which corresponds to the axis will be equal to `num`)\n\n >>> tf.linspace([0., 5.], [10., 40.], 5, axis=-1)\n \n\n\n\n Args:\n start: A `Tensor`. Must be one of the following types: `bfloat16`,\n `float32`, `float64`. N-D tensor. First entry in the range.\n stop: A `Tensor`. Must have the same type and shape as `start`. N-D tensor.\n Last entry in the range.\n num: A `Tensor`. Must be one of the following types: `int32`, `int64`. 0-D\n tensor. Number of values to generate.\n name: A name for the operation (optional).\n axis: Axis along which the operation is performed (used only when N-D\n tensors are provided).\n\n Returns:\n A `Tensor`. Has the same type as `start`.\n ", "desc": "Generates evenly-spaced values in an interval along a given axis.", "type": "API"}, {"name": "tf.lite", "docs": "Public API for tf.lite namespace.\n", "desc": "Public API for tf.lite namespace.", "type": "API"}, {"name": "tf.lite.experimental", "docs": "Public API for tf.lite.experimental namespace.\n", "desc": "Public API for tf.lite.experimental namespace.", "type": "API"}, {"name": "tf.lite.experimental.load_delegate", "docs": "Returns loaded Delegate object.\n\n Example usage:\n\n ```\n import tensorflow as tf\n\n try:\n delegate = tf.lite.experimental.load_delegate('delegate.so')\n except ValueError:\n // Fallback to CPU\n\n if delegate:\n interpreter = tf.lite.Interpreter(\n model_path='model.tflite',\n experimental_delegates=[delegate])\n else:\n interpreter = tf.lite.Interpreter(model_path='model.tflite')\n ```\n\n This is typically used to leverage EdgeTPU for running TensorFlow Lite models.\n For more information see: https://coral.ai/docs/edgetpu/tflite-python/\n\n Args:\n library: Name of shared library containing the\n [TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates).\n options: Dictionary of options that are required to load the delegate. All\n keys and values in the dictionary should be convertible to str. Consult\n the documentation of the specific delegate for required and legal options.\n (default None)\n\n Returns:\n Delegate object.\n\n Raises:\n ValueError: Delegate failed to load.\n RuntimeError: If delegate loading is used on unsupported platform.\n ", "desc": "Returns loaded Delegate object.", "type": "API"}, {"name": "tf.lite.experimental.OpResolverType", "docs": "Different types of op resolvers for Tensorflow Lite.\n\n * `AUTO`: Indicates the op resolver that is chosen by default in TfLite\n Python, which is the \"BUILTIN\" as described below.\n * `BUILTIN`: Indicates the op resolver for built-in ops with optimized kernel\n implementation.\n * `BUILTIN_REF`: Indicates the op resolver for built-in ops with reference\n kernel implementation. It's generally used for testing and debugging.\n * `BUILTIN_WITHOUT_DEFAULT_DELEGATES`: Indicates the op resolver for\n built-in ops with optimized kernel implementation, but it will disable\n the application of default TfLite delegates (like the XNNPACK delegate) to\n the model graph. Generally this should not be used unless there are issues\n with the default configuration.\n ", "desc": "Different types of op resolvers for Tensorflow Lite.", "type": "API"}, {"name": "tf.lite.Interpreter", "docs": "Interpreter interface for running TensorFlow Lite models.\n\n Models obtained from `TfLiteConverter` can be run in Python with\n `Interpreter`.\n\n As an example, lets generate a simple Keras model and convert it to TFLite\n (`TfLiteConverter` also supports other input formats with `from_saved_model`\n and `from_concrete_function`)\n\n >>> x = np.array([[1.], [2.]])\n >>> y = np.array([[2.], [4.]])\n >>> model = tf.keras.models.Sequential([\n ... tf.keras.layers.Dropout(0.2),\n ... tf.keras.layers.Dense(units=1, input_shape=[1])\n ... ])\n >>> model.compile(optimizer='sgd', loss='mean_squared_error')\n >>> model.fit(x, y, epochs=1)\n >>> converter = tf.lite.TFLiteConverter.from_keras_model(model)\n >>> tflite_model = converter.convert()\n\n `tflite_model` can be saved to a file and loaded later, or directly into the\n `Interpreter`. Since TensorFlow Lite pre-plans tensor allocations to optimize\n inference, the user needs to call `allocate_tensors()` before any inference.\n\n >>> interpreter = tf.lite.Interpreter(model_content=tflite_model)\n >>> interpreter.allocate_tensors() # Needed before execution!\n\n Sample execution:\n\n >>> output = interpreter.get_output_details()[0] # Model has single output.\n >>> input = interpreter.get_input_details()[0] # Model has single input.\n >>> input_data = tf.constant(1., shape=[1, 1])\n >>> interpreter.set_tensor(input['index'], input_data)\n >>> interpreter.invoke()\n >>> interpreter.get_tensor(output['index']).shape\n (1, 1)\n\n Use `get_signature_runner()` for a more user-friendly inference API.\n ", "desc": "Interpreter interface for running TensorFlow Lite models.", "type": "API"}, {"name": "tf.lite.OpsSet", "docs": "Enum class defining the sets of ops available to generate TFLite models.\n\n WARNING: Experimental interface, subject to change.\n ", "desc": "Enum class defining the sets of ops available to generate TFLite models.", "type": "API"}, {"name": "tf.lite.Optimize", "docs": "Enum defining the optimizations to apply when generating a tflite model.\n\n DEFAULT\n Default optimization strategy that quantizes model weights. Enhanced\n optimizations are gained by providing a representative dataset that\n quantizes biases and activations as well.\n Converter will do its best to reduce size and latency, while minimizing\n the loss in accuracy.\n\n OPTIMIZE_FOR_SIZE\n Deprecated. Does the same as DEFAULT.\n\n OPTIMIZE_FOR_LATENCY\n Deprecated. Does the same as DEFAULT.\n\n EXPERIMENTAL_SPARSITY\n Experimental flag, subject to change.\n\n Enable optimization by taking advantage of the sparse model weights\n trained with pruning.\n\n The converter will inspect the sparsity pattern of the model weights and\n do its best to improve size and latency.\n The flag can be used alone to optimize float32 models with sparse weights.\n It can also be used together with the DEFAULT optimization mode to\n optimize quantized models with sparse weights.\n ", "desc": "Enum defining the optimizations to apply when generating a tflite model.", "type": "API"}, {"name": "tf.lite.RepresentativeDataset", "docs": "Representative dataset used to optimize the model.\n\n This is a generator function that provides a small dataset to calibrate or\n estimate the range, i.e, (min, max) of all floating-point arrays in the model\n (such as model input, activation outputs of intermediate layers, and model\n output) for quantization. Usually, this is a small subset of a few hundred\n samples randomly chosen, in no particular order, from the training or\n evaluation dataset.\n ", "desc": "Representative dataset used to optimize the model.", "type": "API"}, {"name": "tf.lite.TargetSpec", "docs": "Specification of target device used to optimize the model.\n\n Attributes:\n supported_ops: Experimental flag, subject to change. Set of `tf.lite.OpsSet`\n options, where each option represents a set of operators supported by the\n target device. (default {tf.lite.OpsSet.TFLITE_BUILTINS}))\n supported_types: Set of `tf.dtypes.DType` data types supported on the target\n device. If initialized, optimization might be driven by the smallest type\n in this set. (default set())\n experimental_select_user_tf_ops: Experimental flag, subject to change. Set\n of user's TensorFlow operators' names that are required in the TensorFlow\n Lite runtime. These ops will be exported as select TensorFlow ops in the\n model (in conjunction with the tf.lite.OpsSet.SELECT_TF_OPS flag). This is\n an advanced feature that should only be used if the client is using TF ops\n that may not be linked in by default with the TF ops that are provided\n when using the SELECT_TF_OPS path. The client is responsible for linking\n these ops into the target runtime.\n experimental_supported_backends: Experimental flag, subject to change.\n Set containing names of supported backends. Currently only \"GPU\" is\n supported, more options will be available later.\n ", "desc": "Specification of target device used to optimize the model.", "type": "API"}, {"name": "tf.lite.TFLiteConverter", "docs": "Converts a TensorFlow model into TensorFlow Lite model.\n\n Attributes:\n optimizations: Experimental flag, subject to change. Set of optimizations to\n apply. e.g {tf.lite.Optimize.DEFAULT}. (default None, must be None or a\n set of values of type `tf.lite.Optimize`)\n representative_dataset: A generator function used for integer quantization\n where each generated sample has the same order, type and shape as the\n inputs to the model. Usually, this is a small subset of a few hundred\n samples randomly chosen, in no particular order, from the training or\n evaluation dataset. This is an optional attribute, but required for full\n integer quantization, i.e, if `tf.int8` is the only supported type in\n `target_spec.supported_types`. Refer to `tf.lite.RepresentativeDataset`.\n (default None)\n target_spec: Experimental flag, subject to change. Specifications of target\n device, including supported ops set, supported types and a set of user's\n defined TensorFlow operators required in the TensorFlow Lite runtime.\n Refer to `tf.lite.TargetSpec`.\n inference_input_type: Data type of the input layer. Note that integer types\n (tf.int8 and tf.uint8) are currently only supported for post training\n integer quantization and quantization aware training. (default tf.float32,\n must be in {tf.float32, tf.int8, tf.uint8})\n inference_output_type: Data type of the output layer. Note that integer\n types (tf.int8 and tf.uint8) are currently only supported for post\n training integer quantization and quantization aware training. (default\n tf.float32, must be in {tf.float32, tf.int8, tf.uint8})\n allow_custom_ops: Boolean indicating whether to allow custom operations.\n When False, any unknown operation is an error. When True, custom ops are\n created for any op that is unknown. The developer needs to provide these\n to the TensorFlow Lite runtime with a custom resolver. (default False)\n exclude_conversion_metadata: Whether not to embed the conversion metadata\n into the converted model. (default False)\n experimental_new_converter: Experimental flag, subject to change. Enables\n MLIR-based conversion. (default True)\n experimental_new_quantizer: Experimental flag, subject to change. Enables\n MLIR-based quantization conversion instead of Flatbuffer-based conversion.\n (default True)\n experimental_enable_resource_variables: Experimental flag, subject to\n change. Enables resource variables to be converted by this converter. This\n is only allowed if from_saved_model interface is used. (default True)\n\n Example usage:\n\n ```python\n # Converting a SavedModel to a TensorFlow Lite model.\n converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)\n tflite_model = converter.convert()\n\n # Converting a tf.Keras model to a TensorFlow Lite model.\n converter = tf.lite.TFLiteConverter.from_keras_model(model)\n tflite_model = converter.convert()\n\n # Converting ConcreteFunctions to a TensorFlow Lite model.\n converter = tf.lite.TFLiteConverter.from_concrete_functions([func], model)\n tflite_model = converter.convert()\n\n # Converting a Jax model to a TensorFlow Lite model.\n converter = tf.lite.TFLiteConverter.experimental_from_jax([func], [[\n ('input1', input1), ('input2', input2)])\n tflite_model = converter.convert()\n ```\n ", "desc": "Converts a TensorFlow model into TensorFlow Lite model.", "type": "API"}, {"name": "tf.load_library", "docs": "Loads a TensorFlow plugin.\n\n \"library_location\" can be a path to a specific shared object, or a folder.\n If it is a folder, all shared objects that are named \"libtfkernel*\" will be\n loaded. When the library is loaded, kernels registered in the library via the\n `REGISTER_*` macros are made available in the TensorFlow process.\n\n Args:\n library_location: Path to the plugin or the folder of plugins.\n Relative or absolute filesystem path to a dynamic library file or folder.\n\n Returns:\n None\n\n Raises:\n OSError: When the file to be loaded is not found.\n RuntimeError: when unable to load the library.\n ", "desc": "Loads a TensorFlow plugin.", "type": "API"}, {"name": "tf.load_op_library", "docs": "Loads a TensorFlow plugin, containing custom ops and kernels.\n\n Pass \"library_filename\" to a platform-specific mechanism for dynamically\n loading a library. The rules for determining the exact location of the\n library are platform-specific and are not documented here. When the\n library is loaded, ops and kernels registered in the library via the\n `REGISTER_*` macros are made available in the TensorFlow process. Note\n that ops with the same name as an existing op are rejected and not\n registered with the process.\n\n Args:\n library_filename: Path to the plugin.\n Relative or absolute filesystem path to a dynamic library file.\n\n Returns:\n A python module containing the Python wrappers for Ops defined in\n the plugin.\n\n Raises:\n RuntimeError: when unable to load the library or get the python wrappers.\n ", "desc": "Loads a TensorFlow plugin, containing custom ops and kernels.", "type": "API"}, {"name": "tf.logical_and", "docs": "Returns the truth value of x AND y element-wise.\n\n Logical AND function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical AND with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical AND of the two input tensors.\n\n You can also use the `&` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_and(a, b)\n \n >>> a & b\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_and(c, x)\n \n >>> c & x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_and(y, z)\n \n >>> y & z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_and([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_all`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x AND y element-wise.", "type": "API"}, {"name": "tf.logical_not", "docs": "Returns the truth value of `NOT x` element-wise.\n\n Example:\n\n >>> tf.math.logical_not(tf.constant([True, False]))\n \n\n Args:\n x: A `Tensor` of type `bool`. A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of `NOT x` element-wise.", "type": "API"}, {"name": "tf.logical_or", "docs": "Returns the truth value of x OR y element-wise.\n\n Logical OR function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical OR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical OR of the two input tensors.\n\n You can also use the `|` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_or(a, b)\n \n >>> a | b\n \n\n >>> c = tf.constant([False])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_or(c, x)\n \n >>> c | x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_or(y, z)\n \n >>> y | z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_or([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_any`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x OR y element-wise.", "type": "API"}, {"name": "tf.lookup", "docs": "Public API for tf.lookup namespace.\n", "desc": "Public API for tf.lookup namespace.", "type": "API"}, {"name": "tf.lookup.experimental", "docs": "Public API for tf.lookup.experimental namespace.\n", "desc": "Public API for tf.lookup.experimental namespace.", "type": "API"}, {"name": "tf.lookup.experimental.DenseHashTable", "docs": "A mutable hash table with faster lookups and higher memory usage.\n\n Data can be inserted by calling the `insert` method and removed by calling the\n `remove` method. It does not support initialization via the init method.\n\n Compared to `MutableHashTable`, `DenseHashTable` offers generally faster\n `insert`, `remove` and `lookup` operations, in exchange for a higher overall\n memory footprint.\n\n It uses \"open addressing\" with quadratic reprobing to resolve collisions. This\n requires specifying two keys in the key space, `empty_key` and `deleted_key`,\n that can never inserted into the table.\n\n Unlike `MutableHashTable`, `DenseHashTable` does not require additional memory\n for temporary tensors created during checkpointing and restore operations.\n\n Example usage:\n\n >>> table = tf.lookup.experimental.DenseHashTable(\n ... key_dtype=tf.string,\n ... value_dtype=tf.int64,\n ... default_value=-1,\n ... empty_key='',\n ... deleted_key='$')\n >>> keys = tf.constant(['a', 'b', 'c'])\n >>> values = tf.constant([0, 1, 2], dtype=tf.int64)\n >>> table.insert(keys, values)\n >>> table.remove(tf.constant(['c']))\n >>> table.lookup(tf.constant(['a', 'b', 'c','d'])).numpy()\n array([ 0, 1, -1, -1])\n ", "desc": "A mutable hash table with faster lookups and higher memory usage.", "type": "API"}, {"name": "tf.lookup.KeyValueTensorInitializer", "docs": "Table initializers given `keys` and `values` tensors.\n\n >>> keys_tensor = tf.constant(['a', 'b', 'c'])\n >>> vals_tensor = tf.constant([7, 8, 9])\n >>> input_tensor = tf.constant(['a', 'f'])\n >>> init = tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor)\n >>> table = tf.lookup.StaticHashTable(\n ... init,\n ... default_value=-1)\n >>> table.lookup(input_tensor).numpy()\n array([ 7, -1], dtype=int32)\n\n ", "desc": "Table initializers given `keys` and `values` tensors.", "type": "API"}, {"name": "tf.lookup.StaticHashTable", "docs": "A generic hash table that is immutable once initialized.\n\n Example usage:\n\n >>> keys_tensor = tf.constant(['a', 'b', 'c'])\n >>> vals_tensor = tf.constant([7, 8, 9])\n >>> input_tensor = tf.constant(['a', 'f'])\n >>> table = tf.lookup.StaticHashTable(\n ... tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor),\n ... default_value=-1)\n >>> table.lookup(input_tensor).numpy()\n array([ 7, -1], dtype=int32)\n\n Or for more pythonic code:\n\n >>> table[input_tensor].numpy()\n array([ 7, -1], dtype=int32)\n\n The result of a lookup operation has the same shape as the argument:\n\n >>> input_tensor = tf.constant([['a', 'b'], ['c', 'd']])\n >>> table[input_tensor].numpy()\n array([[ 7, 8],\n [ 9, -1]], dtype=int32)\n\n\n ", "desc": "A generic hash table that is immutable once initialized.", "type": "API"}, {"name": "tf.lookup.StaticVocabularyTable", "docs": "String to Id table that assigns out-of-vocabulary keys to hash buckets.\n\n For example, if an instance of `StaticVocabularyTable` is initialized with a\n string-to-id initializer that maps:\n\n >>> init = tf.lookup.KeyValueTensorInitializer(\n ... keys=tf.constant(['emerson', 'lake', 'palmer']),\n ... values=tf.constant([0, 1, 2], dtype=tf.int64))\n >>> table = tf.lookup.StaticVocabularyTable(\n ... init,\n ... num_oov_buckets=5)\n\n The `Vocabulary` object will performs the following mapping:\n\n * `emerson -> 0`\n * `lake -> 1`\n * `palmer -> 2`\n * ` -> bucket_id`, where `bucket_id` will be between `3` and\n `3 + num_oov_buckets - 1 = 7`, calculated by:\n `hash() % num_oov_buckets + vocab_size`\n\n If input_tensor is:\n\n >>> input_tensor = tf.constant([\"emerson\", \"lake\", \"palmer\",\n ... \"king\", \"crimson\"])\n >>> table[input_tensor].numpy()\n array([0, 1, 2, 6, 7])\n\n If `initializer` is None, only out-of-vocabulary buckets are used.\n\n Example usage:\n\n >>> num_oov_buckets = 3\n >>> vocab = [\"emerson\", \"lake\", \"palmer\", \"crimnson\"]\n >>> import tempfile\n >>> f = tempfile.NamedTemporaryFile(delete=False)\n >>> f.write('\\n'.join(vocab).encode('utf-8'))\n >>> f.close()\n\n >>> init = tf.lookup.TextFileInitializer(\n ... f.name,\n ... key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,\n ... value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)\n >>> table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets)\n >>> table.lookup(tf.constant([\"palmer\", \"crimnson\" , \"king\",\n ... \"tarkus\", \"black\", \"moon\"])).numpy()\n array([2, 3, 5, 6, 6, 4])\n\n The hash function used for generating out-of-vocabulary buckets ID is\n Fingerprint64.\n\n Note that the out-of-vocabulary bucket IDs always range from the table `size`\n up to `size + num_oov_buckets - 1` regardless of the table values, which could\n cause unexpected collisions:\n\n >>> init = tf.lookup.KeyValueTensorInitializer(\n ... keys=tf.constant([\"emerson\", \"lake\", \"palmer\"]),\n ... values=tf.constant([1, 2, 3], dtype=tf.int64))\n >>> table = tf.lookup.StaticVocabularyTable(\n ... init,\n ... num_oov_buckets=1)\n >>> input_tensor = tf.constant([\"emerson\", \"lake\", \"palmer\", \"king\"])\n >>> table[input_tensor].numpy()\n array([1, 2, 3, 3])\n ", "desc": "String to Id table that assigns out-of-vocabulary keys to hash buckets.", "type": "API"}, {"name": "tf.lookup.TextFileIndex", "docs": "The key and value content to get from each line.\n\n This class defines the key and value used for `tf.lookup.TextFileInitializer`.\n\n The key and value content to get from each line is specified either\n by the following, or a value `>=0`.\n * `TextFileIndex.LINE_NUMBER` means use the line number starting from zero,\n expects data type int64.\n * `TextFileIndex.WHOLE_LINE` means use the whole line content, expects data\n type string.\n\n A value `>=0` means use the index (starting at zero) of the split line based\n on `delimiter`.\n ", "desc": "The key and value content to get from each line.", "type": "API"}, {"name": "tf.lookup.TextFileInitializer", "docs": "Table initializers from a text file.\n\n This initializer assigns one entry in the table for each line in the file.\n\n The key and value type of the table to initialize is given by `key_dtype` and\n `value_dtype`.\n\n The key and value content to get from each line is specified by\n the `key_index` and `value_index`.\n\n * `TextFileIndex.LINE_NUMBER` means use the line number starting from zero,\n expects data type int64.\n * `TextFileIndex.WHOLE_LINE` means use the whole line content, expects data\n type string.\n * A value `>=0` means use the index (starting at zero) of the split line based\n on `delimiter`.\n\n For example if we have a file with the following content:\n\n >>> import tempfile\n >>> f = tempfile.NamedTemporaryFile(delete=False)\n >>> content='\\n'.join([\"emerson 10\", \"lake 20\", \"palmer 30\",])\n >>> f.file.write(content.encode('utf-8'))\n >>> f.file.close()\n\n The following snippet initializes a table with the first column as keys and\n second column as values:\n\n * `emerson -> 10`\n * `lake -> 20`\n * `palmer -> 30`\n\n >>> init= tf.lookup.TextFileInitializer(\n ... filename=f.name,\n ... key_dtype=tf.string, key_index=0,\n ... value_dtype=tf.int64, value_index=1,\n ... delimiter=\" \")\n >>> table = tf.lookup.StaticHashTable(init, default_value=-1)\n >>> table.lookup(tf.constant(['palmer','lake','tarkus'])).numpy()\n\n Similarly to initialize the whole line as keys and the line number as values.\n\n * `emerson 10 -> 0`\n * `lake 20 -> 1`\n * `palmer 30 -> 2`\n\n >>> init = tf.lookup.TextFileInitializer(\n ... filename=f.name,\n ... key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,\n ... value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)\n >>> table = tf.lookup.StaticHashTable(init, -1)\n >>> table.lookup(tf.constant('palmer 30')).numpy()\n 2\n ", "desc": "Table initializers from a text file.", "type": "API"}, {"name": "tf.losses", "docs": "Built-in loss functions.\n", "desc": "Built-in loss functions.", "type": "API"}, {"name": "tf.losses.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.losses.BinaryCrossentropy", "docs": "Computes the cross-entropy loss between true labels and predicted labels.\n\n Use this cross-entropy loss for binary (0 or 1) classification applications.\n The loss function requires the following inputs:\n\n - `y_true` (true label): This is either 0 or 1.\n - `y_pred` (predicted value): This is the model's prediction, i.e, a single\n floating-point value which either represents a\n [logit](https://en.wikipedia.org/wiki/Logit), (i.e, value in [-inf, inf]\n when `from_logits=True`) or a probability (i.e, value in [0., 1.] when\n `from_logits=False`).\n\n **Recommended Usage:** (set `from_logits=True`)\n\n With `tf.keras` API:\n\n ```python\n model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n ....\n )\n ```\n\n As a standalone function:\n\n >>> # Example 1: (batch_size = 1, number of samples = 4)\n >>> y_true = [0, 1, 0, 0]\n >>> y_pred = [-18.6, 0.51, 2.94, -12.8]\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n\n >>> # Example 2: (batch_size = 2, number of samples = 4)\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[-18.6, 0.51], [2.94, -12.8]]\n >>> # Using default 'auto'/'sum_over_batch_size' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n >>> bce(y_true, y_pred).numpy()\n 0.865\n >>> # Using 'sample_weight' attribute\n >>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.243\n >>> # Using 'sum' reduction` type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> bce(y_true, y_pred).numpy()\n 1.730\n >>> # Using 'none' reduction type.\n >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> bce(y_true, y_pred).numpy()\n array([0.235, 1.496], dtype=float32)\n\n **Default Usage:** (set `from_logits=False`)\n\n >>> # Make the following updates to the above \"Recommended Usage\" section\n >>> # 1. Set `from_logits=False`\n >>> tf.keras.losses.BinaryCrossentropy() # OR ...('from_logits=False')\n >>> # 2. Update `y_pred` to use probabilities instead of logits\n >>> y_pred = [0.6, 0.3, 0.2, 0.8] # OR [[0.6, 0.3], [0.2, 0.8]]\n ", "desc": "Computes the cross-entropy loss between true labels and predicted labels.", "type": "API"}, {"name": "tf.losses.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.losses.categorical_hinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 3, size=(2,))\n >>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> pos = np.sum(y_true * y_pred, axis=-1)\n >>> neg = np.amax((1. - y_true) * y_pred, axis=-1)\n >>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be\n either `{-1, +1}` or `{0, 1}` (i.e. a one-hot-encoded tensor).\n y_pred: The predicted values.\n\n Returns:\n Categorical hinge loss values.\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.CategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided in a `one_hot` representation. If you want to\n provide labels as integers, please use `SparseCategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature.\n\n In the snippet below, there is `# classes` floating pointing values per\n example. The shape of both `y_pred` and `y_true` are\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy()\n >>> cce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> cce = tf.keras.losses.CategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.losses.CategoricalHinge", "docs": "Computes the categorical hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(neg - pos + 1, 0)`\n where `neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge()\n >>> h(y_true, y_pred).numpy()\n 1.4\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.6\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.8\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.CategoricalHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.2, 1.6], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge())\n ```\n ", "desc": "Computes the categorical hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.cosine_similarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and\n targets. If either `y_true` or `y_pred` is a zero vector, cosine\n similarity will be 0 regardless of the proximity between predictions\n and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]\n >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)\n >>> loss.numpy()\n array([-0., -0.999, 0.999], dtype=float32)\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n axis: Axis along which to determine similarity.\n\n Returns:\n Cosine similarity tensor.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.losses.CosineSimilarity", "docs": "Computes the cosine similarity between labels and predictions.\n\n Note that it is a number between -1 and 1. When it is a negative number\n between -1 and 0, 0 indicates orthogonality and values closer to -1\n indicate greater similarity. The values closer to 1 indicate greater\n dissimilarity. This makes it usable as a loss function in a setting\n where you try to maximize the proximity between predictions and targets.\n If either `y_true` or `y_pred` is a zero vector, cosine similarity will be 0\n regardless of the proximity between predictions and targets.\n\n `loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [1., 1.]]\n >>> y_pred = [[1., 0.], [1., 1.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = -((0. + 0.) + (0.5 + 0.5)) / 2\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.5\n\n >>> # Calling with 'sample_weight'.\n >>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n -0.0999\n\n >>> # Using 'sum' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> cosine_loss(y_true, y_pred).numpy()\n -0.999\n\n >>> # Using 'none' reduction type.\n >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> cosine_loss(y_true, y_pred).numpy()\n array([-0., -0.999], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))\n ```\n\n Args:\n axis: The axis along which the cosine similarity is computed\n (the features axis). Defaults to -1.\n reduction: Type of `tf.keras.losses.Reduction` to apply to loss.\n Default value is `AUTO`. `AUTO` indicates that the reduction option will\n be determined by the usage context. For almost all cases this defaults to\n `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of\n built-in training loops such as `tf.keras` `compile` and `fit`, using\n `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. Please see this\n custom training [tutorial]\n (https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details.\n name: Optional name for the instance.\n ", "desc": "Computes the cosine similarity between labels and predictions.", "type": "API"}, {"name": "tf.losses.deserialize", "docs": "Deserializes a serialized loss class/function instance.\n\n Args:\n name: Loss configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Loss` instance or a loss function.\n ", "desc": "Deserializes a serialized loss class/function instance.", "type": "API"}, {"name": "tf.losses.get", "docs": "Retrieves a Keras loss as a `function`/`Loss` class instance.\n\n The `identifier` may be the string name of a loss function or `Loss` class.\n\n >>> loss = tf.keras.losses.get(\"categorical_crossentropy\")\n >>> type(loss)\n \n >>> loss = tf.keras.losses.get(\"CategoricalCrossentropy\")\n >>> type(loss)\n \n\n You can also specify `config` of the loss to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Loss` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> loss = tf.keras.losses.get(identifier)\n >>> type(loss)\n \n\n Args:\n identifier: A loss identifier. One of None or string name of a loss\n function/class or loss configuration dictionary or a loss function or a\n loss class instance.\n\n Returns:\n A Keras loss as a `function`/ `Loss` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras loss as a `function`/`Loss` class instance.", "type": "API"}, {"name": "tf.losses.Hinge", "docs": "Computes the hinge loss between `y_true` and `y_pred`.\n\n `loss = maximum(1 - y_true * y_pred, 0)`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Hinge()\n >>> h(y_true, y_pred).numpy()\n 1.3\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.55\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 2.6\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Hinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.1, 1.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge())\n ```\n ", "desc": "Computes the hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.Huber", "docs": "Computes the Huber loss between `y_true` and `y_pred`.\n\n For each value x in `error = y_true - y_pred`:\n\n ```\n loss = 0.5 * x^2 if |x| <= d\n loss = 0.5 * d^2 + d * (|x| - d) if |x| > d\n ```\n where d is `delta`. See: https://en.wikipedia.org/wiki/Huber_loss\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.Huber()\n >>> h(y_true, y_pred).numpy()\n 0.155\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.09\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 0.31\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.Huber(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([0.18, 0.13], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Huber())\n ```\n ", "desc": "Computes the Huber loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.KLDivergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> kl = tf.keras.losses.KLDivergence()\n >>> kl(y_true, y_pred).numpy()\n 0.458\n\n >>> # Calling with 'sample_weight'.\n >>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.366\n\n >>> # Using 'sum' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> kl(y_true, y_pred).numpy()\n 0.916\n\n >>> # Using 'none' reduction type.\n >>> kl = tf.keras.losses.KLDivergence(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> kl(y_true, y_pred).numpy()\n array([0.916, -3.08e-06], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())\n ```\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.losses.LogCosh", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`,\n where x is the error `y_pred - y_true`.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> l = tf.keras.losses.LogCosh()\n >>> l(y_true, y_pred).numpy()\n 0.108\n\n >>> # Calling with 'sample_weight'.\n >>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.087\n\n >>> # Using 'sum' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> l(y_true, y_pred).numpy()\n 0.217\n\n >>> # Using 'none' reduction type.\n >>> l = tf.keras.losses.LogCosh(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> l(y_true, y_pred).numpy()\n array([0.217, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.losses.Loss", "docs": "Loss base class.\n\n To be implemented by subclasses:\n * `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.\n\n Example subclass implementation:\n\n ```python\n class MeanSquaredError(Loss):\n\n def call(self, y_true, y_pred):\n return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)\n ```\n\n When used with `tf.distribute.Strategy`, outside of built-in training loops\n such as `tf.keras` `compile` and `fit`, please use 'SUM' or 'NONE' reduction\n types, and reduce losses explicitly in your training loop. Using 'AUTO' or\n 'SUM_OVER_BATCH_SIZE' will raise an error.\n\n Please see this custom training [tutorial](\n https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details on this.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:\n\n ```python\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = (tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size))\n ```\n ", "desc": "Loss base class.", "type": "API"}, {"name": "tf.losses.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.losses.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.losses.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.losses.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.MeanAbsoluteError", "docs": "Computes the mean of absolute difference between labels and predictions.\n\n `loss = abs(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError()\n >>> mae(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mae(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mae = tf.keras.losses.MeanAbsoluteError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mae(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError())\n ```\n ", "desc": "Computes the mean of absolute difference between labels and predictions.", "type": "API"}, {"name": "tf.losses.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Formula:\n\n `loss = 100 * abs((y_true - y_pred) / y_true)`\n\n Note that to avoid dividing by zero, a small epsilon value\n is added to the denominator.\n\n Standalone usage:\n\n >>> y_true = [[2., 1.], [2., 3.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError()\n >>> mape(y_true, y_pred).numpy()\n 50.\n\n >>> # Calling with 'sample_weight'.\n >>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 20.\n\n >>> # Using 'sum' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mape(y_true, y_pred).numpy()\n 100.\n\n >>> # Using 'none' reduction type.\n >>> mape = tf.keras.losses.MeanAbsolutePercentageError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mape(y_true, y_pred).numpy()\n array([25., 75.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanAbsolutePercentageError())\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.MeanSquaredError", "docs": "Computes the mean of squares of errors between labels and predictions.\n\n `loss = square(y_true - y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError()\n >>> mse(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.25\n\n >>> # Using 'sum' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> mse(y_true, y_pred).numpy()\n 1.0\n\n >>> # Using 'none' reduction type.\n >>> mse = tf.keras.losses.MeanSquaredError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> mse(y_true, y_pred).numpy()\n array([0.5, 0.5], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())\n ```\n ", "desc": "Computes the mean of squares of errors between labels and predictions.", "type": "API"}, {"name": "tf.losses.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = square(log(y_true + 1.) - log(y_pred + 1.))`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [1., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError()\n >>> msle(y_true, y_pred).numpy()\n 0.240\n\n >>> # Calling with 'sample_weight'.\n >>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()\n 0.120\n\n >>> # Using 'sum' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> msle(y_true, y_pred).numpy()\n 0.480\n\n >>> # Using 'none' reduction type.\n >>> msle = tf.keras.losses.MeanSquaredLogarithmicError(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> msle(y_true, y_pred).numpy()\n array([0.240, 0.240], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.MeanSquaredLogarithmicError())\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.losses.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.Poisson", "docs": "Computes the Poisson loss between `y_true` and `y_pred`.\n\n `loss = y_pred - y_true * log(y_pred)`\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[1., 1.], [0., 0.]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> p = tf.keras.losses.Poisson()\n >>> p(y_true, y_pred).numpy()\n 0.5\n\n >>> # Calling with 'sample_weight'.\n >>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()\n 0.4\n\n >>> # Using 'sum' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> p(y_true, y_pred).numpy()\n 0.999\n\n >>> # Using 'none' reduction type.\n >>> p = tf.keras.losses.Poisson(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> p(y_true, y_pred).numpy()\n array([0.999, 0.], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())\n ```\n ", "desc": "Computes the Poisson loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.Reduction", "docs": "Types of loss reduction.\n\n Contains the following values:\n\n * `AUTO`: Indicates that the reduction option will be determined by the usage\n context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When\n used with `tf.distribute.Strategy`, outside of built-in training loops such\n as `tf.keras` `compile` and `fit`, we expect reduction value to be\n `SUM` or `NONE`. Using `AUTO` in that case will raise an error.\n * `NONE`: No **additional** reduction is applied to the output of the wrapped\n loss function. When non-scalar losses are returned to Keras functions like\n `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer\n but the reported loss will be a scalar value.\n\n Caution: **Verify the shape of the outputs when using** `Reduction.NONE`.\n The builtin loss functions wrapped by the loss classes reduce\n one dimension (`axis=-1`, or `axis` if specified by loss function).\n `Reduction.NONE` just means that no **additional** reduction is applied by\n the class wrapper. For categorical losses with an example input shape of\n `[batch, W, H, n_classes]` the `n_classes` dimension is reduced. For\n pointwise losses you must include a dummy axis so that `[batch, W, H, 1]`\n is reduced to `[batch, W, H]`. Without the dummy axis `[batch, W, H]`\n will be incorrectly reduced to `[batch, W]`.\n\n * `SUM`: Scalar sum of weighted losses.\n * `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses.\n This reduction type is not supported when used with\n `tf.distribute.Strategy` outside of built-in training loops like `tf.keras`\n `compile`/`fit`.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like:\n ```\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size)\n ```\n\n Please see the [custom training guide](\n https://www.tensorflow.org/tutorials/distribute/custom_training) for more\n details on this.\n ", "desc": "Types of loss reduction.", "type": "API"}, {"name": "tf.losses.serialize", "docs": "Serializes loss function or `Loss` instance.\n\n Args:\n loss: A Keras `Loss` instance or a loss function.\n\n Returns:\n Loss configuration dictionary.\n ", "desc": "Serializes loss function or `Loss` instance.", "type": "API"}, {"name": "tf.losses.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.losses.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy loss between the labels and predictions.\n\n Use this crossentropy loss function when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` loss.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy()\n >>> scce(y_true, y_pred).numpy()\n 1.177\n\n >>> # Calling with 'sample_weight'.\n >>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()\n 0.814\n\n >>> # Using 'sum' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> scce(y_true, y_pred).numpy()\n 2.354\n\n >>> # Using 'none' reduction type.\n >>> scce = tf.keras.losses.SparseCategoricalCrossentropy(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> scce(y_true, y_pred).numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.SparseCategoricalCrossentropy())\n ```\n ", "desc": "Computes the crossentropy loss between the labels and predictions.", "type": "API"}, {"name": "tf.losses.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.losses.SquaredHinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = square(maximum(1 - y_true * y_pred, 0))`\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Standalone usage:\n\n >>> y_true = [[0., 1.], [0., 0.]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> # Using 'auto'/'sum_over_batch_size' reduction type.\n >>> h = tf.keras.losses.SquaredHinge()\n >>> h(y_true, y_pred).numpy()\n 1.86\n\n >>> # Calling with 'sample_weight'.\n >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()\n 0.73\n\n >>> # Using 'sum' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.SUM)\n >>> h(y_true, y_pred).numpy()\n 3.72\n\n >>> # Using 'none' reduction type.\n >>> h = tf.keras.losses.SquaredHinge(\n ... reduction=tf.keras.losses.Reduction.NONE)\n >>> h(y_true, y_pred).numpy()\n array([1.46, 2.26], dtype=float32)\n\n Usage with the `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge())\n ```\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.make_ndarray", "docs": "Create a numpy ndarray from a tensor.\n\n Create a numpy ndarray with the same shape and data as the tensor.\n\n For example:\n\n ```python\n # Tensor a has shape (2,3)\n a = tf.constant([[1,2,3],[4,5,6]])\n proto_tensor = tf.make_tensor_proto(a) # convert `tensor a` to a proto tensor\n tf.make_ndarray(proto_tensor) # output: array([[1, 2, 3],\n # [4, 5, 6]], dtype=int32)\n # output has shape (2,3)\n ```\n\n Args:\n tensor: A TensorProto.\n\n Returns:\n A numpy array with the tensor contents.\n\n Raises:\n TypeError: if tensor has unsupported type.\n\n ", "desc": "Create a numpy ndarray from a tensor.", "type": "API"}, {"name": "tf.make_tensor_proto", "docs": "Create a TensorProto.\n\n In TensorFlow 2.0, representing tensors as protos should no longer be a\n common workflow. That said, this utility function is still useful for\n generating TF Serving request protos:\n\n ```python\n request = tensorflow_serving.apis.predict_pb2.PredictRequest()\n request.model_spec.name = \"my_model\"\n request.model_spec.signature_name = \"serving_default\"\n request.inputs[\"images\"].CopyFrom(tf.make_tensor_proto(X_new))\n ```\n\n `make_tensor_proto` accepts \"values\" of a python scalar, a python list, a\n numpy ndarray, or a numpy scalar.\n\n If \"values\" is a python scalar or a python list, make_tensor_proto\n first convert it to numpy ndarray. If dtype is None, the\n conversion tries its best to infer the right numpy data\n type. Otherwise, the resulting numpy array has a compatible data\n type with the given dtype.\n\n In either case above, the numpy ndarray (either the caller provided\n or the auto-converted) must have the compatible type with dtype.\n\n `make_tensor_proto` then converts the numpy array to a tensor proto.\n\n If \"shape\" is None, the resulting tensor proto represents the numpy\n array precisely.\n\n Otherwise, \"shape\" specifies the tensor's shape and the numpy array\n can not have more elements than what \"shape\" specifies.\n\n Args:\n values: Values to put in the TensorProto.\n dtype: Optional tensor_pb2 DataType value.\n shape: List of integers representing the dimensions of tensor.\n verify_shape: Boolean that enables verification of a shape of values.\n allow_broadcast: Boolean that enables allowing scalars and 1 length vector\n broadcasting. Cannot be true when verify_shape is true.\n\n Returns:\n A `TensorProto`. Depending on the type, it may contain data in the\n \"tensor_content\" attribute, which is not directly useful to Python programs.\n To access the values you should convert the proto back to a numpy ndarray\n with `tf.make_ndarray(proto)`.\n\n If `values` is a `TensorProto`, it is immediately returned; `dtype` and\n `shape` are ignored.\n\n Raises:\n TypeError: if unsupported types are provided.\n ValueError: if arguments have inappropriate values or if verify_shape is\n True and shape of values is not equals to a shape from the argument.\n\n ", "desc": "Create a TensorProto.", "type": "API"}, {"name": "tf.map_fn", "docs": "Transforms `elems` by applying `fn` to each element unstacked on axis 0. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dtype)`. They will be removed in a future version.\nInstructions for updating:\nUse fn_output_signature instead\n\nSee also `tf.scan`.\n\n`map_fn` unstacks `elems` on axis 0 to obtain a sequence of elements;\ncalls `fn` to transform each element; and then stacks the transformed\nvalues back together.\n\n#### Mapping functions with single-Tensor inputs and outputs\n\nIf `elems` is a single tensor and `fn`'s signature is `tf.Tensor->tf.Tensor`,\nthen `map_fn(fn, elems)` is equivalent to\n`tf.stack([fn(elem) for elem in tf.unstack(elems)])`. E.g.:\n\n>>> tf.map_fn(fn=lambda t: tf.range(t, t + 3), elems=tf.constant([3, 5, 2]))\n\n\n`map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape`.\n\n#### Mapping functions with multi-arity inputs and outputs\n\n`map_fn` also supports functions with multi-arity inputs and outputs:\n\n* If `elems` is a tuple (or nested structure) of tensors, then those tensors\n must all have the same outer-dimension size (`num_elems`); and `fn` is\n used to transform each tuple (or structure) of corresponding slices from\n `elems`. E.g., if `elems` is a tuple `(t1, t2, t3)`, then `fn` is used to\n transform each tuple of slices `(t1[i], t2[i], t3[i])`\n (where `0 <= i < num_elems`).\n\n* If `fn` returns a tuple (or nested structure) of tensors, then the\n result is formed by stacking corresponding elements from those structures.\n\n#### Specifying `fn`'s output signature\n\nIf `fn`'s input and output signatures are different, then the output\nsignature must be specified using `fn_output_signature`. (The input and\noutput signatures are differ if their structures, dtypes, or tensor types do\nnot match). E.g.:\n\n>>> tf.map_fn(fn=tf.strings.length, # input & output have different dtypes\n... elems=tf.constant([\"hello\", \"moon\"]),\n... fn_output_signature=tf.int32)\n\n>>> tf.map_fn(fn=tf.strings.join, # input & output have different structures\n... elems=[tf.constant(['The', 'A']), tf.constant(['Dog', 'Cat'])],\n... fn_output_signature=tf.string)\n\n\n`fn_output_signature` can be specified using any of the following:\n\n* A `tf.DType` or `tf.TensorSpec` (to describe a `tf.Tensor`)\n* A `tf.RaggedTensorSpec` (to describe a `tf.RaggedTensor`)\n* A `tf.SparseTensorSpec` (to describe a `tf.sparse.SparseTensor`)\n* A (possibly nested) tuple, list, or dict containing the above types.\n\n#### RaggedTensors\n\n`map_fn` supports `tf.RaggedTensor` inputs and outputs. In particular:\n\n* If `elems` is a `RaggedTensor`, then `fn` will be called with each\n row of that ragged tensor.\n * If `elems` has only one ragged dimension, then the values passed to\n `fn` will be `tf.Tensor`s.\n * If `elems` has multiple ragged dimensions, then the values passed to\n `fn` will be `tf.RaggedTensor`s with one fewer ragged dimension.\n\n* If the result of `map_fn` should be a `RaggedTensor`, then use a\n `tf.RaggedTensorSpec` to specify `fn_output_signature`.\n * If `fn` returns `tf.Tensor`s with varying sizes, then use a\n `tf.RaggedTensorSpec` with `ragged_rank=0` to combine them into a\n single ragged tensor (which will have ragged_rank=1).\n * If `fn` returns `tf.RaggedTensor`s, then use a `tf.RaggedTensorSpec`\n with the same `ragged_rank`.\n\n>>> # Example: RaggedTensor input\n>>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n>>> tf.map_fn(tf.reduce_sum, rt, fn_output_signature=tf.int32)\n\n\n>>> # Example: RaggedTensor output\n>>> elems = tf.constant([3, 5, 0, 2])\n>>> tf.map_fn(tf.range, elems,\n... fn_output_signature=tf.RaggedTensorSpec(shape=[None],\n... dtype=tf.int32))\n\n\nNote: `map_fn` should only be used if you need to map a function over the\n*rows* of a `RaggedTensor`. If you wish to map a function over the\nindividual values, then you should use:\n\n* `tf.ragged.map_flat_values(fn, rt)`\n (if fn is expressible as TensorFlow ops)\n* `rt.with_flat_values(map_fn(fn, rt.flat_values))`\n (otherwise)\n\nE.g.:\n\n>>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n>>> tf.ragged.map_flat_values(lambda x: x + 2, rt)\n\n\n#### SparseTensors\n\n`map_fn` supports `tf.sparse.SparseTensor` inputs and outputs. In particular:\n\n* If `elems` is a `SparseTensor`, then `fn` will be called with each row\n of that sparse tensor. In particular, the value passed to `fn` will be a\n `tf.sparse.SparseTensor` with one fewer dimension than `elems`.\n\n* If the result of `map_fn` should be a `SparseTensor`, then use a\n `tf.SparseTensorSpec` to specify `fn_output_signature`. The individual\n `SparseTensor`s returned by `fn` will be stacked into a single\n `SparseTensor` with one more dimension.\n\n>>> # Example: SparseTensor input\n>>> st = tf.sparse.SparseTensor([[0, 0], [2, 0], [2, 1]], [2, 3, 4], [4, 4])\n>>> tf.map_fn(tf.sparse.reduce_sum, st, fn_output_signature=tf.int32)\n\n\n>>> # Example: SparseTensor output\n>>> tf.sparse.to_dense(\n... tf.map_fn(tf.sparse.eye, tf.constant([2, 3]),\n... fn_output_signature=tf.SparseTensorSpec(None, tf.float32)))\n\n\nNote: `map_fn` should only be used if you need to map a function over the\n*rows* of a `SparseTensor`. If you wish to map a function over the nonzero\nvalues, then you should use:\n\n* If the function is expressible as TensorFlow ops, use:\n ```python\n tf.sparse.SparseTensor(st.indices, fn(st.values), st.dense_shape)\n ```\n* Otherwise, use:\n ```python\n tf.sparse.SparseTensor(st.indices, tf.map_fn(fn, st.values),\n st.dense_shape)\n ```\n\n#### `map_fn` vs. vectorized operations\n\n`map_fn` will apply the operations used by `fn` to each element of `elems`,\nresulting in `O(elems.shape[0])` total operations. This is somewhat\nmitigated by the fact that `map_fn` can process elements in parallel.\nHowever, a transform expressed using `map_fn` is still typically less\nefficient than an equivalent transform expressed using vectorized operations.\n\n`map_fn` should typically only be used if one of the following is true:\n\n* It is difficult or expensive to express the desired transform with\n vectorized operations.\n* `fn` creates large intermediate values, so an equivalent vectorized\n transform would take too much memory.\n* Processing elements in parallel is more efficient than an equivalent\n vectorized transform.\n* Efficiency of the transform is not critical, and using `map_fn` is\n more readable.\n\nE.g., the example given above that maps `fn=lambda t: tf.range(t, t + 3)`\nacross `elems` could be rewritten more efficiently using vectorized ops:\n\n>>> elems = tf.constant([3, 5, 2])\n>>> tf.range(3) + tf.expand_dims(elems, 1)\n\n\nIn some cases, `tf.vectorized_map` can be used to automatically convert a\nfunction to a vectorized equivalent.\n\n#### Eager execution\n\nWhen executing eagerly, `map_fn` does not execute in parallel even if\n`parallel_iterations` is set to a value > 1. You can still get the\nperformance benefits of running a function in parallel by using the\n`tf.function` decorator:\n\n>>> fn=lambda t: tf.range(t, t + 3)\n>>> @tf.function\n... def func(elems):\n... return tf.map_fn(fn, elems, parallel_iterations=3)\n>>> func(tf.constant([3, 5, 2]))\n\n\n\nNote: if you use the `tf.function` decorator, any non-TensorFlow Python\ncode that you may have written in your function won't get executed. See\n`tf.function` for more details. The recommendation would be to debug without\n`tf.function` but switch to it to get performance benefits of running `map_fn`\nin parallel.\n\nArgs:\n fn: The callable to be performed. It accepts one argument, which will have\n the same (possibly nested) structure as `elems`. Its output must have the\n same structure as `fn_output_signature` if one is provided; otherwise it\n must have the same structure as `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unstacked along their first dimension. `fn` will be applied to the\n nested sequence of the resulting slices. `elems` may include ragged and\n sparse tensors. `elems` must consist of at least one tensor.\n dtype: Deprecated: Equivalent to `fn_output_signature`.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel. When graph building, the default value is 10. While executing\n eagerly, the default value is set to 1.\n back_prop: (optional) Deprecated: prefer using `tf.stop_gradient` instead. False disables support for back propagation.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n infer_shape: (optional) False disables tests for consistent output shapes.\n name: (optional) Name prefix for the returned tensors.\n fn_output_signature: The output signature of `fn`. Must be specified if\n `fn`'s input and output signatures are different (i.e., if their\n structures, dtypes, or tensor types do not match).\n `fn_output_signature` can be specified using any of the following:\n\n * A `tf.DType` or `tf.TensorSpec` (to describe a `tf.Tensor`)\n * A `tf.RaggedTensorSpec` (to describe a `tf.RaggedTensor`)\n * A `tf.SparseTensorSpec` (to describe a `tf.sparse.SparseTensor`)\n * A (possibly nested) tuple, list, or dict containing the above types.\n\nReturns:\n A tensor or (possibly nested) sequence of tensors. Each tensor stacks the\n results of applying `fn` to tensors unstacked from `elems` along the first\n dimension, from first to last. The result may include ragged and sparse\n tensors.\n\nRaises:\n TypeError: if `fn` is not callable or the structure of the output of\n `fn` and `fn_output_signature` do not match.\n ValueError: if the lengths of the output of `fn` and `fn_output_signature`\n do not match, or if the `elems` does not contain any tensor.\n\nExamples:\n\n >>> elems = np.array([1, 2, 3, 4, 5, 6])\n >>> tf.map_fn(lambda x: x * x, elems)\n \n\n >>> elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))\n >>> tf.map_fn(lambda x: x[0] * x[1], elems, fn_output_signature=tf.int64)\n \n\n >>> elems = np.array([1, 2, 3])\n >>> tf.map_fn(lambda x: (x, -x), elems,\n ... fn_output_signature=(tf.int64, tf.int64))\n (,\n )", "desc": "Transforms `elems` by applying `fn` to each element unstacked on axis 0. (deprecated arguments)", "type": "API"}, {"name": "tf.math", "docs": "Math Operations.\n\nNote: Functions taking `Tensor` arguments can also take anything accepted by\n`tf.convert_to_tensor`.\n\nNote: Elementwise binary operations in TensorFlow follow [numpy-style\nbroadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).\n\nTensorFlow provides a variety of math functions including:\n\n* Basic arithmetic operators and trigonometric functions.\n* Special math functions (like: `tf.math.igamma` and `tf.math.zeta`)\n* Complex number functions (like: `tf.math.imag` and `tf.math.angle`)\n* Reductions and scans (like: `tf.math.reduce_mean` and `tf.math.cumsum`)\n* Segment functions (like: `tf.math.segment_sum`)\n\nSee: `tf.linalg` for matrix and tensor functions.\n\n\n\n## About Segmentation\n\nTensorFlow provides several operations that you can use to perform common\nmath computations on tensor segments.\nHere a segmentation is a partitioning of a tensor along\nthe first dimension, i.e. it defines a mapping from the first dimension onto\n`segment_ids`. The `segment_ids` tensor should be the size of\nthe first dimension, `d0`, with consecutive IDs in the range `0` to `k`,\nwhere `k [[0 0 0 0]\n# [5 6 7 8]]\n```\n\nThe standard `segment_*` functions assert that the segment indices are sorted.\nIf you have unsorted indices use the equivalent `unsorted_segment_` function.\nThese functions take an additional argument `num_segments` so that the output\ntensor can be efficiently allocated.\n\n``` python\nc = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\ntf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)\n# ==> [[ 6, 8, 10, 12],\n# [-1, -2, -3, -4]]\n```\n\n\n", "desc": "Math Operations.", "type": "API"}, {"name": "tf.math.abs", "docs": "Computes the absolute value of a tensor.\n\n Given a tensor of integer or floating-point values, this operation returns a\n tensor of the same type, where each element contains the absolute value of the\n corresponding element in the input.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of type\n `float32` or `float64` that is the absolute value of each element in `x`. For\n a complex number \\\\(a + bj\\\\), its absolute value is computed as\n \\\\(\\sqrt{a^2 + b^2}\\\\).\n\n For example:\n\n >>> # real number\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.abs(x)\n \n\n >>> # complex number\n >>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])\n >>> tf.abs(x)\n \n\n Args:\n x: A `Tensor` or `SparseTensor` of type `float16`, `float32`, `float64`,\n `int32`, `int64`, `complex64` or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor` of the same size, type and sparsity as `x`,\n with absolute values. Note, for `complex64` or `complex128` input, the\n returned `Tensor` will be of type `float32` or `float64`, respectively.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)`", "desc": "Computes the absolute value of a tensor.", "type": "API"}, {"name": "tf.math.accumulate_n", "docs": "Returns the element-wise sum of a list of tensors.\n\n Optionally, pass `shape` and `tensor_dtype` for shape and type checking,\n otherwise, these are inferred.\n\n `accumulate_n` performs the same operation as `tf.math.add_n`.\n\n For example:\n\n ```python\n a = tf.constant([[1, 2], [3, 4]])\n b = tf.constant([[5, 0], [0, 6]])\n tf.math.accumulate_n([a, b, a]) # [[7, 4], [6, 14]]\n\n # Explicitly pass shape and type\n tf.math.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)\n # [[7, 4],\n # [6, 14]]\n ```\n\n Args:\n inputs: A list of `Tensor` objects, each with same shape and type.\n shape: Expected shape of elements of `inputs` (optional). Also controls the\n output shape of this op, which may affect type inference in other ops. A\n value of `None` means \"infer the input shape from the shapes in `inputs`\".\n tensor_dtype: Expected data type of `inputs` (optional). A value of `None`\n means \"infer the input dtype from `inputs[0]`\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Returns the element-wise sum of a list of tensors.", "type": "API"}, {"name": "tf.math.acos", "docs": "Computes acos of x element-wise.\n\n Provided an input tensor, the `tf.math.acos` operation\n returns the inverse cosine of each element of the tensor.\n If `y = tf.math.cos(x)` then, `x = tf.math.acos(y)`.\n\n Input range is `[-1, 1]` and the output has a range of `[0, pi]`.\n\n For example:\n\n >>> x = tf.constant([1.0, -0.5, 3.4, 0.2, 0.0, -2], dtype = tf.float32)\n >>> tf.math.acos(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Computes acos of x element-wise.", "type": "API"}, {"name": "tf.math.acosh", "docs": "Computes inverse hyperbolic cosine of x element-wise.\n\n Given an input tensor, the function computes inverse hyperbolic cosine of every element.\n Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.\n\n ```python\n x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.math.add", "docs": "Returns x + y element-wise.\n\n Example usages below.\n\n Add a scalar and a list:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.add(x, y)\n \n\n Note that binary `+` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x + y\n \n\n Add a tensor and a list of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([1, 2, 3, 4, 5])\n >>> tf.add(x, y)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**7 + 1, 2**7 + 2]\n >>> tf.add(x, y)\n \n\n When adding two input values of different shapes, `Add` follows NumPy\n broadcasting rules. The two input array shapes are compared element-wise.\n Starting with the trailing dimensions, the two dimensions either have to be\n equal or one of them needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(1, 2, 1, 3)\n >>> y = np.ones(6).reshape(2, 1, 3, 1)\n >>> tf.add(x, y).shape.as_list()\n [2, 2, 3, 3]\n\n Another example with two arrays of different dimension.\n\n >>> x = np.ones([1, 2, 1, 4])\n >>> y = np.ones([3, 4])\n >>> tf.add(x, y).shape.as_list()\n [1, 2, 3, 4]\n\n The reduction version of this elementwise operation is `tf.math.reduce_sum`\n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: bfloat16, half,\n float32, float64, uint8, int8, int16, int32, int64, complex64, complex128,\n string.\n y: A `tf.Tensor`. Must have the same type as x.\n name: A name for the operation (optional)\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.math.add_n", "docs": "Adds all input tensors element-wise.\n\n `tf.math.add_n` performs the same operation as `tf.math.accumulate_n`.\n\n This op does not [broadcast](\n https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)\n its inputs. If you need broadcasting, use `tf.math.add` (or the `+` operator)\n instead.\n\n For example:\n\n >>> a = tf.constant([[3, 5], [4, 8]])\n >>> b = tf.constant([[1, 6], [2, 9]])\n >>> tf.math.add_n([a, b, a])\n \n\n Args:\n inputs: A list of `tf.Tensor` or `tf.IndexedSlices` objects, each with the\n same shape and type. `tf.IndexedSlices` objects will be converted into\n dense tensors prior to adding.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same shape and type as the elements of `inputs`.\n\n Raises:\n ValueError: If `inputs` don't all have same shape and dtype or the shape\n cannot be inferred.\n ", "desc": "Adds all input tensors element-wise.", "type": "API"}, {"name": "tf.math.angle", "docs": "Returns the element-wise argument of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the argument of each element in `input` considered as a complex number.\n\n The elements in `input` are considered to be complex numbers of the form\n \\\\(a + bj\\\\), where *a* is the real part and *b* is the imaginary part.\n If `input` is real then *b* is zero by definition.\n\n The argument returned by this function is of the form \\\\(atan2(b, a)\\\\).\n If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```\n input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)\n tf.math.angle(input).numpy()\n # ==> array([2.0131705, 1.056345 ], dtype=float32)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the element-wise argument of a complex (or real) tensor.", "type": "API"}, {"name": "tf.math.argmax", "docs": "Returns the index with the largest value across axes of a tensor.\n\n In case of identity returns the smallest index.\n\n For example:\n\n >>> A = tf.constant([2, 20, 30, 3, 6])\n >>> tf.math.argmax(A) # A[2] is maximum in tensor A\n \n >>> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8],\n ... [14, 45, 23, 5, 27]])\n >>> tf.math.argmax(B, 0)\n \n >>> tf.math.argmax(B, 1)\n \n >>> C = tf.constant([0, 0, 0, 0])\n >>> tf.math.argmax(C) # Returns smallest index in case of ties\n \n\n Args:\n input: A `Tensor`.\n axis: An integer, the axis to reduce across. Default to 0.\n output_type: An optional output dtype (`tf.int32` or `tf.int64`). Defaults\n to `tf.int64`.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the largest value across axes of a tensor.", "type": "API"}, {"name": "tf.math.argmin", "docs": "Returns the index with the smallest value across axes of a tensor.\n\n Returns the smallest index in case of ties.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`,\n `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`,\n `uint64`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `-rank(input), rank(input))`.\n Describes which axis of the input Tensor to reduce across. For vectors,\n use axis = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to\n `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n ", "desc": "Returns the index with the smallest value across axes of a tensor.", "type": "API"}, {"name": "tf.math.asin", "docs": "Computes the trignometric inverse sine of x element-wise.\n\n The `tf.math.asin` operation returns the inverse of `tf.math.sin`, such that\n if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`.\n\n **Note**: The output of `tf.math.asin` will lie within the invertible range\n of sine, i.e [-pi/2, pi/2].\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.sin(x) # [0.8659266, 0.7068252]\n\n tf.math.asin(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse sine of x element-wise.", "type": "API"}, {"name": "tf.math.asinh", "docs": "Computes inverse hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic sine\n for every element in the tensor. Both input and output has a range of\n `[-inf, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.math.atan", "docs": "Computes the trignometric inverse tangent of x element-wise.\n\n The `tf.math.atan` operation returns the inverse of `tf.math.tan`, such that\n if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`.\n\n **Note**: The output of `tf.math.atan` will lie within the invertible range\n of tan, i.e (-pi/2, pi/2).\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.tan(x) # [1.731261, 0.99920404]\n\n tf.math.atan(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse tangent of x element-wise.", "type": "API"}, {"name": "tf.math.atan2", "docs": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.\n\n This is the angle \\\\( \\theta \\in [-\\pi, \\pi] \\\\) such that\n \\\\[ x = r \\cos(\\theta) \\\\]\n and\n \\\\[ y = r \\sin(\\theta) \\\\]\n where \\\\(r = \\sqrt{x^2 + y^2} \\\\).\n\n For example:\n\n >>> x = [1., 1.]\n >>> y = [1., -1.]\n >>> print((tf.math.atan2(y,x) * (180 / np.pi)).numpy())\n [ 45. -45.]\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.", "type": "API"}, {"name": "tf.math.atanh", "docs": "Computes inverse hyperbolic tangent of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic tangent\n for every element in the tensor. Input range is `[-1,1]` and output range is\n `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the\n input is `1`, output will be `inf`. Values outside the range will have\n `nan` as output.\n\n ```python\n x = tf.constant([-float(\"inf\"), -1, -0.5, 1, 0, 0.5, 10, float(\"inf\")])\n tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic tangent of x element-wise.", "type": "API"}, {"name": "tf.math.bessel_i0", "docs": "Computes the Bessel i0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `i0e(x)` instead.\n\n >>> tf.math.special.bessel_i0([-1., -0.5, 0.5, 1.]).numpy()\n array([1.26606588, 1.06348337, 1.06348337, 1.26606588], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0\n @end_compatibility\n ", "desc": "Computes the Bessel i0 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.bessel_i0e", "docs": "Computes the Bessel i0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_i0e([-1., -0.5, 0.5, 1.]).numpy()\n array([0.46575961, 0.64503527, 0.64503527, 0.46575961], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i0e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i0e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.bessel_i1", "docs": "Computes the Bessel i1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `i1e(x)` instead.\n\n >>> tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1\n @end_compatibility\n ", "desc": "Computes the Bessel i1 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.bessel_i1e", "docs": "Computes the Bessel i1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_i1e([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.20791042, -0.15642083, 0.15642083, 0.20791042], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i1e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i1e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.betainc", "docs": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).\n\n The regularized incomplete beta integral is defined as:\n\n\n \\\\(I_x(a, b) = \\frac{B(x; a, b)}{B(a, b)}\\\\)\n\n where\n\n\n \\\\(B(x; a, b) = \\int_0^x t^{a-1} (1 - t)^{b-1} dt\\\\)\n\n\n is the incomplete beta function and \\\\(B(a, b)\\\\) is the *complete*\n beta function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n b: A `Tensor`. Must have the same type as `a`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).", "type": "API"}, {"name": "tf.math.bincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector with length\n `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise.\n If `weights` are non-None, then index `i` of the output stores the sum of the\n value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n ```python\n values = tf.constant([1,1,2,3,2,4,4,5])\n tf.math.bincount(values) #[0 2 2 1 2 1]\n ```\n Vector length = Maximum element in vector `values` is 5. Adding 1, which is 6\n will be the vector length.\n\n Each bin value in the output indicates number of occurrences of the particular\n index. Here, index 1 in output has a value 2. This indicates value 1 occurs\n two times in `values`.\n\n ```python\n values = tf.constant([1,1,2,3,2,4,4,5])\n weights = tf.constant([1,5,0,1,0,5,4,5])\n tf.math.bincount(values, weights=weights) #[0 6 0 1 9 5]\n ```\n Bin will be incremented by the corresponding weight instead of 1.\n Here, index 1 in output has a value 6. This is the summation of weights\n corresponding to the value in `values`.\n\n **Bin-counting on a certain axis**\n\n This example takes a 2 dimensional input and returns a `Tensor` with\n bincounting on each sample.\n\n >>> data = np.array([[1, 2, 3, 0], [0, 0, 1, 2]], dtype=np.int32)\n >>> tf.math.bincount(data, axis=-1)\n \n\n\n **Bin-counting with binary_output**\n\n This example gives binary output instead of counting the occurrence.\n\n >>> data = np.array([[1, 2, 3, 0], [0, 0, 1, 2]], dtype=np.int32)\n >>> tf.math.bincount(data, axis=-1, binary_output=True)\n \n\n Args:\n arr: A Tensor, RaggedTensor, or SparseTensor whose values should be counted.\n These tensors must have a rank of 2 if `axis=-1`.\n weights: If non-None, must be the same shape as arr. For each value in\n `arr`, the bin will be incremented by the corresponding weight instead of\n 1.\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `arr` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n dtype: If `weights` is None, determines the type of the output bins.\n name: A name scope for the associated operations (optional).\n axis: The axis to slice over. Axes at and below `axis` will be flattened\n before bin counting. Currently, only `0`, and `-1` are supported. If None,\n all axes will be flattened (identical to passing `0`).\n binary_output: If True, this op will output 1 instead of the number of times\n a token appears (equivalent to one_hot + reduce_any instead of one_hot +\n reduce_add). Defaults to False.\n\n Returns:\n A vector with the same dtype as `weights` or the given `dtype`. The bin\n values.\n\n Raises:\n `InvalidArgumentError` if negative values are provided as an input.\n\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.math.ceil", "docs": "Return the ceiling of the input, element-wise.\n\n For example:\n\n >>> tf.math.ceil([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`. `int32`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.ceil\n @end_compatibility\n ", "desc": "Return the ceiling of the input, element-wise.", "type": "API"}, {"name": "tf.math.confusion_matrix", "docs": "Computes the confusion matrix from predictions and labels.\n\n The matrix columns represent the prediction labels and the rows represent the\n real labels. The confusion matrix is always a 2-D array of shape `[n, n]`,\n where `n` is the number of valid labels for a given classification task. Both\n prediction and labels must be 1-D arrays of the same shape in order for this\n function to work.\n\n If `num_classes` is `None`, then `num_classes` will be set to one plus the\n maximum value in either predictions or labels. Class labels are expected to\n start at 0. For example, if `num_classes` is 3, then the possible labels\n would be `[0, 1, 2]`.\n\n If `weights` is not `None`, then each prediction contributes its\n corresponding weight to the total value of the confusion matrix cell.\n\n For example:\n\n ```python\n tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>\n [[0 0 0 0 0]\n [0 0 1 0 0]\n [0 0 1 0 0]\n [0 0 0 0 0]\n [0 0 0 0 1]]\n ```\n\n Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`,\n resulting in a 5x5 confusion matrix.\n\n Args:\n labels: 1-D `Tensor` of real labels for the classification task.\n predictions: 1-D `Tensor` of predictions for a given classification.\n num_classes: The possible number of labels the classification task can\n have. If this value is not provided, it will be calculated\n using both predictions and labels array.\n weights: An optional `Tensor` whose shape matches `predictions`.\n dtype: Data type of the confusion matrix.\n name: Scope name.\n\n Returns:\n A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion\n matrix, where `n` is the number of possible labels in the classification\n task.\n\n Raises:\n ValueError: If both predictions and labels are not 1-D vectors and have\n mismatched shapes, or if `weights` is not `None` and its shape doesn't\n match `predictions`.\n ", "desc": "Computes the confusion matrix from predictions and labels.", "type": "API"}, {"name": "tf.math.conj", "docs": "Returns the complex conjugate of a complex number.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of\n complex numbers that are the complex conjugate of each element in `x`. The\n complex numbers in `x` must be of the form \\\\(a + bj\\\\), where `a` is the\n real part and `b` is the imaginary part.\n\n The complex conjugate returned by this operation is of the form \\\\(a - bj\\\\).\n\n For example:\n\n >>> x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n >>> tf.math.conj(x)\n \n\n If `x` is real, it is returned unchanged.\n\n For example:\n\n >>> x = tf.constant([-2.25, 3.25])\n >>> tf.math.conj(x)\n \n\n Args:\n x: `Tensor` to conjugate. Must have numeric or variant type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that is the conjugate of `x` (with the same type).\n\n Raises:\n TypeError: If `x` is not a numeric tensor.\n\n @compatibility(numpy)\n Equivalent to numpy.conj.\n @end_compatibility\n ", "desc": "Returns the complex conjugate of a complex number.", "type": "API"}, {"name": "tf.math.cos", "docs": "Computes cos of x element-wise.\n\n Given an input tensor, this function computes cosine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes cos of x element-wise.", "type": "API"}, {"name": "tf.math.cosh", "docs": "Computes hyperbolic cosine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic cosine of every\n element in the tensor. Input range is `[-inf, inf]` and output range\n is `[1, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.math.count_nonzero", "docs": "Computes number of nonzero elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n entry in `axis`. If `keepdims` is true, the reduced dimensions\n are retained with length 1.\n\n If `axis` has no entries, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n **NOTE** Floating point comparison to zero is done by exact floating point\n equality check. Small values are **not** rounded to zero for purposes of\n the nonzero check.\n\n For example:\n\n ```python\n x = tf.constant([[0, 1, 0], [1, 1, 0]])\n tf.math.count_nonzero(x) # 3\n tf.math.count_nonzero(x, 0) # [1, 2, 0]\n tf.math.count_nonzero(x, 1) # [1, 2]\n tf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]]\n tf.math.count_nonzero(x, [0, 1]) # 3\n ```\n\n **NOTE** Strings are compared against zero-length empty string `\"\"`. Any\n string with a size greater than zero is already considered as nonzero.\n\n For example:\n ```python\n x = tf.constant([\"\", \"a\", \" \", \"b\", \"\"])\n tf.math.count_nonzero(x) # 3, with \"a\", \" \", and \"b\" as nonzero strings.\n ```\n\n Args:\n input: The tensor to reduce. Should be of numeric type, `bool`, or `string`.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input), rank(input))`.\n keepdims: If true, retains reduced dimensions with length 1.\n dtype: The output dtype; defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor (number of nonzero values).\n ", "desc": "Computes number of nonzero elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.cumprod", "docs": "Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the\n first element of the input is identical to the first element of the output:\n\n ```python\n tf.math.cumprod([a, b, c]) # [a, a * b, a * b * c]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumprod is\n performed\n instead:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True) # [1, a, a * b]\n ```\n\n By setting the `reverse` kwarg to `True`, the cumprod is performed in the\n opposite direction:\n\n ```python\n tf.math.cumprod([a, b, c], reverse=True) # [a * b * c, b * c, c]\n ```\n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n ```python\n tf.math.cumprod([a, b, c], exclusive=True, reverse=True) # [b * c, c, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumprod.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative product of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.math.cumsum", "docs": "Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:\n For example:\n\n >>> # tf.cumsum([a, b, c]) # [a, a + b, a + b + c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x)\n \n\n >>> # using varying `axis` values\n >>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]])\n >>> tf.cumsum(y, axis=0)\n \n >>> tf.cumsum(y, axis=1)\n \n\n By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed\n instead:\n\n >>> # tf.cumsum([a, b, c], exclusive=True) => [0, a, a + b]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True)\n \n\n By setting the `reverse` kwarg to `True`, the cumsum is performed in the\n opposite direction:\n\n >>> # tf.cumsum([a, b, c], reverse=True) # [a + b + c, b + c, c]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, reverse=True)\n \n\n This is more efficient than using separate `tf.reverse` ops.\n The `reverse` and `exclusive` kwargs can also be combined:\n\n >>> # tf.cumsum([a, b, c], exclusive=True, reverse=True) # [b + c, c, 0]\n >>> x = tf.constant([2, 4, 6, 8])\n >>> tf.cumsum(x, exclusive=True, reverse=True)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumsum.\n reverse: A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative sum of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.math.cumulative_logsumexp", "docs": "Compute the cumulative log-sum-exp of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumulative log-sum-exp, which means\n that the first element of the input is identical to the first element of\n the output.\n\n This operation is significantly more numerically stable than the equivalent\n tensorflow operation `tf.math.log(tf.math.cumsum(tf.math.exp(x)))`, although\n computes the same result given infinite numerical precision. However, note\n that in some cases, it may be less stable than `tf.math.reduce_logsumexp`\n for a given element, as it applies the \"log-sum-exp trick\" in a different\n way.\n\n More precisely, where `tf.math.reduce_logsumexp` uses the following trick:\n\n ```\n log(sum(exp(x))) == log(sum(exp(x - max(x)))) + max(x)\n ```\n\n it cannot be directly used here as there is no fast way of applying it\n to each prefix `x[:i]`. Instead, this function implements a prefix\n scan using pairwise log-add-exp, which is a commutative and associative\n (up to floating point precision) operator:\n\n ```\n log_add_exp(x, y) = log(exp(x) + exp(y))\n = log(1 + exp(min(x, y) - max(x, y))) + max(x, y)\n ```\n\n However, reducing using the above operator leads to a different computation\n tree (logs are taken repeatedly instead of only at the end), and the maximum\n is only computed pairwise instead of over the entire prefix. In general, this\n leads to a different and slightly less precise computation.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float16`, `float32`,\n `float64`.\n axis: A `Tensor` of type `int32` or `int64` (default: 0). Must be in the\n range `[-rank(x), rank(x))`.\n exclusive: If `True`, perform exclusive cumulative log-sum-exp.\n reverse: If `True`, performs the cumulative log-sum-exp in the reverse\n direction.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same shape and type as `x`.\n ", "desc": "Compute the cumulative log-sum-exp of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.math.digamma", "docs": "Computes Psi, the derivative of Lgamma (the log of the absolute value of\n\n `Gamma(x)`), element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes Psi, the derivative of Lgamma (the log of the absolute value of", "type": "API"}, {"name": "tf.math.divide", "docs": "Computes Python style division of `x` by `y`.\n\n For example:\n\n >>> x = tf.constant([16, 12, 11])\n >>> y = tf.constant([4, 6, 2])\n >>> tf.divide(x,y)\n \n\n Args:\n x: A `Tensor`\n y: A `Tensor`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with same shape as input\n ", "desc": "Computes Python style division of `x` by `y`.", "type": "API"}, {"name": "tf.math.divide_no_nan", "docs": "Computes a safe divide which returns 0 if `y` (denominator) is zero.\n\n For example:\n\n >>> tf.constant(3.0) / 0.0\n \n >>> tf.math.divide_no_nan(3.0, 0.0)\n \n\n Note that 0 is returned if `y` is 0 even if `x` is nonfinite:\n\n >>> tf.math.divide_no_nan(np.nan, 0.0)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for the operation (optional).\n\n Returns:\n The element-wise value of the x divided by y.\n ", "desc": "Computes a safe divide which returns 0 if `y` (denominator) is zero.", "type": "API"}, {"name": "tf.math.equal", "docs": "Returns the truth value of (x == y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise equality comparison, returning a Tensor of\n boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x == y) element-wise.", "type": "API"}, {"name": "tf.math.erf", "docs": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.\n\n For example:\n\n >>> tf.math.erf([[1.0, 2.0, 3.0], [0.0, -1.0, -2.0]])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.erf(x.values, ...), x.dense_shape)`", "desc": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.", "type": "API"}, {"name": "tf.math.erfc", "docs": "Computes the complementary error function of `x` element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the complementary error function of `x` element-wise.", "type": "API"}, {"name": "tf.math.erfcinv", "docs": "Computes the inverse of complementary error function.\n\n Given `x`, compute the inverse complementary error function of `x`.\n This function is the inverse of `tf.math.erfc`, and is defined on\n `[0, 2]`.\n\n >>> tf.math.erfcinv([0., 0.2, 1., 1.5, 2.])\n \n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse complementary error function of `x`.\n\n @compatibility(numpy)\n Equivalent to scipy.special.erfcinv\n @end_compatibility\n ", "desc": "Computes the inverse of complementary error function.", "type": "API"}, {"name": "tf.math.erfinv", "docs": "Compute inverse error function.\n\n Given `x`, compute the inverse error function of `x`. This function\n is the inverse of `tf.math.erf`.\n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse error function of `x`.\n ", "desc": "Compute inverse error function.", "type": "API"}, {"name": "tf.math.exp", "docs": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).\n\n This function computes the exponential of the input tensor element-wise.\n i.e. `math.exp(x)` or \\\\(e^x\\\\), where `x` is the input tensor.\n \\\\(e\\\\) denotes Euler's number and is approximately equal to 2.718281.\n Output is positive for any real input.\n\n >>> x = tf.constant(2.0)\n >>> tf.math.exp(x)\n \n\n >>> x = tf.constant([2.0, 8.0])\n >>> tf.math.exp(x)\n \n\n For complex numbers, the exponential value is calculated as\n $$\n e^{x+iy} = {e^x} {e^{iy}} = {e^x} ({\\cos (y) + i \\sin (y)})\n $$\n\n For `1+1j` the value would be computed as:\n $$\n e^1 (\\cos (1) + i \\sin (1)) = 2.7182817 \\times (0.5403023+0.84147096j)\n $$\n\n >>> x = tf.constant(1 + 1j)\n >>> tf.math.exp(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n\n @compatibility(numpy)\n Equivalent to np.exp\n @end_compatibility\n ", "desc": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).", "type": "API"}, {"name": "tf.math.expm1", "docs": "Computes `exp(x) - 1` element-wise.\n\n i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor.\n `e` denotes Euler's number and is approximately equal to 2.718281.\n\n ```python\n x = tf.constant(2.0)\n tf.math.expm1(x) ==> 6.389056\n\n x = tf.constant([2.0, 8.0])\n tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)\n\n x = tf.constant(1 + 1j)\n tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes `exp(x) - 1` element-wise.", "type": "API"}, {"name": "tf.math.floor", "docs": "Returns element-wise largest integer not greater than x.\n\n Both input range is `(-inf, inf)` and the\n output range consists of all integer values.\n\n For example:\n\n >>> x = tf.constant([1.3324, -1.5, 5.555, -2.532, 0.99, float(\"inf\")])\n >>> tf.floor(x).numpy()\n array([ 1., -2., 5., -3., 0., inf], dtype=float32)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as x.\n ", "desc": "Returns element-wise largest integer not greater than x.", "type": "API"}, {"name": "tf.math.floordiv", "docs": "Divides `x / y` elementwise, rounding toward the most negative integer.\n\n Mathematically, this is equivalent to floor(x / y). For example:\n floor(8.4 / 4.0) = floor(2.1) = 2.0\n floor(-8.4 / 4.0) = floor(-2.1) = -3.0\n This is equivalent to the '//' operator in Python 3.0 and above.\n\n Note: `x` and `y` must have the same type, and the result will have the same\n type as well.\n\n Args:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` rounded toward -infinity.\n\n Raises:\n TypeError: If the inputs are complex.\n ", "desc": "Divides `x / y` elementwise, rounding toward the most negative integer.", "type": "API"}, {"name": "tf.math.floormod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.math.greater", "docs": "Returns the truth value of (x > y) element-wise.\n\n *NOTE*: `math.greater` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 2, 5])\n tf.math.greater(x, y) ==> [False, True, True]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.greater(x, y) ==> [False, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x > y) element-wise.", "type": "API"}, {"name": "tf.math.greater_equal", "docs": "Returns the truth value of (x >= y) element-wise.\n\n *NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5, 2, 5, 10])\n tf.math.greater_equal(x, y) ==> [True, True, True, False]\n\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5])\n tf.math.greater_equal(x, y) ==> [True, False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x >= y) element-wise.", "type": "API"}, {"name": "tf.math.igamma", "docs": "Compute the lower regularized incomplete Gamma function `P(a, x)`.\n\n The lower regularized incomplete Gamma function is defined as:\n\n\n \\\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\\\)\n\n where\n\n \\\\(gamma(a, x) = \\\\int_{0}^{x} t^{a-1} exp(-t) dt\\\\)\n\n is the lower incomplete Gamma function.\n\n Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the lower regularized incomplete Gamma function `P(a, x)`.", "type": "API"}, {"name": "tf.math.igammac", "docs": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.\n\n The upper regularized incomplete Gamma function is defined as:\n\n \\\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\\\)\n\n where\n\n \\\\(Gamma(a, x) = \\int_{x}^{\\infty} t^{a-1} exp(-t) dt\\\\)\n\n is the upper incomplete Gamma function.\n\n Note, above `P(a, x)` (`Igamma`) is the lower regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.", "type": "API"}, {"name": "tf.math.imag", "docs": "Returns the imaginary part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the imaginary part of each element in `input` considered as a complex\n number. If `input` is real, a tensor of all zeros is returned.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.imag(x) # [4.75, 5.75]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float`, `double`,\n `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the imaginary part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.math.in_top_k", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is finite (not inf, -inf, or nan) and among\n the top `k` predictions among all predictions for example `i`. Note that the\n behavior of `InTopK` differs from the `TopK` op in its handling of ties; if\n multiple classes have the same prediction value and straddle the top-`k`\n boundary, all of those classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: An `int`. Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.math.invert_permutation", "docs": "Computes the inverse permutation of a tensor.\n\n This operation computes the inverse of an index permutation. It takes a 1-D\n integer tensor `x`, which represents the indices of a zero-based array, and\n swaps each value with its index position. In other words, for an output tensor\n `y` and an input tensor `x`, this operation computes the following:\n\n `y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`\n\n The values must include 0. There can be no duplicate values or negative values.\n\n For example:\n\n ```\n # tensor `x` is [3, 4, 0, 2, 1]\n invert_permutation(x) ==> [2, 4, 3, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the inverse permutation of a tensor.", "type": "API"}, {"name": "tf.math.is_finite", "docs": "Returns which elements of x are finite.\n\n @compatibility(numpy)\n Equivalent to np.isfinite\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])\n tf.math.is_finite(x) ==> [True, True, True, False, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are finite.", "type": "API"}, {"name": "tf.math.is_inf", "docs": "Returns which elements of x are Inf.\n\n @compatibility(numpy)\n Equivalent to np.isinf\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.inf, 6.8, np.inf])\n tf.math.is_inf(x) ==> [False, True, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are Inf.", "type": "API"}, {"name": "tf.math.is_nan", "docs": "Returns which elements of x are NaN.\n\n @compatibility(numpy)\n Equivalent to np.isnan\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])\n tf.math.is_nan(x) ==> [False, True, False, True, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are NaN.", "type": "API"}, {"name": "tf.math.is_non_decreasing", "docs": "Returns `True` if `x` is non-decreasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.\n If `x` has less than two elements, it is trivially non-decreasing.\n\n See also: `is_strictly_increasing`\n\n >>> x1 = tf.constant([1.0, 1.0, 3.0])\n >>> tf.math.is_non_decreasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_non_decreasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional). Defaults to \"is_non_decreasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is non-decreasing.", "type": "API"}, {"name": "tf.math.is_strictly_increasing", "docs": "Returns `True` if `x` is strictly increasing.\n\n Elements of `x` are compared in row-major order. The tensor `[x[0],...]`\n is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.\n If `x` has less than two elements, it is trivially strictly increasing.\n\n See also: `is_non_decreasing`\n\n >>> x1 = tf.constant([1.0, 2.0, 3.0])\n >>> tf.math.is_strictly_increasing(x1)\n \n >>> x2 = tf.constant([3.0, 1.0, 2.0])\n >>> tf.math.is_strictly_increasing(x2)\n \n\n Args:\n x: Numeric `Tensor`.\n name: A name for this operation (optional).\n Defaults to \"is_strictly_increasing\"\n\n Returns:\n Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.\n\n Raises:\n TypeError: if `x` is not a numeric tensor.\n ", "desc": "Returns `True` if `x` is strictly increasing.", "type": "API"}, {"name": "tf.math.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.math.lbeta", "docs": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.\n\n Given one-dimensional $z = [z_1,...,z_K]$, we define\n\n $$Beta(z) = \\frac{\\prod_j \\Gamma(z_j)}{\\Gamma(\\sum_j z_j)},$$\n\n where $\\Gamma$ is the gamma function.\n\n And for $n + 1$ dimensional $x$ with shape $[N_1, ..., N_n, K]$, we define\n\n $$lbeta(x)[i_1, ..., i_n] = \\log{|Beta(x[i_1, ..., i_n, :])|}.$$\n\n In other words, the last dimension is treated as the $z$ vector.\n\n Note that if $z = [u, v]$, then\n\n $$Beta(z) = \\frac{\\Gamma(u)\\Gamma(v)}{\\Gamma(u + v)}\n = \\int_0^1 t^{u-1} (1 - t)^{v-1} \\mathrm{d}t,$$\n\n which defines the traditional bivariate beta function.\n\n If the last dimension is empty, we follow the convention that the sum over\n the empty set is zero, and the product is one.\n\n Args:\n x: A rank `n + 1` `Tensor`, `n >= 0` with type `float`, or `double`.\n name: A name for the operation (optional).\n\n Returns:\n The logarithm of \\\\(|Beta(x)|\\\\) reducing along the last dimension.\n ", "desc": "Computes \\\\(ln(|Beta(x)|)\\\\), reducing along the last dimension.", "type": "API"}, {"name": "tf.math.less", "docs": "Returns the truth value of (x < y) element-wise.\n\n *NOTE*: `math.less` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less(x, y) ==> [False, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 7])\n tf.math.less(x, y) ==> [False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x < y) element-wise.", "type": "API"}, {"name": "tf.math.less_equal", "docs": "Returns the truth value of (x <= y) element-wise.\n\n *NOTE*: `math.less_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less_equal(x, y) ==> [True, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 6])\n tf.math.less_equal(x, y) ==> [True, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x <= y) element-wise.", "type": "API"}, {"name": "tf.math.lgamma", "docs": "Computes the log of the absolute value of `Gamma(x)` element-wise.\n\n For positive numbers, this function computes log((input - 1)!) for every element in the tensor.\n `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539`\n\n Example:\n\n ```python\n x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])\n tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the log of the absolute value of `Gamma(x)` element-wise.", "type": "API"}, {"name": "tf.math.log", "docs": "Computes natural logarithm of x element-wise.\n\n I.e., \\\\(y = \\log_e x\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log(x)\n \n\n See: https://en.wikipedia.org/wiki/Logarithm\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of x element-wise.", "type": "API"}, {"name": "tf.math.log_sigmoid", "docs": "Computes log sigmoid of `x` element-wise.\n\n Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability,\n we use `y = -tf.nn.softplus(-x)`.\n\n Args:\n x: A Tensor with type `float32` or `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n If a positive number is large, then its log_sigmoid will approach to 0 since\n the formula will be `y = log( / (1 + ) )` which\n approximates to `log (1)` which is 0.\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.log_sigmoid(x)\n \n\n If a negative number is large, its log_sigmoid will approach to the number\n itself since the formula will be `y = log( 1 / (1 + ) )` which is\n `log (1) - log ( (1 + ) )` which approximates to `- `\n that is the number itself.\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.log_sigmoid(x)\n \n ", "desc": "Computes log sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.math.log_softmax", "docs": "Computes log softmax activations.\n\n For each batch `i` and class `j` we have\n\n logsoftmax = logits - log(reduce_sum(exp(logits), axis))\n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `logits`. Same shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes log softmax activations.", "type": "API"}, {"name": "tf.math.log1p", "docs": "Computes natural logarithm of (1 + x) element-wise.\n\n I.e., \\\\(y = \\log_e (1 + x)\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log1p(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of (1 + x) element-wise.", "type": "API"}, {"name": "tf.math.logical_and", "docs": "Returns the truth value of x AND y element-wise.\n\n Logical AND function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical AND with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical AND of the two input tensors.\n\n You can also use the `&` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_and(a, b)\n \n >>> a & b\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_and(c, x)\n \n >>> c & x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_and(y, z)\n \n >>> y & z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_and([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_all`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x AND y element-wise.", "type": "API"}, {"name": "tf.math.logical_not", "docs": "Returns the truth value of `NOT x` element-wise.\n\n Example:\n\n >>> tf.math.logical_not(tf.constant([True, False]))\n \n\n Args:\n x: A `Tensor` of type `bool`. A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of `NOT x` element-wise.", "type": "API"}, {"name": "tf.math.logical_or", "docs": "Returns the truth value of x OR y element-wise.\n\n Logical OR function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical OR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical OR of the two input tensors.\n\n You can also use the `|` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_or(a, b)\n \n >>> a | b\n \n\n >>> c = tf.constant([False])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_or(c, x)\n \n >>> c | x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_or(y, z)\n \n >>> y | z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_or([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_any`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x OR y element-wise.", "type": "API"}, {"name": "tf.math.logical_xor", "docs": "Logical XOR function.\n\n x ^ y = (x | y) & ~(x & y)\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical XOR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical XOR of the two input tensors.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_xor(a, b)\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_xor(c, x)\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_xor(y, z)\n \n\n Args:\n x: A `tf.Tensor` type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n ", "desc": "Logical XOR function.", "type": "API"}, {"name": "tf.math.maximum", "docs": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.\n\n Example:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-2., 0., 2., 5.])\n >>> tf.math.maximum(x, y)\n \n\n Note that `maximum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.maximum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_max`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.", "type": "API"}, {"name": "tf.math.minimum", "docs": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.\n\n Both inputs are number-type tensors (except complex). `minimum` expects that\n both tensors have the same `dtype`.\n\n Examples:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-5., -2., 0., 3.])\n >>> tf.math.minimum(x, y)\n \n\n Note that `minimum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.minimum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_min`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.", "type": "API"}, {"name": "tf.math.mod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.math.multiply", "docs": "Returns an element-wise x * y.\n\n For example:\n\n >>> x = tf.constant(([1, 2, 3, 4]))\n >>> tf.math.multiply(x, x)\n \n\n Since `tf.math.multiply` will convert its arguments to `Tensor`s, you can also\n pass in non-`Tensor` arguments:\n\n >>> tf.math.multiply(7,6)\n \n\n If `x.shape` is not the same as `y.shape`, they will be broadcast to a\n compatible shape. (More about broadcasting\n [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)\n\n For example:\n\n >>> x = tf.ones([1, 2]);\n >>> y = tf.ones([2, 1]);\n >>> x * y # Taking advantage of operator overriding\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_prod`\n\n Args:\n x: A Tensor. Must be one of the following types: `bfloat16`,\n `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`,\n `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n\n A `Tensor`. Has the same type as `x`.\n\n Raises:\n\n * InvalidArgumentError: When `x` and `y` have incompatible shapes or types.\n ", "desc": "Returns an element-wise x * y.", "type": "API"}, {"name": "tf.math.multiply_no_nan", "docs": "Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite.\n\n Note this is noncommutative: if y is NaN or infinite and x is 0, the result\n will be NaN.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for the operation (optional).\n\n Returns:\n The element-wise value of the x times y.\n ", "desc": "Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite.", "type": "API"}, {"name": "tf.math.ndtri", "docs": "Compute quantile of Standard Normal.\n\n Args:\n x: `Tensor` with type `float` or `double`.\n name: A name for the operation (optional).\n Returns:\n Inverse error function of `x`.\n ", "desc": "Compute quantile of Standard Normal.", "type": "API"}, {"name": "tf.math.negative", "docs": "Computes numerical negative value element-wise.\n\n I.e., \\\\(y = -x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)`", "desc": "Computes numerical negative value element-wise.", "type": "API"}, {"name": "tf.math.nextafter", "docs": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.\n\n This operation returns the same result as the C++ std::nextafter function.\n\n It can also return a subnormal number.\n\n @compatibility(cpp)\n Equivalent to C++ std::nextafter function.\n @end_compatibility\n\n Args:\n x1: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n x2: A `Tensor`. Must have the same type as `x1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x1`.\n ", "desc": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.", "type": "API"}, {"name": "tf.math.not_equal", "docs": "Returns the truth value of (x != y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise inequality comparison, returning a Tensor\n of boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.not_equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.not_equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x != y) element-wise.", "type": "API"}, {"name": "tf.math.polygamma", "docs": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).\n\n The polygamma function is defined as:\n\n\n \\\\(\\psi^{(a)}(x) = \\frac{d^a}{dx^a} \\psi(x)\\\\)\n\n where \\\\(\\psi(x)\\\\) is the digamma function.\n The polygamma function is defined only for non-negative integer orders \\\\a\\\\.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).", "type": "API"}, {"name": "tf.math.polyval", "docs": "Computes the elementwise value of a polynomial.\n\n If `x` is a tensor and `coeffs` is a list n + 1 tensors,\n this function returns the value of the n-th order polynomial\n\n `p(x) = coeffs[n-1] + coeffs[n-2] * x + ... + coeffs[0] * x**(n-1)`\n\n evaluated using Horner's method, i.e.\n\n ```python\n p(x) = coeffs[n-1] + x * (coeffs[n-2] + ... + x * (coeffs[1] + x * coeffs[0]))\n ```\n\n Usage Example:\n\n >>> coefficients = [1.0, 2.5, -4.2]\n >>> x = 5.0\n >>> y = tf.math.polyval(coefficients, x)\n >>> y\n \n\n Usage Example:\n\n >>> tf.math.polyval([2, 1, 0], 3) # evaluates 2 * (3**2) + 1 * (3**1) + 0 * (3**0)\n \n\n `tf.math.polyval` can also be used in polynomial regression. Taking\n advantage of this function can facilitate writing a polynomial equation\n as compared to explicitly writing it out, especially for higher degree\n polynomials.\n\n >>> x = tf.constant(3)\n >>> theta1 = tf.Variable(2)\n >>> theta2 = tf.Variable(1)\n >>> theta3 = tf.Variable(0)\n >>> tf.math.polyval([theta1, theta2, theta3], x)\n \n\n Args:\n coeffs: A list of `Tensor` representing the coefficients of the polynomial.\n x: A `Tensor` representing the variable of the polynomial.\n name: A name for the operation (optional).\n\n Returns:\n A `tensor` of the shape as the expression p(x) with usual broadcasting\n rules for element-wise addition and multiplication applied.\n\n @compatibility(numpy)\n Equivalent to numpy.polyval.\n @end_compatibility\n ", "desc": "Computes the elementwise value of a polynomial.", "type": "API"}, {"name": "tf.math.pow", "docs": "Computes the power of one value to another.\n\n Given a tensor `x` and a tensor `y`, this operation computes \\\\(x^y\\\\) for\n corresponding elements in `x` and `y`. For example:\n\n ```python\n x = tf.constant([[2, 2], [3, 3]])\n y = tf.constant([[8, 16], [2, 3]])\n tf.pow(x, y) # [[256, 65536], [9, 27]]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n y: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`.\n ", "desc": "Computes the power of one value to another.", "type": "API"}, {"name": "tf.math.real", "docs": "Returns the real part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the real part of each element in `input` considered as a complex number.\n\n For example:\n\n ```python\n x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])\n tf.math.real(x) # [-2.25, 3.25]\n ```\n\n If `input` is already real, it is returned unchanged.\n\n Args:\n input: A `Tensor`. Must have numeric type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32` or `float64`.\n ", "desc": "Returns the real part of a complex (or real) tensor.", "type": "API"}, {"name": "tf.math.reciprocal", "docs": "Computes the reciprocal of x element-wise.\n\n I.e., \\\\(y = 1 / x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the reciprocal of x element-wise.", "type": "API"}, {"name": "tf.math.reciprocal_no_nan", "docs": "Performs a safe reciprocal operation, element wise.\n\n If a particular element is zero, the reciprocal for that element is\n also set to zero.\n\n For example:\n ```python\n x = tf.constant([2.0, 0.5, 0, 1], dtype=tf.float32)\n tf.math.reciprocal_no_nan(x) # [ 0.5, 2, 0.0, 1.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64` `complex64` or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n\n Raises:\n TypeError: x must be of a valid dtype.\n\n ", "desc": "Performs a safe reciprocal operation, element wise.", "type": "API"}, {"name": "tf.math.reduce_all", "docs": "Computes `tf.math.logical_and` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.logical_and` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.math.reduce_all(x)\n \n >>> tf.math.reduce_all(x, 0)\n \n >>> tf.math.reduce_all(x, 1)\n \n\n Args:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.all\n @end_compatibility\n ", "desc": "Computes `tf.math.logical_and` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_any", "docs": "Computes `tf.math.logical_or` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.logical_or` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.reduce_any(x)\n \n >>> tf.reduce_any(x, 0)\n \n >>> tf.reduce_any(x, 1)\n \n\n Args:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.any\n @end_compatibility\n ", "desc": "Computes `tf.math.logical_or` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_euclidean_norm", "docs": "Computes the Euclidean norm of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n ```python\n x = tf.constant([[1, 2, 3], [1, 1, 1]]) # x.dtype is tf.int32\n tf.math.reduce_euclidean_norm(x) # returns 4 as dtype is tf.int32\n y = tf.constant([[1, 2, 3], [1, 1, 1]], dtype = tf.float32)\n tf.math.reduce_euclidean_norm(y) # returns 4.1231055 which is sqrt(17)\n tf.math.reduce_euclidean_norm(y, 0) # [sqrt(2), sqrt(5), sqrt(10)]\n tf.math.reduce_euclidean_norm(y, 1) # [sqrt(14), sqrt(3)]\n tf.math.reduce_euclidean_norm(y, 1, keepdims=True) # [[sqrt(14)], [sqrt(3)]]\n tf.math.reduce_euclidean_norm(y, [0, 1]) # sqrt(17)\n ```\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor.\n ", "desc": "Computes the Euclidean norm of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_logsumexp", "docs": "Computes log(sum(exp(elements across dimensions of a tensor))).\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` has no entries, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n This function is more numerically stable than log(sum(exp(input))). It avoids\n overflows caused by taking the exp of large inputs and underflows caused by\n taking the log of small inputs.\n\n For example:\n\n ```python\n x = tf.constant([[0., 0., 0.], [0., 0., 0.]])\n tf.reduce_logsumexp(x) # log(6)\n tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)]\n tf.reduce_logsumexp(x, 1) # [log(3), log(3)]\n tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]]\n tf.reduce_logsumexp(x, [0, 1]) # log(6)\n ```\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n ", "desc": "Computes log(sum(exp(elements across dimensions of a tensor))).", "type": "API"}, {"name": "tf.math.reduce_max", "docs": "Computes `tf.math.maximum` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.maximum` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n Usage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_max(x)\n \n\n See the numpy docs for `np.amax` and `np.nanmax` behavior.\n\n Args:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n ", "desc": "Computes `tf.math.maximum` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_mean", "docs": "Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis` by computing the\n mean of elements across the dimensions in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a tensor with a single\n element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 1.], [2., 2.]])\n >>> tf.reduce_mean(x)\n \n >>> tf.reduce_mean(x, 0)\n \n >>> tf.reduce_mean(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.mean\n\n Please note that `np.mean` has a `dtype` parameter that could be used to\n specify the output type. By default this is `dtype=float64`. On the other\n hand, `tf.reduce_mean` has an aggressive type inference from `input_tensor`,\n for example:\n\n >>> x = tf.constant([1, 0, 1, 0])\n >>> tf.reduce_mean(x)\n \n >>> y = tf.constant([1., 0., 1., 0.])\n >>> tf.reduce_mean(y)\n \n\n @end_compatibility\n ", "desc": "Computes the mean of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_min", "docs": "Computes the `tf.math.minimum` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.minimum` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> a = tf.constant([\n ... [[1, 2], [3, 4]],\n ... [[1, 2], [3, 4]]\n ... ])\n >>> tf.reduce_min(a)\n \n\n Choosing a specific axis returns minimum element in the given axis:\n\n >>> b = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.reduce_min(b, axis=0)\n \n >>> tf.reduce_min(b, axis=1)\n \n\n Setting `keepdims` to `True` retains the dimension of `input_tensor`:\n\n >>> tf.reduce_min(a, keepdims=True)\n \n >>> tf.math.reduce_min(a, axis=0, keepdims=True)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.min\n @end_compatibility\n ", "desc": "Computes the `tf.math.minimum` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_prod", "docs": "Computes `tf.math.multiply` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.multiply` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n entry in `axis`. If `keepdims` is true, the reduced dimensions\n are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_prod(x)\n \n >>> tf.math.reduce_prod(x, 0)\n \n >>> tf.math.reduce_prod(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.prod\n @end_compatibility\n ", "desc": "Computes `tf.math.multiply` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_std", "docs": "Computes the standard deviation of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_std(x)\n \n >>> tf.math.reduce_std(x, 0)\n \n >>> tf.math.reduce_std(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real or complex type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name scope for the associated operations (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor. Note, for\n `complex64` or `complex128` input, the returned `Tensor` will be of type\n `float32` or `float64`, respectively.\n\n @compatibility(numpy)\n Equivalent to np.std\n\n Please note `np.std` has a `dtype` parameter that could be used to specify the\n output type. By default this is `dtype=float64`. On the other hand,\n `tf.math.reduce_std` has aggressive type inference from `input_tensor`.\n @end_compatibility\n ", "desc": "Computes the standard deviation of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_sum", "docs": "Computes the sum of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.add` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> # x has a shape of (2, 3) (two rows and three columns):\n >>> x = tf.constant([[1, 1, 1], [1, 1, 1]])\n >>> x.numpy()\n array([[1, 1, 1],\n [1, 1, 1]], dtype=int32)\n >>> # sum all the elements\n >>> # 1 + 1 + 1 + 1 + 1+ 1 = 6\n >>> tf.reduce_sum(x).numpy()\n 6\n >>> # reduce along the first dimension\n >>> # the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> tf.reduce_sum(x, 0).numpy()\n array([2, 2, 2], dtype=int32)\n >>> # reduce along the second dimension\n >>> # the result is [1, 1] + [1, 1] + [1, 1] = [3, 3]\n >>> tf.reduce_sum(x, 1).numpy()\n array([3, 3], dtype=int32)\n >>> # keep the original dimensions\n >>> tf.reduce_sum(x, 1, keepdims=True).numpy()\n array([[3],\n [3]], dtype=int32)\n >>> # reduce along both dimensions\n >>> # the result is 1 + 1 + 1 + 1 + 1 + 1 = 6\n >>> # or, equivalently, reduce along rows, then reduce the resultant array\n >>> # [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> # 2 + 2 + 2 = 6\n >>> tf.reduce_sum(x, [0, 1]).numpy()\n 6\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor)]`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor.\n\n @compatibility(numpy)\n Equivalent to np.sum apart the fact that numpy upcast uint8 and int32 to\n int64 while tensorflow returns the same dtype as the input.\n @end_compatibility\n ", "desc": "Computes the sum of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.reduce_variance", "docs": "Computes the variance of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_variance(x)\n \n >>> tf.math.reduce_variance(x, 0)\n \n >>> tf.math.reduce_variance(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real or complex type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name scope for the associated operations (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor. Note, for\n `complex64` or `complex128` input, the returned `Tensor` will be of type\n `float32` or `float64`, respectively.\n\n @compatibility(numpy)\n Equivalent to np.var\n\n Please note `np.var` has a `dtype` parameter that could be used to specify the\n output type. By default this is `dtype=float64`. On the other hand,\n `tf.math.reduce_variance` has aggressive type inference from `input_tensor`.\n @end_compatibility\n ", "desc": "Computes the variance of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.math.rint", "docs": "Returns element-wise integer closest to x.\n\n If the result is midway between two representable values,\n the even representable is chosen.\n For example:\n\n ```\n rint(-1.5) ==> -2.0\n rint(0.5000001) ==> 1.0\n rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise integer closest to x.", "type": "API"}, {"name": "tf.math.round", "docs": "Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example:\n\n ```python\n x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5])\n tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, or `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n ", "desc": "Rounds the values of a tensor to the nearest integer, element-wise.", "type": "API"}, {"name": "tf.math.rsqrt", "docs": "Computes reciprocal of square root of x element-wise.\n\n For example:\n\n >>> x = tf.constant([2., 0., -2.])\n >>> tf.math.rsqrt(x)\n \n\n Args:\n x: A `tf.Tensor`. Must be one of the following types: `bfloat16`, `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor`. Has the same type as `x`.\n ", "desc": "Computes reciprocal of square root of x element-wise.", "type": "API"}, {"name": "tf.math.scalar_mul", "docs": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.\n\n This is a special case of `tf.math.multiply`, where the first value must be a\n `scalar`. Unlike the general form of `tf.math.multiply`, this is operation is\n guaranteed to be efficient for `tf.IndexedSlices`.\n\n >>> x = tf.reshape(tf.range(30, dtype=tf.float32), [10, 3])\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = tf.gather(x, [1, 2]) # IndexedSlices\n ... z = tf.math.scalar_mul(10.0, y)\n\n Args:\n scalar: A 0-D scalar `Tensor`. Must have known shape.\n x: A `Tensor` or `IndexedSlices` to be scaled.\n name: A name for the operation (optional).\n\n Returns:\n `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.\n\n Raises:\n ValueError: if scalar is not a 0-D `scalar`.\n ", "desc": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.", "type": "API"}, {"name": "tf.math.segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\max_j(data_j)\\\\) where `max` is over `j` such\n that `segment_ids[j] == i`.\n\n If the max is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.math.segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\frac{\\sum_j data_j}{N}\\\\) where `mean` is\n over `j` such that `segment_ids[j] == i` and `N` is the total number of\n values summed.\n\n If the mean is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as a smaller following index when computing the numerator\n of the mean.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy()\n array([[2.5, 2.5, 2.5, 2.5],\n [5., 6., 7., 8.]], dtype=float32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.math.segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\min_j(data_j)\\\\) where `min` is over `j` such\n that `segment_ids[j] == i`.\n\n If the min is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.math.segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\prod_j data_j\\\\) where the product is over `j` such\n that `segment_ids[j] == i`.\n\n If the product is empty for a given segment ID `i`, `output[i] = 1`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.math.segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\sum_j data_j\\\\) where sum is over `j` such\n that `segment_ids[j] == i`.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_sum(c, tf.constant([0, 0, 1])).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.math.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.math.sign", "docs": "Returns an element-wise indication of the sign of a number.\n\n `y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0`.\n\n For complex numbers, `y = sign(x) = x / |x| if x != 0, otherwise y = 0`.\n\n Example usage:\n\n >>> # real number\n >>> tf.math.sign([0., 2., -3.])\n \n\n >>> # complex number\n >>> tf.math.sign([1 + 1j, 0 + 0j])\n \n\n Args:\n x: A Tensor. Must be one of the following types: bfloat16, half, float32,\n float64, int32, int64, complex64, complex128.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor. Has the same type as x.\n\n If x is a SparseTensor, returns SparseTensor(x.indices,\n tf.math.sign(x.values, ...), x.dense_shape).\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sign(x.values, ...), x.dense_shape)`", "desc": "Returns an element-wise indication of the sign of a number.", "type": "API"}, {"name": "tf.math.sin", "docs": "Computes sine of x element-wise.\n\n Given an input tensor, this function computes sine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10, float(\"inf\")])\n tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sine of x element-wise.", "type": "API"}, {"name": "tf.math.sinh", "docs": "Computes hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic sine of every\n element in the tensor. Input range is `[-inf,inf]` and output range\n is `[-inf,inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.math.sobol_sample", "docs": "Generates points from the Sobol sequence.\n\n Creates a Sobol sequence with `num_results` samples. Each sample has dimension\n `dim`. Skips the first `skip` samples.\n\n Args:\n dim: Positive scalar `Tensor` representing each sample's dimension.\n num_results: Positive scalar `Tensor` of dtype int32. The number of Sobol\n points to return in the output.\n skip: (Optional) Positive scalar `Tensor` of dtype int32. The number of\n initial points of the Sobol sequence to skip. Default value is 0.\n dtype: (Optional) The `tf.Dtype` of the sample. One of: `tf.float32` or\n `tf.float64`. Defaults to `tf.float32`.\n name: (Optional) Python `str` name prefixed to ops created by this function.\n\n Returns:\n `Tensor` of samples from Sobol sequence with `shape` [num_results, dim].\n ", "desc": "Generates points from the Sobol sequence.", "type": "API"}, {"name": "tf.math.softmax", "docs": "Computes softmax activations.\n\n Used for multi-class predictions. The sum of all outputs generated by softmax\n is 1.\n\n This function performs the equivalent of\n\n ```python\n softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)\n ```\n Example usage:\n\n >>> softmax = tf.nn.softmax([-1, 0., 1.])\n >>> softmax\n \n >>> sum(softmax)\n \n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type and shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes softmax activations.", "type": "API"}, {"name": "tf.math.softplus", "docs": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.\n\n `softplus` is a smooth approximation of `relu`. Like `relu`, `softplus` always\n takes on positive values.\n\n \n\n Example:\n\n >>> import tensorflow as tf\n >>> tf.math.softplus(tf.range(0, 2, dtype=tf.float32)).numpy()\n array([0.6931472, 1.3132616], dtype=float32)\n\n Args:\n features: `Tensor`\n name: Optional: name to associate with this operation.\n Returns:\n `Tensor`\n ", "desc": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.math.softsign", "docs": "Computes softsign: `features / (abs(features) + 1)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softsign: `features / (abs(features) + 1)`.", "type": "API"}, {"name": "tf.math.special", "docs": "Public API for tf.math.special namespace.\n", "desc": "Public API for tf.math.special namespace.", "type": "API"}, {"name": "tf.math.special.bessel_i0", "docs": "Computes the Bessel i0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `i0e(x)` instead.\n\n >>> tf.math.special.bessel_i0([-1., -0.5, 0.5, 1.]).numpy()\n array([1.26606588, 1.06348337, 1.06348337, 1.26606588], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0\n @end_compatibility\n ", "desc": "Computes the Bessel i0 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_i0e", "docs": "Computes the Bessel i0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_i0e([-1., -0.5, 0.5, 1.]).numpy()\n array([0.46575961, 0.64503527, 0.64503527, 0.46575961], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i0e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i0e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i0e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_i1", "docs": "Computes the Bessel i1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `i1e(x)` instead.\n\n >>> tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1\n @end_compatibility\n ", "desc": "Computes the Bessel i1 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_i1e", "docs": "Computes the Bessel i1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_i1e([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.20791042, -0.15642083, 0.15642083, 0.20791042], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.i1e\n @end_compatibility\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.bessel_i1e(x.values, ...), x.dense_shape)`", "desc": "Computes the Bessel i1e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_j0", "docs": "Computes the Bessel j0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_j0([0.5, 1., 2., 4.]).numpy()\n array([ 0.93846981, 0.76519769, 0.22389078, -0.39714981], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.j0\n @end_compatibility\n ", "desc": "Computes the Bessel j0 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_j1", "docs": "Computes the Bessel j1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_j1([0.5, 1., 2., 4.]).numpy()\n array([ 0.24226846, 0.44005059, 0.57672481, -0.06604333], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.j1\n @end_compatibility\n ", "desc": "Computes the Bessel j1 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_k0", "docs": "Computes the Bessel k0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n It is preferable to use the numerically stabler function `k0e(x)` instead.\n\n >>> tf.math.special.bessel_k0([0.5, 1., 2., 4.]).numpy()\n array([0.92441907, 0.42102444, 0.11389387, 0.01115968], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k0\n @end_compatibility\n ", "desc": "Computes the Bessel k0 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_k0e", "docs": "Computes the Bessel k0e function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_k0e([0.5, 1., 2., 4.]).numpy()\n array([1.52410939, 1.14446308, 0.84156822, 0.60929767], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k0e\n @end_compatibility\n ", "desc": "Computes the Bessel k0e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_k1", "docs": "Computes the Bessel k1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n It is preferable to use the numerically stabler function `k1e(x)` instead.\n\n >>> tf.math.special.bessel_k1([0.5, 1., 2., 4.]).numpy()\n array([1.65644112, 0.60190723, 0.13986588, 0.0124835 ], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k1\n @end_compatibility\n ", "desc": "Computes the Bessel k1 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_k1e", "docs": "Computes the Bessel k1e function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_k1e([0.5, 1., 2., 4.]).numpy()\n array([2.73100971, 1.63615349, 1.03347685, 0.68157595], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.k1e\n @end_compatibility\n ", "desc": "Computes the Bessel k1e function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_y0", "docs": "Computes the Bessel y0 function of `x` element-wise.\n\n Modified Bessel function of order 0.\n\n >>> tf.math.special.bessel_y0([0.5, 1., 2., 4.]).numpy()\n array([-0.44451873, 0.08825696, 0.51037567, -0.01694074], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.y0\n @end_compatibility\n ", "desc": "Computes the Bessel y0 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.bessel_y1", "docs": "Computes the Bessel y1 function of `x` element-wise.\n\n Modified Bessel function of order 1.\n\n >>> tf.math.special.bessel_y1([0.5, 1., 2., 4.]).numpy()\n array([-1.47147239, -0.78121282, -0.10703243, 0.39792571], dtype=float32)\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.y1\n @end_compatibility\n ", "desc": "Computes the Bessel y1 function of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.dawsn", "docs": "Computes Dawson's integral of `x` element-wise.\n\n Dawson's integral is defined as `exp(-x**2)` times the integral of\n `exp(t**2)` from `0` to `x`, with the domain of definition all real numbers.\n\n Dawson's function is odd.\n >>> tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy()\n array([-0.5380795, -0.4244364, 0.4244364, 0.5380795], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.dawsn\n @end_compatibility\n ", "desc": "Computes Dawson's integral of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.expint", "docs": "Computes the Exponential integral of `x` element-wise.\n\n The Exponential integral is defined as the integral of `exp(t) / t` from\n `-inf` to `x`, with the domain of definition all positive real numbers.\n\n >>> tf.math.special.expint([1., 1.1, 2.1, 4.1]).numpy()\n array([ 1.8951179, 2.1673784, 5.3332353, 21.048464], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.expi\n @end_compatibility\n ", "desc": "Computes the Exponential integral of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.fresnel_cos", "docs": "Computes Fresnel's cosine integral of `x` element-wise.\n\n The Fresnel cosine integral is defined as the integral of `cos(t^2)` from\n `0` to `x`, with the domain of definition all real numbers.\n\n The Fresnel cosine integral is odd.\n >>> tf.math.special.fresnel_cos([-1., -0.1, 0.1, 1.]).numpy()\n array([-0.7798934 , -0.09999753, 0.09999753, 0.7798934 ], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.fresnel second output.\n @end_compatibility\n ", "desc": "Computes Fresnel's cosine integral of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.fresnel_sin", "docs": "Computes Fresnel's sine integral of `x` element-wise.\n\n The Fresnel sine integral is defined as the integral of `sin(t^2)` from\n `0` to `x`, with the domain of definition all real numbers.\n\n >>> tf.math.special.fresnel_sin([-1., -0.1, 0.1, 1.]).numpy()\n array([-0.43825912, -0.00052359, 0.00052359, 0.43825912], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.fresnel first output.\n @end_compatibility\n ", "desc": "Computes Fresnel's sine integral of `x` element-wise.", "type": "API"}, {"name": "tf.math.special.spence", "docs": "Computes Spence's integral of `x` element-wise.\n\n Spence's integral is defined as the integral of `log(t) / (1 - t)` from\n `1` to `x`, with the domain of definition all non-negative real numbers.\n\n >>> tf.math.special.spence([0.5, 1., 2., 3.]).numpy()\n array([ 0.58224034, 0. , -0.82246685, -1.4367464], dtype=float32)\n\n This implementation is based off of the Cephes math library.\n\n Args:\n x: A `Tensor` or `SparseTensor`. Must be one of the following types:\n `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.spence\n @end_compatibility\n ", "desc": "Computes Spence's integral of `x` element-wise.", "type": "API"}, {"name": "tf.math.sqrt", "docs": "Computes element-wise square root of the input tensor.\n\n Note: This operation does not support integer types.\n\n >>> x = tf.constant([[4.0], [16.0]])\n >>> tf.sqrt(x)\n \n >>> y = tf.constant([[-4.0], [16.0]])\n >>> tf.sqrt(y)\n \n >>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)\n >>> tf.sqrt(z)\n \n\n Note: In order to support complex type, please provide an input tensor\n of `complex64` or `complex128`.\n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of same size, type and sparsity as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape)`", "desc": "Computes element-wise square root of the input tensor.", "type": "API"}, {"name": "tf.math.square", "docs": "Computes square of x element-wise.\n\n I.e., \\\\(y = x * x = x^2\\\\).\n\n >>> tf.math.square([-2., 0., 3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)`", "desc": "Computes square of x element-wise.", "type": "API"}, {"name": "tf.math.squared_difference", "docs": "Returns conj(x - y)(x - y) element-wise.\n\n *NOTE*: `math.squared_difference` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns conj(x - y)(x - y) element-wise.", "type": "API"}, {"name": "tf.math.subtract", "docs": "Returns x - y element-wise.\n\n *NOTE*: `tf.subtract` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Both input and output have a range `(-inf, inf)`.\n\n Example usages below.\n\n Subtract operation between an array and a scalar:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.subtract(x, y)\n \n >>> tf.subtract(y, x)\n \n\n Note that binary `-` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x - y\n \n\n Subtract operation between an array and a tensor of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([5, 4, 3, 2, 1])\n >>> tf.subtract(y, x)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**8 + 1, 2**8 + 2]\n >>> tf.subtract(x, y)\n \n\n When subtracting two input values of different shapes, `tf.subtract` follows the\n [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules)\n . The two input array shapes are compared element-wise. Starting with the\n trailing dimensions, the two dimensions either have to be equal or one of them\n needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(2, 1, 3)\n >>> tf.subtract(x, y)\n \n\n Example with inputs of different dimensions:\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(1, 6)\n >>> tf.subtract(x, y)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x - y element-wise.", "type": "API"}, {"name": "tf.math.tan", "docs": "Computes tan of x element-wise.\n\n Given an input tensor, this function computes tangent of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `(-inf, inf)`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes tan of x element-wise.", "type": "API"}, {"name": "tf.math.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.math.top_k", "docs": "Finds values and indices of the `k` largest entries for the last dimension.\n\n If the input is a vector (rank=1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n >>> result = tf.math.top_k([1, 2, 98, 1, 1, 99, 3, 1, 3, 96, 4, 1],\n ... k=3)\n >>> result.values.numpy()\n array([99, 98, 96], dtype=int32)\n >>> result.indices.numpy()\n array([5, 2, 9], dtype=int32)\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n >>> input = tf.random.normal(shape=(3,4,5,6))\n >>> k = 2\n >>> values, indices = tf.math.top_k(input, k=k)\n >>> values.shape.as_list()\n [3, 4, 5, 2]\n >>>\n >>> values.shape == indices.shape == input.shape[:-1] + [k]\n True\n\n The indices can be used to `gather` from a tensor who's shape matches `input`.\n\n >>> gathered_values = tf.gather(input, indices, batch_dims=-1)\n >>> assert tf.reduce_all(gathered_values == values)\n\n If two elements are equal, the lower-index element appears first.\n\n >>> result = tf.math.top_k([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],\n ... k=3)\n >>> result.indices.numpy()\n array([0, 1, 3], dtype=int32)\n\n Args:\n input: 1-D or higher `Tensor` with last dimension at least `k`.\n k: 0-D `int32` `Tensor`. Number of top elements to look for along the last\n dimension (along each row for matrices).\n sorted: If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: Optional name for the operation.\n\n Returns:\n A tuple with two named fields:\n values: The `k` largest elements along each last dimensional slice.\n indices: The indices of `values` within the last dimension of `input`.\n ", "desc": "Finds values and indices of the `k` largest entries for the last dimension.", "type": "API"}, {"name": "tf.math.truediv", "docs": "Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 division operator semantics where all integer\n arguments are cast to floating types first. This op is generated by normal\n `x / y` division in Python 3 and in Python 2.7 with\n `from __future__ import division`. If you want integer division that rounds\n down, use `x // y` or `tf.math.floordiv`.\n\n `x` and `y` must have the same numeric type. If the inputs are floating\n point, the output will have the same type. If the inputs are integral, the\n inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`\n and `int64` (matching the behavior of Numpy).\n\n Args:\n x: `Tensor` numerator of numeric type.\n y: `Tensor` denominator of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` evaluated in floating point.\n\n Raises:\n TypeError: If `x` and `y` have different dtypes.\n ", "desc": "Divides x / y elementwise (using Python 3 division operator semantics).", "type": "API"}, {"name": "tf.math.unsorted_segment_max", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the maximum such that:\n\n \\\\(output_i = \\max_{j...} data[j...]\\\\) where max is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the maximum is empty for a given segment ID `i`, it outputs the smallest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::lowest()`.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.math.unsorted_segment_mean", "docs": "Computes the mean along segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Instead of computing the sum over segments, it computes the mean of all\n entries belonging to a segment such that:\n\n \\\\(output_i = 1/N_i \\sum_{j...} data[j...]\\\\) where the sum is over tuples\n `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the number of\n occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.math.unsorted_segment_min", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the minimum such that:\n\n \\\\(output_i = \\min_{j...} data_[j...]\\\\) where min is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the minimum is empty for a given segment ID `i`, it outputs the largest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::max()`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.math.unsorted_segment_prod", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the product of all\n entries belonging to a segment such that:\n\n \\\\(output_i = \\prod_{j...} data[j...]\\\\) where the product is over tuples\n `j...` such that `segment_ids[j...] == i`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n If there is no entry for a given segment ID `i`, it outputs 1.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.math.unsorted_segment_sqrt_n", "docs": "Computes the sum along segments of a tensor divided by the sqrt(N).\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n This operator is similar to the `tf.math.unsorted_segment_sum` operator.\n Additionally to computing the sum over segments, it divides the results by\n sqrt(N).\n\n \\\\(output_i = 1/sqrt(N_i) \\sum_{j...} data[j...]\\\\) where the sum is over\n tuples `j...` such that `segment_ids[j...] == i` with \\\\N_i\\\\ being the\n number of occurrences of id \\\\i\\\\.\n\n If there is no entry for a given segment ID `i`, it outputs 0.\n\n Note that this op only supports floating point and complex dtypes,\n due to tf.sqrt only supporting these types.\n\n If the given segment ID `i` is negative, the value is dropped and will not\n be added to the sum of the segment.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor` with floating point or complex dtype.\n segment_ids: An integer tensor whose shape is a prefix of `data.shape`.\n The values must be in the range `[0, num_segments)`.\n The values are always validated to be in range on CPU,\n never validated on GPU.\n num_segments: An integer scalar `Tensor`. The number of distinct segment\n IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has same shape as data, except for the first `segment_ids.rank`\n dimensions, which are replaced with a single dimension which has size\n `num_segments`.\n ", "desc": "Computes the sum along segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.math.unsorted_segment_sum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output[i] = \\sum_{j...} data[j...]\\\\) where the sum is over tuples `j...` such\n that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`\n need not be sorted and need not cover all values in the full\n range of valid values.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n If the given segment ID `i` is negative, the value is dropped and will not be\n added to the sum of the segment.\n\n `num_segments` should equal the number of distinct segment IDs.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n >>> c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]]\n >>> tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.math.xdivy", "docs": "Returns 0 if x == 0, and x / y otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x / y otherwise, elementwise.", "type": "API"}, {"name": "tf.math.xlog1py", "docs": "Compute x * log1p(y).\n\n Given `x` and `y`, compute `x * log1p(y)`. This function safely returns\n zero when `x = 0`, no matter what the value of `y` is.\n\n Example:\n\n >>> tf.math.xlog1py(0., 1.)\n \n >>> tf.math.xlog1py(1., 1.)\n \n >>> tf.math.xlog1py(2., 2.)\n \n >>> tf.math.xlog1py(0., -1.)\n \n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n y: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n `x * log1p(y)`.\n\n @compatibility(scipy)\n Equivalent to scipy.special.xlog1py\n @end_compatibility\n ", "desc": "Compute x * log1p(y).", "type": "API"}, {"name": "tf.math.xlogy", "docs": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.", "type": "API"}, {"name": "tf.math.zero_fraction", "docs": "Returns the fraction of zeros in `value`.\n\n If `value` is empty, the result is `nan`.\n\n This is useful in summaries to measure and report sparsity. For example,\n\n ```python\n z = tf.nn.relu(...)\n summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))\n ```\n\n Args:\n value: A tensor of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n The fraction of zeros in `value`, with type `float32`.\n ", "desc": "Returns the fraction of zeros in `value`.", "type": "API"}, {"name": "tf.math.zeta", "docs": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).\n\n The Hurwitz zeta function is defined as:\n\n\n \\\\(\\zeta(x, q) = \\sum_{n=0}^{\\infty} (q + n)^{-x}\\\\)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n q: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).", "type": "API"}, {"name": "tf.matmul", "docs": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n where the inner 2 dimensions specify valid matrix multiplication dimensions,\n and any further outer dimensions specify matching batch size.\n\n Both matrices must be of the same type. The supported types are:\n `bfloat16`, `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, `complex128`.\n\n Either matrix can be transposed or adjointed (conjugated and transposed) on\n the fly by setting one of the corresponding flag to `True`. These are `False`\n by default.\n\n If one or both of the matrices contain a lot of zeros, a more efficient\n multiplication algorithm can be used by setting the corresponding\n `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.\n This optimization is only available for plain matrices (rank-2 tensors) with\n datatypes `bfloat16` or `float32`.\n\n A simple 2-D tensor matrix multiplication:\n\n >>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])\n >>> a # 2-D tensor\n \n >>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])\n >>> b # 2-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n A batch matrix multiplication with batch shape [2]:\n\n >>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])\n >>> a # 3-D tensor\n \n >>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])\n >>> b # 3-D tensor\n \n >>> c = tf.matmul(a, b)\n >>> c # `a` * `b`\n \n\n Since python >= 3.5 the @ operator is supported\n (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow,\n it simply calls the `tf.matmul()` function, so the following lines are\n equivalent:\n\n >>> d = a @ b @ [[10], [11]]\n >>> d = tf.matmul(tf.matmul(a, b), [[10], [11]])\n\n Args:\n a: `tf.Tensor` of type `float16`, `float32`, `float64`, `int32`,\n `complex64`, `complex128` and rank > 1.\n b: `tf.Tensor` with same type and rank as `a`.\n transpose_a: If `True`, `a` is transposed before multiplication.\n transpose_b: If `True`, `b` is transposed before multiplication.\n adjoint_a: If `True`, `a` is conjugated and transposed before\n multiplication.\n adjoint_b: If `True`, `b` is conjugated and transposed before\n multiplication.\n a_is_sparse: If `True`, `a` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n b_is_sparse: If `True`, `b` is treated as a sparse matrix. Notice, this\n **does not support `tf.sparse.SparseTensor`**, it just makes optimizations\n that assume most values in `a` are zero.\n See `tf.sparse.sparse_dense_matmul`\n for some support for `tf.sparse.SparseTensor` multiplication.\n output_type: The output datatype if needed. Defaults to None in which case\n the output_type is the same as input type. Currently only works when input\n tensors are type (u)int8 and output_type can be int32.\n name: Name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of the same type as `a` and `b` where each inner-most matrix\n is the product of the corresponding matrices in `a` and `b`, e.g. if all\n transpose or adjoint attributes are `False`:\n\n `output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`,\n for all indices `i`, `j`.\n\n Note: This is matrix product, not element-wise product.\n\n\n Raises:\n ValueError: If `transpose_a` and `adjoint_a`, or `transpose_b` and\n `adjoint_b` are both set to `True`.\n TypeError: If output_type is specified but the types of `a`, `b` and\n `output_type` is not (u)int8, (u)int8 and int32.\n ", "desc": "Multiplies matrix `a` by matrix `b`, producing `a` * `b`.", "type": "API"}, {"name": "tf.matrix_square_root", "docs": "Computes the matrix square root of one or more square matrices:\n\n matmul(sqrtm(A), sqrtm(A)) = A\n\n The input matrix should be invertible. If the input matrix is real, it should\n have no eigenvalues which are real and negative (pairs of complex conjugate\n eigenvalues are allowed).\n\n The matrix square root is computed by first reducing the matrix to\n quasi-triangular form with the real Schur decomposition. The square root\n of the quasi-triangular matrix is then computed directly. Details of\n the algorithm can be found in: Nicholas J. Higham, \"Computing real\n square roots of a real matrix\", Linear Algebra Appl., 1987.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the matrix square root for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix square root of one or more square matrices:", "type": "API"}, {"name": "tf.maximum", "docs": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.\n\n Example:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-2., 0., 2., 5.])\n >>> tf.math.maximum(x, y)\n \n\n Note that `maximum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.maximum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_max`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.", "type": "API"}, {"name": "tf.meshgrid", "docs": "Broadcasts parameters for evaluation on an N-D grid.\n\n Given N one-dimensional coordinate arrays `*args`, returns a list `outputs`\n of N-D coordinate arrays for evaluating expressions on an N-D grid.\n\n Notes:\n\n `meshgrid` supports cartesian ('xy') and matrix ('ij') indexing conventions.\n When the `indexing` argument is set to 'xy' (the default), the broadcasting\n instructions for the first two dimensions are swapped.\n\n Examples:\n\n Calling `X, Y = meshgrid(x, y)` with the tensors\n\n ```python\n x = [1, 2, 3]\n y = [4, 5, 6]\n X, Y = tf.meshgrid(x, y)\n # X = [[1, 2, 3],\n # [1, 2, 3],\n # [1, 2, 3]]\n # Y = [[4, 4, 4],\n # [5, 5, 5],\n # [6, 6, 6]]\n ```\n\n Args:\n *args: `Tensor`s with rank 1.\n **kwargs:\n - indexing: Either 'xy' or 'ij' (optional, default: 'xy').\n - name: A name for the operation (optional).\n\n Returns:\n outputs: A list of N `Tensor`s with rank N.\n\n Raises:\n TypeError: When no keyword arguments (kwargs) are passed.\n ValueError: When indexing keyword argument is not one of `xy` or `ij`.\n ", "desc": "Broadcasts parameters for evaluation on an N-D grid.", "type": "API"}, {"name": "tf.metrics", "docs": "", "desc": "", "type": "API"}, {"name": "tf.metrics.Accuracy", "docs": "Calculates how often predictions equal labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Accuracy()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],\n ... sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Accuracy()])\n ```\n ", "desc": "Calculates how often predictions equal labels.", "type": "API"}, {"name": "tf.metrics.AUC", "docs": "Approximates the AUC (Area under the curve) of the ROC or PR curves.\n\n The AUC (Area under the curve) of the ROC (Receiver operating\n characteristic; default) or PR (Precision Recall) curves are quality measures\n of binary classifiers. Unlike the accuracy, and like cross-entropy\n losses, ROC-AUC and PR-AUC evaluate all the operational points of a model.\n\n This class approximates AUCs using a Riemann sum. During the metric\n accumulation phrase, predictions are accumulated within predefined buckets\n by value. The AUC is then computed by interpolating per-bucket averages. These\n buckets define the evaluated operational points.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the AUC.\n To discretize the AUC curve, a linearly spaced set of thresholds is used to\n compute pairs of recall and precision values. The area under the ROC-curve is\n therefore computed using the height of the recall values by the false positive\n rate, while the area under the PR-curve is the computed using the height of\n the precision values by the recall.\n\n This value is ultimately returned as `auc`, an idempotent operation that\n computes the area under a discretized curve of precision versus recall values\n (computed using the aforementioned variables). The `num_thresholds` variable\n controls the degree of discretization with larger numbers of thresholds more\n closely approximating the true AUC. The quality of the approximation may vary\n dramatically depending on `num_thresholds`. The `thresholds` parameter can be\n used to manually specify thresholds which split the predictions more evenly.\n\n For a best approximation of the real AUC, `predictions` should be distributed\n approximately uniformly in the range [0, 1] (if `from_logits=False`). The\n quality of the AUC approximation may be poor if this is not the case. Setting\n `summation_method` to 'minoring' or 'majoring' can help quantify the error in\n the approximation by providing lower or upper bound estimate of the AUC.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use when discretizing the roc curve. Values must be > 1.\n curve: (Optional) Specifies the name of the curve to be computed, 'ROC'\n [default] or 'PR' for the Precision-Recall-curve.\n summation_method: (Optional) Specifies the [Riemann summation method](\n https://en.wikipedia.org/wiki/Riemann_sum) used.\n 'interpolation' (default) applies mid-point summation scheme for `ROC`.\n For PR-AUC, interpolates (true/false) positives but not the ratio that\n is precision (see Davis & Goadrich 2006 for details);\n 'minoring' applies left summation\n for increasing intervals and right summation for decreasing intervals;\n 'majoring' does the opposite.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n thresholds: (Optional) A list of floating point values to use as the\n thresholds for discretizing the curve. If set, the `num_thresholds`\n parameter is ignored. Values should be in [0, 1]. Endpoint thresholds\n equal to {-epsilon, 1+epsilon} for a small positive epsilon value will\n be automatically included with these to correctly handle predictions\n equal to exactly 0 or 1.\n multi_label: boolean indicating whether multilabel data should be\n treated as such, wherein AUC is computed separately for each label and\n then averaged across labels, or (when False) if the data should be\n flattened into a single label before AUC computation. In the latter\n case, when multilabel data is passed to AUC, each label-prediction pair\n is treated as an individual data point. Should be set to False for\n multi-class data.\n num_labels: (Optional) The number of labels, used when `multi_label` is\n True. If `num_labels` is not specified, then state variables get created\n on the first call to `update_state`.\n label_weights: (Optional) list, array, or tensor of non-negative weights\n used to compute AUCs for multilabel data. When `multi_label` is True,\n the weights are applied to the individual label AUCs when they are\n averaged to produce the multi-label AUC. When it's False, they are used\n to weight the individual label predictions in computing the confusion\n matrix on the flattened data. Note that this is unlike class_weights in\n that class_weights weights the example depending on the value of its\n label, whereas label_weights depends only on the index of that label\n before flattening; therefore `label_weights` should not be used for\n multi-class data.\n from_logits: boolean indicating whether the predictions (`y_pred` in\n `update_state`) are probabilities or sigmoid logits. As a rule of thumb,\n when using a keras loss, the `from_logits` constructor argument of the\n loss should match the AUC `from_logits` constructor argument.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.AUC(num_thresholds=3)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> # threshold values are [0 - 1e-7, 0.5, 1 + 1e-7]\n >>> # tp = [2, 1, 0], fp = [2, 0, 0], fn = [0, 1, 2], tn = [0, 2, 2]\n >>> # tp_rate = recall = [1, 0.5, 0], fp_rate = [1, 0, 0]\n >>> # auc = ((((1+0.5)/2)*(1-0)) + (((0.5+0)/2)*(0-0))) = 0.75\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n # Reports the AUC of a model outputting a probability.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(),\n metrics=[tf.keras.metrics.AUC()])\n\n # Reports the AUC of a model outputting a logit.\n model.compile(optimizer='sgd',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)])\n ```\n ", "desc": "Approximates the AUC (Area under the curve) of the ROC or PR curves.", "type": "API"}, {"name": "tf.metrics.binary_accuracy", "docs": "Calculates how often predictions match binary labels.\n\n Standalone usage:\n >>> y_true = [[1], [1], [0], [0]]\n >>> y_pred = [[1], [1], [0], [0]]\n >>> m = tf.keras.metrics.binary_accuracy(y_true, y_pred)\n >>> assert m.shape == (4,)\n >>> m.numpy()\n array([1., 1., 1., 1.], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n threshold: (Optional) Float representing the threshold for deciding whether\n prediction values are 1 or 0.\n\n Returns:\n Binary accuracy values. shape = `[batch_size, d0, .. dN-1]`\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.metrics.binary_crossentropy", "docs": "Computes the binary crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1], [0, 0]]\n >>> y_pred = [[0.6, 0.4], [0.4, 0.6]]\n >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.916 , 0.714], dtype=float32)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels by\n squeezing them towards 0.5 That is, using `1. - 0.5 * label_smoothing`\n for the target class and `0.5 * label_smoothing` for the non-target class.\n axis: The axis along which the mean is computed. Defaults to -1.\n\n Returns:\n Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the binary crossentropy loss.", "type": "API"}, {"name": "tf.metrics.BinaryAccuracy", "docs": "Calculates how often predictions match binary labels.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `binary accuracy`: an idempotent operation that simply\n divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n threshold: (Optional) Float representing the threshold for deciding\n whether prediction values are 1 or 0.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryAccuracy()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])\n >>> m.result().numpy()\n 0.75\n\n >>> m.reset_state()\n >>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match binary labels.", "type": "API"}, {"name": "tf.metrics.BinaryCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are only two\n label classes (0 and 1).\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional )Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed.\n e.g. `label_smoothing=0.2` means that we will use a value of `0.1` for\n label `0` and `0.9` for label `1`\".\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.BinaryCrossentropy()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.81492424\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162905\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.BinaryCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.metrics.categorical_accuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: One-hot ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Categorical accuracy values.\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.metrics.categorical_crossentropy", "docs": "Computes the categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [[0, 1, 0], [0, 0, 1]]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Tensor of one-hot true targets.\n y_pred: Tensor of predicted targets.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n label_smoothing: Float in [0, 1]. If > `0` then smooth the labels. For\n example, if `0.1`, use `0.1 / num_classes` for non-target labels\n and `0.9 + 0.1 / num_classes` for target labels.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Categorical crossentropy loss value.\n ", "desc": "Computes the categorical crossentropy loss.", "type": "API"}, {"name": "tf.metrics.CategoricalAccuracy", "docs": "Calculates how often predictions match one-hot labels.\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `categorical accuracy`: an idempotent operation that\n simply divides `total` by `count`.\n\n `y_pred` and `y_true` should be passed in as vectors of probabilities, rather\n than as labels. If necessary, use `tf.one_hot` to expand `y_true` as a vector.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalAccuracy()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],\n ... [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match one-hot labels.", "type": "API"}, {"name": "tf.metrics.CategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n This is the crossentropy metric class to be used when there are multiple\n label classes (2 or more). Here we assume that labels are given as a `one_hot`\n representation. eg., When labels values are [2, 0, 1],\n `y_true` = [[0, 0, 1], [1, 0, 0], [0, 1, 0]].\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n label_smoothing: (Optional) Float in [0, 1]. When > 0, label values are\n smoothed, meaning the confidence on label values are relaxed. e.g.\n `label_smoothing=0.2` means that we will use a value of `0.1` for label\n `0` and `0.9` for label `1`\"\n\n Standalone usage:\n\n >>> # EPSILON = 1e-7, y = y_true, y` = y_pred\n >>> # y` = clip_ops.clip_by_value(output, EPSILON, 1. - EPSILON)\n >>> # y` = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(y'), axis = -1)\n >>> # = -((log 0.95), (log 0.1))\n >>> # = [0.051, 2.302]\n >>> # Reduced xent = (0.051 + 2.302) / 2\n >>> m = tf.keras.metrics.CategoricalCrossentropy()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1, 0], [0, 0, 1]],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.metrics.CategoricalHinge", "docs": "Computes the categorical hinge metric between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.CategoricalHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.4000001\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.2\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CategoricalHinge()])\n ```\n ", "desc": "Computes the categorical hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.CosineSimilarity", "docs": "Computes the cosine similarity between the labels and predictions.\n\n `cosine similarity = (a . b) / ||a|| ||b||`\n\n See: [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).\n\n This metric keeps the average cosine similarity between `predictions` and\n `labels` over a stream of data.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n axis: (Optional) Defaults to -1. The dimension along which the cosine\n similarity is computed.\n\n Standalone usage:\n\n >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]\n >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]\n >>> # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))\n >>> # = ((0. + 0.) + (0.5 + 0.5)) / 2\n >>> m = tf.keras.metrics.CosineSimilarity(axis=1)\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]],\n ... sample_weight=[0.3, 0.7])\n >>> m.result().numpy()\n 0.6999999\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.CosineSimilarity(axis=1)])\n ```\n ", "desc": "Computes the cosine similarity between the labels and predictions.", "type": "API"}, {"name": "tf.metrics.deserialize", "docs": "Deserializes a serialized metric class/function instance.\n\n Args:\n config: Metric configuration.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras `Metric` instance or a metric function.\n ", "desc": "Deserializes a serialized metric class/function instance.", "type": "API"}, {"name": "tf.metrics.FalseNegatives", "docs": "Calculates the number of false negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalseNegatives()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalseNegatives()])\n ```\n ", "desc": "Calculates the number of false negatives.", "type": "API"}, {"name": "tf.metrics.FalsePositives", "docs": "Calculates the number of false positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n false positives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of false positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.FalsePositives()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.FalsePositives()])\n ```\n ", "desc": "Calculates the number of false positives.", "type": "API"}, {"name": "tf.metrics.get", "docs": "Retrieves a Keras metric as a `function`/`Metric` class instance.\n\n The `identifier` may be the string name of a metric function or class.\n\n >>> metric = tf.keras.metrics.get(\"categorical_crossentropy\")\n >>> type(metric)\n \n >>> metric = tf.keras.metrics.get(\"CategoricalCrossentropy\")\n >>> type(metric)\n \n\n You can also specify `config` of the metric to this function by passing dict\n containing `class_name` and `config` as an identifier. Also note that the\n `class_name` must map to a `Metric` class\n\n >>> identifier = {\"class_name\": \"CategoricalCrossentropy\",\n ... \"config\": {\"from_logits\": True}}\n >>> metric = tf.keras.metrics.get(identifier)\n >>> type(metric)\n \n\n Args:\n identifier: A metric identifier. One of None or string name of a metric\n function/class or metric configuration dictionary or a metric function or\n a metric class instance\n\n Returns:\n A Keras metric as a `function`/ `Metric` class instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras metric as a `function`/`Metric` class instance.", "type": "API"}, {"name": "tf.metrics.Hinge", "docs": "Computes the hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Hinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.3\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.1\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Hinge()])\n ```\n ", "desc": "Computes the hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.kl_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.KLD", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.KLDivergence", "docs": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.\n\n `metric = y_true * log(y_true / y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.KLDivergence()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 0.45814306\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.9162892\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.KLDivergence()])\n ```\n ", "desc": "Computes Kullback-Leibler divergence metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.kullback_leibler_divergence", "docs": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.\n\n `loss = y_true * log(y_true / y_pred)`\n\n See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)\n >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)\n >>> assert np.array_equal(\n ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))\n\n Args:\n y_true: Tensor of true targets.\n y_pred: Tensor of predicted targets.\n\n Returns:\n A `Tensor` with loss.\n\n Raises:\n TypeError: If `y_true` cannot be cast to the `y_pred.dtype`.\n ", "desc": "Computes Kullback-Leibler divergence loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.log_cosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.metrics.logcosh", "docs": "Logarithm of the hyperbolic cosine of the prediction error.\n\n `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and\n to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly\n like the mean squared error, but will not be so strongly affected by the\n occasional wildly incorrect prediction.\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.logcosh(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> x = y_pred - y_true\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),\n ... atol=1e-5)\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.metrics.LogCoshError", "docs": "Computes the logarithm of the hyperbolic cosine of the prediction error.\n\n `logcosh = log((exp(x) + exp(-x))/2)`, where x is the error (y_pred - y_true)\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.LogCoshError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.10844523\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.21689045\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.LogCoshError()])\n ```\n ", "desc": "Computes the logarithm of the hyperbolic cosine of the prediction error.", "type": "API"}, {"name": "tf.metrics.MAE", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.metrics.MAPE", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.Mean", "docs": "Computes the (weighted) mean of the given values.\n\n For example, if values is [1, 3, 5, 7] then the mean is 4.\n If the weights were specified as [1, 1, 0, 0] then the mean would be 2.\n\n This metric creates two variables, `total` and `count` that are used to\n compute the average of `values`. This average is ultimately returned as `mean`\n which is an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Mean()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 4.0\n >>> m.reset_state()\n >>> m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Mean(name='mean_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) mean of the given values.", "type": "API"}, {"name": "tf.metrics.mean_absolute_error", "docs": "Computes the mean absolute error between labels and predictions.\n\n `loss = mean(abs(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute error between labels and predictions.", "type": "API"}, {"name": "tf.metrics.mean_absolute_percentage_error", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n `loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.random(size=(2, 3))\n >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean absolute percentage error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.mean_squared_error", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.metrics.mean_squared_logarithmic_error", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.MeanAbsoluteError", "docs": "Computes the mean absolute error between the labels and predictions.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsoluteError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsoluteError()])\n ```\n ", "desc": "Computes the mean absolute error between the labels and predictions.", "type": "API"}, {"name": "tf.metrics.MeanAbsolutePercentageError", "docs": "Computes the mean absolute percentage error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanAbsolutePercentageError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 250000000.0\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 500000000.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])\n ```\n ", "desc": "Computes the mean absolute percentage error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.MeanIoU", "docs": "Computes the mean Intersection-Over-Union metric.\n\n General definition and computation:\n\n Intersection-Over-Union is a common evaluation metric for semantic image\n segmentation.\n\n For an individual class, the IoU metric is defined as follows:\n\n ```\n iou = true_positives / (true_positives + false_positives + false_negatives)\n ```\n\n To compute IoUs, the predictions are accumulated in a confusion matrix,\n weighted by `sample_weight` and the metric is then calculated from it.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Note that this class first computes IoUs for all individual classes, then\n returns the mean of these values.\n\n Args:\n num_classes: The possible number of labels the prediction task can have.\n This value must be provided, since a confusion matrix of dimension =\n [num_classes, num_classes] will be allocated.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> # cm = [[1, 1],\n >>> # [1, 1]]\n >>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]\n >>> # iou = true_positives / (sum_row + sum_col - true_positives))\n >>> # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33\n >>> m = tf.keras.metrics.MeanIoU(num_classes=2)\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1])\n >>> m.result().numpy()\n 0.33333334\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1],\n ... sample_weight=[0.3, 0.3, 0.3, 0.1])\n >>> m.result().numpy()\n 0.23809525\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])\n ```\n ", "desc": "Computes the mean Intersection-Over-Union metric.", "type": "API"}, {"name": "tf.metrics.MeanRelativeError", "docs": "Computes the mean relative error by normalizing with the given values.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the mean relative error. This is weighted by `sample_weight`, and\n it is ultimately returned as `mean_relative_error`:\n an idempotent operation that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n normalizer: The normalizer values with same shape as predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanRelativeError(normalizer=[1, 3, 2, 3])\n >>> m.update_state([1, 3, 2, 3], [2, 4, 6, 8])\n\n >>> # metric = mean(|y_pred - y_true| / normalizer)\n >>> # = mean([1, 1, 4, 5] / [1, 3, 2, 3]) = mean([1, 1/3, 2, 5/3])\n >>> # = 5/4 = 1.25\n >>> m.result().numpy()\n 1.25\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanRelativeError(normalizer=[1, 3])])\n ```\n ", "desc": "Computes the mean relative error by normalizing with the given values.", "type": "API"}, {"name": "tf.metrics.MeanSquaredError", "docs": "Computes the mean squared error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.25\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredError()])\n ```\n ", "desc": "Computes the mean squared error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.MeanSquaredLogarithmicError", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanSquaredLogarithmicError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.12011322\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.24022643\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()])\n ```\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.MeanTensor", "docs": "Computes the element-wise (weighted) mean of the given tensors.\n\n `MeanTensor` returns a tensor with the same shape of the input tensors. The\n mean value is updated by keeping local variables `total` and `count`. The\n `total` tracks the sum of the weighted values, and `count` stores the sum of\n the weighted counts.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n shape: (Optional) A list of integers, a tuple of integers, or a 1-D Tensor\n of type int32. If not specified, the shape is inferred from the values at\n the first call of update_state.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.MeanTensor()\n >>> m.update_state([0, 1, 2, 3])\n >>> m.update_state([4, 5, 6, 7])\n >>> m.result().numpy()\n array([2., 3., 4., 5.], dtype=float32)\n\n >>> m.update_state([12, 10, 8, 6], sample_weight= [0, 0.2, 0.5, 1])\n >>> m.result().numpy()\n array([2. , 3.6363635, 4.8 , 5.3333335], dtype=float32)\n\n >>> m = tf.keras.metrics.MeanTensor(dtype=tf.float64, shape=(1, 4))\n >>> m.result().numpy()\n array([[0., 0., 0., 0.]])\n >>> m.update_state([[0, 1, 2, 3]])\n >>> m.update_state([[4, 5, 6, 7]])\n >>> m.result().numpy()\n array([[2., 3., 4., 5.]])\n ", "desc": "Computes the element-wise (weighted) mean of the given tensors.", "type": "API"}, {"name": "tf.metrics.Metric", "docs": "Encapsulates metric logic and state.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n **kwargs: Additional layer keywords arguments.\n\n Standalone usage:\n\n ```python\n m = SomeMetric(...)\n for input in ...:\n m.update_state(input)\n print('Final result: ', m.result().numpy())\n ```\n\n Usage with `compile()` API:\n\n ```python\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(64, activation='relu'))\n model.add(tf.keras.layers.Dense(10, activation='softmax'))\n\n model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),\n loss=tf.keras.losses.CategoricalCrossentropy(),\n metrics=[tf.keras.metrics.CategoricalAccuracy()])\n\n data = np.random.random((1000, 32))\n labels = np.random.random((1000, 10))\n\n dataset = tf.data.Dataset.from_tensor_slices((data, labels))\n dataset = dataset.batch(32)\n\n model.fit(dataset, epochs=10)\n ```\n\n To be implemented by subclasses:\n * `__init__()`: All state variables should be created in this method by\n calling `self.add_weight()` like: `self.var = self.add_weight(...)`\n * `update_state()`: Has all updates to the state variables like:\n self.var.assign_add(...).\n * `result()`: Computes and returns a scalar value or a dict of scalar values\n for the metric from the state variables.\n\n Example subclass implementation:\n\n ```python\n class BinaryTruePositives(tf.keras.metrics.Metric):\n\n def __init__(self, name='binary_true_positives', **kwargs):\n super(BinaryTruePositives, self).__init__(name=name, **kwargs)\n self.true_positives = self.add_weight(name='tp', initializer='zeros')\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, tf.bool)\n y_pred = tf.cast(y_pred, tf.bool)\n\n values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))\n values = tf.cast(values, self.dtype)\n if sample_weight is not None:\n sample_weight = tf.cast(sample_weight, self.dtype)\n sample_weight = tf.broadcast_to(sample_weight, values.shape)\n values = tf.multiply(values, sample_weight)\n self.true_positives.assign_add(tf.reduce_sum(values))\n\n def result(self):\n return self.true_positives\n ```\n ", "desc": "Encapsulates metric logic and state.", "type": "API"}, {"name": "tf.metrics.MSE", "docs": "Computes the mean squared error between labels and predictions.\n\n After computing the squared distance between the inputs, the mean value over\n the last dimension is returned.\n\n `loss = mean(square(y_true - y_pred), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared error between labels and predictions.", "type": "API"}, {"name": "tf.metrics.MSLE", "docs": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.\n\n `loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.randint(0, 2, size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> y_true = np.maximum(y_true, 1e-7)\n >>> y_pred = np.maximum(y_pred, 1e-7)\n >>> assert np.allclose(\n ... loss.numpy(),\n ... np.mean(\n ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))\n\n Args:\n y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared logarithmic error values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the mean squared logarithmic error between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.Poisson", "docs": "Computes the Poisson metric between `y_true` and `y_pred`.\n\n `metric = y_pred - y_true * log(y_pred)`\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Poisson()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.49999997\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.99999994\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Poisson()])\n ```\n ", "desc": "Computes the Poisson metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.Precision", "docs": "Computes the precision of the predictions with respect to the labels.\n\n The metric creates two local variables, `true_positives` and `false_positives`\n that are used to compute the precision. This value is ultimately returned as\n `precision`, an idempotent operation that simply divides `true_positives`\n by the sum of `true_positives` and `false_positives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, we'll calculate precision as how often on average a class\n among the top-k classes with the highest predicted values of a batch entry is\n correct and can be found in the label for that entry.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold and/or in the\n top-k highest predictions, and computing the fraction of them for which\n `class_id` is indeed a correct label.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate precision with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Precision()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n >>> # With top_k=2, it will calculate precision over y_true[:2] and y_pred[:2]\n >>> m = tf.keras.metrics.Precision(top_k=2)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.0\n\n >>> # With top_k=4, it will calculate precision over y_true[:4] and y_pred[:4]\n >>> m = tf.keras.metrics.Precision(top_k=4)\n >>> m.update_state([0, 0, 1, 1], [1, 1, 1, 1])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Precision()])\n ```\n ", "desc": "Computes the precision of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.metrics.PrecisionAtRecall", "docs": "Computes best precision where recall is >= specified value.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n precision at the given recall. The threshold for the given recall\n value is computed and used to evaluate the corresponding precision.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n recall: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.PrecisionAtRecall(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[2, 2, 2, 1, 1])\n >>> m.result().numpy()\n 0.33333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)])\n ```\n ", "desc": "Computes best precision where recall is >= specified value.", "type": "API"}, {"name": "tf.metrics.Recall", "docs": "Computes the recall of the predictions with respect to the labels.\n\n This metric creates two local variables, `true_positives` and\n `false_negatives`, that are used to compute the recall. This value is\n ultimately returned as `recall`, an idempotent operation that simply divides\n `true_positives` by the sum of `true_positives` and `false_negatives`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `top_k` is set, recall will be computed as how often on average a class\n among the labels of a batch entry is in the top-k predictions.\n\n If `class_id` is specified, we calculate recall by considering only the\n entries in the batch for which `class_id` is in the label, and computing the\n fraction of them for which `class_id` is above the threshold and/or in the\n top-k predictions.\n\n Args:\n thresholds: (Optional) A float value or a python list/tuple of float\n threshold values in [0, 1]. A threshold is compared with prediction\n values to determine the truth value of predictions (i.e., above the\n threshold is `true`, below is `false`). One metric value is generated\n for each threshold value. If neither thresholds nor top_k are set, the\n default is to calculate recall with `thresholds=0.5`.\n top_k: (Optional) Unset by default. An int value specifying the top-k\n predictions to consider when calculating recall.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Recall()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 0.6666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.Recall()])\n ```\n ", "desc": "Computes the recall of the predictions with respect to the labels.", "type": "API"}, {"name": "tf.metrics.RecallAtPrecision", "docs": "Computes best recall where precision is >= specified value.\n\n For a given score-label-distribution the required precision might not\n be achievable, in this case 0.0 is returned as recall.\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n recall at the given precision. The threshold for the given precision\n value is computed and used to evaluate the corresponding recall.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n Args:\n precision: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given precision.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RecallAtPrecision(0.8)\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],\n ... sample_weight=[1, 0, 0, 1])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RecallAtPrecision(precision=0.8)])\n ```\n ", "desc": "Computes best recall where precision is >= specified value.", "type": "API"}, {"name": "tf.metrics.RootMeanSquaredError", "docs": "Computes root mean squared error metric between `y_true` and `y_pred`.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.RootMeanSquaredError()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 0.70710677\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.RootMeanSquaredError()])\n ```\n ", "desc": "Computes root mean squared error metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.SensitivityAtSpecificity", "docs": "Computes best sensitivity where specificity is >= specified value.\n\n the sensitivity at a given specificity.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n sensitivity at the given specificity. The threshold for the given specificity\n value is computed and used to evaluate the corresponding sensitivity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n specificity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given specificity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SensitivityAtSpecificity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 1])\n >>> m.result().numpy()\n 0.333333\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SensitivityAtSpecificity()])\n ```\n ", "desc": "Computes best sensitivity where specificity is >= specified value.", "type": "API"}, {"name": "tf.metrics.serialize", "docs": "Serializes metric function or `Metric` instance.\n\n Args:\n metric: A Keras `Metric` instance or a metric function.\n\n Returns:\n Metric configuration dictionary.\n ", "desc": "Serializes metric function or `Metric` instance.", "type": "API"}, {"name": "tf.metrics.sparse_categorical_accuracy", "docs": "Calculates how often predictions match integer labels.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([0., 1.], dtype=float32)\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n Args:\n y_true: Integer ground truth values.\n y_pred: The prediction values.\n\n Returns:\n Sparse categorical accuracy values.\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.metrics.sparse_categorical_crossentropy", "docs": "Computes the sparse categorical crossentropy loss.\n\n Standalone usage:\n\n >>> y_true = [1, 2]\n >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]\n >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> loss.numpy()\n array([0.0513, 2.303], dtype=float32)\n\n Args:\n y_true: Ground truth values.\n y_pred: The predicted values.\n from_logits: Whether `y_pred` is expected to be a logits tensor. By default,\n we assume that `y_pred` encodes a probability distribution.\n axis: Defaults to -1. The dimension along which the entropy is\n computed.\n\n Returns:\n Sparse categorical crossentropy loss value.\n ", "desc": "Computes the sparse categorical crossentropy loss.", "type": "API"}, {"name": "tf.metrics.sparse_top_k_categorical_accuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [2, 1]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.sparse_top_k_categorical_accuracy(\n ... y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: tensor of true targets.\n y_pred: tensor of predicted targets.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Sparse top K categorical accuracy value.\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.metrics.SparseCategoricalAccuracy", "docs": "Calculates how often predictions match integer labels.\n\n ```python\n acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1))\n ```\n\n You can provide logits of classes as `y_pred`, since argmax of\n logits and probabilities are same.\n\n This metric creates two local variables, `total` and `count` that are used to\n compute the frequency with which `y_pred` matches `y_true`. This frequency is\n ultimately returned as `sparse categorical accuracy`: an idempotent operation\n that simply divides `total` by `count`.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseCategoricalAccuracy()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n ```\n ", "desc": "Calculates how often predictions match integer labels.", "type": "API"}, {"name": "tf.metrics.SparseCategoricalCrossentropy", "docs": "Computes the crossentropy metric between the labels and predictions.\n\n Use this crossentropy metric when there are two or more label classes.\n We expect labels to be provided as integers. If you want to provide labels\n using `one-hot` representation, please use `CategoricalCrossentropy` metric.\n There should be `# classes` floating point values per feature for `y_pred`\n and a single floating point value per feature for `y_true`.\n\n In the snippet below, there is a single floating point value per example for\n `y_true` and `# classes` floating pointing values per example for `y_pred`.\n The shape of `y_true` is `[batch_size]` and the shape of `y_pred` is\n `[batch_size, num_classes]`.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n from_logits: (Optional) Whether output is expected to be a logits tensor.\n By default, we consider that output encodes a probability distribution.\n axis: (Optional) Defaults to -1. The dimension along which the metric is\n computed.\n\n Standalone usage:\n\n >>> # y_true = one_hot(y_true) = [[0, 1, 0], [0, 0, 1]]\n >>> # logits = log(y_pred)\n >>> # softmax = exp(logits) / sum(exp(logits), axis=-1)\n >>> # softmax = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]]\n >>> # xent = -sum(y * log(softmax), 1)\n >>> # log(softmax) = [[-2.9957, -0.0513, -16.1181],\n >>> # [-2.3026, -0.2231, -2.3026]]\n >>> # y_true * log(softmax) = [[0, -0.0513, 0], [0, 0, -2.3026]]\n >>> # xent = [0.0513, 2.3026]\n >>> # Reduced xent = (0.0513 + 2.3026) / 2\n >>> m = tf.keras.metrics.SparseCategoricalCrossentropy()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]])\n >>> m.result().numpy()\n 1.1769392\n\n >>> m.reset_state()\n >>> m.update_state([1, 2],\n ... [[0.05, 0.95, 0], [0.1, 0.8, 0.1]],\n ... sample_weight=tf.constant([0.3, 0.7]))\n >>> m.result().numpy()\n 1.6271976\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseCategoricalCrossentropy()])\n ```\n ", "desc": "Computes the crossentropy metric between the labels and predictions.", "type": "API"}, {"name": "tf.metrics.SparseTopKCategoricalAccuracy", "docs": "Computes how often integer targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often integer targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.metrics.SpecificityAtSensitivity", "docs": "Computes best specificity where sensitivity is >= specified value.\n\n `Sensitivity` measures the proportion of actual positives that are correctly\n identified as such (tp / (tp + fn)).\n `Specificity` measures the proportion of actual negatives that are correctly\n identified as such (tn / (tn + fp)).\n\n This metric creates four local variables, `true_positives`, `true_negatives`,\n `false_positives` and `false_negatives` that are used to compute the\n specificity at the given sensitivity. The threshold for the given sensitivity\n value is computed and used to evaluate the corresponding specificity.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n If `class_id` is specified, we calculate precision by considering only the\n entries in the batch for which `class_id` is above the threshold predictions,\n and computing the fraction of them for which `class_id` is indeed a correct\n label.\n\n For additional information about specificity and sensitivity, see\n [the following](https://en.wikipedia.org/wiki/Sensitivity_and_specificity).\n\n Args:\n sensitivity: A scalar value in range `[0, 1]`.\n num_thresholds: (Optional) Defaults to 200. The number of thresholds to\n use for matching the given sensitivity.\n class_id: (Optional) Integer class ID for which we want binary metrics.\n This must be in the half-open interval `[0, num_classes)`, where\n `num_classes` is the last dimension of predictions.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SpecificityAtSensitivity(0.5)\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])\n >>> m.result().numpy()\n 0.66666667\n\n >>> m.reset_state()\n >>> m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],\n ... sample_weight=[1, 1, 2, 2, 2])\n >>> m.result().numpy()\n 0.5\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SpecificityAtSensitivity()])\n ```\n ", "desc": "Computes best specificity where sensitivity is >= specified value.", "type": "API"}, {"name": "tf.metrics.squared_hinge", "docs": "Computes the squared hinge loss between `y_true` and `y_pred`.\n\n `loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)`\n\n Standalone usage:\n\n >>> y_true = np.random.choice([-1, 1], size=(2, 3))\n >>> y_pred = np.random.random(size=(2, 3))\n >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)\n >>> assert loss.shape == (2,)\n >>> assert np.array_equal(\n ... loss.numpy(),\n ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))\n\n Args:\n y_true: The ground truth values. `y_true` values are expected to be -1 or 1.\n If binary (0 or 1) labels are provided we will convert them to -1 or 1.\n shape = `[batch_size, d0, .. dN]`.\n y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Squared hinge loss values. shape = `[batch_size, d0, .. dN-1]`.\n ", "desc": "Computes the squared hinge loss between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.SquaredHinge", "docs": "Computes the squared hinge metric between `y_true` and `y_pred`.\n\n `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are\n provided we will convert them to -1 or 1.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.SquaredHinge()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])\n >>> m.result().numpy()\n 1.86\n\n >>> m.reset_state()\n >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]],\n ... sample_weight=[1, 0])\n >>> m.result().numpy()\n 1.46\n\n Usage with `compile()` API:\n\n ```python\n model.compile(\n optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.SquaredHinge()])\n ```\n ", "desc": "Computes the squared hinge metric between `y_true` and `y_pred`.", "type": "API"}, {"name": "tf.metrics.Sum", "docs": "Computes the (weighted) sum of the given values.\n\n For example, if values is [1, 3, 5, 7] then the sum is 16.\n If the weights were specified as [1, 1, 0, 0] then the sum would be 4.\n\n This metric creates one variable, `total`, that is used to compute the sum of\n `values`. This is ultimately returned as `sum`.\n\n If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0\n to mask values.\n\n Args:\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.Sum()\n >>> m.update_state([1, 3, 5, 7])\n >>> m.result().numpy()\n 16.0\n\n Usage with `compile()` API:\n\n ```python\n model.add_metric(tf.keras.metrics.Sum(name='sum_1')(outputs))\n model.compile(optimizer='sgd', loss='mse')\n ```\n ", "desc": "Computes the (weighted) sum of the given values.", "type": "API"}, {"name": "tf.metrics.top_k_categorical_accuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Standalone usage:\n >>> y_true = [[0, 0, 1], [0, 1, 0]]\n >>> y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]\n >>> m = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)\n >>> assert m.shape == (2,)\n >>> m.numpy()\n array([1., 1.], dtype=float32)\n\n Args:\n y_true: The ground truth values.\n y_pred: The prediction values.\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n\n Returns:\n Top K categorical accuracy value.\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.metrics.TopKCategoricalAccuracy", "docs": "Computes how often targets are in the top `K` predictions.\n\n Args:\n k: (Optional) Number of top elements to look at for computing accuracy.\n Defaults to 5.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1)\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])\n >>> m.result().numpy()\n 0.5\n\n >>> m.reset_state()\n >>> m.update_state([[0, 0, 1], [0, 1, 0]],\n ... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],\n ... sample_weight=[0.7, 0.3])\n >>> m.result().numpy()\n 0.3\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TopKCategoricalAccuracy()])\n ```\n ", "desc": "Computes how often targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.metrics.TrueNegatives", "docs": "Calculates the number of true negatives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true negatives. This metric creates one local variable, `accumulator`\n that is used to keep track of the number of true negatives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TrueNegatives()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 0, 0], [1, 1, 0, 0], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TrueNegatives()])\n ```\n ", "desc": "Calculates the number of true negatives.", "type": "API"}, {"name": "tf.metrics.TruePositives", "docs": "Calculates the number of true positives.\n\n If `sample_weight` is given, calculates the sum of the weights of\n true positives. This metric creates one local variable, `true_positives`\n that is used to keep track of the number of true positives.\n\n If `sample_weight` is `None`, weights default to 1.\n Use `sample_weight` of 0 to mask values.\n\n Args:\n thresholds: (Optional) Defaults to 0.5. A float value or a python\n list/tuple of float threshold values in [0, 1]. A threshold is compared\n with prediction values to determine the truth value of predictions\n (i.e., above the threshold is `true`, below is `false`). One metric\n value is generated for each threshold value.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Standalone usage:\n\n >>> m = tf.keras.metrics.TruePositives()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1])\n >>> m.result().numpy()\n 2.0\n\n >>> m.reset_state()\n >>> m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])\n >>> m.result().numpy()\n 1.0\n\n Usage with `compile()` API:\n\n ```python\n model.compile(optimizer='sgd',\n loss='mse',\n metrics=[tf.keras.metrics.TruePositives()])\n ```\n ", "desc": "Calculates the number of true positives.", "type": "API"}, {"name": "tf.minimum", "docs": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.\n\n Both inputs are number-type tensors (except complex). `minimum` expects that\n both tensors have the same `dtype`.\n\n Examples:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-5., -2., 0., 3.])\n >>> tf.math.minimum(x, y)\n \n\n Note that `minimum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.minimum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_min`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.", "type": "API"}, {"name": "tf.mlir", "docs": "Public API for tf.mlir namespace.\n", "desc": "Public API for tf.mlir namespace.", "type": "API"}, {"name": "tf.mlir.experimental", "docs": "Public API for tf.mlir.experimental namespace.\n", "desc": "Public API for tf.mlir.experimental namespace.", "type": "API"}, {"name": "tf.mlir.experimental.convert_function", "docs": "Import a ConcreteFunction and convert it to a textual MLIR module.\n\n This API is only intended for inspecting the internals of TensorFlow and the\n string returned is at the moment intended for debugging purposes.\n\n A [tf.function](https://www.tensorflow.org/api_docs/python/tf/function) can be\n imported and converted from TensorFlow to TensorFlow MLIR with this API by\n extracting its ConcreteFunction (eagerly-executing wrapper around a\n [tf.Graph](https://www.tensorflow.org/api_docs/python/tf/Graph)).\n\n For example:\n >>> @tf.function\n ... def add(a, b):\n ... return a + b\n\n >>> concrete_function = add.get_concrete_function(\n ... tf.TensorSpec(None, tf.dtypes.float32),\n ... tf.TensorSpec(None, tf.dtypes.float32))\n >>> tf.mlir.experimental.convert_function(concrete_function)\n '...module attributes {...} {...}...'\n\n Args:\n concrete_function: An object of type ConcreteFunction.\n pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the\n module, see MLIR documentation for the\n [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification).\n show_debug_info: Whether to include locations in the emitted textual form.\n\n Returns:\n A textual representation of the MLIR module corresponding to the\n ConcreteFunction.\n\n Raises:\n InvalidArgumentError: if concrete_function is invalid or cannot be converted\n to MLIR.\n\n ", "desc": "Import a ConcreteFunction and convert it to a textual MLIR module.", "type": "API"}, {"name": "tf.mlir.experimental.convert_graph_def", "docs": "Import a GraphDef and convert it to a textual MLIR module.\n\n This API is only intended for inspecting the internals of TensorFlow and the\n string returned is at the moment intended for debugging purposes.\n\n Args:\n graph_def: An object of type graph_pb2.GraphDef or a textual proto\n representation of a valid GraphDef.\n pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the\n module, see MLIR documentation for the\n [textual pass pipeline syntax](https://mlir.llvm.org/docs/PassManagement/#textual-pass-pipeline-specification).\n show_debug_info: Whether to include locations in the emitted textual form.\n\n Returns:\n A textual representation of the MLIR module corresponding to the graphdef.\n\n Raises:\n InvalidArgumentError: if graph_def is invalid or cannot be converted to\n MLIR.\n\n ", "desc": "Import a GraphDef and convert it to a textual MLIR module.", "type": "API"}, {"name": "tf.Module", "docs": "Base neural network module class.\n\n A module is a named container for `tf.Variable`s, other `tf.Module`s and\n functions which apply to user input. For example a dense layer in a neural\n network might be implemented as a `tf.Module`:\n\n >>> class Dense(tf.Module):\n ... def __init__(self, input_dim, output_size, name=None):\n ... super(Dense, self).__init__(name=name)\n ... self.w = tf.Variable(\n ... tf.random.normal([input_dim, output_size]), name='w')\n ... self.b = tf.Variable(tf.zeros([output_size]), name='b')\n ... def __call__(self, x):\n ... y = tf.matmul(x, self.w) + self.b\n ... return tf.nn.relu(y)\n\n You can use the Dense layer as you would expect:\n\n >>> d = Dense(input_dim=3, output_size=2)\n >>> d(tf.ones([1, 3]))\n \n\n\n By subclassing `tf.Module` instead of `object` any `tf.Variable` or\n `tf.Module` instances assigned to object properties can be collected using\n the `variables`, `trainable_variables` or `submodules` property:\n\n >>> d.variables\n (,\n )\n\n\n Subclasses of `tf.Module` can also take advantage of the `_flatten` method\n which can be used to implement tracking of any other types.\n\n All `tf.Module` classes have an associated `tf.name_scope` which can be used\n to group operations in TensorBoard and create hierarchies for variable names\n which can help with debugging. We suggest using the name scope when creating\n nested submodules/parameters or for forward methods whose graph you might want\n to inspect in TensorBoard. You can enter the name scope explicitly using\n `with self.name_scope:` or you can annotate methods (apart from `__init__`)\n with `@tf.Module.with_name_scope`.\n\n >>> class MLP(tf.Module):\n ... def __init__(self, input_size, sizes, name=None):\n ... super(MLP, self).__init__(name=name)\n ... self.layers = []\n ... with self.name_scope:\n ... for size in sizes:\n ... self.layers.append(Dense(input_dim=input_size, output_size=size))\n ... input_size = size\n ... @tf.Module.with_name_scope\n ... def __call__(self, x):\n ... for layer in self.layers:\n ... x = layer(x)\n ... return x\n\n >>> module = MLP(input_size=5, sizes=[5, 5])\n >>> module.variables\n (,\n ,\n ,\n )\n ", "desc": "Base neural network module class.", "type": "API"}, {"name": "tf.multiply", "docs": "Returns an element-wise x * y.\n\n For example:\n\n >>> x = tf.constant(([1, 2, 3, 4]))\n >>> tf.math.multiply(x, x)\n \n\n Since `tf.math.multiply` will convert its arguments to `Tensor`s, you can also\n pass in non-`Tensor` arguments:\n\n >>> tf.math.multiply(7,6)\n \n\n If `x.shape` is not the same as `y.shape`, they will be broadcast to a\n compatible shape. (More about broadcasting\n [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)\n\n For example:\n\n >>> x = tf.ones([1, 2]);\n >>> y = tf.ones([2, 1]);\n >>> x * y # Taking advantage of operator overriding\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_prod`\n\n Args:\n x: A Tensor. Must be one of the following types: `bfloat16`,\n `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`,\n `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n\n A `Tensor`. Has the same type as `x`.\n\n Raises:\n\n * InvalidArgumentError: When `x` and `y` have incompatible shapes or types.\n ", "desc": "Returns an element-wise x * y.", "type": "API"}, {"name": "tf.name_scope", "docs": "A context manager for use when defining a Python op.\n\n This context manager pushes a name scope, which will make the name of all\n operations added within it have a prefix.\n\n For example, to define a new Python op called `my_op`:\n\n ```python\n def my_op(a, b, c, name=None):\n with tf.name_scope(\"MyOp\") as scope:\n a = tf.convert_to_tensor(a, name=\"a\")\n b = tf.convert_to_tensor(b, name=\"b\")\n c = tf.convert_to_tensor(c, name=\"c\")\n # Define some computation that uses `a`, `b`, and `c`.\n return foo_op(..., name=scope)\n ```\n\n When executed, the Tensors `a`, `b`, `c`, will have names `MyOp/a`, `MyOp/b`,\n and `MyOp/c`.\n\n Inside a `tf.function`, if the scope name already exists, the name will be\n made unique by appending `_n`. For example, calling `my_op` the second time\n will generate `MyOp_1/a`, etc.\n ", "desc": "A context manager for use when defining a Python op.", "type": "API"}, {"name": "tf.negative", "docs": "Computes numerical negative value element-wise.\n\n I.e., \\\\(y = -x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)`", "desc": "Computes numerical negative value element-wise.", "type": "API"}, {"name": "tf.nest", "docs": "Functions that work with structures.\n\nA structure is either:\n\n* one of the recognized Python collections, holding _nested structures_;\n* a value of any other type, typically a TensorFlow data type like Tensor,\n Variable, or of compatible types such as int, float, ndarray, etc. these are\n commonly referred to as _atoms_ of the structure.\n\nA structure of type `T` is a structure whose atomic items are of type `T`.\nFor example, a structure of `tf.Tensor` only contains `tf.Tensor` as its atoms.\n\nHistorically a _nested structure_ was called a _nested sequence_ in TensorFlow.\nA nested structure is sometimes called a _nest_ or a _tree_, but the formal\nname _nested structure_ is preferred.\n\nRefer to [Nesting Data Structures]\n(https://en.wikipedia.org/wiki/Nesting_(computing)#Data_structures).\n\nThe following collection types are recognized by `tf.nest` as nested\nstructures:\n\n* `collections.abc.Sequence` (except `string` and `bytes`).\n This includes `list`, `tuple`, and `namedtuple`.\n* `collections.abc.Mapping` (with sortable keys).\n This includes `dict` and `collections.OrderedDict`.\n* `collections.abc.MappingView` (with sortable keys).\n* [`attr.s` classes](https://www.attrs.org/).\n\nAny other values are considered **atoms**. Not all collection types are\nconsidered nested structures. For example, the following types are\nconsidered atoms:\n\n* `set`; `{\"a\", \"b\"}` is an atom, while `[\"a\", \"b\"]` is a nested structure.\n* [`dataclass` classes](https://docs.python.org/library/dataclasses.html)\n* `tf.Tensor`\n* `numpy.array`\n\n`tf.nest.is_nested` checks whether an object is a nested structure or an atom.\nFor example:\n\n >>> tf.nest.is_nested(\"1234\")\n False\n >>> tf.nest.is_nested([1, 3, [4, 5]])\n True\n >>> tf.nest.is_nested(((7, 8), (5, 6)))\n True\n >>> tf.nest.is_nested([])\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2})\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.keys())\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.values())\n True\n >>> tf.nest.is_nested({\"a\": 1, \"b\": 2}.items())\n True\n >>> tf.nest.is_nested(set([1, 2]))\n False\n >>> ones = tf.ones([2, 3])\n >>> tf.nest.is_nested(ones)\n False\n\nNote: A proper structure shall form a tree. The user shall ensure there is no\ncyclic references within the items in the structure,\ni.e., no references in the structure of the input of these functions\nshould be recursive. The behavior is undefined if there is a cycle.\n\n\n", "desc": "Functions that work with structures.", "type": "API"}, {"name": "tf.nest.assert_same_structure", "docs": "Asserts that two structures are nested in the same way.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n Note the method does not check the types of atoms inside the structures.\n\n Examples:\n\n * These atom vs. atom comparisons will pass:\n\n >>> tf.nest.assert_same_structure(1.5, tf.Variable(1, tf.uint32))\n >>> tf.nest.assert_same_structure(\"abc\", np.array([1, 2]))\n\n * These nested structure vs. nested structure comparisons will pass:\n\n >>> structure1 = (((1, 2), 3), 4, (5, 6))\n >>> structure2 = (((\"foo1\", \"foo2\"), \"foo3\"), \"foo4\", (\"foo5\", \"foo6\"))\n >>> structure3 = [((\"a\", \"b\"), \"c\"), \"d\", [\"e\", \"f\"]]\n >>> tf.nest.assert_same_structure(structure1, structure2)\n >>> tf.nest.assert_same_structure(structure1, structure3, check_types=False)\n\n >>> import collections\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple(\"bar\", \"a b\")(1, 2),\n ... collections.namedtuple(\"foo\", \"a b\")(2, 3),\n ... check_types=False)\n\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple(\"bar\", \"a b\")(1, 2),\n ... { \"a\": 1, \"b\": 2 },\n ... check_types=False)\n\n >>> tf.nest.assert_same_structure(\n ... { \"a\": 1, \"b\": 2, \"c\": 3 },\n ... { \"c\": 6, \"b\": 5, \"a\": 4 })\n\n >>> ragged_tensor1 = tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... row_splits=[0, 4, 4, 7, 8, 8])\n >>> ragged_tensor2 = tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4],\n ... row_splits=[0, 3])\n >>> tf.nest.assert_same_structure(\n ... ragged_tensor1,\n ... ragged_tensor2,\n ... expand_composites=True)\n\n * These examples will raise exceptions:\n\n >>> tf.nest.assert_same_structure([0, 1], np.array([0, 1]))\n Traceback (most recent call last):\n ...\n ValueError: The two structures don't have the same nested structure\n\n >>> tf.nest.assert_same_structure(\n ... collections.namedtuple('bar', 'a b')(1, 2),\n ... collections.namedtuple('foo', 'a b')(2, 3))\n Traceback (most recent call last):\n ...\n TypeError: The two structures don't have the same nested structure\n\n Args:\n nest1: an atom or a nested structure.\n nest2: an atom or a nested structure.\n check_types: if `True` (default) types of structures are checked as well,\n including the keys of dictionaries. If set to `False`, for example a list\n and a tuple of objects will look the same if they have the same size. Note\n that namedtuples with identical name and fields are always considered to\n have the same shallow structure. Two types will also be considered the\n same if they are both list subtypes (which allows \"list\" and\n \"_ListWrapper\" from trackable dependency tracking to compare equal).\n `check_types=True` only checks type of sub-structures. The types of atoms\n are not checked.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Raises:\n ValueError: If the two structures do not have the same number of atoms or\n if the two structures are not nested in the same way.\n TypeError: If the two structures differ in the type of sequence in any of\n their substructures. Only possible if `check_types` is `True`.\n ", "desc": "Asserts that two structures are nested in the same way.", "type": "API"}, {"name": "tf.nest.flatten", "docs": "Returns a flat list from a given structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n If the structure is an atom, then returns a single-item list: [structure].\n\n This is the inverse of the `nest.pack_sequence_as` method that takes in a\n flattened list and re-packs it into the nested structure.\n\n In the case of dict instances, the sequence consists of the values, sorted by\n key to ensure deterministic behavior. This is true also for OrderedDict\n instances: their sequence order is ignored, the sorting order of keys is used\n instead. The same convention is followed in `nest.pack_sequence_as`. This\n correctly repacks dicts and OrderedDicts after they have been flattened, and\n also allows flattening an OrderedDict and then repacking it back using a\n corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys\n cannot be flattened.\n\n Users must not modify any collections used in nest while this function is\n running.\n\n Examples:\n\n 1. Python dict (ordered by key):\n\n >>> dict = { \"key3\": \"value3\", \"key1\": \"value1\", \"key2\": \"value2\" }\n >>> tf.nest.flatten(dict)\n ['value1', 'value2', 'value3']\n\n 2. For a nested python tuple:\n\n >>> tuple = ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0)\n >>> tf.nest.flatten(tuple)\n [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]\n\n 3. For a nested dictionary of dictionaries:\n\n >>> dict = { \"key3\": {\"c\": (1.0, 2.0), \"a\": (3.0)},\n ... \"key1\": {\"m\": \"val1\", \"g\": \"val2\"} }\n >>> tf.nest.flatten(dict)\n ['val2', 'val1', 3.0, 1.0, 2.0]\n\n 4. Numpy array (will not flatten):\n\n >>> array = np.array([[1, 2], [3, 4]])\n >>> tf.nest.flatten(array)\n [array([[1, 2],\n [3, 4]])]\n\n 5. `tf.Tensor` (will not flatten):\n\n >>> tensor = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\n >>> tf.nest.flatten(tensor)\n []\n\n 6. `tf.RaggedTensor`: This is a composite tensor thats representation consists\n of a flattened list of 'values' and a list of 'row_splits' which indicate how\n to chop up the flattened list into different rows. For more details on\n `tf.RaggedTensor`, please visit\n https://www.tensorflow.org/api_docs/python/tf/RaggedTensor.\n\n with `expand_composites=False`, we just return the RaggedTensor as is.\n\n >>> tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]])\n >>> tf.nest.flatten(tensor, expand_composites=False)\n []\n\n with `expand_composites=True`, we return the component Tensors that make up\n the RaggedTensor representation (the values and row_splits tensors)\n\n >>> tensor = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]])\n >>> tf.nest.flatten(tensor, expand_composites=True)\n [,\n ]\n\n Args:\n structure: an atom or a nested structure. Note, numpy arrays are considered\n atoms and are not flattened.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Returns:\n A Python list, the flattened version of the input.\n\n Raises:\n TypeError: The nest is or contains a dict with non-sortable keys.\n ", "desc": "Returns a flat list from a given structure.", "type": "API"}, {"name": "tf.nest.is_nested", "docs": "Returns true if its input is a nested structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a nested structure.\n\n Args:\n seq: the value to test.\n\n Returns:\n True if the input is a nested structure.\n ", "desc": "Returns true if its input is a nested structure.", "type": "API"}, {"name": "tf.nest.map_structure", "docs": "Creates a new structure by applying `func` to each atom in `structure`.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n Applies `func(x[0], x[1], ...)` where x[i] enumerates all atoms in\n `structure[i]`. All items in `structure` must have the same arity,\n and the return value will contain results with the same structure layout.\n\n Examples:\n\n * A single Python dict:\n\n >>> a = {\"hello\": 24, \"world\": 76}\n >>> tf.nest.map_structure(lambda p: p * 2, a)\n {'hello': 48, 'world': 152}\n\n * Multiple Python dictionaries:\n\n >>> d1 = {\"hello\": 24, \"world\": 76}\n >>> d2 = {\"hello\": 36, \"world\": 14}\n >>> tf.nest.map_structure(lambda p1, p2: p1 + p2, d1, d2)\n {'hello': 60, 'world': 90}\n\n * A single Python list:\n\n >>> a = [24, 76, \"ab\"]\n >>> tf.nest.map_structure(lambda p: p * 2, a)\n [48, 152, 'abab']\n\n * Scalars:\n\n >>> tf.nest.map_structure(lambda x, y: x + y, 3, 4)\n 7\n\n * Empty structures:\n\n >>> tf.nest.map_structure(lambda x: x + 1, ())\n ()\n\n * Check the types of iterables:\n\n >>> s1 = (((1, 2), 3), 4, (5, 6))\n >>> s1_list = [[[1, 2], 3], 4, [5, 6]]\n >>> tf.nest.map_structure(lambda x, y: None, s1, s1_list)\n Traceback (most recent call last):\n ...\n TypeError: The two structures don't have the same nested structure\n\n * Type check is set to False:\n\n >>> s1 = (((1, 2), 3), 4, (5, 6))\n >>> s1_list = [[[1, 2], 3], 4, [5, 6]]\n >>> tf.nest.map_structure(lambda x, y: None, s1, s1_list, check_types=False)\n (((None, None), None), None, (None, None))\n\n Args:\n func: A callable that accepts as many arguments as there are structures.\n *structure: atom or nested structure.\n **kwargs: Valid keyword args are:\n * `check_types`: If set to `True` (default) the types of iterables within\n the structures have to be same (e.g. `map_structure(func, [1], (1,))`\n raises a `TypeError` exception). To allow this set this argument to\n `False`. Note that namedtuples with identical name and fields are always\n considered to have the same shallow structure.\n * `expand_composites`: If set to `True`, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors. If `False` (the default), then composite tensors are\n not expanded.\n\n Returns:\n A new structure with the same arity as `structure[0]`, whose atoms\n correspond to `func(x[0], x[1], ...)` where `x[i]` is the atom in the\n corresponding location in `structure[i]`. If there are different structure\n types and `check_types` is `False` the structure types of the first\n structure will be used.\n\n Raises:\n TypeError: If `func` is not callable or if the structures do not match\n each other by depth tree.\n ValueError: If no structure is provided or if the structures do not match\n each other by type.\n ValueError: If wrong keyword arguments are provided.\n ", "desc": "Creates a new structure by applying `func` to each atom in `structure`.", "type": "API"}, {"name": "tf.nest.pack_sequence_as", "docs": "Returns a given flattened sequence packed into a given structure.\n\n Refer to [tf.nest](https://www.tensorflow.org/api_docs/python/tf/nest)\n for the definition of a structure.\n\n If `structure` is an atom, `flat_sequence` must be a single-item list;\n in this case the return value is `flat_sequence[0]`.\n\n If `structure` is or contains a dict instance, the keys will be sorted to\n pack the flat sequence in deterministic order. This is true also for\n `OrderedDict` instances: their sequence order is ignored, the sorting order of\n keys is used instead. The same convention is followed in `flatten`.\n This correctly repacks dicts and `OrderedDict`s after they have been\n flattened, and also allows flattening an `OrderedDict` and then repacking it\n back using a corresponding plain dict, or vice-versa.\n Dictionaries with non-sortable keys cannot be flattened.\n\n Examples:\n\n 1. Python dict:\n\n >>> structure = { \"key3\": \"\", \"key1\": \"\", \"key2\": \"\" }\n >>> flat_sequence = [\"value1\", \"value2\", \"value3\"]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n {'key3': 'value3', 'key1': 'value1', 'key2': 'value2'}\n\n 2. For a nested python tuple:\n\n >>> structure = (('a','b'), ('c','d','e'), 'f')\n >>> flat_sequence = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n ((1.0, 2.0), (3.0, 4.0, 5.0), 6.0)\n\n 3. For a nested dictionary of dictionaries:\n\n >>> structure = { \"key3\": {\"c\": ('alpha', 'beta'), \"a\": ('gamma')},\n ... \"key1\": {\"e\": \"val1\", \"d\": \"val2\"} }\n >>> flat_sequence = ['val2', 'val1', 3.0, 1.0, 2.0]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n {'key3': {'c': (1.0, 2.0), 'a': 3.0}, 'key1': {'e': 'val1', 'd': 'val2'}}\n\n 4. Numpy array (considered a scalar):\n\n >>> structure = ['a']\n >>> flat_sequence = [np.array([[1, 2], [3, 4]])]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n [array([[1, 2],\n [3, 4]])]\n\n 5. tf.Tensor (considered a scalar):\n\n >>> structure = ['a']\n >>> flat_sequence = [tf.constant([[1., 2., 3.], [4., 5., 6.]])]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence)\n []\n\n 6. `tf.RaggedTensor`: This is a composite tensor thats representation consists\n of a flattened list of 'values' and a list of 'row_splits' which indicate how\n to chop up the flattened list into different rows. For more details on\n `tf.RaggedTensor`, please visit\n https://www.tensorflow.org/api_docs/python/tf/RaggedTensor.\n\n With `expand_composites=False`, we treat RaggedTensor as a scalar.\n\n >>> structure = { \"foo\": tf.ragged.constant([[1, 2], [3]]),\n ... \"bar\": tf.constant([[5]]) }\n >>> flat_sequence = [ \"one\", \"two\" ]\n >>> tf.nest.pack_sequence_as(structure, flat_sequence,\n ... expand_composites=False)\n {'foo': 'two', 'bar': 'one'}\n\n With `expand_composites=True`, we expect that the flattened input contains\n the tensors making up the ragged tensor i.e. the values and row_splits\n tensors.\n\n >>> structure = { \"foo\": tf.ragged.constant([[1., 2.], [3.]]),\n ... \"bar\": tf.constant([[5.]]) }\n >>> tensors = tf.nest.flatten(structure, expand_composites=True)\n >>> print(tensors)\n [,\n ,\n ]\n >>> verified_tensors = [tf.debugging.check_numerics(t, 'invalid tensor: ')\n ... if t.dtype==tf.float32 else t\n ... for t in tensors]\n >>> tf.nest.pack_sequence_as(structure, verified_tensors,\n ... expand_composites=True)\n {'foo': ,\n 'bar': }\n\n Args:\n structure: Nested structure, whose structure is given by nested lists,\n tuples, and dicts. Note: numpy arrays and strings are considered\n scalars.\n flat_sequence: flat sequence to pack.\n expand_composites: If true, then composite tensors such as\n `tf.sparse.SparseTensor` and `tf.RaggedTensor` are expanded into their\n component tensors.\n\n Returns:\n packed: `flat_sequence` converted to have the same recursive structure as\n `structure`.\n\n Raises:\n ValueError: If `flat_sequence` and `structure` have different\n atom counts.\n TypeError: `structure` is or contains a dict with non-sortable keys.\n ", "desc": "Returns a given flattened sequence packed into a given structure.", "type": "API"}, {"name": "tf.nn", "docs": "Primitive Neural Net (NN) Operations.\n\n## Notes on padding\n\nSeveral neural network operations, such as `tf.nn.conv2d` and\n`tf.nn.max_pool2d`, take a `padding` parameter, which controls how the input is\npadded before running the operation. The input is padded by inserting values\n(typically zeros) before and after the tensor in each spatial dimension. The\n`padding` parameter can either be the string `'VALID'`, which means use no\npadding, or `'SAME'` which adds padding according to a formula which is\ndescribed below. Certain ops also allow the amount of padding per dimension to\nbe explicitly specified by passing a list to `padding`.\n\nIn the case of convolutions, the input is padded with zeros. In case of pools,\nthe padded input values are ignored. For example, in a max pool, the sliding\nwindow ignores padded values, which is equivalent to the padded values being\n`-infinity`.\n\n### `'VALID'` padding\n\nPassing `padding='VALID'` to an op causes no padding to be used. This causes the\noutput size to typically be smaller than the input size, even when the stride is\none. In the 2D case, the output size is computed as:\n\n```python\nout_height = ceil((in_height - filter_height + 1) / stride_height)\nout_width = ceil((in_width - filter_width + 1) / stride_width)\n```\n\nThe 1D and 3D cases are similar. Note `filter_height` and `filter_width` refer\nto the filter size after dilations (if any) for convolutions, and refer to the\nwindow size for pools.\n\n### `'SAME'` padding\n\nWith `'SAME'` padding, padding is applied to each spatial dimension. When the\nstrides are 1, the input is padded such that the output size is the same as the\ninput size. In the 2D case, the output size is computed as:\n\n```python\nout_height = ceil(in_height / stride_height)\nout_width = ceil(in_width / stride_width)\n```\n\nThe amount of padding used is the smallest amount that results in the output\nsize. The formula for the total amount of padding per dimension is:\n\n```python\nif (in_height % strides[1] == 0):\n pad_along_height = max(filter_height - stride_height, 0)\nelse:\n pad_along_height = max(filter_height - (in_height % stride_height), 0)\nif (in_width % strides[2] == 0):\n pad_along_width = max(filter_width - stride_width, 0)\nelse:\n pad_along_width = max(filter_width - (in_width % stride_width), 0)\n```\n\nFinally, the padding on the top, bottom, left and right are:\n\n```python\npad_top = pad_along_height // 2\npad_bottom = pad_along_height - pad_top\npad_left = pad_along_width // 2\npad_right = pad_along_width - pad_left\n```\n\nNote that the division by 2 means that there might be cases when the padding on\nboth sides (top vs bottom, right vs left) are off by one. In this case, the\nbottom and right sides always get the one additional padded pixel. For example,\nwhen pad_along_height is 5, we pad 2 pixels at the top and 3 pixels at the\nbottom. Note that this is different from existing libraries such as PyTorch and\nCaffe, which explicitly specify the number of padded pixels and always pad the\nsame number of pixels on both sides.\n\nHere is an example of `'SAME'` padding:\n\n>>> in_height = 5\n>>> filter_height = 3\n>>> stride_height = 2\n>>>\n>>> in_width = 2\n>>> filter_width = 2\n>>> stride_width = 1\n>>>\n>>> inp = tf.ones((2, in_height, in_width, 2))\n>>> filter = tf.ones((filter_height, filter_width, 2, 2))\n>>> strides = [stride_height, stride_width]\n>>> output = tf.nn.conv2d(inp, filter, strides, padding='SAME')\n>>> output.shape[1] # output_height: ceil(5 / 2)\n3\n>>> output.shape[2] # output_width: ceil(2 / 1)\n2\n\n### Explicit padding\n\nCertain ops, like `tf.nn.conv2d`, also allow a list of explicit padding amounts\nto be passed to the `padding` parameter. This list is in the same format as what\nis passed to `tf.pad`, except the padding must be a nested list, not a tensor.\nFor example, in the 2D case, the list is in the format `[[0, 0], [pad_top,\npad_bottom], [pad_left, pad_right], [0, 0]]` when `data_format` is its default\nvalue of `'NHWC'`. The two `[0, 0]` pairs indicate the batch and channel\ndimensions have no padding, which is required, as only spatial dimensions can\nhave padding.\n\nFor example:\n\n>>> inp = tf.ones((1, 3, 3, 1))\n>>> filter = tf.ones((2, 2, 1, 1))\n>>> strides = [1, 1]\n>>> padding = [[0, 0], [1, 2], [0, 1], [0, 0]]\n>>> output = tf.nn.conv2d(inp, filter, strides, padding=padding)\n>>> tuple(output.shape)\n(1, 5, 3, 1)\n>>> # Equivalently, tf.pad can be used, since convolutions pad with zeros.\n>>> inp = tf.pad(inp, padding)\n>>> # 'VALID' means to use no padding in conv2d (we already padded inp)\n>>> output2 = tf.nn.conv2d(inp, filter, strides, padding='VALID')\n>>> tf.debugging.assert_equal(output, output2)\n\n", "desc": "Primitive Neural Net (NN) Operations.", "type": "API"}, {"name": "tf.nn.all_candidate_sampler", "docs": "Generate the set of all classes.\n\n Deterministically generates and returns the set of all possible classes.\n For testing purposes. There is no need to use this, since you might as\n well use full softmax or full logistic regression.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of possible classes.\n unique: A `bool`. Ignored.\n unique.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n This operation deterministically returns the entire range\n `[0, num_sampled]`.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`. All returned values are 1.0.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`. All returned values are 1.0.\n ", "desc": "Generate the set of all classes.", "type": "API"}, {"name": "tf.nn.atrous_conv2d", "docs": "Atrous convolution (a.k.a. convolution with holes or dilated convolution).\n\n This function is a simpler wrapper around the more general\n `tf.nn.convolution`, and exists only for backwards compatibility. You can\n use `tf.nn.convolution` to perform 1-D, 2-D, or 3-D atrous convolution.\n\n Computes a 2-D atrous convolution, also known as convolution with holes or\n dilated convolution, given 4-D `value` and `filters` tensors. If the `rate`\n parameter is equal to one, it performs regular 2-D convolution. If the `rate`\n parameter is greater than one, it performs convolution with holes, sampling\n the input values every `rate` pixels in the `height` and `width` dimensions.\n This is equivalent to convolving the input with a set of upsampled filters,\n produced by inserting `rate - 1` zeros between two consecutive values of the\n filters along the `height` and `width` dimensions, hence the name atrous\n convolution or convolution with holes (the French word trous means holes in\n English).\n\n More specifically:\n\n ```\n output[batch, height, width, out_channel] =\n sum_{dheight, dwidth, in_channel} (\n filters[dheight, dwidth, in_channel, out_channel] *\n value[batch, height + rate*dheight, width + rate*dwidth, in_channel]\n )\n ```\n\n Atrous convolution allows us to explicitly control how densely to compute\n feature responses in fully convolutional networks. Used in conjunction with\n bilinear interpolation, it offers an alternative to `conv2d_transpose` in\n dense prediction tasks such as semantic image segmentation, optical flow\n computation, or depth estimation. It also allows us to effectively enlarge\n the field of view of filters without increasing the number of parameters or\n the amount of computation.\n\n For a description of atrous convolution and how it can be used for dense\n feature extraction, please see: (Chen et al., 2015). The same operation is\n investigated further in (Yu et al., 2016). Previous works that effectively\n use atrous convolution in different ways are, among others,\n (Sermanet et al., 2014) and (Giusti et al., 2013).\n Atrous convolution is also closely related to the so-called noble identities\n in multi-rate signal processing.\n\n There are many different ways to implement atrous convolution (see the refs\n above). The implementation here reduces\n\n ```python\n atrous_conv2d(value, filters, rate, padding=padding)\n ```\n\n to the following three operations:\n\n ```python\n paddings = ...\n net = space_to_batch(value, paddings, block_size=rate)\n net = conv2d(net, filters, strides=[1, 1, 1, 1], padding=\"VALID\")\n crops = ...\n net = batch_to_space(net, crops, block_size=rate)\n ```\n\n Advanced usage. Note the following optimization: A sequence of `atrous_conv2d`\n operations with identical `rate` parameters, 'SAME' `padding`, and filters\n with odd heights/ widths:\n\n ```python\n net = atrous_conv2d(net, filters1, rate, padding=\"SAME\")\n net = atrous_conv2d(net, filters2, rate, padding=\"SAME\")\n ...\n net = atrous_conv2d(net, filtersK, rate, padding=\"SAME\")\n ```\n\n can be equivalently performed cheaper in terms of computation and memory as:\n\n ```python\n pad = ... # padding so that the input dims are multiples of rate\n net = space_to_batch(net, paddings=pad, block_size=rate)\n net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding=\"SAME\")\n net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding=\"SAME\")\n ...\n net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding=\"SAME\")\n net = batch_to_space(net, crops=pad, block_size=rate)\n ```\n\n because a pair of consecutive `space_to_batch` and `batch_to_space` ops with\n the same `block_size` cancel out when their respective `paddings` and `crops`\n inputs are identical.\n\n Args:\n value: A 4-D `Tensor` of type `float`. It needs to be in the default \"NHWC\"\n format. Its shape is `[batch, in_height, in_width, in_channels]`.\n filters: A 4-D `Tensor` with the same type as `value` and shape\n `[filter_height, filter_width, in_channels, out_channels]`. `filters`'\n `in_channels` dimension must match that of `value`. Atrous convolution is\n equivalent to standard convolution with upsampled filters with effective\n height `filter_height + (filter_height - 1) * (rate - 1)` and effective\n width `filter_width + (filter_width - 1) * (rate - 1)`, produced by\n inserting `rate - 1` zeros along consecutive elements across the\n `filters`' spatial dimensions.\n rate: A positive int32. The stride with which we sample input values across\n the `height` and `width` dimensions. Equivalently, the rate by which we\n upsample the filter values by inserting zeros across the `height` and\n `width` dimensions. In the literature, the same parameter is sometimes\n called `input stride` or `dilation`.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `value`.\n Output shape with `'VALID'` padding is:\n\n [batch, height - 2 * (filter_width - 1),\n width - 2 * (filter_height - 1), out_channels].\n\n Output shape with `'SAME'` padding is:\n\n [batch, height, width, out_channels].\n\n Raises:\n ValueError: If input/output depth does not match `filters`' shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n\n References:\n Multi-Scale Context Aggregation by Dilated Convolutions:\n [Yu et al., 2016](https://arxiv.org/abs/1511.07122)\n ([pdf](https://arxiv.org/pdf/1511.07122.pdf))\n Semantic Image Segmentation with Deep Convolutional Nets and Fully\n Connected CRFs:\n [Chen et al., 2015](http://arxiv.org/abs/1412.7062)\n ([pdf](https://arxiv.org/pdf/1412.7062))\n OverFeat - Integrated Recognition, Localization and Detection using\n Convolutional Networks:\n [Sermanet et al., 2014](https://arxiv.org/abs/1312.6229)\n ([pdf](https://arxiv.org/pdf/1312.6229.pdf))\n Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks:\n [Giusti et al., 2013]\n (https://ieeexplore.ieee.org/abstract/document/6738831)\n ([pdf](https://arxiv.org/pdf/1302.1700.pdf))\n ", "desc": "Atrous convolution (a.k.a. convolution with holes or dilated convolution).", "type": "API"}, {"name": "tf.nn.atrous_conv2d_transpose", "docs": "The transpose of `atrous_conv2d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of\n `atrous_conv2d` rather than an actual deconvolution.\n\n Args:\n value: A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC`\n format. Its shape is `[batch, in_height, in_width, in_channels]`.\n filters: A 4-D `Tensor` with the same type as `value` and shape\n `[filter_height, filter_width, out_channels, in_channels]`. `filters`'\n `in_channels` dimension must match that of `value`. Atrous convolution is\n equivalent to standard convolution with upsampled filters with effective\n height `filter_height + (filter_height - 1) * (rate - 1)` and effective\n width `filter_width + (filter_width - 1) * (rate - 1)`, produced by\n inserting `rate - 1` zeros along consecutive elements across the\n `filters`' spatial dimensions.\n output_shape: A 1-D `Tensor` of shape representing the output shape of the\n deconvolution op, of form `[batch, out_height, out_width, out_channels]`.\n rate: A positive int32. The stride with which we sample input values across\n the `height` and `width` dimensions. Equivalently, the rate by which we\n upsample the filter values by inserting zeros across the `height` and\n `width` dimensions. In the literature, the same parameter is sometimes\n called `input stride` or `dilation`.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError: If input/output depth does not match `filters`' shape, or if\n padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less\n than one, or if the output_shape is not a tensor with 4 elements.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `atrous_conv2d`.", "type": "API"}, {"name": "tf.nn.avg_pool", "docs": "Performs the avg pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n input: Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape +\n [num_channels]` if `data_format` does not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n ksize: An int or list of `ints` that has length `1`, `N` or `N+2`. The size\n of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. Specifies the channel dimension. For N=1 it can be\n either \"NWC\" (default) or \"NCW\", for N=2 it can be either \"NHWC\" (default)\n or \"NCHW\" and for N=3 either \"NDHWC\" (default) or \"NCDHW\".\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The average pooled output tensor.\n ", "desc": "Performs the avg pooling on the input.", "type": "API"}, {"name": "tf.nn.avg_pool1d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Note internally this op reshapes and uses the underlying 2d operation.\n\n Args:\n input: A 3-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1` or `3`. The size of the\n window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1` or `3`. The stride of\n the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional string from: \"NWC\", \"NCW\". Defaults to \"NWC\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.nn.avg_pool2d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n input: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type\n `float32`, `float64`, `qint8`, `quint8`, or `qint32`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. 'NHWC' and 'NCHW' are supported.\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` with the same type as `value`. The average pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.nn.avg_pool3d", "docs": "Performs the average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n input: A 5-D `Tensor` of shape `[batch, depth, height, width, channels]`\n and type `float32`, `float64`, `qint8`, `quint8`, or `qint32`.\n ksize: An int or list of `ints` that has length `1`, `3` or `5`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `3` or `5`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. 'NDHWC' and 'NCDHW' are supported.\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` with the same type as `value`. The average pooled output tensor.\n ", "desc": "Performs the average pooling on the input.", "type": "API"}, {"name": "tf.nn.batch_norm_with_global_normalization", "docs": "Batch normalization.\n\n This op is deprecated. See `tf.nn.batch_normalization`.\n\n Args:\n input: A 4D input Tensor.\n mean: A 1D mean Tensor with size matching the last dimension of t.\n This is the first output from tf.nn.moments,\n or a saved moving average thereof.\n variance: A 1D variance Tensor with size matching the last dimension of t.\n This is the second output from tf.nn.moments,\n or a saved moving average thereof.\n beta: A 1D beta Tensor with size matching the last dimension of t.\n An offset to be added to the normalized tensor.\n gamma: A 1D gamma Tensor with size matching the last dimension of t.\n If \"scale_after_normalization\" is true, this tensor will be multiplied\n with the normalized tensor.\n variance_epsilon: A small float number to avoid dividing by 0.\n scale_after_normalization: A bool indicating whether the resulted tensor\n needs to be multiplied with gamma.\n name: A name for this operation (optional).\n\n Returns:\n A batch-normalized `t`.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift:\n [Ioffe et al., 2015](http://proceedings.mlr.press/v37/ioffe15.html)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.nn.batch_normalization", "docs": "Batch normalization.\n\n Normalizes a tensor by `mean` and `variance`, and applies (optionally) a\n `scale` \\\\(\\gamma\\\\) to it, as well as an `offset` \\\\(\\beta\\\\):\n\n \\\\(\\frac{\\gamma(x-\\mu)}{\\sigma}+\\beta\\\\)\n\n `mean`, `variance`, `offset` and `scale` are all expected to be of one of two\n shapes:\n\n * In all generality, they can have the same number of dimensions as the\n input `x`, with identical sizes as `x` for the dimensions that are not\n normalized over (the 'depth' dimension(s)), and dimension 1 for the\n others which are being normalized over.\n `mean` and `variance` in this case would typically be the outputs of\n `tf.nn.moments(..., keepdims=True)` during training, or running averages\n thereof during inference.\n * In the common case where the 'depth' dimension is the last dimension in\n the input tensor `x`, they may be one dimensional tensors of the same\n size as the 'depth' dimension.\n This is the case for example for the common `[batch, depth]` layout of\n fully-connected layers, and `[batch, height, width, depth]` for\n convolutions.\n `mean` and `variance` in this case would typically be the outputs of\n `tf.nn.moments(..., keepdims=False)` during training, or running averages\n thereof during inference.\n\n See equation 11 in Algorithm 2 of source:\n [Batch Normalization: Accelerating Deep Network Training by\n Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy]\n (http://arxiv.org/abs/1502.03167).\n\n Args:\n x: Input `Tensor` of arbitrary dimensionality.\n mean: A mean `Tensor`.\n variance: A variance `Tensor`.\n offset: An offset `Tensor`, often denoted \\\\(\\beta\\\\) in equations, or\n None. If present, will be added to the normalized tensor.\n scale: A scale `Tensor`, often denoted \\\\(\\gamma\\\\) in equations, or\n `None`. If present, the scale is applied to the normalized tensor.\n variance_epsilon: A small float number to avoid dividing by 0.\n name: A name for this operation (optional).\n\n Returns:\n the normalized, scaled, offset tensor.\n\n References:\n Batch Normalization - Accelerating Deep Network Training by Reducing\n Internal Covariate Shift:\n [Ioffe et al., 2015](http://arxiv.org/abs/1502.03167)\n ([pdf](http://proceedings.mlr.press/v37/ioffe15.pdf))\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.nn.bias_add", "docs": "Adds `bias` to `value`.\n\n This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D.\n Broadcasting is supported, so `value` may have any number of dimensions.\n Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the\n case where both types are quantized.\n\n Args:\n value: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,\n `int16`, `int8`, `complex64`, or `complex128`.\n bias: A 1-D `Tensor` with size matching the channel dimension of `value`.\n Must be the same type as `value` unless `value` is a quantized type,\n in which case a different quantized type may be used.\n data_format: A string. 'N...C' and 'NC...' are supported. If `None` (the\n default) is specified then 'N..C' is assumed.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n Raises:\n ValueError if data format is unrecognized, if `value` has less than two\n dimensions when `data_format` is 'N..C'/`None` or `value` has less\n then three dimensions when `data_format` is `NC..`, if `bias` does not\n have exactly one dimension (is a vector), or if the size of `bias`\n does not match the size of the channel dimension of `value`.\n ", "desc": "Adds `bias` to `value`.", "type": "API"}, {"name": "tf.nn.collapse_repeated", "docs": "Merge repeated labels into single labels.\n\n Args:\n labels: Tensor of shape [batch, max value in seq_length]\n seq_length: Tensor of shape [batch], sequence length of each batch element.\n name: A name for this `Op`. Defaults to \"collapse_repeated_labels\".\n\n Returns:\n A tuple `(collapsed_labels, new_seq_length)` where\n\n collapsed_labels: Tensor of shape [batch, max_seq_length] with repeated\n labels collapsed and padded to max_seq_length, eg:\n `[[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]`\n\n new_seq_length: int tensor of shape [batch] with new sequence lengths.\n ", "desc": "Merge repeated labels into single labels.", "type": "API"}, {"name": "tf.nn.compute_accidental_hits", "docs": "Compute the position ids in `sampled_candidates` matching `true_classes`.\n\n In Candidate Sampling, this operation facilitates virtually removing\n sampled classes which happen to match target classes. This is done\n in Sampled Softmax and Sampled Logistic.\n\n See our [Candidate Sampling Algorithms\n Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n\n We presuppose that the `sampled_candidates` are unique.\n\n We call it an 'accidental hit' when one of the target classes\n matches one of the sampled classes. This operation reports\n accidental hits as triples `(index, id, weight)`, where `index`\n represents the row number in `true_classes`, `id` represents the\n position in `sampled_candidates`, and weight is `-FLOAT_MAX`.\n\n The result of this op should be passed through a `sparse_to_dense`\n operation, then added to the logits of the sampled classes. This\n removes the contradictory effect of accidentally sampling the true\n target classes as noise classes for the same example.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled_candidates output of CandidateSampler.\n num_true: An `int`. The number of target classes per training example.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n indices: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.\n Values indicate rows in `true_classes`.\n ids: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.\n Values indicate positions in `sampled_candidates`.\n weights: A `Tensor` of type `float` and shape `[num_accidental_hits]`.\n Each value is `-FLOAT_MAX`.\n\n ", "desc": "Compute the position ids in `sampled_candidates` matching `true_classes`.", "type": "API"}, {"name": "tf.nn.compute_average_loss", "docs": "Scales per-example losses with sample_weights and computes their average.\n\n Usage with distribution strategy and custom training loop:\n\n ```python\n with strategy.scope():\n def compute_loss(labels, predictions, sample_weight=None):\n\n # If you are using a `Loss` class instead, set reduction to `NONE` so that\n # we can do the reduction afterwards and divide by global batch size.\n per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, predictions)\n\n # Compute loss that is scaled by sample_weight and by global batch size.\n return tf.nn.compute_average_loss(\n per_example_loss,\n sample_weight=sample_weight,\n global_batch_size=GLOBAL_BATCH_SIZE)\n ```\n\n Args:\n per_example_loss: Per-example loss.\n sample_weight: Optional weighting for each example.\n global_batch_size: Optional global batch size value. Defaults to (size of\n first dimension of `losses`) * (number of replicas).\n\n Returns:\n Scalar loss value.\n ", "desc": "Scales per-example losses with sample_weights and computes their average.", "type": "API"}, {"name": "tf.nn.conv_transpose", "docs": "The transpose of `convolution`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d`\n rather than an actual deconvolution.\n\n Args:\n input: An N+2 dimensional `Tensor` of shape\n `[batch_size] + input_spatial_shape + [in_channels]` if data_format does\n not start with \"NC\" (default), or\n `[batch_size, in_channels] + input_spatial_shape` if data_format starts\n with \"NC\". It must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n filters: An N+2 dimensional `Tensor` with the same type as `input` and\n shape `spatial_filter_shape + [in_channels, out_channels]`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the spatial dimensions. By default\n the `N` and `C` dimensions are set to 0. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n dilations: An int or list of `ints` that has length `1`, `N` or `N+2`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the spatial dimensions. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details.\n name: A name for the operation (optional). If not specified \"conv_transpose\"\n is used.\n\n Returns:\n A `Tensor` with the same type as `value`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `convolution`.", "type": "API"}, {"name": "tf.nn.conv1d", "docs": "Computes a 1-D convolution given 3-D input and filter tensors.\n\n Given an input tensor of shape\n `batch_shape + [in_width, in_channels]`\n if `data_format` is `\"NWC\"`, or\n `batch_shape + [in_channels, in_width]`\n if `data_format` is `\"NCW\"`,\n and a filter / kernel tensor of shape\n `[filter_width, in_channels, out_channels]`, this op reshapes\n the arguments to pass them to `conv2d` to perform the equivalent\n convolution operation.\n\n Internally, this op reshapes the input tensors and invokes `tf.nn.conv2d`.\n For example, if `data_format` does not start with `\"NC\"`, a tensor of shape\n `batch_shape + [in_width, in_channels]`\n is reshaped to\n `batch_shape + [1, in_width, in_channels]`,\n and the filter is reshaped to\n `[1, filter_width, in_channels, out_channels]`.\n The result is then reshaped back to\n `batch_shape + [out_width, out_channels]`\n \\(where out_width is a function of the stride and padding as in conv2d\\) and\n returned to the caller.\n\n Args:\n input: A Tensor of rank at least 3. Must be of type `float16`, `float32`, or\n `float64`.\n filters: A Tensor of rank at least 3. Must have the same type as `input`.\n stride: An int or list of `ints` that has length `1` or `3`. The number of\n entries by which the filter is moved right at each step.\n padding: 'SAME' or 'VALID'. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional `string` from `\"NWC\", \"NCW\"`. Defaults to `\"NWC\"`,\n the data is stored in the order of\n `batch_shape + [in_width, in_channels]`. The `\"NCW\"` format stores data\n as `batch_shape + [in_channels, in_width]`.\n dilations: An int or list of `ints` that has length `1` or `3` which\n defaults to 1. The dilation factor for each dimension of input. If set to\n k > 1, there will be k-1 skipped cells between each filter element on that\n dimension. Dilations in the batch and depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as input.\n\n Raises:\n ValueError: if `data_format` is invalid.\n ", "desc": "Computes a 1-D convolution given 3-D input and filter tensors.", "type": "API"}, {"name": "tf.nn.conv1d_transpose", "docs": "The transpose of `conv1d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is actually the transpose (gradient) of `conv1d`\n rather than an actual deconvolution.\n\n Args:\n input: A 3-D `Tensor` of type `float` and shape\n `[batch, in_width, in_channels]` for `NWC` data format or\n `[batch, in_channels, in_width]` for `NCW` data format.\n filters: A 3-D `Tensor` with the same type as `input` and shape\n `[filter_width, output_channels, in_channels]`. `filter`'s\n `in_channels` dimension must match that of `input`.\n output_shape: A 1-D `Tensor`, containing three elements, representing the\n output shape of the deconvolution op.\n strides: An int or list of `ints` that has length `1` or `3`. The number of\n entries by which the filter is moved right at each step.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. `'NWC'` and `'NCW'` are supported.\n dilations: An int or list of `ints` that has length `1` or `3` which\n defaults to 1. The dilation factor for each dimension of input. If set to\n k > 1, there will be k-1 skipped cells between each filter element on that\n dimension. Dilations in the batch and depth dimensions must be 1.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `input`.\n\n Raises:\n ValueError: If input/output depth does not match `filter`'s shape, if\n `output_shape` is not at 3-element vector, if `padding` is other than\n `'VALID'` or `'SAME'`, or if `data_format` is invalid.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv1d`.", "type": "API"}, {"name": "tf.nn.conv2d", "docs": "Computes a 2-D convolution given `input` and 4-D `filters` tensors.\n\n The `input` tensor may have rank `4` or higher, where shape dimensions `[:-3]`\n are considered batch dimensions (`batch_shape`).\n\n Given an input tensor of shape\n `batch_shape + [in_height, in_width, in_channels]` and a filter / kernel\n tensor of shape `[filter_height, filter_width, in_channels, out_channels]`,\n this op performs the following:\n\n 1. Flattens the filter to a 2-D matrix with shape\n `[filter_height * filter_width * in_channels, output_channels]`.\n 2. Extracts image patches from the input tensor to form a *virtual*\n tensor of shape `[batch, out_height, out_width,\n filter_height * filter_width * in_channels]`.\n 3. For each patch, right-multiplies the filter matrix and the image patch\n vector.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k] =\n sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *\n filter[di, dj, q, k]\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n\n Usage Example:\n\n >>> x_in = np.array([[\n ... [[2], [1], [2], [0], [1]],\n ... [[1], [3], [2], [2], [3]],\n ... [[1], [1], [3], [3], [0]],\n ... [[2], [2], [0], [1], [1]],\n ... [[0], [0], [3], [1], [2]], ]])\n >>> kernel_in = np.array([\n ... [ [[2, 0.1]], [[3, 0.2]] ],\n ... [ [[0, 0.3]],[[1, 0.4]] ], ])\n >>> x = tf.constant(x_in, dtype=tf.float32)\n >>> kernel = tf.constant(kernel_in, dtype=tf.float32)\n >>> tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID')\n \n\n Args:\n input: A `Tensor`. Must be one of the following types:\n `half`, `bfloat16`, `float32`, `float64`.\n A Tensor of rank at least 4. The dimension order is interpreted according\n to the value of `data_format`; with the all-but-inner-3 dimensions acting\n as batch dimensions. See below for details.\n filters: A `Tensor`. Must have the same type as `input`.\n A 4-D tensor of shape\n `[filter_height, filter_width, in_channels, out_channels]`\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the `H` and `W` dimension. By default\n the `N` and `C` dimensions are set to 1. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`.\n Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n `batch_shape + [height, width, channels]`.\n Alternatively, the format could be \"NCHW\", the data storage order of:\n `batch_shape + [channels, height, width]`.\n dilations: An int or list of `ints` that has length `1`, `2` or `4`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `H` and `W` dimension. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 4-d tensor\n must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input` and the same outer batch shape.\n ", "desc": "Computes a 2-D convolution given `input` and 4-D `filters` tensors.", "type": "API"}, {"name": "tf.nn.conv2d_transpose", "docs": "The transpose of `conv2d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of\n `atrous_conv2d` rather than an actual deconvolution.\n\n Args:\n input: A 4-D `Tensor` of type `float` and shape `[batch, height, width,\n in_channels]` for `NHWC` data format or `[batch, in_channels, height,\n width]` for `NCHW` data format.\n filters: A 4-D `Tensor` with the same type as `input` and shape `[height,\n width, output_channels, in_channels]`. `filter`'s `in_channels` dimension\n must match that of `input`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the `H` and `W` dimension. By default\n the `N` and `C` dimensions are set to 0. The dimension order is determined\n by the value of `data_format`, see below for details.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: A string. 'NHWC' and 'NCHW' are supported.\n dilations: An int or list of `ints` that has length `1`, `2` or `4`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `H` and `W` dimension. By\n default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 4-d tensor\n must be 1.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `input`.\n\n Raises:\n ValueError: If input/output depth does not match `filter`'s shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv2d`.", "type": "API"}, {"name": "tf.nn.conv3d", "docs": "Computes a 3-D convolution given 5-D `input` and `filters` tensors.\n\n In signal processing, cross-correlation is a measure of similarity of\n two waveforms as a function of a time-lag applied to one of them. This\n is also known as a sliding dot product or sliding inner-product.\n\n Our Conv3D implements a form of cross-correlation.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, in_depth, in_height, in_width, in_channels]`.\n filters: A `Tensor`. Must have the same type as `input`.\n Shape `[filter_depth, filter_height, filter_width, in_channels,\n out_channels]`. `in_channels` must match between `input` and `filters`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 3-D convolution given 5-D `input` and `filters` tensors.", "type": "API"}, {"name": "tf.nn.conv3d_transpose", "docs": "The transpose of `conv3d`.\n\n This operation is sometimes called \"deconvolution\" after\n (Zeiler et al., 2010), but is really the transpose (gradient) of `conv3d`\n rather than an actual deconvolution.\n\n Args:\n input: A 5-D `Tensor` of type `float` and shape `[batch, depth, height,\n width, in_channels]` for `NDHWC` data format or `[batch, in_channels,\n depth, height, width]` for `NCDHW` data format.\n filters: A 5-D `Tensor` with the same type as `input` and shape `[depth,\n height, width, output_channels, in_channels]`. `filter`'s `in_channels`\n dimension must match that of `input`.\n output_shape: A 1-D `Tensor` representing the output shape of the\n deconvolution op.\n strides: An int or list of `ints` that has length `1`, `3` or `5`. The\n stride of the sliding window for each dimension of `input`. If a single\n value is given it is replicated in the `D`, `H` and `W` dimension. By\n default the `N` and `C` dimensions are set to 0. The dimension order is\n determined by the value of `data_format`, see below for details.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string. 'NDHWC' and 'NCDHW' are supported.\n dilations: An int or list of `ints` that has length `1`, `3` or `5`,\n defaults to 1. The dilation factor for each dimension of`input`. If a\n single value is given it is replicated in the `D`, `H` and `W` dimension.\n By default the `N` and `C` dimensions are set to 1. If set to k > 1, there\n will be k-1 skipped cells between each filter element on that dimension.\n The dimension order is determined by the value of `data_format`, see above\n for details. Dilations in the batch and depth dimensions if a 5-d tensor\n must be 1.\n name: Optional name for the returned tensor.\n\n Returns:\n A `Tensor` with the same type as `input`.\n\n References:\n Deconvolutional Networks:\n [Zeiler et al., 2010]\n (https://ieeexplore.ieee.org/abstract/document/5539957)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.4023&rep=rep1&type=pdf))\n ", "desc": "The transpose of `conv3d`.", "type": "API"}, {"name": "tf.nn.convolution", "docs": "Computes sums of N-D convolutions (actually cross-correlation).\n\n This also supports either output striding via the optional `strides` parameter\n or atrous convolution (also known as convolution with holes or dilated\n convolution, based on the French word \"trous\" meaning holes in English) via\n the optional `dilations` parameter. Currently, however, output striding\n is not supported for atrous convolutions.\n\n Specifically, in the case that `data_format` does not start with \"NC\", given\n a rank (N+2) `input` Tensor of shape\n\n [num_batches,\n input_spatial_shape[0],\n ...,\n input_spatial_shape[N-1],\n num_input_channels],\n\n a rank (N+2) `filters` Tensor of shape\n\n [spatial_filter_shape[0],\n ...,\n spatial_filter_shape[N-1],\n num_input_channels,\n num_output_channels],\n\n an optional `dilations` tensor of shape N (defaults to `[1]*N`) specifying\n the filter upsampling/input downsampling rate, and an optional list of N\n `strides` (defaults to `[1]*N`), this computes for each N-D spatial output\n position `(x[0], ..., x[N-1])`:\n\n ```\n output[b, x[0], ..., x[N-1], k] =\n sum_{z[0], ..., z[N-1], q}\n filter[z[0], ..., z[N-1], q, k] *\n padded_input[b,\n x[0]*strides[0] + dilation_rate[0]*z[0],\n ...,\n x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],\n q]\n ```\n\n where b is the index into the batch, k is the output channel number, q is the\n input channel number, and z is the N-D spatial offset within the filter. Here,\n `padded_input` is obtained by zero padding the input using an effective\n spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and\n output striding `strides`.\n\n In the case that `data_format` does start with `\"NC\"`, the `input` and output\n (but not the `filters`) are simply transposed as follows:\n\n ```python\n convolution(input, data_format, **kwargs) =\n tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]),\n **kwargs),\n [0, N+1] + range(1, N+1))\n ```\n\n It is required that 1 <= N <= 3.\n\n Args:\n input: An (N+2)-D `Tensor` of type `T`, of shape\n `[batch_size] + input_spatial_shape + [in_channels]` if data_format does\n not start with \"NC\" (default), or\n `[batch_size, in_channels] + input_spatial_shape` if data_format starts\n with \"NC\".\n filters: An (N+2)-D `Tensor` with the same type as `input` and shape\n `spatial_filter_shape + [in_channels, out_channels]`.\n padding: A string, either `\"VALID\"` or `\"SAME\"`. The padding algorithm.\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input such that output has the same\n height/width dimension as the input when the strides are 1. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n strides: Optional. Sequence of N ints >= 1. Specifies the output stride.\n Defaults to `[1]*N`. If any value of strides is > 1, then all values of\n dilation_rate must be 1.\n dilations: Optional. Sequence of N ints >= 1. Specifies the filter\n upsampling/input downsampling rate. In the literature, the same parameter\n is sometimes called `input stride` or `dilation`. The effective filter\n size used for the convolution will be `spatial_filter_shape +\n (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting\n (dilation_rate[i]-1) zeros between consecutive elements of the original\n filter in each spatial dimension i. If any value of dilation_rate is > 1,\n then all values of strides must be 1.\n name: Optional name for the returned tensor.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n\n Returns:\n A `Tensor` with the same type as `input` of shape\n\n `[batch_size] + output_spatial_shape + [out_channels]`\n\n if data_format is None or does not start with \"NC\", or\n\n `[batch_size, out_channels] + output_spatial_shape`\n\n if data_format starts with \"NC\",\n where `output_spatial_shape` depends on the value of `padding`.\n\n If padding == \"SAME\":\n output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])\n\n If padding == \"VALID\":\n output_spatial_shape[i] =\n ceil((input_spatial_shape[i] -\n (spatial_filter_shape[i]-1) * dilation_rate[i])\n / strides[i]).\n\n Raises:\n ValueError: If input/output depth does not match `filters` shape, if padding\n is other than `\"VALID\"` or `\"SAME\"`, or if data_format is invalid.\n\n ", "desc": "Computes sums of N-D convolutions (actually cross-correlation).", "type": "API"}, {"name": "tf.nn.crelu", "docs": "Computes Concatenated ReLU.\n\n Concatenates a ReLU which selects only the positive part of the activation\n with a ReLU which selects only the *negative* part of the activation.\n Note that as a result this non-linearity doubles the depth of the activations.\n Source: [Understanding and Improving Convolutional Neural Networks via\n Concatenated Rectified Linear Units. W. Shang, et\n al.](https://arxiv.org/abs/1603.05201)\n\n Args:\n features: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,\n `int16`, or `int8`.\n name: A name for the operation (optional).\n axis: The axis that the output values are concatenated along. Default is -1.\n\n Returns:\n A `Tensor` with the same type as `features`.\n\n References:\n Understanding and Improving Convolutional Neural Networks via Concatenated\n Rectified Linear Units:\n [Shang et al., 2016](http://proceedings.mlr.press/v48/shang16)\n ([pdf](http://proceedings.mlr.press/v48/shang16.pdf))\n ", "desc": "Computes Concatenated ReLU.", "type": "API"}, {"name": "tf.nn.ctc_beam_search_decoder", "docs": "Performs beam search decoding on the logits given in input.\n\n **Note** Although in general greedy search is a special case of beam-search\n with `top_paths=1` and `beam_width=1`, `ctc_beam_search_decoder` differs\n from `ctc_greedy_decoder` in the treatment of blanks when computing the\n probability of a sequence:\n - `ctc_beam_search_decoder` treats blanks as sequence termination\n - `ctc_greedy_decoder` treats blanks as regular elements\n\n Args:\n inputs: 3-D `float` `Tensor`, size `[max_time, batch_size, num_classes]`.\n The logits.\n sequence_length: 1-D `int32` vector containing sequence lengths, having size\n `[batch_size]`.\n beam_width: An int scalar >= 0 (beam search beam width).\n top_paths: An int scalar >= 0, <= beam_width (controls output size).\n\n Returns:\n A tuple `(decoded, log_probabilities)` where\n\n decoded: A list of length top_paths, where `decoded[j]`\n is a `SparseTensor` containing the decoded outputs:\n\n `decoded[j].indices`: Indices matrix `[total_decoded_outputs[j], 2]`;\n The rows store: `[batch, time]`.\n\n `decoded[j].values`: Values vector, size `[total_decoded_outputs[j]]`.\n The vector stores the decoded classes for beam `j`.\n\n `decoded[j].dense_shape`: Shape vector, size `(2)`.\n The shape values are: `[batch_size, max_decoded_length[j]]`.\n\n log_probability: A `float` matrix `[batch_size, top_paths]` containing\n sequence log-probabilities.\n ", "desc": "Performs beam search decoding on the logits given in input.", "type": "API"}, {"name": "tf.nn.ctc_greedy_decoder", "docs": "Performs greedy decoding on the logits given in input (best path).\n\n Given a tensor as `inputs`, the `blank_index` parameter defines the class\n index of the blank symbol.\n\n For example:\n\n If `blank_index` is equal to 1:\n\n >>> inf = float(\"inf\")\n >>> logits = tf.constant([[[ 0., -inf, -inf],\n ... [ -2.3, -inf, -0.1]],\n ... [[ -inf, -0.5, -inf],\n ... [ -inf, -inf, -0.1]],\n ... [[ -inf, -inf, -inf],\n ... [ -0.1, -inf, -2.3]]])\n >>> seq_lens = tf.constant([2, 3])\n >>> outputs = tf.nn.ctc_greedy_decoder(\n ... logits,\n ... seq_lens,\n ... blank_index=1)\n\n Notes:\n\n - Unlike `ctc_beam_search_decoder`, `ctc_greedy_decoder` considers blanks\n as regular elements when computing the probability of a sequence.\n - Default `blank_index` is `(num_classes - 1)`, unless overriden.\n\n If `merge_repeated` is `True`, merge repeated classes in output.\n This means that if consecutive logits' maximum indices are the same,\n only the first of these is emitted. The sequence `A B B * B * B` (where '*'\n is the blank label) becomes\n\n * `A B B B` if `merge_repeated=True`.\n * `A B B B B` if `merge_repeated=False`.\n\n Args:\n inputs: 3-D `float` `Tensor` sized `[max_time, batch_size, num_classes]`.\n The logits.\n sequence_length: 1-D `int32` vector containing sequence lengths, having size\n `[batch_size]`.\n merge_repeated: Boolean. Default: True.\n blank_index: (Optional). Default: `num_classes - 1`. Define the class index\n to use for the blank label. Negative values will start from num_classes,\n ie, -1 will reproduce the ctc_greedy_decoder behavior of using\n num_classes - 1 for the blank symbol, which corresponds to the default.\n\n Returns:\n A tuple `(decoded, neg_sum_logits)` where\n\n decoded: A single-element list. `decoded[0]`\n is an `SparseTensor` containing the decoded outputs s.t.:\n\n `decoded.indices`: Indices matrix `(total_decoded_outputs, 2)`.\n The rows store: `[batch, time]`.\n\n `decoded.values`: Values vector, size `(total_decoded_outputs)`.\n The vector stores the decoded classes.\n\n `decoded.dense_shape`: Shape vector, size `(2)`.\n The shape values are: `[batch_size, max_decoded_length]`\n\n neg_sum_logits: A `float` matrix `(batch_size x 1)` containing, for the\n sequence found, the negative of the sum of the greatest logit at each\n timeframe.\n ", "desc": "Performs greedy decoding on the logits given in input (best path).", "type": "API"}, {"name": "tf.nn.ctc_loss", "docs": "Computes CTC (Connectionist Temporal Classification) loss.\n\n This op implements the CTC loss as presented in (Graves et al., 2006).\n\n Notes:\n\n - Same as the \"Classic CTC\" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss\n setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True\n - Labels may be supplied as either a dense, zero-padded tensor with a\n vector of label sequence lengths OR as a SparseTensor.\n - On TPU and GPU: Only dense padded labels are supported.\n - On CPU: Caller may use SparseTensor or dense padded labels but calling with\n a SparseTensor will be significantly faster.\n - Default blank label is 0 rather num_classes - 1, unless overridden by\n blank_index.\n\n Args:\n labels: tensor of shape [batch_size, max_label_seq_length] or SparseTensor\n logits: tensor of shape [frames, batch_size, num_labels], if\n logits_time_major == False, shape is [batch_size, frames, num_labels].\n label_length: tensor of shape [batch_size], None if labels is SparseTensor\n Length of reference label sequence in labels.\n logit_length: tensor of shape [batch_size] Length of input sequence in\n logits.\n logits_time_major: (optional) If True (default), logits is shaped [time,\n batch, logits]. If False, shape is [batch, time, logits]\n unique: (optional) Unique label indices as computed by\n ctc_unique_labels(labels). If supplied, enable a faster, memory efficient\n implementation on TPU.\n blank_index: (optional) Set the class index to use for the blank label.\n Negative values will start from num_classes, ie, -1 will reproduce the\n ctc_loss behavior of using num_classes - 1 for the blank symbol. There is\n some memory/performance overhead to switching from the default of 0 as an\n additional shifted copy of the logits may be created.\n name: A name for this `Op`. Defaults to \"ctc_loss_dense\".\n\n Returns:\n loss: tensor of shape [batch_size], negative log probabilities.\n\n References:\n Connectionist Temporal Classification - Labeling Unsegmented Sequence Data\n with Recurrent Neural Networks:\n [Graves et al., 2006](https://dl.acm.org/citation.cfm?id=1143891)\n ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf))\n ", "desc": "Computes CTC (Connectionist Temporal Classification) loss.", "type": "API"}, {"name": "tf.nn.ctc_unique_labels", "docs": "Get unique labels and indices for batched labels for `tf.nn.ctc_loss`.\n\n For use with `tf.nn.ctc_loss` optional argument `unique`: This op can be\n used to preprocess labels in input pipeline to for better speed/memory use\n computing the ctc loss on TPU.\n\n Example:\n ctc_unique_labels([[3, 4, 4, 3]]) ->\n unique labels padded with 0: [[3, 4, 0, 0]]\n indices of original labels in unique: [0, 1, 1, 0]\n\n Args:\n labels: tensor of shape [batch_size, max_label_length] padded with 0.\n name: A name for this `Op`. Defaults to \"ctc_unique_labels\".\n\n Returns:\n tuple of\n - unique labels, tensor of shape `[batch_size, max_label_length]`\n - indices into unique labels, shape `[batch_size, max_label_length]`\n ", "desc": "Get unique labels and indices for batched labels for `tf.nn.ctc_loss`.", "type": "API"}, {"name": "tf.nn.depth_to_space", "docs": "DepthToSpace for tensors of type T.\n\n Rearranges data from depth into blocks of spatial data.\n This is the reverse transformation of SpaceToDepth. More specifically,\n this op outputs a copy of the input tensor where values from the `depth`\n dimension are moved in spatial blocks to the `height` and `width` dimensions.\n The attr `block_size` indicates the input block size and how the data is moved.\n\n * Chunks of data of size `block_size * block_size` from depth are rearranged\n into non-overlapping blocks of size `block_size x block_size`\n * The width the output tensor is `input_depth * block_size`, whereas the\n height is `input_height * block_size`.\n * The Y, X coordinates within each block of the output image are determined\n by the high order component of the input channel index.\n * The depth of the input tensor must be divisible by\n `block_size * block_size`.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates\n within the input image, bX, bY means coordinates\n within the output block, oC means output channels).\n The output would be the input transposed to the following layout:\n n,iY,bY,iX,bX,oC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 1, 1, 4]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1, 2, 3, 4]]]]\n\n ```\n\n This operation will output a tensor of shape `[1, 2, 2, 1]`:\n\n ```\n [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`,\n the corresponding output will have 2x2 elements and will have a depth of\n 1 channel (1 = `4 / (block_size * block_size)`).\n The output element shape is `[2, 2, 1]`.\n\n For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.\n\n ```\n x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n This operation, for block size of 2, will return the following tensor of shape\n `[1, 2, 2, 3]`\n\n ```\n [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n\n ```\n\n Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 4 4 1]`:\n\n ```\n x = [[[ [1], [2], [5], [6]],\n [ [3], [4], [7], [8]],\n [ [9], [10], [13], [14]],\n [ [11], [12], [15], [16]]]]\n\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`.\n The size of the spatial block, same as in Space2Depth.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "DepthToSpace for tensors of type T.", "type": "API"}, {"name": "tf.nn.depthwise_conv2d", "docs": "Depthwise 2-D convolution.\n\n Given a 4D input tensor ('NHWC' or 'NCHW' data formats)\n and a filter tensor of shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`\n containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d`\n applies a different filter to each input channel (expanding from 1 channel\n to `channel_multiplier` channels for each), then concatenates the results\n together. The output has `in_channels * channel_multiplier` channels.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}\n filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di,\n strides[2] * j + rate[1] * dj, k]\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the\n same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n If any value in `rate` is greater than 1, we perform atrous depthwise\n convolution, in which case all values in the `strides` tensor must be equal\n to 1.\n\n Usage Example:\n\n >>> x = np.array([\n ... [1., 2.],\n ... [3., 4.],\n ... [5., 6.]\n ... ], dtype=np.float32).reshape((1, 3, 2, 1))\n >>> kernel = np.array([\n ... [1., 2.],\n ... [3., 4]\n ... ], dtype=np.float32).reshape((2, 1, 1, 2))\n >>> tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],\n ... padding='VALID').numpy()\n array([[[[10., 14.],\n [14., 20.]],\n [[18., 26.],\n [22., 32.]]]], dtype=float32)\n\n >>> tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],\n ... padding=[[0, 0], [1, 0], [1, 0], [0, 0]]).numpy()\n array([[[[ 0., 0.],\n [ 3., 4.],\n [ 6., 8.]],\n [[ 0., 0.],\n [10., 14.],\n [14., 20.]],\n [[ 0., 0.],\n [18., 26.],\n [22., 32.]]]], dtype=float32)\n\n Args:\n input: 4-D with shape according to `data_format`.\n filter: 4-D with shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`.\n strides: 1-D of size 4. The stride of the sliding window for each\n dimension of `input`.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. When explicit padding is used and data_format\n is `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: The data format for input. Either \"NHWC\" (default) or \"NCHW\".\n dilations: 1-D of size 2. The dilation rate in which we sample input values\n across the `height` and `width` dimensions in atrous convolution. If it is\n greater than 1, then all values of strides must be 1.\n name: A name for this operation (optional).\n\n Returns:\n A 4-D `Tensor` with shape according to `data_format`. E.g., for\n \"NHWC\" format, shape is\n `[batch, out_height, out_width, in_channels * channel_multiplier].`\n ", "desc": "Depthwise 2-D convolution.", "type": "API"}, {"name": "tf.nn.depthwise_conv2d_backprop_filter", "docs": "Computes the gradients of depthwise convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape based on `data_format`. For example,\n if `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height,\n in_width, in_channels]` tensor.\n filter_sizes: A `Tensor` of type `int32`. An integer vector representing the\n tensor shape of `filter`, where `filter` is a 4-D `[filter_height,\n filter_width, in_channels, depthwise_multiplier]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`. 4-D with shape\n based on `data_format`. For example, if `data_format` is 'NHWC' then\n out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the filter.", "type": "API"}, {"name": "tf.nn.depthwise_conv2d_backprop_input", "docs": "Computes the gradients of depthwise convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`. An integer vector representing the\n shape of `input`, based on `data_format`. For example, if `data_format`\n is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`,\n `float32`, `float64`. 4-D with shape `[filter_height, filter_width,\n in_channels, depthwise_multiplier]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`. 4-D with\n shape based on `data_format`. For example, if `data_format` is 'NHWC'\n then out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`. The stride of the sliding window for each\n dimension of the input of the convolution.\n padding: Controls how to pad the image before applying the convolution. Can\n be the string `\"SAME\"` or `\"VALID\"` indicating the type of padding\n algorithm to use, or a list indicating the explicit paddings at the start\n and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to\n `\"NHWC\"`. Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of: [batch, height,\n width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`. 1-D\n tensor of length 4. The dilation factor for each dimension of `input`. If\n set to k > 1, there will be k-1 skipped cells between each filter element\n on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the input.", "type": "API"}, {"name": "tf.nn.dilation2d", "docs": "Computes the grayscale dilation of 4-D `input` and 3-D `filters` tensors.\n\n The `input` tensor has shape `[batch, in_height, in_width, depth]` and the\n `filters` tensor has shape `[filter_height, filter_width, depth]`, i.e., each\n input channel is processed independently of the others with its own\n structuring function. The `output` tensor has shape\n `[batch, out_height, out_width, depth]`. The spatial dimensions of the output\n tensor depend on the `padding` algorithm. We currently only support the\n default \"NHWC\" `data_format`.\n\n In detail, the grayscale morphological 2-D dilation is the max-sum correlation\n (for consistency with `conv2d`, we use unmirrored filters):\n\n output[b, y, x, c] =\n max_{dy, dx} input[b,\n strides[1] * y + rates[1] * dy,\n strides[2] * x + rates[2] * dx,\n c] +\n filters[dy, dx, c]\n\n Max-pooling is a special case when the filter has size equal to the pooling\n kernel size and contains all zeros.\n\n Note on duality: The dilation of `input` by the `filters` is equal to the\n negation of the erosion of `-input` by the reflected `filters`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`,\n `uint32`, `uint64`.\n 4-D with shape `[batch, in_height, in_width, depth]`.\n filters: A `Tensor`. Must have the same type as `input`.\n 3-D with shape `[filter_height, filter_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the input\n tensor. Must be: `[1, stride_height, stride_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A `string`, only `\"NHWC\"` is currently supported.\n dilations: A list of `ints` that has length `>= 4`.\n The input stride for atrous morphological dilation. Must be:\n `[1, rate_height, rate_width, 1]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the grayscale dilation of 4-D `input` and 3-D `filters` tensors.", "type": "API"}, {"name": "tf.nn.dropout", "docs": "Computes dropout: randomly sets elements to zero to prevent overfitting.\n\n Warning: You should consider using\n `tf.nn.experimental.stateless_dropout` instead of this function. The\n difference between `tf.nn.experimental.stateless_dropout` and this\n function is analogous to the difference between\n `tf.random.stateless_uniform` and `tf.random.uniform`. Please see\n [Random number\n generation](https://www.tensorflow.org/guide/random_numbers) guide\n for a detailed description of the various RNG systems in TF. As the\n guide states, legacy stateful RNG ops like `tf.random.uniform` and\n `tf.nn.dropout` are not deprecated yet but highly discouraged,\n because their states are hard to control.\n\n Note: The behavior of dropout has changed between TensorFlow 1.x and 2.x.\n When converting 1.x code, please use named arguments to ensure behavior stays\n consistent.\n\n See also: `tf.keras.layers.Dropout` for a dropout layer.\n\n [Dropout](https://arxiv.org/abs/1207.0580) is useful for regularizing DNN\n models. Inputs elements are randomly set to zero (and the other elements are\n rescaled). This encourages each node to be independently useful, as it cannot\n rely on the output of other nodes.\n\n More precisely: With probability `rate` elements of `x` are set to `0`.\n The remaining elements are scaled up by `1.0 / (1 - rate)`, so that the\n expected value is preserved.\n\n >>> tf.random.set_seed(0)\n >>> x = tf.ones([3,5])\n >>> tf.nn.dropout(x, rate = 0.5, seed = 1).numpy()\n array([[2., 0., 0., 2., 2.],\n [2., 2., 2., 2., 2.],\n [2., 0., 2., 0., 2.]], dtype=float32)\n\n >>> tf.random.set_seed(0)\n >>> x = tf.ones([3,5])\n >>> tf.nn.dropout(x, rate = 0.8, seed = 1).numpy()\n array([[0., 0., 0., 5., 5.],\n [0., 5., 0., 5., 0.],\n [5., 0., 5., 0., 5.]], dtype=float32)\n\n >>> tf.nn.dropout(x, rate = 0.0) == x\n \n\n\n By default, each element is kept or dropped independently. If `noise_shape`\n is specified, it must be\n [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`\n will make independent decisions. This is useful for dropping whole\n channels from an image or sequence. For example:\n\n >>> tf.random.set_seed(0)\n >>> x = tf.ones([3,10])\n >>> tf.nn.dropout(x, rate = 2/3, noise_shape=[1,10], seed=1).numpy()\n array([[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],\n [0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],\n [0., 0., 0., 3., 3., 0., 3., 3., 3., 0.]], dtype=float32)\n\n Args:\n x: A floating point tensor.\n rate: A scalar `Tensor` with the same type as x. The probability\n that each element is dropped. For example, setting rate=0.1 would drop\n 10% of input elements.\n noise_shape: A 1-D integer `Tensor`, representing the\n shape for randomly generated keep/drop flags.\n seed: A Python integer. Used to create random seeds. See\n `tf.random.set_seed` for behavior.\n name: A name for this operation (optional).\n\n Returns:\n A Tensor of the same shape of `x`.\n\n Raises:\n ValueError: If `rate` is not in `[0, 1)` or if `x` is not a floating point\n tensor. `rate=1` is disallowed, because the output would be all zeros,\n which is likely not what was intended.\n ", "desc": "Computes dropout: randomly sets elements to zero to prevent overfitting.", "type": "API"}, {"name": "tf.nn.elu", "docs": "Computes the exponential linear function.\n\n The ELU function is defined as:\n\n * $ e ^ x - 1 $ if $ x < 0 $\n * $ x $ if $ x >= 0 $\n\n Examples:\n\n >>> tf.nn.elu(1.0)\n \n >>> tf.nn.elu(0.0)\n \n >>> tf.nn.elu(-1000.0)\n \n\n See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)\n ](http://arxiv.org/abs/1511.07289)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes the exponential linear function.", "type": "API"}, {"name": "tf.nn.embedding_lookup", "docs": "Looks up embeddings for the given `ids` from a list of tensors.\n\n This function is used to perform parallel lookups on the list of tensors in\n `params`. It is a generalization of `tf.gather`, where `params` is\n interpreted as a partitioning of a large embedding tensor.\n\n If `len(params) > 1`, each element `id` of `ids` is partitioned between the\n elements of `params` according to the \"div\" partition strategy, which means we\n assign ids to partitions in a contiguous manner. For instance, 13 ids are\n split across 5 partitions as:\n `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.\n\n If the id space does not evenly divide the number of partitions, each of the\n first `(max_id + 1) % len(params)` partitions will be assigned one more id.\n\n The results of the lookup are concatenated into a dense\n tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.\n\n Args:\n params: A single tensor representing the complete embedding tensor, or a\n list of tensors all of same shape except for the first dimension,\n representing sharded embedding tensors following \"div\" partition strategy.\n ids: A `Tensor` with type `int32` or `int64` containing the ids to be looked\n up in `params`.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is larger\n than this value.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as the tensors in `params`.\n\n For instance, if `params` is a 5x2 matrix:\n\n ```python\n [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]\n ```\n\n or a list of matrices:\n\n ```python\n params[0]: [[1, 2], [3, 4]]\n params[1]: [[5, 6], [7, 8]]\n params[2]: [[9, 10]]\n ```\n\n and `ids` is:\n\n ```python\n [0, 3, 4]\n ```\n\n The output will be a 3x2 matrix:\n\n ```python\n [[1, 2], [7, 8], [9, 10]]\n ```\n\n Raises:\n ValueError: If `params` is empty.\n ", "desc": "Looks up embeddings for the given `ids` from a list of tensors.", "type": "API"}, {"name": "tf.nn.embedding_lookup_sparse", "docs": "Looks up embeddings for the given ids and weights from a list of tensors.\n\n This op assumes that there is at least one id for each row in the dense tensor\n represented by sp_ids (i.e. there are no rows with empty features), and that\n all the indices of sp_ids are in canonical row-major order.\n\n `sp_ids` and `sp_weights` (if not None) are `SparseTensor`s with rank of 2.\n Embeddings are always aggregated along the last dimension.\n\n It also assumes that all id values lie in the range [0, p0), where p0\n is the sum of the size of params along dimension 0.\n\n If `len(params) > 1`, each element of `sp_ids` is partitioned between the\n elements of `params` according to the \"div\" partition strategy, which means we\n assign ids to partitions in a contiguous manner. For instance, 13 ids are\n split across 5 partitions as:\n `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.\n\n If the id space does not evenly divide the number of partitions, each of the\n first `(max_id + 1) % len(params)` partitions will be assigned one more id.\n\n Args:\n params: A single tensor representing the complete embedding tensor, or a\n list of tensors all of same shape except for the first dimension,\n representing sharded embedding tensors following \"div\" partition strategy.\n sp_ids: N x M `SparseTensor` of int64 ids where N is typically batch size\n and M is arbitrary.\n sp_weights: either a `SparseTensor` of float / double weights, or `None` to\n indicate all weights should be taken to be 1. If specified, `sp_weights`\n must have exactly the same shape and indices as `sp_ids`.\n combiner: A string specifying the reduction op. Currently \"mean\", \"sqrtn\"\n and \"sum\" are supported. \"sum\" computes the weighted sum of the embedding\n results for each row. \"mean\" is the weighted sum divided by the total\n weight. \"sqrtn\" is the weighted sum divided by the square root of the sum\n of the squares of the weights. Defaults to `mean`.\n max_norm: If not `None`, each embedding is clipped if its l2-norm is larger\n than this value, before combining.\n name: Optional name for the op.\n\n Returns:\n A dense tensor representing the combined embeddings for the\n sparse ids. For each row in the dense tensor represented by `sp_ids`, the op\n looks up the embeddings for all ids in that row, multiplies them by the\n corresponding weight, and combines these embeddings as specified.\n\n In other words, if\n\n `shape(combined params) = [p0, p1, ..., pm]`\n\n and\n\n `shape(sp_ids) = shape(sp_weights) = [d0, d1]`\n\n then\n\n `shape(output) = [d0, p1, ..., pm]`.\n\n For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are\n\n ```python\n [0, 0]: id 1, weight 2.0\n [0, 1]: id 3, weight 0.5\n [1, 0]: id 0, weight 1.0\n [2, 3]: id 1, weight 3.0\n ```\n\n with `combiner`=\"mean\", then the output will be a 3x20 matrix where\n\n ```python\n output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)\n output[1, :] = (params[0, :] * 1.0) / 1.0\n output[2, :] = (params[1, :] * 3.0) / 3.0\n ```\n\n Raises:\n TypeError: If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is\n neither `None` nor `SparseTensor`.\n ValueError: If `combiner` is not one of {\"mean\", \"sqrtn\", \"sum\"}.\n ", "desc": "Looks up embeddings for the given ids and weights from a list of tensors.", "type": "API"}, {"name": "tf.nn.erosion2d", "docs": "Computes the grayscale erosion of 4-D `value` and 3-D `filters` tensors.\n\n The `value` tensor has shape `[batch, in_height, in_width, depth]` and the\n `filters` tensor has shape `[filters_height, filters_width, depth]`, i.e.,\n each input channel is processed independently of the others with its own\n structuring function. The `output` tensor has shape\n `[batch, out_height, out_width, depth]`. The spatial dimensions of the\n output tensor depend on the `padding` algorithm. We currently only support the\n default \"NHWC\" `data_format`.\n\n In detail, the grayscale morphological 2-D erosion is given by:\n\n output[b, y, x, c] =\n min_{dy, dx} value[b,\n strides[1] * y - dilations[1] * dy,\n strides[2] * x - dilations[2] * dx,\n c] -\n filters[dy, dx, c]\n\n Duality: The erosion of `value` by the `filters` is equal to the negation of\n the dilation of `-value` by the reflected `filters`.\n\n Args:\n value: A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.\n filters: A `Tensor`. Must have the same type as `value`.\n 3-D with shape `[filters_height, filters_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The stride of the sliding window for each dimension of\n the input tensor. Must be: `[1, stride_height, stride_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A `string`, only `\"NHWC\"` is currently supported.\n dilations: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The input stride for atrous morphological dilation.\n Must be: `[1, rate_height, rate_width, 1]`.\n name: A name for the operation (optional). If not specified \"erosion2d\"\n is used.\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n 4-D with shape `[batch, out_height, out_width, depth]`.\n\n Raises:\n ValueError: If the `value` depth does not match `filters`' shape, or if\n padding is other than `'VALID'` or `'SAME'`.\n ", "desc": "Computes the grayscale erosion of 4-D `value` and 3-D `filters` tensors.", "type": "API"}, {"name": "tf.nn.fixed_unigram_candidate_sampler", "docs": "Samples a set of classes using the provided (fixed) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution is read from a file or passed in as an\n in-memory array. There is also an option to skew the distribution by\n applying a distortion power to the weights.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n vocab_file: Each valid line in this file (which should have a CSV-like\n format) corresponds to a valid word ID. IDs are in sequential order,\n starting from num_reserved_ids. The last entry in each line is expected\n to be a value corresponding to the count or relative probability. Exactly\n one of `vocab_file` and `unigrams` needs to be passed to this operation.\n distortion: The distortion is used to skew the unigram probability\n distribution. Each weight is first raised to the distortion's power\n before adding to the internal unigram distribution. As a result,\n `distortion = 1.0` gives regular unigram sampling (as defined by the vocab\n file), and `distortion = 0.0` gives a uniform distribution.\n num_reserved_ids: Optionally some reserved IDs can be added in the range\n `[0, num_reserved_ids)` by the users. One use case is that a special\n unknown word token is used as ID 0. These IDs will have a sampling\n probability of 0.\n num_shards: A sampler can be used to sample from a subset of the original\n range in order to speed up the whole computation through parallelism. This\n parameter (together with `shard`) indicates the number of partitions that\n are being used in the overall computation.\n shard: A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This\n parameter (together with `num_shards`) indicates the particular partition\n number of the operation, when partitioning is being used.\n unigrams: A list of unigram counts or probabilities, one per ID in\n sequential order. Exactly one of `vocab_file` and `unigrams` should be\n passed to this operation.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes using the provided (fixed) base distribution.", "type": "API"}, {"name": "tf.nn.fractional_avg_pool", "docs": "Performs fractional average pooling on the input.\n\n Fractional average pooling is similar to Fractional max pooling in the pooling\n region generation step. The only difference is that after pooling regions are\n generated, a mean operation is performed instead of a max operation in each\n pooling region.\n\n Args:\n value: A `Tensor`. 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: A list of `floats` that has length >= 4. Pooling ratio for\n each dimension of `value`, currently only supports row and col dimension\n and should be >= 1.0. For example, a valid pooling ratio looks like [1.0,\n 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't\n allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling\n ratio on height and width dimensions respectively.\n pseudo_random: An optional `bool`. Defaults to `False`. When set to `True`,\n generates the pooling sequence in a pseudorandom fashion, otherwise, in a\n random fashion. Check paper (Graham, 2015) for difference between\n pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`. When set to `True`,\n it means when pooling, the values at the boundary of adjacent pooling\n cells are used by both cells. For example:\n `index 0 1 2 3 4`\n `value 20 5 16 3 7`\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used\n twice. The result would be [20, 16] for fractional avg pooling.\n seed: An optional `int`. Defaults to `0`. If set to be non-zero, the\n random number generator is seeded by the given seed. Otherwise it is\n seeded by a random seed.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (`output`, `row_pooling_sequence`,\n `col_pooling_sequence`).\n output: Output `Tensor` after fractional avg pooling. Has the same type as\n `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n\n References:\n Fractional Max-Pooling:\n [Graham, 2015](https://arxiv.org/abs/1412.6071)\n ([pdf](https://arxiv.org/pdf/1412.6071.pdf))\n ", "desc": "Performs fractional average pooling on the input.", "type": "API"}, {"name": "tf.nn.fractional_max_pool", "docs": "Performs fractional max pooling on the input.\n\n Fractional max pooling is slightly different than regular max pooling. In\n regular max pooling, you downsize an input set by taking the maximum value of\n smaller N x N subsections of the set (often 2x2), and try to reduce the set by\n a factor of N, where N is an integer. Fractional max pooling, as you might\n expect from the word \"fractional\", means that the overall reduction ratio N\n does not have to be an integer.\n\n The sizes of the pooling regions are generated randomly but are fairly\n uniform. For example, let's look at the height dimension, and the constraints\n on the list of rows that will be pool boundaries.\n\n First we define the following:\n\n 1. input_row_length : the number of rows from the input set\n 2. output_row_length : which will be smaller than the input\n 3. alpha = input_row_length / output_row_length : our reduction ratio\n 4. K = floor(alpha)\n 5. row_pooling_sequence : this is the result list of pool boundary rows\n\n Then, row_pooling_sequence should satisfy:\n\n 1. a[0] = 0 : the first value of the sequence is 0\n 2. a[end] = input_row_length : the last value of the sequence is the size\n 3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size\n 4. length(row_pooling_sequence) = output_row_length+1\n\n Args:\n value: A `Tensor`. 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: An int or list of `ints` that has length `1`, `2` or `4`.\n Pooling ratio for each dimension of `value`, currently only supports row\n and col dimension and should be >= 1.0. For example, a valid pooling ratio\n looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0\n because we don't allow pooling on batch and channels dimensions. 1.44 and\n 1.73 are pooling ratio on height and width dimensions respectively.\n pseudo_random: An optional `bool`. Defaults to `False`. When set to `True`,\n generates the pooling sequence in a pseudorandom fashion, otherwise, in a\n random fashion. Check paper (Graham, 2015) for difference between\n pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`. When set to `True`,\n it means when pooling, the values at the boundary of adjacent pooling\n cells are used by both cells. For example:\n `index 0 1 2 3 4`\n `value 20 5 16 3 7`\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used\n twice. The result would be [20, 16] for fractional max pooling.\n seed: An optional `int`. Defaults to `0`. If set to be non-zero, the\n random number generator is seeded by the given seed. Otherwise it is\n seeded by a random seed.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (`output`, `row_pooling_sequence`,\n `col_pooling_sequence`).\n output: Output `Tensor` after fractional max pooling. Has the same type as\n `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n\n Raises:\n ValueError: If no seed is specified and op determinism is enabled.\n\n References:\n Fractional Max-Pooling:\n [Graham, 2015](https://arxiv.org/abs/1412.6071)\n ([pdf](https://arxiv.org/pdf/1412.6071.pdf))\n ", "desc": "Performs fractional max pooling on the input.", "type": "API"}, {"name": "tf.nn.gelu", "docs": "Compute the Gaussian Error Linear Unit (GELU) activation function.\n\n Gaussian error linear unit (GELU) computes\n `x * P(X <= x)`, where `P(X) ~ N(0, 1)`.\n The (GELU) nonlinearity weights inputs by their value, rather than gates\n inputs by their sign as in ReLU.\n\n For example:\n\n >>> x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)\n >>> y = tf.nn.gelu(x)\n >>> y.numpy()\n array([-0.00404951, -0.15865529, 0. , 0.8413447 , 2.9959507 ],\n dtype=float32)\n >>> y = tf.nn.gelu(x, approximate=True)\n >>> y.numpy()\n array([-0.00363752, -0.15880796, 0. , 0.841192 , 2.9963627 ],\n dtype=float32)\n\n Args:\n features: A `Tensor` representing preactivation values.\n approximate: An optional `bool`. Defaults to `False`. Whether to enable\n approximation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `features`.\n\n References:\n [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415).\n ", "desc": "Compute the Gaussian Error Linear Unit (GELU) activation function.", "type": "API"}, {"name": "tf.nn.in_top_k", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is finite (not inf, -inf, or nan) and among\n the top `k` predictions among all predictions for example `i`. Note that the\n behavior of `InTopK` differs from the `TopK` op in its handling of ties; if\n multiple classes have the same prediction value and straddle the top-`k`\n boundary, all of those classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: An `int`. Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.nn.isotonic_regression", "docs": "Solves isotonic regression problems along the given axis.\n\n For each vector x, the problem solved is\n\n $$\\argmin_{y_1 >= y_2 >= ... >= y_n} \\sum_i (x_i - y_i)^2.$$\n\n As the solution is component-wise constant, a second tensor is returned that\n encodes the segments. The problems are solved over the given axis.\n\n Consider the following example, where we solve a batch of two problems. The\n first input is [3, 1, 2], while the second [1, 3, 4] (as the axis is 1).\n >>> x = tf.constant([[3, 1, 2], [1, 3, 4]], dtype=tf.float32)\n >>> y, segments = tf.nn.isotonic_regression(x, axis=1)\n >>> y # The solution.\n \n\n Note that the first solution has two blocks [2] and [1.5, 1.5]. The second\n solution is constant, and thus has a single segment. These segments are\n exactly what the second returned tensor encodes:\n\n >>> segments\n \n\n\n Args:\n inputs: A tensor holding the inputs.\n decreasing: If set to False, the inequalities in the optimizing constrained\n are flipped.\n axis: The axis along which the problems should be solved.\n\n Returns:\n output: The solutions, same shape as type as the input.\n segments: An int32 tensor, same shape as the input indicating the segments\n that have the same value. Specifically, those positions that have the same\n value correspond to the same segment. These values start at zero, and are\n monotonously increasing for each solution.\n ", "desc": "Solves isotonic regression problems along the given axis.", "type": "API"}, {"name": "tf.nn.l2_loss", "docs": "L2 Loss.\n\n Computes half the L2 norm of a tensor without the `sqrt`:\n\n output = sum(t ** 2) / 2\n\n Args:\n t: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Typically 2-D, but may have any dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "L2 Loss.", "type": "API"}, {"name": "tf.nn.l2_normalize", "docs": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(dim)`. They will be removed in a future version.\nInstructions for updating:\ndim is deprecated, use axis instead\n\nFor a 1-D tensor with `axis = 0`, computes\n\n output = x / sqrt(max(sum(x**2), epsilon))\n\nFor `x` with more dimensions, independently normalizes each 1-D slice along\ndimension `axis`.\n\n1-D tensor example:\n>>> x = tf.constant([3.0, 4.0])\n>>> tf.math.l2_normalize(x).numpy()\narray([0.6, 0.8], dtype=float32)\n\n2-D tensor example:\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 0).numpy()\narray([[0.6],\n [0.8]], dtype=float32)\n\n>>> x = tf.constant([[3.0], [4.0]])\n>>> tf.math.l2_normalize(x, 1).numpy()\narray([[1.],\n [1.]], dtype=float32)\n\nArgs:\n x: A `Tensor`.\n axis: Dimension along which to normalize. A scalar or a vector of\n integers.\n epsilon: A lower bound value for the norm. Will use `sqrt(epsilon)` as the\n divisor if `norm < sqrt(epsilon)`.\n name: A name for this operation (optional).\n dim: Deprecated, do not use.\n\nReturns:\n A `Tensor` with the same shape as `x`.", "desc": "Normalizes along dimension `axis` using an L2 norm. (deprecated arguments)", "type": "API"}, {"name": "tf.nn.leaky_relu", "docs": "Compute the Leaky ReLU activation function.\n\n Source: [Rectifier Nonlinearities Improve Neural Network Acoustic Models.\n AL Maas, AY Hannun, AY Ng - Proc. ICML, 2013]\n (https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf).\n\n Args:\n features: A `Tensor` representing preactivation values. Must be one of\n the following types: `float16`, `float32`, `float64`, `int32`, `int64`.\n alpha: Slope of the activation function at x < 0.\n name: A name for the operation (optional).\n\n Returns:\n The activation value.\n\n References:\n Rectifier Nonlinearities Improve Neural Network Acoustic Models:\n [Maas et al., 2013]\n (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.693.1422)\n ([pdf]\n (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.1422&rep=rep1&type=pdf))\n ", "desc": "Compute the Leaky ReLU activation function.", "type": "API"}, {"name": "tf.nn.learned_unigram_candidate_sampler", "docs": "Samples a set of classes from a distribution learned during training.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is constructed on the fly\n during training. It is a unigram distribution over the target\n classes seen so far during training. Every integer in `[0, range_max)`\n begins with a weight of 1, and is incremented by 1 each time it is\n seen as a target class. The base distribution is not saved to checkpoints,\n so it is reset when the model is reloaded.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes from a distribution learned during training.", "type": "API"}, {"name": "tf.nn.local_response_normalization", "docs": "Local Response Normalization.\n\n The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last\n dimension), and each vector is normalized independently. Within a given vector,\n each component is divided by the weighted, squared sum of inputs within\n `depth_radius`. In detail,\n\n sqr_sum[a, b, c, d] =\n sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)\n output = input / (bias + alpha * sqr_sum) ** beta\n\n For details, see [Krizhevsky et al., ImageNet classification with deep\n convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D.\n depth_radius: An optional `int`. Defaults to `5`.\n 0-D. Half-width of the 1-D normalization window.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually positive to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Local Response Normalization.", "type": "API"}, {"name": "tf.nn.log_poisson_loss", "docs": "Computes log Poisson loss given `log_input`.\n\n Gives the log-likelihood loss between the prediction and the target under the\n assumption that the target has a Poisson distribution.\n Caveat: By default, this is not the exact loss, but the loss minus a\n constant term [log(z!)]. That has no effect for optimization, but\n does not play well with relative loss comparisons. To compute an\n approximation of the log factorial term, specify\n compute_full_loss=True to enable Stirling's Approximation.\n\n For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson\n loss is\n\n -log(exp(-x) * (x^z) / z!)\n = -log(exp(-x) * (x^z)) + log(z!)\n ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n [ Note the second term is the Stirling's Approximation for log(z!).\n It is invariant to x and does not affect optimization, though\n important for correct relative loss comparisons. It is only\n computed when compute_full_loss == True. ]\n = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]\n\n Args:\n targets: A `Tensor` of the same type and shape as `log_input`.\n log_input: A `Tensor` of type `float32` or `float64`.\n compute_full_loss: whether to compute the full loss. If false, a constant\n term is dropped in favor of more efficient optimization.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `log_input` with the componentwise\n logistic losses.\n\n Raises:\n ValueError: If `log_input` and `targets` do not have the same shape.\n ", "desc": "Computes log Poisson loss given `log_input`.", "type": "API"}, {"name": "tf.nn.log_softmax", "docs": "Computes log softmax activations.\n\n For each batch `i` and class `j` we have\n\n logsoftmax = logits - log(reduce_sum(exp(logits), axis))\n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `logits`. Same shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes log softmax activations.", "type": "API"}, {"name": "tf.nn.lrn", "docs": "Local Response Normalization.\n\n The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last\n dimension), and each vector is normalized independently. Within a given vector,\n each component is divided by the weighted, squared sum of inputs within\n `depth_radius`. In detail,\n\n sqr_sum[a, b, c, d] =\n sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)\n output = input / (bias + alpha * sqr_sum) ** beta\n\n For details, see [Krizhevsky et al., ImageNet classification with deep\n convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D.\n depth_radius: An optional `int`. Defaults to `5`.\n 0-D. Half-width of the 1-D normalization window.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually positive to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Local Response Normalization.", "type": "API"}, {"name": "tf.nn.max_pool", "docs": "Performs max pooling on the input.\n\n For a given window of `ksize`, takes the maximum value within that window.\n Used for reducing computation and preventing overfitting.\n\n Consider an example of pooling with 2x2, non-overlapping windows:\n\n >>> matrix = tf.constant([\n ... [0, 0, 1, 7],\n ... [0, 2, 0, 0],\n ... [5, 2, 0, 0],\n ... [0, 0, 9, 8],\n ... ])\n >>> reshaped = tf.reshape(matrix, (1, 4, 4, 1))\n >>> tf.nn.max_pool(reshaped, ksize=2, strides=2, padding=\"SAME\")\n \n\n We can adjust the window size using the `ksize` parameter. For example, if we\n were to expand the window to 3:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=2, padding=\"SAME\")\n \n\n We've now picked up two additional large numbers (5 and 9) in two of the\n pooled spots.\n\n Note that our windows are now overlapping, since we're still moving by 2 units\n on each iteration. This is causing us to see the same 9 repeated twice, since\n it is part of two overlapping windows.\n\n We can adjust how far we move our window with each iteration using the\n `strides` parameter. Updating this to the same value as our window size\n eliminates the overlap:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=3, padding=\"SAME\")\n \n\n Because the window does not neatly fit into our input, padding is added around\n the edges, giving us the same result as when we used a 2x2 window. We can skip\n padding altogether and simply drop the windows that do not fully fit into our\n input by instead passing `\"VALID\"` to the `padding` argument:\n\n >>> tf.nn.max_pool(reshaped, ksize=3, strides=3, padding=\"VALID\")\n \n\n Now we've grabbed the largest value in the 3x3 window starting from the upper-\n left corner. Since no other windows fit in our input, they are dropped.\n\n Args:\n input: Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape +\n [num_channels]` if `data_format` does not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n ksize: An int or list of `ints` that has length `1`, `N` or `N+2`. The size\n of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `N` or `N+2`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit\n padding, the size of the paddings cannot be greater than the sliding\n window size.\n data_format: A string. Specifies the channel dimension. For N=1 it can be\n either \"NWC\" (default) or \"NCW\", for N=2 it can be either \"NHWC\" (default)\n or \"NCHW\" and for N=3 either \"NDHWC\" (default) or \"NCDHW\".\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs max pooling on the input.", "type": "API"}, {"name": "tf.nn.max_pool_with_argmax", "docs": "Performs max pooling on the input and outputs both max values and indices.\n\n The indices in `argmax` are flattened, so that a maximum value at position\n `[b, y, x, c]` becomes flattened index: `(y * width + x) * channels + c` if\n `include_batch_in_index` is False;\n `((b * height + y) * width + x) * channels + c`\n if `include_batch_in_index` is True.\n\n The indices returned are always in `[0, height) x [0, width)` before\n flattening, even if padding is involved and the mathematically correct answer\n is outside (either negative or too large). This is a bug, but fixing it is\n difficult to do in a safe backwards compatible way, especially due to\n flattening.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`,\n `uint32`, `uint64`.\n 4-D with shape `[batch, height, width, channels]`. Input to pool over.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`.\n The size of the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `2` or `4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional `string`, must be set to `\"NHWC\"`. Defaults to\n `\"NHWC\"`.\n Specify the data format of the input and output data.\n output_dtype: An optional `tf.DType` from: `tf.int32, tf.int64`.\n Defaults to `tf.int64`.\n The dtype of the returned argmax tensor.\n include_batch_in_index: An optional `boolean`. Defaults to `False`.\n Whether to include batch dimension in flattened index of `argmax`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, argmax).\n\n output: A `Tensor`. Has the same type as `input`.\n argmax: A `Tensor` of type `output_dtype`.\n ", "desc": "Performs max pooling on the input and outputs both max values and indices.", "type": "API"}, {"name": "tf.nn.max_pool1d", "docs": "Performs the max pooling on the input.\n\n Note internally this op reshapes and uses the underlying 2d operation.\n\n Args:\n input: A 3-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1` or `3`. The size of the\n window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1` or `3`. The stride of\n the sliding window for each dimension of the input tensor.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NWC\"`, this should be in the form `[[0, 0], [pad_left, pad_right], [0,\n 0]]`. When explicit padding used and data_format is `\"NCW\"`, this should\n be in the form `[[0, 0], [0, 0], [pad_left, pad_right]]`. When using\n explicit padding, the size of the paddings cannot be greater than the\n sliding window size.\n data_format: An optional string from: \"NWC\", \"NCW\". Defaults to \"NWC\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the max pooling on the input.", "type": "API"}, {"name": "tf.nn.max_pool2d", "docs": "Performs max pooling on 2D spatial data such as images.\n\n This is a more specific version of `tf.nn.max_pool` where the input tensor\n is 4D, representing 2D spatial data such as images. Using these APIs are\n equivalent\n\n Downsamples the input images along theirs spatial dimensions (height and\n width) by taking its maximum over an input window defined by `ksize`.\n The window is shifted by `strides` along each dimension.\n\n For example, for `strides=(2, 2)` and `padding=VALID` windows that extend\n outside of the input are not included in the output:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> # Add the `batch` and `channels` dimensions.\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding=\"VALID\")\n >>> result[0, :, :, 0]\n \n\n With `padding=SAME`, we get:\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding='SAME')\n >>> result[0, :, :, 0]\n \n\n We can also specify padding explicitly. The following example adds width-1\n padding on all sides (top, bottom, left, right):\n\n >>> x = tf.constant([[1., 2., 3., 4.],\n ... [5., 6., 7., 8.],\n ... [9., 10., 11., 12.]])\n >>> x = x[tf.newaxis, :, :, tf.newaxis]\n >>> result = tf.nn.max_pool2d(x, ksize=(2, 2), strides=(2, 2),\n ... padding=[[0, 0], [1, 1], [1, 1], [0, 0]])\n >>> result[0, :, :, 0]\n \n\n For more examples and detail, see `tf.nn.max_pool`.\n\n Args:\n input: A 4-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1`, `2` or `4`. The size of\n the window for each dimension of the input tensor. If only one integer is\n specified, then we apply the same window for all 4 dims. If two are\n provided then we use those for H, W dimensions and keep N, C dimension\n window size = 1.\n strides: An int or list of `ints` that has length `1`, `2` or `4`. The\n stride of the sliding window for each dimension of the input tensor. If\n only one integer is specified, we apply the same stride to all 4 dims. If\n two are provided we use those for the H, W dimensions and keep N, C of\n stride = 1.\n padding: Either the `string` `\"SAME\"` or `\"VALID\"` indicating the type of\n padding algorithm to use, or a list indicating the explicit paddings at\n the start and end of each dimension. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information. When explicit padding is used and data_format is\n `\"NHWC\"`, this should be in the form `[[0, 0], [pad_top, pad_bottom],\n [pad_left, pad_right], [0, 0]]`. When explicit padding used and\n data_format is `\"NCHW\"`, this should be in the form `[[0, 0], [0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right]]`. When using explicit\n padding, the size of the paddings cannot be greater than the sliding\n window size.\n data_format: A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs max pooling on 2D spatial data such as images.", "type": "API"}, {"name": "tf.nn.max_pool3d", "docs": "Performs the max pooling on the input.\n\n Args:\n input: A 5-D `Tensor` of the format specified by `data_format`.\n ksize: An int or list of `ints` that has length `1`, `3` or `5`. The size of\n the window for each dimension of the input tensor.\n strides: An int or list of `ints` that has length `1`, `3` or `5`. The\n stride of the sliding window for each dimension of the input tensor.\n padding: A string, either `'VALID'` or `'SAME'`. The padding algorithm. See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: An optional string from: \"NDHWC\", \"NCDHW\". Defaults to \"NDHWC\".\n The data format of the input and output data. With the default format\n \"NDHWC\", the data is stored in the order of: [batch, in_depth, in_height,\n in_width, in_channels]. Alternatively, the format could be \"NCDHW\", the\n data storage order is: [batch, in_channels, in_depth, in_height,\n in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of format specified by `data_format`.\n The max pooled output tensor.\n ", "desc": "Performs the max pooling on the input.", "type": "API"}, {"name": "tf.nn.moments", "docs": "Calculates the mean and variance of `x`.\n\n The mean and variance are calculated by aggregating the contents of `x`\n across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean\n and variance of a vector.\n\n Note: shift is currently not used; the true mean is computed and used.\n\n When using these moments for batch normalization (see\n `tf.nn.batch_normalization`):\n\n * for so-called \"global normalization\", used with convolutional filters with\n shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.\n * for simple batch normalization pass `axes=[0]` (batch only).\n\n Args:\n x: A `Tensor`.\n axes: Array of ints. Axes along which to compute mean and\n variance.\n shift: Not used in the current implementation.\n keepdims: produce moments with the same dimensionality as the input.\n name: Name used to scope the operations that compute the moments.\n\n Returns:\n Two `Tensor` objects: `mean` and `variance`.\n ", "desc": "Calculates the mean and variance of `x`.", "type": "API"}, {"name": "tf.nn.nce_loss", "docs": "Computes and returns the noise-contrastive estimation training loss.\n\n See [Noise-contrastive estimation: A new estimation principle for\n unnormalized statistical\n models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).\n Also see our [Candidate Sampling Algorithms\n Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf)\n\n A common use case is to use this method for training, and calculate the full\n sigmoid loss for evaluation or inference as in the following example:\n\n ```python\n if mode == \"train\":\n loss = tf.nn.nce_loss(\n weights=weights,\n biases=biases,\n labels=labels,\n inputs=inputs,\n ...)\n elif mode == \"eval\":\n logits = tf.matmul(inputs, tf.transpose(weights))\n logits = tf.nn.bias_add(logits, biases)\n labels_one_hot = tf.one_hot(labels, n_classes)\n loss = tf.nn.sigmoid_cross_entropy_with_logits(\n labels=labels_one_hot,\n logits=logits)\n loss = tf.reduce_sum(loss, axis=1)\n ```\n\n Note: when doing embedding lookup on `weights` and `bias`, \"div\" partition\n strategy will be used. Support for other partition strategy will be added\n later.\n\n Note: By default this uses a log-uniform (Zipfian) distribution for sampling,\n so your labels must be sorted in order of decreasing frequency to achieve\n good results. For more details, see\n `tf.random.log_uniform_candidate_sampler`.\n\n Note: In the case where `num_true` > 1, we assign to each target class\n the target probability 1 / `num_true` so that the target probabilities\n sum to 1 per-example.\n\n Note: It would be useful to allow a variable number of target classes per\n example. We hope to provide this functionality in a future release.\n For now, if you have a variable number of target classes, you can pad them\n out to a constant number by either repeating them or by padding\n with an otherwise unused class.\n\n Args:\n weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`\n objects whose concatenation along dimension 0 has shape [num_classes,\n dim]. The (possibly-partitioned) class embeddings.\n biases: A `Tensor` of shape `[num_classes]`. The class biases.\n labels: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The\n target classes.\n inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of\n the input network.\n num_sampled: An `int`. The number of negative classes to randomly sample\n per batch. This single sample of negative classes is evaluated for each\n element in the batch.\n num_classes: An `int`. The number of possible classes.\n num_true: An `int`. The number of target classes per training example.\n sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`,\n `sampled_expected_count`) returned by a `*_candidate_sampler` function.\n (if None, we default to `log_uniform_candidate_sampler`)\n remove_accidental_hits: A `bool`. Whether to remove \"accidental hits\"\n where a sampled class equals one of the target classes. If set to `True`,\n this is a \"Sampled Logistic\" loss instead of NCE, and we are learning to\n generate log-odds instead of log probabilities. See our [Candidate\n Sampling Algorithms Reference]\n (https://www.tensorflow.org/extras/candidate_sampling.pdf). Default is\n False.\n name: A name for the operation (optional).\n\n Returns:\n A `batch_size` 1-D tensor of per-example NCE losses.\n ", "desc": "Computes and returns the noise-contrastive estimation training loss.", "type": "API"}, {"name": "tf.nn.normalize_moments", "docs": "Calculate the mean and variance of based on the sufficient statistics.\n\n Args:\n counts: A `Tensor` containing the total count of the data (one value).\n mean_ss: A `Tensor` containing the mean sufficient statistics: the (possibly\n shifted) sum of the elements to average over.\n variance_ss: A `Tensor` containing the variance sufficient statistics: the\n (possibly shifted) squared sum of the data to compute the variance over.\n shift: A `Tensor` containing the value by which the data is shifted for\n numerical stability, or `None` if no shift was performed.\n name: Name used to scope the operations that compute the moments.\n\n Returns:\n Two `Tensor` objects: `mean` and `variance`.\n ", "desc": "Calculate the mean and variance of based on the sufficient statistics.", "type": "API"}, {"name": "tf.nn.pool", "docs": "Performs an N-D pooling operation.\n\n In the case that `data_format` does not start with \"NC\", computes for\n 0 <= b < batch_size,\n 0 <= x[i] < output_spatial_shape[i],\n 0 <= c < num_channels:\n\n ```\n output[b, x[0], ..., x[N-1], c] =\n REDUCE_{z[0], ..., z[N-1]}\n input[b,\n x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],\n ...\n x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1],\n c],\n ```\n\n where the reduction function REDUCE depends on the value of `pooling_type`,\n and pad_before is defined based on the value of `padding` as described in\n the \"returns\" section of `tf.nn.convolution` for details.\n The reduction never includes out-of-bounds positions.\n\n In the case that `data_format` starts with `\"NC\"`, the `input` and output are\n simply transposed as follows:\n\n ```python\n pool(input, data_format, **kwargs) =\n tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]),\n **kwargs),\n [0, N+1] + range(1, N+1))\n ```\n\n Args:\n input: Tensor of rank N+2, of shape `[batch_size] + input_spatial_shape +\n [num_channels]` if data_format does not start with \"NC\" (default), or\n `[batch_size, num_channels] + input_spatial_shape` if data_format starts\n with \"NC\". Pooling happens over the spatial dimensions only.\n window_shape: Sequence of N ints >= 1.\n pooling_type: Specifies pooling operation, must be \"AVG\" or \"MAX\".\n strides: Optional. Sequence of N ints >= 1. Defaults to `[1]*N`. If any value of\n strides is > 1, then all values of dilation_rate must be 1.\n padding: The padding algorithm, must be \"SAME\" or \"VALID\". Defaults to \"SAME\".\n See\n [here](https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2)\n for more information.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\". For\n N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n dilations: Optional. Dilation rate. List of N ints >= 1. Defaults to\n `[1]*N`. If any value of dilation_rate is > 1, then all values of strides\n must be 1.\n name: Optional. Name of the op.\n\n Returns:\n Tensor of rank N+2, of shape\n [batch_size] + output_spatial_shape + [num_channels]\n\n if data_format is None or does not start with \"NC\", or\n\n [batch_size, num_channels] + output_spatial_shape\n\n if data_format starts with \"NC\",\n where `output_spatial_shape` depends on the value of padding:\n\n If padding = \"SAME\":\n output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])\n\n If padding = \"VALID\":\n output_spatial_shape[i] =\n ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i])\n / strides[i]).\n\n Raises:\n ValueError: if arguments are invalid.\n ", "desc": "Performs an N-D pooling operation.", "type": "API"}, {"name": "tf.nn.relu", "docs": "Computes rectified linear: `max(features, 0)`.\n\n See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)\n Example usage:\n >>> tf.nn.relu([-2., 0., 3.]).numpy()\n array([0., 0., 3.], dtype=float32)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes rectified linear: `max(features, 0)`.", "type": "API"}, {"name": "tf.nn.relu6", "docs": "Computes Rectified Linear 6: `min(max(features, 0), 6)`.\n\n In comparison with `tf.nn.relu`, relu6 activation functions have shown to\n empirically perform better under low-precision conditions (e.g. fixed point\n inference) by encouraging the model to learn sparse features earlier.\n Source: [Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al.,\n 2010](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf).\n\n For example:\n\n >>> x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)\n >>> y = tf.nn.relu6(x)\n >>> y.numpy()\n array([0., 0., 0., 6., 6.], dtype=float32)\n\n Args:\n features: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,\n `int16`, or `int8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `features`.\n\n References:\n Convolutional Deep Belief Networks on CIFAR-10:\n Krizhevsky et al., 2010\n ([pdf](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf))\n ", "desc": "Computes Rectified Linear 6: `min(max(features, 0), 6)`.", "type": "API"}, {"name": "tf.nn.RNNCellDeviceWrapper", "docs": "Operator that ensures an RNNCell runs on a particular device.", "desc": "Operator that ensures an RNNCell runs on a particular device.", "type": "API"}, {"name": "tf.nn.RNNCellDropoutWrapper", "docs": "Operator adding dropout to inputs and outputs of the given cell.", "desc": "Operator adding dropout to inputs and outputs of the given cell.", "type": "API"}, {"name": "tf.nn.RNNCellResidualWrapper", "docs": "RNNCell wrapper that ensures cell inputs are added to the outputs.", "desc": "RNNCell wrapper that ensures cell inputs are added to the outputs.", "type": "API"}, {"name": "tf.nn.safe_embedding_lookup_sparse", "docs": "Lookup embedding results, accounting for invalid IDs and empty features.\n\n The partitioned embedding in `embedding_weights` must all be the same shape\n except for the first dimension. The first dimension is allowed to vary as the\n vocabulary size is not necessarily a multiple of num of shards.\n\n Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs\n with non-positive weight. For an entry with no features, the embedding vector\n for `default_id` is returned, or the 0-vector if `default_id` is not supplied.\n\n The ids and weights may be multi-dimensional. Embeddings are always aggregated\n along the last dimension.\n\n If `len(embedding_weights) > 1`, each element `id` of `ids` is partitioned\n between the elements of `embedding_weights` according to the \"div\" partition\n strategy, which means we assign ids to partitions in a contiguous manner. For\n instance, 13 ids are split across 5 partitions as:\n `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`.\n\n If the id space does not evenly divide the number of partitions, each of the\n first `(max_id + 1) % len(embedding_weights)` partitions will be assigned one\n more id.\n\n Args:\n embedding_weights: A single tensor representing the complete embedding\n tensor, or a list of tensors all of same shape except for the first\n dimension, representing sharded embedding tensors following \"div\"\n partition strategy.\n sparse_ids: `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the\n ids. `d_0` is typically batch size.\n sparse_weights: `SparseTensor` of same shape as `sparse_ids`, containing\n float weights corresponding to `sparse_ids`, or `None` if all weights are\n be assumed to be 1.0.\n combiner: A string specifying how to combine embedding results for each\n entry. Currently \"mean\", \"sqrtn\" and \"sum\" are supported, with \"mean\" the\n default.\n default_id: The id to use for an entry with no features. Defaults to\n 0-vector.\n max_norm: If not `None`, all embeddings are l2-normalized to max_norm before\n combining.\n name: A name for this operation (optional).\n\n Returns:\n A dense tensor representing the combined embeddings for the\n sparse ids. For each row in the dense tensor represented by `sparse_ids`,\n the op looks up the embeddings for all ids in that row, multiplies them by\n the corresponding weight, and combines these embeddings as specified.\n\n In other words, if\n\n `shape(combined embedding_weights) = [p0, p1, ..., pm]`\n\n and\n\n `shape(sparse_ids) = shape(sparse_weights) = [d0, d1, ..., dn]`\n\n then\n\n `shape(output) = [d0, d1, ... dn-1, p1, ..., pm]`.\n\n For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are\n\n ```python\n [0, 0]: id 1, weight 2.0\n [0, 1]: id 3, weight 0.5\n [1, 0]: id -1, weight 1.0\n [2, 3]: id 1, weight 3.0\n ```\n\n `default_id` is 0.\n\n with `combiner`=\"mean\", then the output will be a 3x20 matrix where\n\n ```python\n output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)\n output[1, :] = (params[0, :] * 1.0) / 1.0\n output[2, :] = (params[1, :] * 3.0) / 3.0\n ```\n\n Raises:\n ValueError: if `embedding_weights` is empty.\n ", "desc": "Lookup embedding results, accounting for invalid IDs and empty features.", "type": "API"}, {"name": "tf.nn.sampled_softmax_loss", "docs": "Computes and returns the sampled softmax training loss.\n\n This is a faster way to train a softmax classifier over a huge number of\n classes.\n\n This operation is for training only. It is generally an underestimate of\n the full softmax loss.\n\n A common use case is to use this method for training, and calculate the full\n softmax loss for evaluation or inference as in the following example:\n\n ```python\n if mode == \"train\":\n loss = tf.nn.sampled_softmax_loss(\n weights=weights,\n biases=biases,\n labels=labels,\n inputs=inputs,\n ...)\n elif mode == \"eval\":\n logits = tf.matmul(inputs, tf.transpose(weights))\n logits = tf.nn.bias_add(logits, biases)\n labels_one_hot = tf.one_hot(labels, n_classes)\n loss = tf.nn.softmax_cross_entropy_with_logits(\n labels=labels_one_hot,\n logits=logits)\n ```\n\n See our [Candidate Sampling Algorithms Reference]\n (https://www.tensorflow.org/extras/candidate_sampling.pdf)\n\n Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)\n ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.\n\n Note: when doing embedding lookup on `weights` and `bias`, \"div\" partition\n strategy will be used. Support for other partition strategy will be added\n later.\n\n Args:\n weights: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`\n objects whose concatenation along dimension 0 has shape [num_classes,\n dim]. The (possibly-sharded) class embeddings.\n biases: A `Tensor` of shape `[num_classes]`. The class biases.\n labels: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The\n target classes. Note that this format differs from the `labels` argument\n of `nn.softmax_cross_entropy_with_logits`.\n inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of\n the input network.\n num_sampled: An `int`. The number of classes to randomly sample per batch.\n num_classes: An `int`. The number of possible classes.\n num_true: An `int`. The number of target classes per training example.\n sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`,\n `sampled_expected_count`) returned by a `*_candidate_sampler` function.\n (if None, we default to `log_uniform_candidate_sampler`)\n remove_accidental_hits: A `bool`. whether to remove \"accidental hits\"\n where a sampled class equals one of the target classes. Default is True.\n seed: random seed for candidate sampling. Default to None, which doesn't set\n the op-level random seed for candidate sampling.\n name: A name for the operation (optional).\n\n Returns:\n A `batch_size` 1-D tensor of per-example sampled softmax losses.\n\n ", "desc": "Computes and returns the sampled softmax training loss.", "type": "API"}, {"name": "tf.nn.scale_regularization_loss", "docs": "Scales the sum of the given regularization losses by number of replicas.\n\n Usage with distribution strategy and custom training loop:\n\n ```python\n with strategy.scope():\n def compute_loss(self, label, predictions):\n per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n labels, predictions)\n\n # Compute loss that is scaled by sample_weight and by global batch size.\n loss = tf.nn.compute_average_loss(\n per_example_loss,\n sample_weight=sample_weight,\n global_batch_size=GLOBAL_BATCH_SIZE)\n\n # Add scaled regularization losses.\n loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))\n return loss\n ```\n\n Args:\n regularization_loss: Regularization loss.\n\n Returns:\n Scalar loss value.\n ", "desc": "Scales the sum of the given regularization losses by number of replicas.", "type": "API"}, {"name": "tf.nn.selu", "docs": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`\n\n if < 0, `scale * features` otherwise.\n\n To be used together with\n `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`.\n For correct dropout, use `tf.contrib.nn.alpha_dropout`.\n\n See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`", "type": "API"}, {"name": "tf.nn.separable_conv2d", "docs": "2-D convolution with separable filters.\n\n Performs a depthwise convolution that acts separately on channels followed by\n a pointwise convolution that mixes channels. Note that this is separability\n between dimensions `[1, 2]` and `3`, not spatial separability between\n dimensions `1` and `2`.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k] = sum_{di, dj, q, r}\n input[b, strides[1] * i + di, strides[2] * j + dj, q] *\n depthwise_filter[di, dj, q, r] *\n pointwise_filter[0, 0, q * channel_multiplier + r, k]\n\n `strides` controls the strides for the depthwise convolution only, since\n the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have\n `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertical strides, `strides = [1, stride, stride, 1]`.\n If any value in `rate` is greater than 1, we perform atrous depthwise\n convolution, in which case all values in the `strides` tensor must be equal\n to 1.\n\n Args:\n input: 4-D `Tensor` with shape according to `data_format`.\n depthwise_filter: 4-D `Tensor` with shape `[filter_height, filter_width,\n in_channels, channel_multiplier]`. Contains `in_channels` convolutional\n filters of depth 1.\n pointwise_filter: 4-D `Tensor` with shape `[1, 1, channel_multiplier *\n in_channels, out_channels]`. Pointwise filter to mix channels after\n `depthwise_filter` has convolved spatially.\n strides: 1-D of size 4. The strides for the depthwise convolution for each\n dimension of `input`.\n padding: Controls how to pad the image before applying the depthwise\n convolution. Can be the string `\"SAME\"` or `\"VALID\"` indicating the type\n of padding algorithm to use, or a Python list indicating the explicit\n paddings at the start and end of each dimension. When explicit padding is\n used and data_format is `\"NHWC\"`, this should be in the form `[[0, 0],\n [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit\n padding used and data_format is `\"NCHW\"`, this should be in the form\n `[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]`.\n data_format: The data format for input. Either \"NHWC\" (default) or \"NCHW\".\n dilations: 1-D of size 2. The dilation rate in which we sample input values\n across the `height` and `width` dimensions in atrous convolution. If it is\n greater than 1, then all values of strides must be 1.\n name: A name for this operation (optional).\n\n Returns:\n A 4-D `Tensor` with shape according to 'data_format'. For\n example, with data_format=\"NHWC\", shape is [batch, out_height,\n out_width, out_channels].\n ", "desc": "2-D convolution with separable filters.", "type": "API"}, {"name": "tf.nn.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.nn.sigmoid_cross_entropy_with_logits", "docs": "Computes sigmoid cross entropy given `logits`.\n\n Measures the probability error in tasks with two outcomes in which each\n outcome is independent and need not have a fully certain label. For instance,\n one could perform a regression where the probability of an event happening is\n known and used as a label. This loss may also be used for binary\n classification, where labels are either zero or one.\n\n For brevity, let `x = logits`, `z = labels`. The logistic loss is\n\n z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))\n = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))\n = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))\n = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))\n = (1 - z) * x + log(1 + exp(-x))\n = x - x * z + log(1 + exp(-x))\n\n For x < 0, to avoid overflow in exp(-x), we reformulate the above\n\n x - x * z + log(1 + exp(-x))\n = log(exp(x)) - x * z + log(1 + exp(-x))\n = - x * z + log(1 + exp(x))\n\n Hence, to ensure stability and avoid overflow, the implementation uses this\n equivalent formulation\n\n max(x, 0) - x * z + log(1 + exp(-abs(x)))\n\n `logits` and `labels` must have the same type and shape.\n\n >>> logits = tf.constant([1., -1., 0., 1., -1., 0., 0.])\n >>> labels = tf.constant([0., 0., 0., 1., 1., 1., 0.5])\n >>> tf.nn.sigmoid_cross_entropy_with_logits(\n ... labels=labels, logits=logits).numpy()\n array([1.3132617, 0.3132617, 0.6931472, 0.3132617, 1.3132617, 0.6931472,\n 0.6931472], dtype=float32)\n\n Compared to the losses which handle multiple outcomes,\n `tf.nn.softmax_cross_entropy_with_logits` for general multi-class\n classification and `tf.nn.sparse_softmax_cross_entropy_with_logits` for more\n efficient multi-class classification with hard labels,\n `sigmoid_cross_entropy_with_logits` is a slight simplification for binary\n classification:\n\n sigmoid(x) = softmax([x, 0])[0]\n\n $$\\frac{1}{1 + e^{-x}} = \\frac{e^x}{e^x + e^0}$$\n\n While `sigmoid_cross_entropy_with_logits` works for soft binary labels\n (probabilities between 0 and 1), it can also be used for binary classification\n where the labels are hard. There is an equivalence between all three symbols\n in this case, with a probability 0 indicating the second class or 1 indicating\n the first class:\n\n >>> sigmoid_logits = tf.constant([1., -1., 0.])\n >>> softmax_logits = tf.stack([sigmoid_logits, tf.zeros_like(sigmoid_logits)],\n ... axis=-1)\n >>> soft_binary_labels = tf.constant([1., 1., 0.])\n >>> soft_multiclass_labels = tf.stack(\n ... [soft_binary_labels, 1. - soft_binary_labels], axis=-1)\n >>> hard_labels = tf.constant([0, 0, 1])\n >>> tf.nn.sparse_softmax_cross_entropy_with_logits(\n ... labels=hard_labels, logits=softmax_logits).numpy()\n array([0.31326166, 1.3132616 , 0.6931472 ], dtype=float32)\n >>> tf.nn.softmax_cross_entropy_with_logits(\n ... labels=soft_multiclass_labels, logits=softmax_logits).numpy()\n array([0.31326166, 1.3132616, 0.6931472], dtype=float32)\n >>> tf.nn.sigmoid_cross_entropy_with_logits(\n ... labels=soft_binary_labels, logits=sigmoid_logits).numpy()\n array([0.31326166, 1.3132616, 0.6931472], dtype=float32)\n\n Args:\n labels: A `Tensor` of the same type and shape as `logits`. Between 0 and 1,\n inclusive.\n logits: A `Tensor` of type `float32` or `float64`. Any real number.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `logits` with the componentwise\n logistic losses.\n\n Raises:\n ValueError: If `logits` and `labels` do not have the same shape.\n ", "desc": "Computes sigmoid cross entropy given `logits`.", "type": "API"}, {"name": "tf.nn.silu", "docs": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.\n\n beta : Hyperparameter for Swish activation function. Default value 1.0.\n\n The SiLU activation function was introduced in \"Gaussian Error Linear Units\n (GELUs)\" [Hendrycks et al. 2016](https://arxiv.org/abs/1606.08415) and\n \"Sigmoid-Weighted Linear Units for Neural Network Function Approximation in\n Reinforcement Learning\"\n [Elfwing et al. 2017](https://arxiv.org/abs/1702.03118) and was independently\n discovered (and called swish) in \"Searching for Activation Functions\"\n [Ramachandran et al. 2017](https://arxiv.org/abs/1710.05941)\n\n Args:\n features: A `Tensor` representing preactivation values.\n beta: A 'Tensor' representing value of beta hyperparameter.\n\n Returns:\n The activation value.\n ", "desc": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.", "type": "API"}, {"name": "tf.nn.softmax", "docs": "Computes softmax activations.\n\n Used for multi-class predictions. The sum of all outputs generated by softmax\n is 1.\n\n This function performs the equivalent of\n\n ```python\n softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)\n ```\n Example usage:\n\n >>> softmax = tf.nn.softmax([-1, 0., 1.])\n >>> softmax\n \n >>> sum(softmax)\n \n\n Args:\n logits: A non-empty `Tensor`. Must be one of the following types: `half`,\n `float32`, `float64`.\n axis: The dimension softmax would be performed on. The default is -1 which\n indicates the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type and shape as `logits`.\n\n Raises:\n InvalidArgumentError: if `logits` is empty or `axis` is beyond the last\n dimension of `logits`.\n ", "desc": "Computes softmax activations.", "type": "API"}, {"name": "tf.nn.softmax_cross_entropy_with_logits", "docs": "Computes softmax cross entropy between `logits` and `labels`.\n\n Measures the probability error in discrete classification tasks in which the\n classes are mutually exclusive (each entry is in exactly one class). For\n example, each CIFAR-10 image is labeled with one and only one label: an image\n can be a dog or a truck, but not both.\n\n **NOTE:** While the classes are mutually exclusive, their probabilities\n need not be. All that is required is that each row of `labels` is\n a valid probability distribution. If they are not, the computation of the\n gradient will be incorrect.\n\n If using exclusive `labels` (wherein one and only\n one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.\n\n Usage:\n\n >>> logits = [[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]]\n >>> labels = [[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]]\n >>> tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)\n \n\n **WARNING:** This op expects unscaled logits, since it performs a `softmax`\n on `logits` internally for efficiency. Do not call this op with the\n output of `softmax`, as it will produce incorrect results.\n\n A common use case is to have logits and labels of shape\n `[batch_size, num_classes]`, but higher dimensions are supported, with\n the `axis` argument specifying the class dimension.\n\n `logits` and `labels` must have the same dtype (either `float16`, `float32`,\n or `float64`).\n\n Backpropagation will happen into both `logits` and `labels`. To disallow\n backpropagation into `labels`, pass label tensors through `tf.stop_gradient`\n before feeding it to this function.\n\n **Note that to avoid confusion, it is required to pass only named arguments to\n this function.**\n\n Args:\n labels: Each vector along the class dimension should hold a valid\n probability distribution e.g. for the case in which labels are of shape\n `[batch_size, num_classes]`, each row of `labels[i]` must be a valid\n probability distribution.\n logits: Per-label activations, typically a linear output. These activation\n energies are interpreted as unnormalized log probabilities.\n axis: The class dimension. Defaulted to -1 which is the last dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that contains the softmax cross entropy loss. Its type is the\n same as `logits` and its shape is the same as `labels` except that it does\n not have the last dimension of `labels`.\n ", "desc": "Computes softmax cross entropy between `logits` and `labels`.", "type": "API"}, {"name": "tf.nn.softplus", "docs": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.\n\n `softplus` is a smooth approximation of `relu`. Like `relu`, `softplus` always\n takes on positive values.\n\n \n\n Example:\n\n >>> import tensorflow as tf\n >>> tf.math.softplus(tf.range(0, 2, dtype=tf.float32)).numpy()\n array([0.6931472, 1.3132616], dtype=float32)\n\n Args:\n features: `Tensor`\n name: Optional: name to associate with this operation.\n Returns:\n `Tensor`\n ", "desc": "Computes elementwise softplus: `softplus(x) = log(exp(x) + 1)`.", "type": "API"}, {"name": "tf.nn.softsign", "docs": "Computes softsign: `features / (abs(features) + 1)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softsign: `features / (abs(features) + 1)`.", "type": "API"}, {"name": "tf.nn.space_to_batch", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.nn.space_to_depth", "docs": "SpaceToDepth for tensors of type T.\n\n Rearranges blocks of spatial data, into depth. More specifically,\n this op outputs a copy of the input tensor where values from the `height`\n and `width` dimensions are moved to the `depth` dimension.\n The attr `block_size` indicates the input block size.\n\n * Non-overlapping blocks of size `block_size x block size` are rearranged\n into depth at each location.\n * The depth of the output tensor is `block_size * block_size * input_depth`.\n * The Y, X coordinates within each block of the input become the high order\n component of the output channel index.\n * The input tensor's height and width must be divisible by block_size.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates\n within the output image, bX, bY means coordinates\n within the input block, iC means input channels).\n The output would be a transpose to the following layout:\n n,oY,oX,bY,bX,iC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 2, 2, 1]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n This operation will output a tensor of shape `[1, 1, 1, 4]`:\n\n ```\n [[[[1, 2, 3, 4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`,\n the corresponding output will have a single element (i.e. width and height are\n both 1) and will have a depth of 4 channels (1 * block_size * block_size).\n The output element shape is `[1, 1, 4]`.\n\n For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n This operation, for block_size of 2, will return the following tensor of shape\n `[1, 1, 1, 12]`\n\n ```\n [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:\n\n ```\n x = [[[[1], [2], [5], [6]],\n [[3], [4], [7], [8]],\n [[9], [10], [13], [14]],\n [[11], [12], [15], [16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 2 2 4]`:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`. The size of the spatial block.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToDepth for tensors of type T.", "type": "API"}, {"name": "tf.nn.sparse_softmax_cross_entropy_with_logits", "docs": "Computes sparse softmax cross entropy between `logits` and `labels`.\n\n Measures the probability error in discrete classification tasks in which the\n classes are mutually exclusive (each entry is in exactly one class). For\n example, each CIFAR-10 image is labeled with one and only one label: an image\n can be a dog or a truck, but not both.\n\n Note: For this operation, the probability of a given label is considered\n exclusive. That is, soft classes are not allowed, and the `labels` vector\n must provide a single specific index for the true class for each row of\n `logits` (each minibatch entry). For soft softmax classification with\n a probability distribution for each entry, see\n `softmax_cross_entropy_with_logits_v2`.\n\n Warning: This op expects unscaled logits, since it performs a `softmax`\n on `logits` internally for efficiency. Do not call this op with the\n output of `softmax`, as it will produce incorrect results.\n\n A common use case is to have logits of shape\n `[batch_size, num_classes]` and have labels of shape\n `[batch_size]`, but higher dimensions are supported, in which\n case the `dim`-th dimension is assumed to be of size `num_classes`.\n `logits` must have the dtype of `float16`, `float32`, or `float64`, and\n `labels` must have the dtype of `int32` or `int64`.\n\n >>> logits = tf.constant([[2., -5., .5, -.1],\n ... [0., 0., 1.9, 1.4],\n ... [-100., 100., -100., -100.]])\n >>> labels = tf.constant([0, 3, 1])\n >>> tf.nn.sparse_softmax_cross_entropy_with_logits(\n ... labels=labels, logits=logits).numpy()\n array([0.29750752, 1.1448325 , 0. ], dtype=float32)\n\n To avoid confusion, passing only named arguments to this function is\n recommended.\n\n Args:\n labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of\n `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`\n must be an index in `[0, num_classes)`. Other values will raise an\n exception when this op is run on CPU, and return `NaN` for corresponding\n loss and gradient rows on GPU.\n logits: Unscaled log probabilities of shape `[d_0, d_1, ..., d_{r-1},\n num_classes]` and dtype `float16`, `float32`, or `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `labels` and of the same type as `logits`\n with the softmax cross entropy loss.\n\n Raises:\n ValueError: If logits are scalars (need to have rank >= 1) or if the rank\n of the labels is not equal to the rank of the logits minus one.\n ", "desc": "Computes sparse softmax cross entropy between `logits` and `labels`.", "type": "API"}, {"name": "tf.nn.sufficient_statistics", "docs": "Calculate the sufficient statistics for the mean and variance of `x`.\n\n These sufficient statistics are computed using the one pass algorithm on\n an input that's optionally shifted. See:\n https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data\n\n Args:\n x: A `Tensor`.\n axes: Array of ints. Axes along which to compute mean and variance.\n shift: A `Tensor` containing the value by which to shift the data for\n numerical stability, or `None` if no shift is to be performed. A shift\n close to the true mean provides the most numerically stable results.\n keepdims: produce statistics with the same dimensionality as the input.\n name: Name used to scope the operations that compute the sufficient stats.\n\n Returns:\n Four `Tensor` objects of the same type as `x`:\n\n * the count (number of elements to average over).\n * the (possibly shifted) sum of the elements in the array.\n * the (possibly shifted) sum of squares of the elements in the array.\n * the shift by which the mean must be corrected or None if `shift` is None.\n ", "desc": "Calculate the sufficient statistics for the mean and variance of `x`.", "type": "API"}, {"name": "tf.nn.swish", "docs": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.\n\n beta : Hyperparameter for Swish activation function. Default value 1.0.\n\n The SiLU activation function was introduced in \"Gaussian Error Linear Units\n (GELUs)\" [Hendrycks et al. 2016](https://arxiv.org/abs/1606.08415) and\n \"Sigmoid-Weighted Linear Units for Neural Network Function Approximation in\n Reinforcement Learning\"\n [Elfwing et al. 2017](https://arxiv.org/abs/1702.03118) and was independently\n discovered (and called swish) in \"Searching for Activation Functions\"\n [Ramachandran et al. 2017](https://arxiv.org/abs/1710.05941)\n\n Args:\n features: A `Tensor` representing preactivation values.\n beta: A 'Tensor' representing value of beta hyperparameter.\n\n Returns:\n The activation value.\n ", "desc": "Computes the SiLU or Swish activation function: `x * sigmoid(beta * x)`.", "type": "API"}, {"name": "tf.nn.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.nn.top_k", "docs": "Finds values and indices of the `k` largest entries for the last dimension.\n\n If the input is a vector (rank=1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n >>> result = tf.math.top_k([1, 2, 98, 1, 1, 99, 3, 1, 3, 96, 4, 1],\n ... k=3)\n >>> result.values.numpy()\n array([99, 98, 96], dtype=int32)\n >>> result.indices.numpy()\n array([5, 2, 9], dtype=int32)\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n >>> input = tf.random.normal(shape=(3,4,5,6))\n >>> k = 2\n >>> values, indices = tf.math.top_k(input, k=k)\n >>> values.shape.as_list()\n [3, 4, 5, 2]\n >>>\n >>> values.shape == indices.shape == input.shape[:-1] + [k]\n True\n\n The indices can be used to `gather` from a tensor who's shape matches `input`.\n\n >>> gathered_values = tf.gather(input, indices, batch_dims=-1)\n >>> assert tf.reduce_all(gathered_values == values)\n\n If two elements are equal, the lower-index element appears first.\n\n >>> result = tf.math.top_k([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],\n ... k=3)\n >>> result.indices.numpy()\n array([0, 1, 3], dtype=int32)\n\n Args:\n input: 1-D or higher `Tensor` with last dimension at least `k`.\n k: 0-D `int32` `Tensor`. Number of top elements to look for along the last\n dimension (along each row for matrices).\n sorted: If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: Optional name for the operation.\n\n Returns:\n A tuple with two named fields:\n values: The `k` largest elements along each last dimensional slice.\n indices: The indices of `values` within the last dimension of `input`.\n ", "desc": "Finds values and indices of the `k` largest entries for the last dimension.", "type": "API"}, {"name": "tf.nn.weighted_cross_entropy_with_logits", "docs": "Computes a weighted cross entropy.\n\n This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`,\n allows one to trade off recall and precision by up- or down-weighting the\n cost of a positive error relative to a negative error.\n\n The usual cross-entropy cost is defined as:\n\n labels * -log(sigmoid(logits)) +\n (1 - labels) * -log(1 - sigmoid(logits))\n\n A value `pos_weight > 1` decreases the false negative count, hence increasing\n the recall.\n Conversely setting `pos_weight < 1` decreases the false positive count and\n increases the precision.\n This can be seen from the fact that `pos_weight` is introduced as a\n multiplicative coefficient for the positive labels term\n in the loss expression:\n\n labels * -log(sigmoid(logits)) * pos_weight +\n (1 - labels) * -log(1 - sigmoid(logits))\n\n For brevity, let `x = logits`, `z = labels`, `q = pos_weight`.\n The loss is:\n\n qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))\n = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))\n = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x))\n = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))\n = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))\n\n Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,\n the implementation uses\n\n (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))\n\n `logits` and `labels` must have the same type and shape.\n\n >>> labels = tf.constant([1., 0.5, 0.])\n >>> logits = tf.constant([1.5, -0.1, -10.])\n >>> tf.nn.weighted_cross_entropy_with_logits(\n ... labels=labels, logits=logits, pos_weight=tf.constant(1.5)).numpy()\n array([3.0211994e-01, 8.8049585e-01, 4.5776367e-05], dtype=float32)\n >>> tf.nn.weighted_cross_entropy_with_logits(\n ... labels=labels, logits=logits, pos_weight=tf.constant(0.5)).numpy()\n array([1.00706644e-01, 5.08297503e-01, 4.57763672e-05], dtype=float32)\n\n Args:\n labels: A `Tensor` of the same type and shape as `logits`, with values\n between 0 and 1 inclusive.\n logits: A `Tensor` of type `float32` or `float64`, any real numbers.\n pos_weight: A coefficient to use on the positive examples, typically a\n scalar but otherwise broadcastable to the shape of `logits`. Its value\n should be non-negative.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the same shape as `logits` with the componentwise\n weighted logistic losses.\n\n Raises:\n ValueError: If `logits` and `labels` do not have the same shape.\n ", "desc": "Computes a weighted cross entropy.", "type": "API"}, {"name": "tf.nn.weighted_moments", "docs": "Returns the frequency-weighted mean and variance of `x`.\n\n Args:\n x: A tensor.\n axes: 1-d tensor of int32 values; these are the axes along which\n to compute mean and variance.\n frequency_weights: A tensor of positive weights which can be\n broadcast with x.\n keepdims: Produce moments with the same dimensionality as the input.\n name: Name used to scope the operation.\n\n Returns:\n Two tensors: `weighted_mean` and `weighted_variance`.\n ", "desc": "Returns the frequency-weighted mean and variance of `x`.", "type": "API"}, {"name": "tf.nn.with_space_to_batch", "docs": "Performs `op` on the space-to-batch representation of `input`.\n\n This has the effect of transforming sliding window operations into the\n corresponding \"atrous\" operation in which the input is sampled at the\n specified `dilation_rate`.\n\n In the special case that `dilation_rate` is uniformly 1, this simply returns:\n\n op(input, num_spatial_dims, padding)\n\n Otherwise, it returns:\n\n batch_to_space_nd(\n op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),\n num_spatial_dims,\n \"VALID\")\n adjusted_dilation_rate,\n adjusted_crops),\n\n where:\n\n adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],\n adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]\n\n defined as follows:\n\n We first define two int64 tensors `paddings` and `crops` of shape\n `[num_spatial_dims, 2]` based on the value of `padding` and the spatial\n dimensions of the `input`:\n\n If `padding = \"VALID\"`, then:\n\n paddings, crops = required_space_to_batch_paddings(\n input_shape[spatial_dims],\n dilation_rate)\n\n If `padding = \"SAME\"`, then:\n\n dilated_filter_shape =\n filter_shape + (filter_shape - 1) * (dilation_rate - 1)\n\n paddings, crops = required_space_to_batch_paddings(\n input_shape[spatial_dims],\n dilation_rate,\n [(dilated_filter_shape - 1) // 2,\n dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])\n\n Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial\n dimensions are contiguous starting at the second dimension, but the specified\n `spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and\n `crops` in order to be usable with these operations. For a given dimension,\n if the block size is 1, and both the starting and ending padding and crop\n amounts are 0, then space_to_batch_nd effectively leaves that dimension alone,\n which is what is needed for dimensions not part of `spatial_dims`.\n Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case\n efficiently for any number of leading and trailing dimensions.\n\n For 0 <= i < len(spatial_dims), we assign:\n\n adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]\n adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]\n adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]\n\n All unassigned values of `adjusted_dilation_rate` default to 1, while all\n unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.\n\n Note in the case that `dilation_rate` is not uniformly 1, specifying \"VALID\"\n padding is equivalent to specifying `padding = \"SAME\"` with a filter_shape of\n `[1]*N`.\n\n Advanced usage. Note the following optimization: A sequence of\n `with_space_to_batch` operations with identical (not uniformly 1)\n `dilation_rate` parameters and \"VALID\" padding\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", op_1)\n ...\n net = with_space_to_batch(net, dilation_rate, \"VALID\", op_k)\n\n can be combined into a single `with_space_to_batch` operation as follows:\n\n def combined_op(converted_input, num_spatial_dims, _):\n result = op_1(converted_input, num_spatial_dims, \"VALID\")\n ...\n result = op_k(result, num_spatial_dims, \"VALID\")\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", combined_op)\n\n This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and\n `batch_to_space_nd`.\n\n Similarly, a sequence of `with_space_to_batch` operations with identical (not\n uniformly 1) `dilation_rate` parameters, \"SAME\" padding, and odd filter\n dimensions\n\n net = with_space_to_batch(net, dilation_rate, \"SAME\", op_1, filter_shape_1)\n ...\n net = with_space_to_batch(net, dilation_rate, \"SAME\", op_k, filter_shape_k)\n\n can be combined into a single `with_space_to_batch` operation as follows:\n\n def combined_op(converted_input, num_spatial_dims, _):\n result = op_1(converted_input, num_spatial_dims, \"SAME\")\n ...\n result = op_k(result, num_spatial_dims, \"SAME\")\n\n net = with_space_to_batch(net, dilation_rate, \"VALID\", combined_op)\n\n Args:\n input: Tensor of rank > max(spatial_dims).\n dilation_rate: int32 Tensor of *known* shape [num_spatial_dims].\n padding: str constant equal to \"VALID\" or \"SAME\"\n op: Function that maps (input, num_spatial_dims, padding) -> output\n filter_shape: If padding = \"SAME\", specifies the shape of the convolution\n kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims].\n If padding = \"VALID\", filter_shape is ignored and need not be specified.\n spatial_dims: Monotonically increasing sequence of `num_spatial_dims`\n integers (which are >= 1) specifying the spatial dimensions of `input`\n and output. Defaults to: `range(1, num_spatial_dims+1)`.\n data_format: A string or None. Specifies whether the channel dimension of\n the `input` and output is the last dimension (default, or if `data_format`\n does not start with \"NC\"), or the second dimension (if `data_format`\n starts with \"NC\"). For N=1, the valid values are \"NWC\" (default) and\n \"NCW\". For N=2, the valid values are \"NHWC\" (default) and \"NCHW\".\n For N=3, the valid values are \"NDHWC\" (default) and \"NCDHW\".\n\n Returns:\n The output Tensor as described above, dimensions will vary based on the op\n provided.\n\n Raises:\n ValueError: if `padding` is invalid or the arguments are incompatible.\n ValueError: if `spatial_dims` are invalid.\n ", "desc": "Performs `op` on the space-to-batch representation of `input`.", "type": "API"}, {"name": "tf.nn.zero_fraction", "docs": "Returns the fraction of zeros in `value`.\n\n If `value` is empty, the result is `nan`.\n\n This is useful in summaries to measure and report sparsity. For example,\n\n ```python\n z = tf.nn.relu(...)\n summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))\n ```\n\n Args:\n value: A tensor of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n The fraction of zeros in `value`, with type `float32`.\n ", "desc": "Returns the fraction of zeros in `value`.", "type": "API"}, {"name": "tf.no_gradient", "docs": "Specifies that ops of type `op_type` is not differentiable.\n\n This function should *not* be used for operations that have a\n well-defined gradient that is not yet implemented.\n\n This function is only used when defining a new op type. It may be\n used for ops such as `tf.size()` that are not differentiable. For\n example:\n\n ```python\n tf.no_gradient(\"Size\")\n ```\n\n The gradient computed for 'op_type' will then propagate zeros.\n\n For ops that have a well-defined gradient but are not yet implemented,\n no declaration should be made, and an error *must* be thrown if\n an attempt to request its gradient is made.\n\n Args:\n op_type: The string type of an operation. This corresponds to the\n `OpDef.name` field for the proto that defines the operation.\n\n Raises:\n TypeError: If `op_type` is not a string.\n\n ", "desc": "Specifies that ops of type `op_type` is not differentiable.", "type": "API"}, {"name": "tf.no_op", "docs": "Does nothing. Only useful as a placeholder for control edges.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Does nothing. Only useful as a placeholder for control edges.", "type": "API"}, {"name": "tf.nondifferentiable_batch_function", "docs": "Batches the computation done by the decorated function.\n\n So, for example, in the following code\n\n ```python\n @batch_function(1, 2, 3)\n def layer(a):\n return tf.matmul(a, a)\n\n b = layer(w)\n ```\n\n if more than one session.run call is simultaneously trying to compute `b`\n the values of `w` will be gathered, non-deterministically concatenated\n along the first axis, and only one thread will run the computation. See the\n documentation of the `Batch` op for more details.\n\n Assumes that all arguments of the decorated function are Tensors which will\n be batched along their first dimension.\n\n SparseTensor is not supported. The return value of the decorated function\n must be a Tensor or a list/tuple of Tensors.\n\n Args:\n num_batch_threads: Number of scheduling threads for processing batches\n of work. Determines the number of batches processed in parallel.\n max_batch_size: Batch sizes will never be bigger than this.\n batch_timeout_micros: Maximum number of microseconds to wait before\n outputting an incomplete batch.\n allowed_batch_sizes: Optional list of allowed batch sizes. If left empty,\n does nothing. Otherwise, supplies a list of batch sizes, causing the op\n to pad batches up to one of those sizes. The entries must increase\n monotonically, and the final entry must equal max_batch_size.\n max_enqueued_batches: The maximum depth of the batch queue. Defaults to 10.\n autograph: Whether to use autograph to compile python and eager style code\n for efficient graph-mode execution.\n enable_large_batch_splitting: The value of this option doesn't affect\n processing output given the same input; it affects implementation details\n as stated below: 1. Improve batching efficiency by eliminating unnecessary\n adding. 2.`max_batch_size` specifies the limit of input and\n `allowed_batch_sizes` specifies the limit of a task to be processed. API\n user can give an input of size 128 when 'max_execution_batch_size'\n is 32 -> implementation can split input of 128 into 4 x 32, schedule\n concurrent processing, and then return concatenated results corresponding\n to 128.\n\n Returns:\n The decorated function will return the unbatched computation output Tensors.\n ", "desc": "Batches the computation done by the decorated function.", "type": "API"}, {"name": "tf.norm", "docs": "Computes the norm of vectors, matrices, and tensors.\n\n This function can compute several different vector norms (the 1-norm, the\n Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and\n matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).\n\n Args:\n tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`\n ord: Order of the norm. Supported values are `'fro'`, `'euclidean'`,\n `1`, `2`, `np.inf` and any positive real number yielding the corresponding\n p-norm. Default is `'euclidean'` which is equivalent to Frobenius norm if\n `tensor` is a matrix and equivalent to 2-norm for vectors.\n Some restrictions apply:\n a) The Frobenius norm `'fro'` is not defined for vectors,\n b) If axis is a 2-tuple (matrix norm), only `'euclidean'`, '`fro'`, `1`,\n `2`, `np.inf` are supported.\n See the description of `axis` on how to compute norms for a batch of\n vectors or matrices stored in a tensor.\n axis: If `axis` is `None` (the default), the input is considered a vector\n and a single vector norm is computed over the entire set of values in the\n tensor, i.e. `norm(tensor, ord=ord)` is equivalent to\n `norm(reshape(tensor, [-1]), ord=ord)`.\n If `axis` is a Python integer, the input is considered a batch of vectors,\n and `axis` determines the axis in `tensor` over which to compute vector\n norms.\n If `axis` is a 2-tuple of Python integers it is considered a batch of\n matrices and `axis` determines the axes in `tensor` over which to compute\n a matrix norm.\n Negative indices are supported. Example: If you are passing a tensor that\n can be either a matrix or a batch of matrices at runtime, pass\n `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are\n computed.\n keepdims: If True, the axis indicated in `axis` are kept with size 1.\n Otherwise, the dimensions in `axis` are removed from the output shape.\n name: The name of the op.\n\n Returns:\n output: A `Tensor` of the same type as tensor, containing the vector or\n matrix norms. If `keepdims` is True then the rank of output is equal to\n the rank of `tensor`. Otherwise, if `axis` is none the output is a scalar,\n if `axis` is an integer, the rank of `output` is one less than the rank\n of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less\n than the rank of `tensor`.\n\n Raises:\n ValueError: If `ord` or `axis` is invalid.\n\n @compatibility(numpy)\n Mostly equivalent to numpy.linalg.norm.\n Not supported: ord <= 0, 2-norm for matrices, nuclear norm.\n Other differences:\n a) If axis is `None`, treats the flattened `tensor` as a vector\n regardless of rank.\n b) Explicitly supports 'euclidean' norm as the default, including for\n higher order tensors.\n @end_compatibility\n ", "desc": "Computes the norm of vectors, matrices, and tensors.", "type": "API"}, {"name": "tf.not_equal", "docs": "Returns the truth value of (x != y) element-wise.\n\n Performs a [broadcast](\n https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the\n arguments and then an element-wise inequality comparison, returning a Tensor\n of boolean values.\n\n For example:\n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant(2)\n >>> tf.math.not_equal(x, y)\n \n\n >>> x = tf.constant([2, 4])\n >>> y = tf.constant([2, 4])\n >>> tf.math.not_equal(x, y)\n \n\n Args:\n x: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n y: A `tf.Tensor` or `tf.sparse.SparseTensor` or `tf.IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the same size as that of x or y.\n\n Raises:\n `tf.errors.InvalidArgumentError`: If shapes of arguments are incompatible\n ", "desc": "Returns the truth value of (x != y) element-wise.", "type": "API"}, {"name": "tf.numpy_function", "docs": "Wraps a python function and uses it as a TensorFlow op.\n\n Given a python function `func` wrap this function as an operation in a\n TensorFlow function. `func` must take numpy arrays as its arguments and\n return numpy arrays as its outputs.\n\n The following example creates a TensorFlow graph with `np.sinh()` as an\n operation in the graph:\n\n >>> def my_numpy_func(x):\n ... # x will be a numpy array with the contents of the input to the\n ... # tf.function\n ... return np.sinh(x)\n >>> @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])\n ... def tf_function(input):\n ... y = tf.numpy_function(my_numpy_func, [input], tf.float32)\n ... return y * y\n >>> tf_function(tf.constant(1.))\n \n\n Comparison to `tf.py_function`:\n `tf.py_function` and `tf.numpy_function` are very similar, except that\n `tf.numpy_function` takes numpy arrays, and not `tf.Tensor`s. If you want the\n function to contain `tf.Tensors`, and have any TensorFlow operations executed\n in the function be differentiable, please use `tf.py_function`.\n\n Note: We recommend to avoid using `tf.numpy_function` outside of\n prototyping and experimentation due to the following known limitations:\n\n * Calling `tf.numpy_function` will acquire the Python Global Interpreter Lock\n (GIL) that allows only one thread to run at any point in time. This will\n preclude efficient parallelization and distribution of the execution of the\n program. Therefore, you are discouraged to use `tf.numpy_function` outside\n of prototyping and experimentation.\n\n * The body of the function (i.e. `func`) will not be serialized in a\n `tf.SavedModel`. Therefore, you should not use this function if you need to\n serialize your model and restore it in a different environment.\n\n * The operation must run in the same address space as the Python program\n that calls `tf.numpy_function()`. If you are using distributed\n TensorFlow, you must run a `tf.distribute.Server` in the same process as the\n program that calls `tf.numpy_function` you must pin the created\n operation to a device in that server (e.g. using `with tf.device():`).\n\n * Currently `tf.numpy_function` is not compatible with XLA. Calling\n `tf.numpy_function` inside `tf.function(jit_comiple=True)` will raise an\n error.\n\n * Since the function takes numpy arrays, you cannot take gradients\n through a numpy_function. If you require something that is differentiable,\n please consider using tf.py_function.\n\n Args:\n func: A Python function, which accepts `numpy.ndarray` objects as arguments\n and returns a list of `numpy.ndarray` objects (or a single\n `numpy.ndarray`). This function must accept as many arguments as there are\n tensors in `inp`, and these argument types will match the corresponding\n `tf.Tensor` objects in `inp`. The returns `numpy.ndarray`s must match the\n number and types defined `Tout`.\n Important Note: Input and output `numpy.ndarray`s of `func` are not\n guaranteed to be copies. In some cases their underlying memory will be\n shared with the corresponding TensorFlow tensors. In-place modification\n or storing `func` input or return values in python datastructures\n without explicit (np.)copy can have non-deterministic consequences.\n inp: A list of `tf.Tensor` objects.\n Tout: A list or tuple of tensorflow data types or a single tensorflow data\n type if there is only one, indicating what `func` returns.\n stateful: (Boolean.) Setting this argument to False tells the runtime to\n treat the function as stateless, which enables certain optimizations.\n A function is stateless when given the same input it will return the\n same output and have no side effects; its only purpose is to have a\n return value.\n The behavior for a stateful function with the `stateful` argument False\n is undefined. In particular, caution should be taken when\n mutating the input arguments as this is a stateful operation.\n name: (Optional) A name for the operation.\n\n Returns:\n Single or list of `tf.Tensor` which `func` computes.\n ", "desc": "Wraps a python function and uses it as a TensorFlow op.", "type": "API"}, {"name": "tf.one_hot", "docs": "Returns a one-hot tensor.\n\n See also `tf.fill`, `tf.eye`.\n\n The locations represented by indices in `indices` take value `on_value`,\n while all other locations take value `off_value`.\n\n `on_value` and `off_value` must have matching data types. If `dtype` is also\n provided, they must be the same data type as specified by `dtype`.\n\n If `on_value` is not provided, it will default to the value `1` with type\n `dtype`\n\n If `off_value` is not provided, it will default to the value `0` with type\n `dtype`\n\n If the input `indices` is rank `N`, the output will have rank `N+1`. The\n new axis is created at dimension `axis` (default: the new axis is appended\n at the end).\n\n If `indices` is a scalar the output shape will be a vector of length `depth`\n\n If `indices` is a vector of length `features`, the output shape will be:\n\n ```\n features x depth if axis == -1\n depth x features if axis == 0\n ```\n\n If `indices` is a matrix (batch) with shape `[batch, features]`, the output\n shape will be:\n\n ```\n batch x features x depth if axis == -1\n batch x depth x features if axis == 1\n depth x batch x features if axis == 0\n ```\n\n If `indices` is a RaggedTensor, the 'axis' argument must be positive and refer\n to a non-ragged axis. The output will be equivalent to applying 'one_hot' on\n the values of the RaggedTensor, and creating a new RaggedTensor from the\n result.\n\n If `dtype` is not provided, it will attempt to assume the data type of\n `on_value` or `off_value`, if one or both are passed in. If none of\n `on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the\n value `tf.float32`.\n\n Note: If a non-numeric data type output is desired (`tf.string`, `tf.bool`,\n etc.), both `on_value` and `off_value` _must_ be provided to `one_hot`.\n\n For example:\n\n ```python\n indices = [0, 1, 2]\n depth = 3\n tf.one_hot(indices, depth) # output: [3 x 3]\n # [[1., 0., 0.],\n # [0., 1., 0.],\n # [0., 0., 1.]]\n\n indices = [0, 2, -1, 1]\n depth = 3\n tf.one_hot(indices, depth,\n on_value=5.0, off_value=0.0,\n axis=-1) # output: [4 x 3]\n # [[5.0, 0.0, 0.0], # one_hot(0)\n # [0.0, 0.0, 5.0], # one_hot(2)\n # [0.0, 0.0, 0.0], # one_hot(-1)\n # [0.0, 5.0, 0.0]] # one_hot(1)\n\n indices = [[0, 2], [1, -1]]\n depth = 3\n tf.one_hot(indices, depth,\n on_value=1.0, off_value=0.0,\n axis=-1) # output: [2 x 2 x 3]\n # [[[1.0, 0.0, 0.0], # one_hot(0)\n # [0.0, 0.0, 1.0]], # one_hot(2)\n # [[0.0, 1.0, 0.0], # one_hot(1)\n # [0.0, 0.0, 0.0]]] # one_hot(-1)\n\n indices = tf.ragged.constant([[0, 1], [2]])\n depth = 3\n tf.one_hot(indices, depth) # output: [2 x None x 3]\n # [[[1., 0., 0.],\n # [0., 1., 0.]],\n # [[0., 0., 1.]]]\n ```\n\n Args:\n indices: A `Tensor` of indices.\n depth: A scalar defining the depth of the one hot dimension.\n on_value: A scalar defining the value to fill in output when `indices[j]\n = i`. (default: 1)\n off_value: A scalar defining the value to fill in output when `indices[j]\n != i`. (default: 0)\n axis: The axis to fill (default: -1, a new inner-most axis).\n dtype: The data type of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n output: The one-hot tensor.\n\n Raises:\n TypeError: If dtype of either `on_value` or `off_value` don't match `dtype`\n TypeError: If dtype of `on_value` and `off_value` don't match one another\n ", "desc": "Returns a one-hot tensor.", "type": "API"}, {"name": "tf.ones", "docs": "Creates a tensor with all elements set to one (1).\n\n See also `tf.ones_like`, `tf.zeros`, `tf.fill`, `tf.eye`.\n\n This operation returns a tensor of type `dtype` with shape `shape` and\n all elements set to one.\n\n >>> tf.ones([3, 4], tf.int32)\n \n\n Args:\n shape: A `list` of integers, a `tuple` of integers, or\n a 1-D `Tensor` of type `int32`.\n dtype: Optional DType of an element in the resulting `Tensor`. Default is\n `tf.float32`.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor` with all elements set to one (1).\n ", "desc": "Creates a tensor with all elements set to one (1).", "type": "API"}, {"name": "tf.ones_initializer", "docs": "Initializer that generates tensors initialized to 1.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Examples:\n\n >>> def make_variables(k, initializer):\n ... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),\n ... tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))\n >>> v1, v2 = make_variables(3, tf.ones_initializer())\n >>> v1\n \n >>> v2\n \n >>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))\n (, >> tensor = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.ones_like(tensor)\n \n\n Args:\n input: A `Tensor`.\n dtype: A type for the returned `Tensor`. Must be `float16`, `float32`,\n `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`,\n `complex64`, `complex128`, `bool` or `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with all elements set to one.\n ", "desc": "Creates a tensor of all ones that has the same shape as the input.", "type": "API"}, {"name": "tf.Operation", "docs": "Represents a graph node that performs computation on tensors.\n\n An `Operation` is a node in a `tf.Graph` that takes zero or more `Tensor`\n objects as input, and produces zero or more `Tensor` objects as output.\n Objects of type `Operation` are created by calling a Python op constructor\n (such as `tf.matmul`) within a `tf.function` or under a `tf.Graph.as_default`\n context manager.\n\n For example, within a `tf.function`, `c = tf.matmul(a, b)` creates an\n `Operation` of type \"MatMul\" that takes tensors `a` and `b` as input, and\n produces `c` as output.\n\n If a `tf.compat.v1.Session` is used, an `Operation` of a `tf.Graph` can be\n executed by passing it to `tf.Session.run`. `op.run()` is a shortcut for\n calling `tf.compat.v1.get_default_session().run(op)`.\n ", "desc": "Represents a graph node that performs computation on tensors.", "type": "API"}, {"name": "tf.optimizers", "docs": "", "desc": "", "type": "API"}, {"name": "tf.optimizers.Adadelta", "docs": "Optimizer that implements the Adadelta algorithm.\n\n Adadelta optimization is a stochastic gradient descent method that is based on\n adaptive learning rate per dimension to address two drawbacks:\n\n - The continual decay of learning rates throughout training.\n - The need for a manually selected global learning rate.\n\n Adadelta is a more robust extension of Adagrad that adapts learning rates\n based on a moving window of gradient updates, instead of accumulating all\n past gradients. This way, Adadelta continues learning even when many updates\n have been done. Compared to Adagrad, in the original version of Adadelta you\n don't have to set an initial learning rate. In this version, the initial\n learning rate can be set, as in most other Keras optimizers.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adadelta` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n rho: A `Tensor` or a floating point value. The decay rate.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adadelta\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Zeiler, 2012](http://arxiv.org/abs/1212.5701)\n ", "desc": "Optimizer that implements the Adadelta algorithm.", "type": "API"}, {"name": "tf.optimizers.Adagrad", "docs": "Optimizer that implements the Adagrad algorithm.\n\n Adagrad is an optimizer with parameter-specific learning rates,\n which are adapted relative to how frequently a parameter gets\n updated during training. The more updates a parameter receives,\n the smaller the updates.\n\n Args:\n learning_rate: Initial value for the learning rate:\n either a floating point value,\n or a `tf.keras.optimizers.schedules.LearningRateSchedule` instance.\n Defaults to 0.001.\n Note that `Adagrad` tends to benefit from higher initial learning rate\n values compared to other optimizers.\n To match the exact form in the original paper, use 1.0.\n initial_accumulator_value: Floating point value.\n Starting value for the accumulators (per-parameter momentum values).\n Must be non-negative.\n epsilon: Small floating point value used to maintain numerical stability.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Adagrad\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value..\n\n Reference:\n - [Duchi et al., 2011](\n http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).\n ", "desc": "Optimizer that implements the Adagrad algorithm.", "type": "API"}, {"name": "tf.optimizers.Adam", "docs": "Optimizer that implements the Adam algorithm.\n\n Adam optimization is a stochastic gradient descent method that is based on\n adaptive estimation of first-order and second-order moments.\n\n According to\n [Kingma et al., 2014](http://arxiv.org/abs/1412.6980),\n the method is \"*computationally\n efficient, has little memory requirement, invariant to diagonal rescaling of\n gradients, and is well suited for problems that are large in terms of\n data/parameters*\".\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use, The\n learning rate. Defaults to 0.001.\n beta_1: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use. The\n exponential decay rate for the 1st moment estimates. Defaults to 0.9.\n beta_2: A float value or a constant float tensor, or a callable\n that takes no arguments and returns the actual value to use, The\n exponential decay rate for the 2nd moment estimates. Defaults to 0.999.\n epsilon: A small constant for numerical stability. This epsilon is\n \"epsilon hat\" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to\n 1e-7.\n amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper \"On the Convergence of Adam and beyond\". Defaults to `False`.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.Adam(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> # The first step is `-learning_rate*sign(grad)`\n >>> var1.numpy()\n 9.9\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n - [Reddi et al., 2018](\n https://openreview.net/pdf?id=ryQu7f-RZ) for `amsgrad`.\n\n Notes:\n\n The default value of 1e-7 for epsilon might not be a good default in\n general. For example, when training an Inception network on ImageNet a\n current good choice is 1.0 or 0.1. Note that since Adam uses the\n formulation just before Section 2.1 of the Kingma and Ba paper rather than\n the formulation in Algorithm 1, the \"epsilon\" referred to here is \"epsilon\n hat\" in the paper.\n\n The sparse implementation of this algorithm (used when the gradient is an\n IndexedSlices object, typically because of `tf.gather` or an embedding\n lookup in the forward pass) does apply momentum to variable slices even if\n they were not used in the forward pass (meaning they have a gradient equal\n to zero). Momentum decay (beta1) is also applied to the entire momentum\n accumulator. This means that the sparse behavior is equivalent to the dense\n behavior (in contrast to some momentum implementations which ignore momentum\n unless a variable slice was actually used).\n ", "desc": "Optimizer that implements the Adam algorithm.", "type": "API"}, {"name": "tf.optimizers.Adamax", "docs": "Optimizer that implements the Adamax algorithm.\n\n It is a variant of Adam based on the infinity norm.\n Default parameters follow those provided in the paper.\n Adamax is sometimes superior to adam, specially in models with embeddings.\n\n Initialization:\n\n ```python\n m = 0 # Initialize initial 1st moment vector\n v = 0 # Initialize the exponentially weighted infinity norm\n t = 0 # Initialize timestep\n ```\n\n The update rule for parameter `w` with gradient `g` is\n described at the end of section 7.1 of the paper:\n\n ```python\n t += 1\n m = beta1 * m + (1 - beta) * g\n v = max(beta2 * v, abs(g))\n current_lr = learning_rate / (1 - beta1 ** t)\n w = w - current_lr * m / (v + epsilon)\n ```\n\n Similarly to `Adam`, the epsilon is added for numerical stability\n (especially to get rid of division by zero when `v_t == 0`).\n\n In contrast to `Adam`, the sparse implementation of this algorithm\n (used when the gradient is an IndexedSlices object, typically because of\n `tf.gather` or an embedding lookup in the forward pass) only updates\n variable slices and corresponding `m_t`, `v_t` terms when that part of\n the variable was used in the forward pass. This means that the sparse\n behavior is contrast to the dense behavior (similar to some momentum\n implementations which ignore momentum unless a variable slice was actually\n used).\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Adamax\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)\n ", "desc": "Optimizer that implements the Adamax algorithm.", "type": "API"}, {"name": "tf.optimizers.deserialize", "docs": "Inverse of the `serialize` function.\n\n Args:\n config: Optimizer configuration dictionary.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during deserialization.\n\n Returns:\n A Keras Optimizer instance.\n ", "desc": "Inverse of the `serialize` function.", "type": "API"}, {"name": "tf.optimizers.Ftrl", "docs": "Optimizer that implements the FTRL algorithm.\n\n \"Follow The Regularized Leader\" (FTRL) is an optimization algorithm developed\n at Google for click-through rate prediction in the early 2010s. It is most\n suitable for shallow models with large and sparse feature spaces.\n The algorithm is described by\n [McMahan et al., 2013](https://research.google.com/pubs/archive/41159.pdf).\n The Keras version has support for both online L2 regularization\n (the L2 regularization described in the paper\n above) and shrinkage-type L2 regularization\n (which is the addition of an L2 penalty to the loss function).\n\n Initialization:\n\n ```python\n n = 0\n sigma = 0\n z = 0\n ```\n\n Update rule for one variable `w`:\n\n ```python\n prev_n = n\n n = n + g ** 2\n sigma = (sqrt(n) - sqrt(prev_n)) / lr\n z = z + g - sigma * w\n if abs(z) < lambda_1:\n w = 0\n else:\n w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / alpha + lambda_2)\n ```\n\n Notation:\n\n - `lr` is the learning rate\n - `g` is the gradient for the variable\n - `lambda_1` is the L1 regularization strength\n - `lambda_2` is the L2 regularization strength\n\n Check the documentation for the `l2_shrinkage_regularization_strength`\n parameter for more details when shrinkage is enabled, in which case gradient\n is replaced with a gradient with shrinkage.\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The learning rate.\n learning_rate_power: A float value, must be less or equal to zero.\n Controls how the learning rate decreases during training. Use zero for\n a fixed learning rate.\n initial_accumulator_value: The starting value for accumulators.\n Only zero or positive values are allowed.\n l1_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n l2_regularization_strength: A float value, must be greater than or\n equal to zero. Defaults to 0.0.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"Ftrl\"`.\n l2_shrinkage_regularization_strength: A float value, must be greater than\n or equal to zero. This differs from L2 above in that the L2 above is a\n stabilization penalty, whereas this L2 shrinkage is a magnitude penalty.\n When input is sparse shrinkage will only happen on the active weights.\n beta: A float value, representing the beta value from the paper.\n Defaults to 0.0.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Reference:\n - [McMahan et al., 2013](\n https://research.google.com/pubs/archive/41159.pdf)\n ", "desc": "Optimizer that implements the FTRL algorithm.", "type": "API"}, {"name": "tf.optimizers.get", "docs": "Retrieves a Keras Optimizer instance.\n\n Args:\n identifier: Optimizer identifier, one of\n - String: name of an optimizer\n - Dictionary: configuration dictionary. - Keras Optimizer instance (it\n will be returned unchanged). - TensorFlow Optimizer instance (it\n will be wrapped as a Keras Optimizer).\n\n Returns:\n A Keras Optimizer instance.\n\n Raises:\n ValueError: If `identifier` cannot be interpreted.\n ", "desc": "Retrieves a Keras Optimizer instance.", "type": "API"}, {"name": "tf.optimizers.Nadam", "docs": "Optimizer that implements the NAdam algorithm.\n Much like Adam is essentially RMSprop with momentum, Nadam is Adam with\n Nesterov momentum.\n\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta_1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta_2: A float value or a constant float tensor. The exponential decay\n rate for the exponentially weighted infinity norm.\n epsilon: A small constant for numerical stability.\n name: Optional name for the operations created when applying gradients.\n Defaults to `\"Nadam\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage Example:\n >>> opt = tf.keras.optimizers.Nadam(learning_rate=0.2)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> \"{:.1f}\".format(var1.numpy())\n 9.8\n\n Reference:\n - [Dozat, 2015](http://cs229.stanford.edu/proj2015/054_report.pdf).\n ", "desc": "Optimizer that implements the NAdam algorithm.", "type": "API"}, {"name": "tf.optimizers.Optimizer", "docs": "Base class for Keras optimizers.\n\n You should not use this class directly, but instead instantiate one of its\n subclasses such as `tf.keras.optimizers.SGD`, `tf.keras.optimizers.Adam`, etc.\n\n ### Usage\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 * var1 + 2 * var2 * var2\n # In graph mode, returns op that minimizes the loss by updating the listed\n # variables.\n opt_op = opt.minimize(loss, var_list=[var1, var2])\n opt_op.run()\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Usage in custom training loops\n\n In Keras models, sometimes variables are created when the model is first\n called, instead of construction time. Examples include 1) sequential models\n without input shape pre-defined, or 2) subclassed models. Pass var_list as\n callable in these cases.\n\n Example:\n\n ```python\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))\n model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))\n loss_fn = lambda: tf.keras.losses.mse(model(input), output)\n var_list_fn = lambda: model.trainable_weights\n for input, output in data:\n opt.minimize(loss_fn, var_list_fn)\n ```\n\n ### Processing gradients before applying them\n\n Calling `minimize()` takes care of both computing the gradients and\n applying them to the variables. If you want to process the gradients\n before applying them you can instead use the optimizer in three steps:\n\n 1. Compute the gradients with `tf.GradientTape`.\n 2. Process the gradients as you wish.\n 3. Apply the processed gradients with `apply_gradients()`.\n\n Example:\n\n ```python\n # Create an optimizer.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n\n # Compute the gradients for a list of variables.\n with tf.GradientTape() as tape:\n loss = \n vars = \n grads = tape.gradient(loss, vars)\n\n # Process the gradients, for example cap them, etc.\n # capped_grads = [MyCapper(g) for g in grads]\n processed_grads = [process_gradient(g) for g in grads]\n\n # Ask the optimizer to apply the processed gradients.\n opt.apply_gradients(zip(processed_grads, var_list))\n ```\n\n ### Use with `tf.distribute.Strategy`\n\n This optimizer class is `tf.distribute.Strategy` aware, which means it\n automatically sums gradients across all replicas. To average gradients,\n you divide your loss by the global batch size, which is done\n automatically if you use `tf.keras` built-in training or evaluation loops.\n See the `reduction` argument of your loss which should be set to\n `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` for averaging or\n `tf.keras.losses.Reduction.SUM` for not.\n\n To aggregate gradients yourself, call `apply_gradients` with\n `experimental_aggregate_gradients` set to False. This is useful if you need to\n process aggregated gradients.\n\n If you are not using these and you want to average gradients, you should use\n `tf.math.reduce_sum` to add up your per-example losses and then divide by the\n global batch size. Note that when using `tf.distribute.Strategy`, the first\n component of a tensor's shape is the *replica-local* batch size, which is off\n by a factor equal to the number of replicas being used to compute a single\n step. As a result, using `tf.math.reduce_mean` will give the wrong answer,\n resulting in gradients that can be many times too big.\n\n ### Variable Constraints\n\n All Keras optimizers respect variable constraints. If constraint function is\n passed to any variable, the constraint will be applied to the variable after\n the gradient has been applied to the variable.\n Important: If gradient is sparse tensor, variable constraint is not supported.\n\n ### Thread Compatibility\n\n The entire optimizer is currently thread compatible, not thread-safe. The user\n needs to perform synchronization if necessary.\n\n ### Slots\n\n Many optimizer subclasses, such as `Adam` and `Adagrad` allocate and manage\n additional variables associated with the variables to train. These are called\n Slots. Slots have names and you can ask the optimizer for the names of\n the slots that it uses. Once you have a slot name you can ask the optimizer\n for the variable it created to hold the slot value.\n\n This can be useful if you want to log debug a training algorithm, report stats\n about the slots, etc.\n\n ### Hyperparameters\n\n These are arguments passed to the optimizer subclass constructor\n (the `__init__` method), and then passed to `self._set_hyper()`.\n They can be either regular Python values (like 1.0), tensors, or\n callables. If they are callable, the callable will be called during\n `apply_gradients()` to get the value for the hyper parameter.\n\n Hyperparameters can be overwritten through user code:\n\n Example:\n\n ```python\n # Create an optimizer with the desired parameters.\n opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n # `loss` is a callable that takes no argument and returns the value\n # to minimize.\n loss = lambda: 3 * var1 + 2 * var2\n # In eager mode, simply call minimize to update the list of variables.\n opt.minimize(loss, var_list=[var1, var2])\n # update learning rate\n opt.learning_rate = 0.05\n opt.minimize(loss, var_list=[var1, var2])\n ```\n\n ### Callable learning rate\n\n Optimizer accepts a callable learning rate in two ways. The first way is\n through built-in or customized\n `tf.keras.optimizers.schedules.LearningRateSchedule`. The schedule will be\n called on each iteration with `schedule(iteration)`, a `tf.Variable`\n owned by the optimizer.\n\n Example:\n\n >>> var = tf.Variable(np.random.random(size=(1,)))\n >>> learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(\n ... initial_learning_rate=.01, decay_steps=20, decay_rate=.1)\n >>> opt = tf.keras.optimizers.SGD(learning_rate=learning_rate)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> var = tf.Variable(np.random.random(size=(1,)))\n >>> def lr_callable():\n ... return .1\n >>> opt = tf.keras.optimizers.SGD(learning_rate=lr_callable)\n >>> loss = lambda: 3 * var\n >>> opt.minimize(loss, var_list=[var])\n >> opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)\n >>> var1 = tf.Variable(10.0)\n >>> loss = lambda: (var1 ** 2) / 2.0 # d(loss) / d(var1) = var1\n >>> step_count = opt.minimize(loss, [var1]).numpy()\n >>> var1.numpy()\n 9.683772\n\n Reference:\n - [Hinton, 2012](\n http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)\n ", "desc": "Optimizer that implements the RMSprop algorithm.", "type": "API"}, {"name": "tf.optimizers.schedules", "docs": "Public API for tf.keras.optimizers.schedules namespace.\n", "desc": "Public API for tf.keras.optimizers.schedules namespace.", "type": "API"}, {"name": "tf.optimizers.schedules.CosineDecay", "docs": "A LearningRateSchedule that uses a cosine decay schedule.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))\n decayed = (1 - alpha) * cosine_decay + alpha\n return initial_learning_rate * decayed\n ```\n\n Example usage:\n ```python\n decay_steps = 1000\n lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(\n initial_learning_rate, decay_steps)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule.", "type": "API"}, {"name": "tf.optimizers.schedules.CosineDecayRestarts", "docs": "A LearningRateSchedule that uses a cosine decay schedule with restarts.\n\n See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),\n SGDR: Stochastic Gradient Descent with Warm Restarts.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies a cosine decay function with\n restarts to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n\n The learning rate multiplier first decays\n from 1 to `alpha` for `first_decay_steps` steps. Then, a warm\n restart is performed. Each new warm restart runs for `t_mul` times more\n steps and with `m_mul` times initial learning rate as the new learning rate.\n\n Example usage:\n ```python\n first_decay_steps = 1000\n lr_decayed_fn = (\n tf.keras.optimizers.schedules.CosineDecayRestarts(\n initial_learning_rate,\n first_decay_steps))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a cosine decay schedule with restarts.", "type": "API"}, {"name": "tf.optimizers.schedules.deserialize", "docs": "Instantiates a `LearningRateSchedule` object from a serialized form.\n\n Args:\n config: The serialized form of the `LearningRateSchedule`.\n Dictionary of the form {'class_name': str, 'config': dict}.\n custom_objects: A dictionary mapping class names (or function names) of\n custom (non-Keras) objects to class/functions.\n\n Returns:\n A `LearningRateSchedule` object.\n\n Example:\n\n ```python\n # Configuration for PolynomialDecay\n config = {\n 'class_name': 'PolynomialDecay',\n 'config': {'cycle': False,\n 'decay_steps': 10000,\n 'end_learning_rate': 0.01,\n 'initial_learning_rate': 0.1,\n 'name': None,\n 'power': 0.5}}\n lr_schedule = tf.keras.optimizers.schedules.deserialize(config)\n ```\n ", "desc": "Instantiates a `LearningRateSchedule` object from a serialized form.", "type": "API"}, {"name": "tf.optimizers.schedules.ExponentialDecay", "docs": "A LearningRateSchedule that uses an exponential decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies an exponential decay function\n to an optimizer step, given a provided initial learning rate.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate * decay_rate ^ (step / decay_steps)\n ```\n\n If the argument `staircase` is `True`, then `step / decay_steps` is\n an integer division and the decayed learning rate follows a\n staircase function.\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: When fitting a Keras model, decay every 100000 steps with a base\n of 0.96:\n\n ```python\n initial_learning_rate = 0.1\n lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate,\n decay_steps=100000,\n decay_rate=0.96,\n staircase=True)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an exponential decay schedule.", "type": "API"}, {"name": "tf.optimizers.schedules.InverseTimeDecay", "docs": "A LearningRateSchedule that uses an inverse time decay schedule.\n\n When training a model, it is often useful to lower the learning rate as\n the training progresses. This schedule applies the inverse decay function\n to an optimizer step, given a provided initial learning rate.\n It requires a `step` value to compute the decayed learning rate. You can\n just pass a TensorFlow variable that you increment at each training step.\n\n The schedule is a 1-arg callable that produces a decayed learning\n rate when passed the current optimizer step. This can be useful for changing\n the learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * step / decay_step)\n ```\n\n or, if `staircase` is `True`, as:\n\n ```python\n def decayed_learning_rate(step):\n return initial_learning_rate / (1 + decay_rate * floor(step / decay_step))\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a Keras model when decaying 1/t with a rate of 0.5:\n\n ```python\n ...\n initial_learning_rate = 0.1\n decay_steps = 1.0\n decay_rate = 0.5\n learning_rate_fn = keras.optimizers.schedules.InverseTimeDecay(\n initial_learning_rate, decay_steps, decay_rate)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses an inverse time decay schedule.", "type": "API"}, {"name": "tf.optimizers.schedules.LearningRateSchedule", "docs": "The learning rate schedule base class.\n\n You can use a learning rate schedule to modulate how the learning rate\n of your optimizer changes over time.\n\n Several built-in learning rate schedules are available, such as\n `tf.keras.optimizers.schedules.ExponentialDecay` or\n `tf.keras.optimizers.schedules.PiecewiseConstantDecay`:\n\n ```python\n lr_schedule = keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate=1e-2,\n decay_steps=10000,\n decay_rate=0.9)\n optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)\n ```\n\n A `LearningRateSchedule` instance can be passed in as the `learning_rate`\n argument of any optimizer.\n\n To implement your own schedule object, you should implement the `__call__`\n method, which takes a `step` argument (scalar integer tensor, the\n current training step count).\n Like for any other Keras object, you can also optionally\n make your object serializable by implementing the `get_config`\n and `from_config` methods.\n\n Example:\n\n ```python\n class MyLRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):\n\n def __init__(self, initial_learning_rate):\n self.initial_learning_rate = initial_learning_rate\n\n def __call__(self, step):\n return self.initial_learning_rate / (step + 1)\n\n optimizer = tf.keras.optimizers.SGD(learning_rate=MyLRSchedule(0.1))\n ```\n ", "desc": "The learning rate schedule base class.", "type": "API"}, {"name": "tf.optimizers.schedules.PiecewiseConstantDecay", "docs": "A LearningRateSchedule that uses a piecewise constant decay schedule.\n\n The function returns a 1-arg callable to compute the piecewise constant\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n\n Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5\n for the next 10000 steps, and 0.1 for any additional steps.\n\n ```python\n step = tf.Variable(0, trainable=False)\n boundaries = [100000, 110000]\n values = [1.0, 0.5, 0.1]\n learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(\n boundaries, values)\n\n # Later, whenever we perform an optimization step, we pass in the step.\n learning_rate = learning_rate_fn(step)\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate. The learning rate schedule is also serializable and\n deserializable using `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as the boundary tensors.\n\n The output of the 1-arg function that takes the `step`\n is `values[0]` when `step <= boundaries[0]`,\n `values[1]` when `step > boundaries[0]` and `step <= boundaries[1]`, ...,\n and values[-1] when `step > boundaries[-1]`.\n ", "desc": "A LearningRateSchedule that uses a piecewise constant decay schedule.", "type": "API"}, {"name": "tf.optimizers.schedules.PolynomialDecay", "docs": "A LearningRateSchedule that uses a polynomial decay schedule.\n\n It is commonly observed that a monotonically decreasing learning rate, whose\n degree of change is carefully chosen, results in a better performing model.\n This schedule applies a polynomial decay function to an optimizer step,\n given a provided `initial_learning_rate`, to reach an `end_learning_rate`\n in the given `decay_steps`.\n\n It requires a `step` value to compute the decayed learning rate. You\n can just pass a TensorFlow variable that you increment at each training\n step.\n\n The schedule is a 1-arg callable that produces a decayed learning rate\n when passed the current optimizer step. This can be useful for changing the\n learning rate value across different invocations of optimizer functions.\n It is computed as:\n\n ```python\n def decayed_learning_rate(step):\n step = min(step, decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n If `cycle` is True then a multiple of `decay_steps` is used, the first one\n that is bigger than `step`.\n\n ```python\n def decayed_learning_rate(step):\n decay_steps = decay_steps * ceil(step / decay_steps)\n return ((initial_learning_rate - end_learning_rate) *\n (1 - step / decay_steps) ^ (power)\n ) + end_learning_rate\n ```\n\n You can pass this schedule directly into a `tf.keras.optimizers.Optimizer`\n as the learning rate.\n Example: Fit a model while decaying from 0.1 to 0.01 in 10000 steps using\n sqrt (i.e. power=0.5):\n\n ```python\n ...\n starter_learning_rate = 0.1\n end_learning_rate = 0.01\n decay_steps = 10000\n learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(\n starter_learning_rate,\n decay_steps,\n end_learning_rate,\n power=0.5)\n\n model.compile(optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate_fn),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n model.fit(data, labels, epochs=5)\n ```\n\n The learning rate schedule is also serializable and deserializable using\n `tf.keras.optimizers.schedules.serialize` and\n `tf.keras.optimizers.schedules.deserialize`.\n\n Returns:\n A 1-arg callable learning rate schedule that takes the current optimizer\n step and outputs the decayed learning rate, a scalar `Tensor` of the same\n type as `initial_learning_rate`.\n ", "desc": "A LearningRateSchedule that uses a polynomial decay schedule.", "type": "API"}, {"name": "tf.optimizers.schedules.serialize", "docs": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.\n\n Args:\n learning_rate_schedule: The `LearningRateSchedule` object to serialize.\n\n Returns:\n A JSON-serializable dict representing the object's config.\n\n Example:\n\n >>> lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(\n ... 0.1, decay_steps=100000, decay_rate=0.96, staircase=True)\n >>> tf.keras.optimizers.schedules.serialize(lr_schedule)\n {'class_name': 'ExponentialDecay', 'config': {...}}\n ", "desc": "Serializes a `LearningRateSchedule` into a JSON-compatible representation.", "type": "API"}, {"name": "tf.optimizers.serialize", "docs": "Serialize the optimizer configuration to JSON compatible python dict.\n\n The configuration can be used for persistence and reconstruct the `Optimizer`\n instance again.\n\n >>> tf.keras.optimizers.serialize(tf.keras.optimizers.SGD())\n {'class_name': 'SGD', 'config': {'name': 'SGD', 'learning_rate': 0.01,\n 'decay': 0.0, 'momentum': 0.0,\n 'nesterov': False}}\n\n Args:\n optimizer: An `Optimizer` instance to serialize.\n\n Returns:\n Python dict which contains the configuration of the input optimizer.\n ", "desc": "Serialize the optimizer configuration to JSON compatible python dict.", "type": "API"}, {"name": "tf.optimizers.SGD", "docs": "Gradient descent (with momentum) optimizer.\n\n Update rule for parameter `w` with gradient `g` when `momentum` is 0:\n\n ```python\n w = w - learning_rate * g\n ```\n\n Update rule when `momentum` is larger than 0:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + velocity\n ```\n\n When `nesterov=True`, this rule becomes:\n\n ```python\n velocity = momentum * velocity - learning_rate * g\n w = w + momentum * velocity - learning_rate * g\n ```\n\n Args:\n learning_rate: A `Tensor`, floating point value, or a schedule that is a\n `tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable\n that takes no arguments and returns the actual value to use. The\n learning rate. Defaults to 0.01.\n momentum: float hyperparameter >= 0 that accelerates gradient descent\n in the relevant\n direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient\n descent.\n nesterov: boolean. Whether to apply Nesterov momentum.\n Defaults to `False`.\n name: Optional name prefix for the operations created when applying\n gradients. Defaults to `\"SGD\"`.\n **kwargs: keyword arguments. Allowed arguments are `clipvalue`,\n `clipnorm`, `global_clipnorm`.\n If `clipvalue` (float) is set, the gradient of each weight\n is clipped to be no higher than this value.\n If `clipnorm` (float) is set, the gradient of each weight\n is individually clipped so that its norm is no higher than this value.\n If `global_clipnorm` (float) is set the gradient of all weights is\n clipped so that their global norm is no higher than this value.\n\n Usage:\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1)\n >>> var = tf.Variable(1.0)\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> # Step is `- learning_rate * grad`\n >>> var.numpy()\n 0.9\n\n >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)\n >>> var = tf.Variable(1.0)\n >>> val0 = var.value()\n >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1\n >>> # First step is `- learning_rate * grad`\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val1 = var.value()\n >>> (val0 - val1).numpy()\n 0.1\n >>> # On later steps, step-size increases because of momentum\n >>> step_count = opt.minimize(loss, [var]).numpy()\n >>> val2 = var.value()\n >>> (val1 - val2).numpy()\n 0.18\n\n Reference:\n - For `nesterov=True`, See [Sutskever et al., 2013](\n http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).\n ", "desc": "Gradient descent (with momentum) optimizer.", "type": "API"}, {"name": "tf.OptionalSpec", "docs": "Type specification for `tf.experimental.Optional`.\n\n For instance, `tf.OptionalSpec` can be used to define a tf.function that takes\n `tf.experimental.Optional` as an input argument:\n\n >>> @tf.function(input_signature=[tf.OptionalSpec(\n ... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])\n ... def maybe_square(optional):\n ... if optional.has_value():\n ... x = optional.get_value()\n ... return x * x\n ... return -1\n >>> optional = tf.experimental.Optional.from_value(5)\n >>> print(maybe_square(optional))\n tf.Tensor(25, shape=(), dtype=int32)\n\n Attributes:\n element_spec: A (nested) structure of `TypeSpec` objects that represents the\n type specification of the optional element.\n ", "desc": "Type specification for `tf.experimental.Optional`.", "type": "API"}, {"name": "tf.pad", "docs": "Pads a tensor.\n\n This operation pads a `tensor` according to the `paddings` you specify.\n `paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of\n `tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how\n many values to add before the contents of `tensor` in that dimension, and\n `paddings[D, 1]` indicates how many values to add after the contents of\n `tensor` in that dimension. If `mode` is \"REFLECT\" then both `paddings[D, 0]`\n and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If\n `mode` is \"SYMMETRIC\" then both `paddings[D, 0]` and `paddings[D, 1]` must be\n no greater than `tensor.dim_size(D)`.\n\n The padded size of each dimension D of the output is:\n\n `paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]`\n\n For example:\n\n ```python\n t = tf.constant([[1, 2, 3], [4, 5, 6]])\n paddings = tf.constant([[1, 1,], [2, 2]])\n # 'constant_values' is 0.\n # rank of 't' is 2.\n tf.pad(t, paddings, \"CONSTANT\") # [[0, 0, 0, 0, 0, 0, 0],\n # [0, 0, 1, 2, 3, 0, 0],\n # [0, 0, 4, 5, 6, 0, 0],\n # [0, 0, 0, 0, 0, 0, 0]]\n\n tf.pad(t, paddings, \"REFLECT\") # [[6, 5, 4, 5, 6, 5, 4],\n # [3, 2, 1, 2, 3, 2, 1],\n # [6, 5, 4, 5, 6, 5, 4],\n # [3, 2, 1, 2, 3, 2, 1]]\n\n tf.pad(t, paddings, \"SYMMETRIC\") # [[2, 1, 1, 2, 3, 3, 2],\n # [2, 1, 1, 2, 3, 3, 2],\n # [5, 4, 4, 5, 6, 6, 5],\n # [5, 4, 4, 5, 6, 6, 5]]\n ```\n\n Args:\n tensor: A `Tensor`.\n paddings: A `Tensor` of type `int32`.\n mode: One of \"CONSTANT\", \"REFLECT\", or \"SYMMETRIC\" (case-insensitive)\n constant_values: In \"CONSTANT\" mode, the scalar pad value to use. Must be\n same type as `tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n\n Raises:\n ValueError: When mode is not one of \"CONSTANT\", \"REFLECT\", or \"SYMMETRIC\".\n ", "desc": "Pads a tensor.", "type": "API"}, {"name": "tf.parallel_stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.\n\n Requires that the shape of inputs be known at graph construction time.\n\n Packs the list of tensors in `values` into a tensor with rank one higher than\n each tensor in `values`, by packing them along the first dimension.\n Given a list of length `N` of tensors of shape `(A, B, C)`; the `output`\n tensor will have the shape `(N, A, B, C)`.\n\n For example:\n\n ```python\n x = tf.constant([1, 4])\n y = tf.constant([2, 5])\n z = tf.constant([3, 6])\n tf.parallel_stack([x, y, z]) # [[1, 4], [2, 5], [3, 6]]\n ```\n\n The difference between `stack` and `parallel_stack` is that `stack` requires\n all the inputs be computed before the operation will begin but doesn't require\n that the input shapes be known during graph construction.\n\n `parallel_stack` will copy pieces of the input into the output as they become\n available, in some situations this can provide a performance benefit.\n\n Unlike `stack`, `parallel_stack` does NOT support backpropagation.\n\n This is the opposite of unstack. The numpy equivalent is\n\n tf.parallel_stack([x, y, z]) = np.asarray([x, y, z])\n\n @compatibility(eager)\n parallel_stack is not compatible with eager execution.\n @end_compatibility\n\n Args:\n values: A list of `Tensor` objects with the same shape and type.\n name: A name for this operation (optional).\n\n Returns:\n output: A stacked `Tensor` with the same type as `values`.\n\n Raises:\n RuntimeError: if executed in eager mode.\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.", "type": "API"}, {"name": "tf.pow", "docs": "Computes the power of one value to another.\n\n Given a tensor `x` and a tensor `y`, this operation computes \\\\(x^y\\\\) for\n corresponding elements in `x` and `y`. For example:\n\n ```python\n x = tf.constant([[2, 2], [3, 3]])\n y = tf.constant([[8, 16], [2, 3]])\n tf.pow(x, y) # [[256, 65536], [9, 27]]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n y: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, `int64`,\n `complex64`, or `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`.\n ", "desc": "Computes the power of one value to another.", "type": "API"}, {"name": "tf.print", "docs": "Print the specified inputs.\n\n A TensorFlow operator that prints the specified inputs to a desired\n output stream or logging level. The inputs may be dense or sparse Tensors,\n primitive python objects, data structures that contain tensors, and printable\n Python objects. Printed tensors will recursively show the first and last\n elements of each dimension to summarize.\n\n Example:\n Single-input usage:\n\n ```python\n tensor = tf.range(10)\n tf.print(tensor, output_stream=sys.stderr)\n ```\n\n (This prints \"[0 1 2 ... 7 8 9]\" to sys.stderr)\n\n Multi-input usage:\n\n ```python\n tensor = tf.range(10)\n tf.print(\"tensors:\", tensor, {2: tensor * 2}, output_stream=sys.stdout)\n ```\n\n (This prints \"tensors: [0 1 2 ... 7 8 9] {2: [0 2 4 ... 14 16 18]}\" to\n sys.stdout)\n\n Changing the input separator:\n ```python\n tensor_a = tf.range(2)\n tensor_b = tensor_a * 2\n tf.print(tensor_a, tensor_b, output_stream=sys.stderr, sep=',')\n ```\n\n (This prints \"[0 1],[0 2]\" to sys.stderr)\n\n Usage in a `tf.function`:\n\n ```python\n @tf.function\n def f():\n tensor = tf.range(10)\n tf.print(tensor, output_stream=sys.stderr)\n return tensor\n\n range_tensor = f()\n ```\n\n (This prints \"[0 1 2 ... 7 8 9]\" to sys.stderr)\n\n *Compatibility usage in TF 1.x graphs*:\n\n In graphs manually created outside of `tf.function`, this method returns\n the created TF operator that prints the data. To make sure the\n operator runs, users need to pass the produced op to\n `tf.compat.v1.Session`'s run method, or to use the op as a control\n dependency for executed ops by specifying\n `with tf.compat.v1.control_dependencies([print_op])`.\n\n ```python\n tf.compat.v1.disable_v2_behavior() # for TF1 compatibility only\n\n sess = tf.compat.v1.Session()\n with sess.as_default():\n tensor = tf.range(10)\n print_op = tf.print(\"tensors:\", tensor, {2: tensor * 2},\n output_stream=sys.stdout)\n with tf.control_dependencies([print_op]):\n tripled_tensor = tensor * 3\n\n sess.run(tripled_tensor)\n ```\n\n (This prints \"tensors: [0 1 2 ... 7 8 9] {2: [0 2 4 ... 14 16 18]}\" to\n sys.stdout)\n\n Note: In Jupyter notebooks and colabs, `tf.print` prints to the notebook\n cell outputs. It will not write to the notebook kernel's console logs.\n\n Args:\n *inputs: Positional arguments that are the inputs to print. Inputs in the\n printed output will be separated by spaces. Inputs may be python\n primitives, tensors, data structures such as dicts and lists that may\n contain tensors (with the data structures possibly nested in arbitrary\n ways), and printable python objects.\n output_stream: The output stream, logging level, or file to print to.\n Defaults to sys.stderr, but sys.stdout, tf.compat.v1.logging.info,\n tf.compat.v1.logging.warning, tf.compat.v1.logging.error,\n absl.logging.info, absl.logging.warning and absl.logging.error are also\n supported. To print to a file, pass a string started with \"file://\"\n followed by the file path, e.g., \"file:///tmp/foo.out\".\n summarize: The first and last `summarize` elements within each dimension are\n recursively printed per Tensor. If None, then the first 3 and last 3\n elements of each dimension are printed for each tensor. If set to -1, it\n will print all elements of every tensor.\n sep: The string to use to separate the inputs. Defaults to \" \".\n end: End character that is appended at the end the printed string. Defaults\n to the newline character.\n name: A name for the operation (optional).\n\n Returns:\n None when executing eagerly. During graph tracing this returns\n a TF operator that prints the specified inputs in the specified output\n stream or logging level. This operator will be automatically executed\n except inside of `tf.compat.v1` graphs and sessions.\n\n Raises:\n ValueError: If an unsupported output stream is specified.\n ", "desc": "Print the specified inputs.", "type": "API"}, {"name": "tf.profiler", "docs": "Public API for tf.profiler namespace.\n", "desc": "Public API for tf.profiler namespace.", "type": "API"}, {"name": "tf.profiler.experimental", "docs": "Public API for tf.profiler.experimental namespace.\n", "desc": "Public API for tf.profiler.experimental namespace.", "type": "API"}, {"name": "tf.profiler.experimental.client", "docs": "Public API for tf.profiler.experimental.client namespace.\n", "desc": "Public API for tf.profiler.experimental.client namespace.", "type": "API"}, {"name": "tf.profiler.experimental.client.monitor", "docs": "Sends grpc requests to profiler server to perform on-demand monitoring.\n\n The monitoring result is a light weight performance summary of your model\n execution. This method will block the caller thread until it receives the\n monitoring result. This method currently supports Cloud TPU only.\n\n Args:\n service_addr: gRPC address of profiler service e.g. grpc://10.0.0.2:8466.\n duration_ms: Duration of monitoring in ms.\n level: Choose a monitoring level between 1 and 2 to monitor your job. Level\n 2 is more verbose than level 1 and shows more metrics.\n\n Returns:\n A string of monitoring output.\n\n Example usage:\n\n ```python\n # Continuously send gRPC requests to the Cloud TPU to monitor the model\n # execution.\n\n for query in range(0, 100):\n print(\n tf.profiler.experimental.client.monitor('grpc://10.0.0.2:8466', 1000))\n ```\n\n ", "desc": "Sends grpc requests to profiler server to perform on-demand monitoring.", "type": "API"}, {"name": "tf.profiler.experimental.client.trace", "docs": "Sends gRPC requests to one or more profiler servers to perform on-demand profiling.\n\n This method will block the calling thread until it receives responses from all\n servers or until deadline expiration. Both single host and multiple host\n profiling are supported on CPU, GPU, and TPU.\n The profiled results will be saved by each server to the specified TensorBoard\n log directory (i.e. the directory you save your model checkpoints). Use the\n TensorBoard profile plugin to view the visualization and analysis results.\n\n Args:\n service_addr: A comma delimited string of gRPC addresses of the workers to\n profile.\n e.g. service_addr='grpc://localhost:6009'\n service_addr='grpc://10.0.0.2:8466,grpc://10.0.0.3:8466'\n service_addr='grpc://localhost:12345,grpc://localhost:23456'\n logdir: Path to save profile data to, typically a TensorBoard log directory.\n This path must be accessible to both the client and server.\n e.g. logdir='gs://your_tb_dir'\n duration_ms: Duration of tracing or monitoring in milliseconds. Must be\n greater than zero.\n worker_list: An optional TPU only configuration. The list of workers to\n profile in the current session.\n num_tracing_attempts: Optional. Automatically retry N times when no trace\n event is collected (default 3).\n options: profiler.experimental.ProfilerOptions namedtuple for miscellaneous\n profiler options.\n\n Raises:\n InvalidArgumentError: For when arguments fail validation checks.\n UnavailableError: If no trace event was collected.\n\n Example usage (CPU/GPU):\n\n ```python\n # Start a profiler server before your model runs.\n tf.profiler.experimental.server.start(6009)\n # (Model code goes here).\n # Send gRPC request to the profiler server to collect a trace of your model.\n tf.profiler.experimental.client.trace('grpc://localhost:6009',\n '/nfs/tb_log', 2000)\n ```\n\n Example usage (Multiple GPUs):\n\n ```python\n # E.g. your worker IP addresses are 10.0.0.2, 10.0.0.3, 10.0.0.4, and you\n # would like to schedule start of profiling 1 second from now, for a\n # duration of 2 seconds.\n options['delay_ms'] = 1000\n tf.profiler.experimental.client.trace(\n 'grpc://10.0.0.2:8466,grpc://10.0.0.3:8466,grpc://10.0.0.4:8466',\n 'gs://your_tb_dir',\n 2000,\n options=options)\n ```\n\n Example usage (TPU):\n\n ```python\n # Send gRPC request to a TPU worker to collect a trace of your model. A\n # profiler service has been started in the TPU worker at port 8466.\n # E.g. your TPU IP address is 10.0.0.2 and you want to profile for 2 seconds\n # .\n tf.profiler.experimental.client.trace('grpc://10.0.0.2:8466',\n 'gs://your_tb_dir', 2000)\n ```\n\n Example usage (Multiple TPUs):\n\n ```python\n # Send gRPC request to a TPU pod to collect a trace of your model on\n # multiple TPUs. A profiler service has been started in all the TPU workers\n # at the port 8466.\n # E.g. your TPU IP addresses are 10.0.0.2, 10.0.0.3, 10.0.0.4, and you want\n # to profile for 2 seconds.\n tf.profiler.experimental.client.trace(\n 'grpc://10.0.0.2:8466',\n 'gs://your_tb_dir',\n 2000,\n '10.0.0.2:8466,10.0.0.3:8466,10.0.0.4:8466')\n ```\n\n Launch TensorBoard and point it to the same logdir you provided to this API.\n\n ```shell\n # logdir can be gs://your_tb_dir as in the above examples.\n $ tensorboard --logdir=/tmp/tb_log\n ```\n\n Open your browser and go to localhost:6006/#profile to view profiling results.\n\n ", "desc": "Sends gRPC requests to one or more profiler servers to perform on-demand profiling.", "type": "API"}, {"name": "tf.profiler.experimental.Profile", "docs": "Context-manager profile API.\n\n Profiling will start when entering the scope, and stop and save the results to\n the logdir when exits the scope. Open TensorBoard profile tab to view results.\n\n Example usage:\n ```python\n with tf.profiler.experimental.Profile(\"/path/to/logdir\"):\n # do some work\n ```\n ", "desc": "Context-manager profile API.", "type": "API"}, {"name": "tf.profiler.experimental.ProfilerOptions", "docs": "Options for finer control over the profiler.\n\n Use `tf.profiler.experimental.ProfilerOptions` to control `tf.profiler`\n behavior.\n\n Fields:\n host_tracer_level: Adjust CPU tracing level. Values are: 1 - critical info\n only, 2 - info, 3 - verbose. [default value is 2]\n python_tracer_level: Toggle tracing of Python function calls. Values are: 1\n - enabled, 0 - disabled [default value is 0]\n device_tracer_level: Adjust device (TPU/GPU) tracing level. Values are: 1 -\n enabled, 0 - disabled [default value is 1]\n delay_ms: Requests for all hosts to start profiling at a timestamp that is\n `delay_ms` away from the current time. `delay_ms` is in milliseconds. If\n zero, each host will start profiling immediately upon receiving the\n request. Default value is None, allowing the profiler guess the best\n value.\n\n ", "desc": "Options for finer control over the profiler.", "type": "API"}, {"name": "tf.profiler.experimental.server", "docs": "Public API for tf.profiler.experimental.server namespace.\n", "desc": "Public API for tf.profiler.experimental.server namespace.", "type": "API"}, {"name": "tf.profiler.experimental.server.start", "docs": "Start a profiler grpc server that listens to given port.\n\n The profiler server will exit when the process finishes. The service is\n defined in tensorflow/core/profiler/profiler_service.proto.\n\n Args:\n port: port profiler server listens to.\n Example usage: ```python tf.profiler.experimental.server.start(6009) # do\n your training here.\n ", "desc": "Start a profiler grpc server that listens to given port.", "type": "API"}, {"name": "tf.profiler.experimental.start", "docs": "Start profiling TensorFlow performance.\n\n Args:\n logdir: Profiling results log directory.\n options: `ProfilerOptions` namedtuple to specify miscellaneous profiler\n options. See example usage below.\n\n Raises:\n AlreadyExistsError: If a profiling session is already running.\n\n Example usage:\n ```python\n options = tf.profiler.experimental.ProfilerOptions(host_tracer_level = 3,\n python_tracer_level = 1,\n device_tracer_level = 1)\n tf.profiler.experimental.start('logdir_path', options = options)\n # Training code here\n tf.profiler.experimental.stop()\n ```\n\n To view the profiling results, launch TensorBoard and point it to `logdir`.\n Open your browser and go to `localhost:6006/#profile` to view profiling\n results.\n\n ", "desc": "Start profiling TensorFlow performance.", "type": "API"}, {"name": "tf.profiler.experimental.stop", "docs": "Stops the current profiling session.\n\n The profiler session will be stopped and profile results can be saved.\n\n Args:\n save: An optional variable to save the results to TensorBoard. Default True.\n\n Raises:\n UnavailableError: If there is no active profiling session.\n ", "desc": "Stops the current profiling session.", "type": "API"}, {"name": "tf.profiler.experimental.Trace", "docs": "Context manager that generates a trace event in the profiler.\n\n A trace event will start when entering the context, and stop and save the\n result to the profiler when exiting the context. Open TensorBoard Profile tab\n and choose trace viewer to view the trace event in the timeline.\n\n Trace events are created only when the profiler is enabled. More information\n on how to use the profiler can be found at\n https://tensorflow.org/guide/profiler\n\n Example usage:\n ```python\n tf.profiler.experimental.start('logdir')\n for step in range(num_steps):\n # Creates a trace event for each training step with the step number.\n with tf.profiler.experimental.Trace(\"Train\", step_num=step, _r=1):\n train_fn()\n tf.profiler.experimental.stop()\n ```\n ", "desc": "Context manager that generates a trace event in the profiler.", "type": "API"}, {"name": "tf.py_function", "docs": "Wraps a python function into a TensorFlow op that executes it eagerly.\n\n This function allows expressing computations in a TensorFlow graph as\n Python functions. In particular, it wraps a Python function `func`\n in a once-differentiable TensorFlow operation that executes it with eager\n execution enabled. As a consequence, `tf.py_function` makes it\n possible to express control flow using Python constructs (`if`, `while`,\n `for`, etc.), instead of TensorFlow control flow constructs (`tf.cond`,\n `tf.while_loop`). For example, you might use `tf.py_function` to\n implement the log huber function:\n\n ```python\n def log_huber(x, m):\n if tf.abs(x) <= m:\n return x**2\n else:\n return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))\n\n x = tf.constant(1.0)\n m = tf.constant(2.0)\n\n with tf.GradientTape() as t:\n t.watch([x, m])\n y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)\n\n dy_dx = t.gradient(y, x)\n assert dy_dx.numpy() == 2.0\n ```\n\n You can also use `tf.py_function` to debug your models at runtime\n using Python tools, i.e., you can isolate portions of your code that\n you want to debug, wrap them in Python functions and insert `pdb` tracepoints\n or print statements as desired, and wrap those functions in\n `tf.py_function`.\n\n For more information on eager execution, see the\n [Eager guide](https://tensorflow.org/guide/eager).\n\n `tf.py_function` is similar in spirit to `tf.compat.v1.py_func`, but unlike\n the latter, the former lets you use TensorFlow operations in the wrapped\n Python function. In particular, while `tf.compat.v1.py_func` only runs on CPUs\n and wraps functions that take NumPy arrays as inputs and return NumPy arrays\n as outputs, `tf.py_function` can be placed on GPUs and wraps functions\n that take Tensors as inputs, execute TensorFlow operations in their bodies,\n and return Tensors as outputs.\n\n Note: We recommend to avoid using `tf.py_function` outside of prototyping\n and experimentation due to the following known limitations:\n\n * Calling `tf.py_function` will acquire the Python Global Interpreter Lock\n (GIL) that allows only one thread to run at any point in time. This will\n preclude efficient parallelization and distribution of the execution of the\n program.\n\n * The body of the function (i.e. `func`) will not be serialized in a\n `GraphDef`. Therefore, you should not use this function if you need to\n serialize your model and restore it in a different environment.\n\n * The operation must run in the same address space as the Python program\n that calls `tf.py_function()`. If you are using distributed\n TensorFlow, you must run a `tf.distribute.Server` in the same process as the\n program that calls `tf.py_function()` and you must pin the created\n operation to a device in that server (e.g. using `with tf.device():`).\n\n * Currently `tf.py_function` is not compatible with XLA. Calling\n `tf.py_function` inside `tf.function(jit_comiple=True)` will raise an\n error.\n\n Args:\n func: A Python function that accepts `inp` as arguments, and returns a\n value (or list of values) whose type is described by `Tout`.\n\n inp: Input arguments for `func`. A list whose elements are `Tensor`s or\n `CompositeTensors` (such as `tf.RaggedTensor`); or a single `Tensor` or\n `CompositeTensor`.\n\n Tout: The type(s) of the value(s) returned by `func`. One of the\n following.\n\n * If `func` returns a `Tensor` (or a value that can be converted to a\n Tensor): the `tf.DType` for that value.\n * If `func` returns a `CompositeTensor`: The `tf.TypeSpec` for that value.\n * If `func` returns `None`: the empty list (`[]`).\n * If `func` returns a list of `Tensor` and `CompositeTensor` values:\n a corresponding list of `tf.DType`s and `tf.TypeSpec`s for each value.\n\n name: A name for the operation (optional).\n\n Returns:\n The value(s) computed by `func`: a `Tensor`, `CompositeTensor`, or list of\n `Tensor` and `CompositeTensor`; or an empty list if `func` returns `None`.\n ", "desc": "Wraps a python function into a TensorFlow op that executes it eagerly.", "type": "API"}, {"name": "tf.quantization", "docs": "Public API for tf.quantization namespace.\n", "desc": "Public API for tf.quantization namespace.", "type": "API"}, {"name": "tf.quantization.dequantize", "docs": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.\n\n [min_range, max_range] are scalar floats that specify the range for\n the output. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n if T == qint8: in[i] += (range(T) + 1)/ 2.0\n out[i] = min_range + (in[i]* (max_range - min_range) / range(T))\n ```\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n If the input comes from a QuantizedRelu6, the output type is\n quint8 (range of 0-255) but the possible range of QuantizedRelu6 is\n 0-6. The min_range and max_range values are therefore 0.0 and 6.0.\n Dequantize on quint8 will take each value, cast to float, and multiply\n by 6 / 255.\n Note that if quantizedtype is qint8, the operation will additionally add\n each value by 128 prior to casting.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```c++\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = range / num_discrete_values\n const double offset_input = static_cast(input) - lowest_quantized;\n result = range_min + ((input - numeric_limits::min()) * range_scale)\n ```\n\n If the mode is `SCALED`, dequantization is performed by multiplying each\n input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).\n\n The scaling_factor is determined from `min_range`, `max_range`, and\n `narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}`\n and `QuantizeV2`, using the following algorithm:\n\n ```c++\n\n const int min_expected_T = std::numeric_limits::min() +\n (narrow_range ? 1 : 0);\n const int max_expected_T = std::numeric_limits::max();\n const float max_expected_T = std::numeric_limits::max();\n\n const float scale_factor =\n (std::numeric_limits::min() == 0) ? (max_range / max_expected_T)\n : std::max(min_range / min_expected_T,\n max_range / max_expected_T);\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_range: A `Tensor` of type `float32`.\n The minimum scalar value possibly produced for the input.\n max_range: A `Tensor` of type `float32`.\n The maximum scalar value possibly produced for the input.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n dtype: An optional `tf.DType` from: `tf.bfloat16, tf.float32`. Defaults to `tf.float32`.\n Type of the output tensor. Currently Dequantize supports float and bfloat16.\n If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_args", "docs": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n Quantization is called fake since the output is still in floating point.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_args_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxArgs operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxArgs operation.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxArgs operation.", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_vars", "docs": "Fake-quantize the 'inputs' tensor of type float via global float scalars\n\n Fake-quantize the `inputs` tensor of type float via global float scalars\n `min` and `max` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via global float scalars", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_vars_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVars operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation.\n min, max: Quantization interval, scalar floats.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 8, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVars operation.", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_vars_per_channel", "docs": "Fake-quantize the 'inputs' tensor of type float via per-channel floats\n\n Fake-quantize the `inputs` tensor of type float per-channel and one of the\n shapes: `[d]`, `[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max`\n of shape `[d]` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via per-channel floats", "type": "API"}, {"name": "tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation,\n shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape\n same as `gradients`.\n min, max: Quantization interval, floats of shape `[d]`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 16, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.", "type": "API"}, {"name": "tf.quantization.quantize", "docs": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.\n\n [min_range, max_range] are scalar floats that specify the range for\n the 'input' data. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents. The\n 'round_mode' attribute controls which rounding tie-breaking algorithm is used\n when rounding float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)\n if T == qint8: out[i] -= (range(T) + 1) / 2.0\n ```\n\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n Assume the input is type float and has a possible range of [0.0, 6.0] and the\n output type is quint8 ([0, 255]). The min_range and max_range values should be\n specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each\n value of the input by 255/6 and cast to quint8.\n\n If the output type was qint8 ([-128, 127]), the operation will additionally\n subtract each value by 128 prior to casting, so that the range of values aligns\n with the range of qint8.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = num_discrete_values / range\n quantized = round(input * range_scale) - round(range_min * range_scale) +\n numeric_limits::min()\n quantized = max(quantized, numeric_limits::min())\n quantized = min(quantized, numeric_limits::max())\n ```\n\n The biggest difference between this and MIN_COMBINED is that the minimum range\n is rounded first, before it's subtracted from the rounded value. With\n MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing\n and dequantizing will introduce a larger and larger error.\n\n *SCALED mode Example*\n\n `SCALED` mode matches the quantization approach used in\n `QuantizeAndDequantize{V2|V3}`.\n\n If the mode is `SCALED`, the quantization is performed by multiplying each\n input value by a scaling_factor.\n The scaling_factor is determined from `min_range` and `max_range` to be as large\n as possible such that the range from `min_range` to `max_range` is representable\n within values of type T.\n\n ```c++\n\n const int min_T = std::numeric_limits::min();\n const int max_T = std::numeric_limits::max();\n const float max_float = std::numeric_limits::max();\n\n const float scale_factor_from_min_side =\n (min_T * min_range > 0) ? min_T / min_range : max_float;\n const float scale_factor_from_max_side =\n (max_T * max_range > 0) ? max_T / max_range : max_float;\n\n const float scale_factor = std::min(scale_factor_from_min_side,\n scale_factor_from_max_side);\n ```\n\n We next use the scale_factor to adjust min_range and max_range as follows:\n\n ```c++\n min_range = min_T / scale_factor;\n max_range = max_T / scale_factor;\n ```\n\n\n e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would\n compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8\n In this case, min_range would remain -10, but max_range would be adjusted to\n 127 / 12.8 = 9.921875\n\n So we will quantize input values in the range (-10, 9.921875) to (-128, 127).\n\n The input tensor can now be quantized by clipping values to the range\n `min_range` to `max_range`, then multiplying by scale_factor as follows:\n\n ```c++\n result = round(min(max_range, max(min_range, input)) * scale_factor)\n ```\n\n The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of\n this operation. These outputs should be used as the range for any further\n calculations.\n\n\n *narrow_range (bool) attribute*\n\n If true, we do not use the minimum quantized value.\n i.e. for int8 the quantized output, it would be restricted to the range\n -127..127 instead of the full -128..127 range.\n This is provided for compatibility with certain inference backends.\n (Only applies to SCALED mode)\n\n\n *axis (int) attribute*\n\n An optional `axis` attribute can specify a dimension index of the input tensor,\n such that quantization ranges will be calculated and applied separately for each\n slice of the tensor along that dimension. This is useful for per-channel\n quantization.\n\n If axis is specified, min_range and max_range\n\n if `axis`=None, per-tensor quantization is performed as normal.\n\n\n *ensure_minimum_range (float) attribute*\n\n Ensures the minimum quantization range is at least this value.\n The legacy default value for this is 0.01, but it is strongly suggested to\n set it to 0 for new uses.\n\n Args:\n input: A `Tensor` of type `float32`.\n min_range: A `Tensor` of type `float32`.\n The minimum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_min`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n max_range: A `Tensor` of type `float32`.\n The maximum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_max`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n T: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n round_mode: An optional `string` from: `\"HALF_AWAY_FROM_ZERO\", \"HALF_TO_EVEN\"`. Defaults to `\"HALF_AWAY_FROM_ZERO\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n ensure_minimum_range: An optional `float`. Defaults to `0.01`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `T`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.", "type": "API"}, {"name": "tf.quantization.quantize_and_dequantize", "docs": "Quantizes then dequantizes a tensor. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nThis Op has been deprecated, use`quantize_and_dequantize_v2` instead. To To simulate the V1 the behavior of tf.quantization.quantize_and_dequantize(...) use tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).\n\nArgs:\n input: A `Tensor` to quantize and dequantize.\n input_min: If range_given=True, the minimum input value, that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of minimum values for each slice along axis.\n input_max: If range_given=True, the maximum input value that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of maximum values for each slice along axis.\n signed_input: True if the quantization is signed or unsigned.\n num_bits: The bitwidth of the quantization.\n range_given: If true use `input_min` and `input_max` for the range of the\n input, otherwise determine min and max from the input `Tensor`.\n round_mode: Rounding mode when rounding from float values to quantized ones.\n one of ['HALF_TO_EVEN', 'HALF_UP']\n name: Optional name for the operation.\n narrow_range: If true, then the absolute value of the quantized minimum\n value is the same as the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: Integer. If specified, refers to a dimension of the input tensor, such\n that quantization will be per slice along that dimension.\n\nReturns:\n A `Tensor`. Each element is the result of quantizing and dequantizing the\n corresponding element of `input`.", "desc": "Quantizes then dequantizes a tensor. (deprecated)", "type": "API"}, {"name": "tf.quantization.quantize_and_dequantize_v2", "docs": "Quantizes then dequantizes a tensor.\n\n Updates the gradient definition for quantization that is outside the range to\n be 0.To simulate the V1 the behavior of\n tf.quantization.quantize_and_dequantize(...) use\n tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).\n\n Example usage:\n\n ```python\n def getQuantizeOp(input):\n input_tensor = tf.placeholder(tf.float32, shape=[4, 4])\n net = tf.quantization.quantize_and_dequantize(input,\n input_min=min_threshold,\n input_max=max_threshold,\n range_given=True)\n\n To simulate v1 behavior:\n\n def testDecomposeQuantizeDequantize(self):\n def f(input_tensor):\n return tf.quantization.quantize_and_dequantize_v2(input_tensor,\n input_min = 5.0,\n input_max= -10.0,\n range_given=True)\n input_tensor = tf.placeholder(tf.float32, shape=[4, 4])\n net = tf.grad_pass_through(f)(input_tensor)\n ```\n\n Args:\n input: A `Tensor` to quantize and dequantize.\n input_min: If range_given=True, the minimum input value, that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of minimum values for each slice along axis.\n input_max: If range_given=True, the maximum input value that needs to be\n represented in the quantized representation. If axis is specified, this\n should be a vector of maximum values for each slice along axis.\n signed_input: True if the quantization is signed or unsigned.\n num_bits: The bitwidth of the quantization.\n range_given: If true use `input_min` and `input_max` for the range of the\n input, otherwise determine min and max from the input `Tensor`.\n round_mode: Rounding mode when rounding from float values to quantized ones.\n one of ['HALF_TO_EVEN', 'HALF_UP']\n name: Optional name for the operation.\n narrow_range: If true, then the absolute value of the quantized minimum\n value is the same as the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: Integer. If specified, refers to a dimension of the input tensor, such\n that quantization will be per slice along that dimension.\n\n Returns:\n A `Tensor`. Each element is the result of quantizing and dequantizing the\n corresponding element of `input`.\n ", "desc": "Quantizes then dequantizes a tensor.", "type": "API"}, {"name": "tf.quantization.quantized_concat", "docs": "Concatenates quantized tensors along one dimension.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [0, rank(values)).\n values: A list of at least 2 `Tensor` objects with the same type.\n The `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n input_mins: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The minimum scalar values for each of the input tensors.\n input_maxes: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The maximum scalar values for each of the input tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor`. Has the same type as `values`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Concatenates quantized tensors along one dimension.", "type": "API"}, {"name": "tf.queue", "docs": "Public API for tf.queue namespace.\n", "desc": "Public API for tf.queue namespace.", "type": "API"}, {"name": "tf.queue.FIFOQueue", "docs": "A queue implementation that dequeues elements in first-in first-out order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in first-in first-out order.", "type": "API"}, {"name": "tf.queue.PaddingFIFOQueue", "docs": "A FIFOQueue that supports batching variable-sized tensors by padding.\n\n A `PaddingFIFOQueue` may contain components with dynamic shape, while also\n supporting `dequeue_many`. See the constructor for more details.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A FIFOQueue that supports batching variable-sized tensors by padding.", "type": "API"}, {"name": "tf.queue.PriorityQueue", "docs": "A queue implementation that dequeues elements in prioritized order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in prioritized order.", "type": "API"}, {"name": "tf.queue.QueueBase", "docs": "Base class for queue implementations.\n\n A queue is a TensorFlow data structure that stores tensors across\n multiple steps, and exposes operations that enqueue and dequeue\n tensors.\n\n Each queue element is a tuple of one or more tensors, where each\n tuple component has a static dtype, and may have a static shape. The\n queue implementations support versions of enqueue and dequeue that\n handle single elements, versions that support enqueuing and\n dequeuing a batch of elements at once.\n\n See `tf.queue.FIFOQueue` and\n `tf.queue.RandomShuffleQueue` for concrete\n implementations of this class, and instructions on how to create\n them.\n ", "desc": "Base class for queue implementations.", "type": "API"}, {"name": "tf.queue.RandomShuffleQueue", "docs": "A queue implementation that dequeues elements in a random order.\n\n See `tf.queue.QueueBase` for a description of the methods on\n this class.\n ", "desc": "A queue implementation that dequeues elements in a random order.", "type": "API"}, {"name": "tf.ragged", "docs": "Ragged Tensors.\n\nThis package defines ops for manipulating ragged tensors (`tf.RaggedTensor`),\nwhich are tensors with non-uniform shapes. In particular, each `RaggedTensor`\nhas one or more *ragged dimensions*, which are dimensions whose slices may have\ndifferent lengths. For example, the inner (column) dimension of\n`rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, since the column slices\n(`rt[0, :]`, ..., `rt[4, :]`) have different lengths. For a more detailed\ndescription of ragged tensors, see the `tf.RaggedTensor` class documentation\nand the [Ragged Tensor Guide](/guide/ragged_tensor).\n\n\n### Additional ops that support `RaggedTensor`\n\nArguments that accept `RaggedTensor`s are marked in **bold**.\n\n* `tf.__operators__.eq`(**self**, **other**)\n* `tf.__operators__.ne`(**self**, **other**)\n* `tf.bitcast`(**input**, type, name=`None`)\n* `tf.bitwise.bitwise_and`(**x**, **y**, name=`None`)\n* `tf.bitwise.bitwise_or`(**x**, **y**, name=`None`)\n* `tf.bitwise.bitwise_xor`(**x**, **y**, name=`None`)\n* `tf.bitwise.invert`(**x**, name=`None`)\n* `tf.bitwise.left_shift`(**x**, **y**, name=`None`)\n* `tf.bitwise.right_shift`(**x**, **y**, name=`None`)\n* `tf.broadcast_to`(**input**, **shape**, name=`None`)\n* `tf.cast`(**x**, dtype, name=`None`)\n* `tf.clip_by_value`(**t**, clip_value_min, clip_value_max, name=`None`)\n* `tf.concat`(**values**, axis, name=`'concat'`)\n* `tf.debugging.check_numerics`(**tensor**, message, name=`None`)\n* `tf.dtypes.complex`(**real**, **imag**, name=`None`)\n* `tf.dtypes.saturate_cast`(**value**, dtype, name=`None`)\n* `tf.dynamic_partition`(**data**, **partitions**, num_partitions, name=`None`)\n* `tf.expand_dims`(**input**, axis, name=`None`)\n* `tf.gather_nd`(**params**, **indices**, batch_dims=`0`, name=`None`)\n* `tf.gather`(**params**, **indices**, validate_indices=`None`, axis=`None`, batch_dims=`0`, name=`None`)\n* `tf.image.adjust_brightness`(**image**, delta)\n* `tf.image.adjust_gamma`(**image**, gamma=`1`, gain=`1`)\n* `tf.image.convert_image_dtype`(**image**, dtype, saturate=`False`, name=`None`)\n* `tf.image.random_brightness`(**image**, max_delta, seed=`None`)\n* `tf.image.resize`(**images**, size, method=`'bilinear'`, preserve_aspect_ratio=`False`, antialias=`False`, name=`None`)\n* `tf.image.stateless_random_brightness`(**image**, max_delta, seed)\n* `tf.io.decode_base64`(**input**, name=`None`)\n* `tf.io.decode_compressed`(**bytes**, compression_type=`''`, name=`None`)\n* `tf.io.encode_base64`(**input**, pad=`False`, name=`None`)\n* `tf.linalg.matmul`(**a**, **b**, transpose_a=`False`, transpose_b=`False`, adjoint_a=`False`, adjoint_b=`False`, a_is_sparse=`False`, b_is_sparse=`False`, output_type=`None`, name=`None`)\n* `tf.math.abs`(**x**, name=`None`)\n* `tf.math.acos`(**x**, name=`None`)\n* `tf.math.acosh`(**x**, name=`None`)\n* `tf.math.add_n`(**inputs**, name=`None`)\n* `tf.math.add`(**x**, **y**, name=`None`)\n* `tf.math.angle`(**input**, name=`None`)\n* `tf.math.asin`(**x**, name=`None`)\n* `tf.math.asinh`(**x**, name=`None`)\n* `tf.math.atan2`(**y**, **x**, name=`None`)\n* `tf.math.atan`(**x**, name=`None`)\n* `tf.math.atanh`(**x**, name=`None`)\n* `tf.math.bessel_i0`(**x**, name=`None`)\n* `tf.math.bessel_i0e`(**x**, name=`None`)\n* `tf.math.bessel_i1`(**x**, name=`None`)\n* `tf.math.bessel_i1e`(**x**, name=`None`)\n* `tf.math.ceil`(**x**, name=`None`)\n* `tf.math.conj`(**x**, name=`None`)\n* `tf.math.cos`(**x**, name=`None`)\n* `tf.math.cosh`(**x**, name=`None`)\n* `tf.math.digamma`(**x**, name=`None`)\n* `tf.math.divide_no_nan`(**x**, **y**, name=`None`)\n* `tf.math.divide`(**x**, **y**, name=`None`)\n* `tf.math.equal`(**x**, **y**, name=`None`)\n* `tf.math.erf`(**x**, name=`None`)\n* `tf.math.erfc`(**x**, name=`None`)\n* `tf.math.erfcinv`(**x**, name=`None`)\n* `tf.math.erfinv`(**x**, name=`None`)\n* `tf.math.exp`(**x**, name=`None`)\n* `tf.math.expm1`(**x**, name=`None`)\n* `tf.math.floor`(**x**, name=`None`)\n* `tf.math.floordiv`(**x**, **y**, name=`None`)\n* `tf.math.floormod`(**x**, **y**, name=`None`)\n* `tf.math.greater_equal`(**x**, **y**, name=`None`)\n* `tf.math.greater`(**x**, **y**, name=`None`)\n* `tf.math.imag`(**input**, name=`None`)\n* `tf.math.is_finite`(**x**, name=`None`)\n* `tf.math.is_inf`(**x**, name=`None`)\n* `tf.math.is_nan`(**x**, name=`None`)\n* `tf.math.less_equal`(**x**, **y**, name=`None`)\n* `tf.math.less`(**x**, **y**, name=`None`)\n* `tf.math.lgamma`(**x**, name=`None`)\n* `tf.math.log1p`(**x**, name=`None`)\n* `tf.math.log_sigmoid`(**x**, name=`None`)\n* `tf.math.log`(**x**, name=`None`)\n* `tf.math.logical_and`(**x**, **y**, name=`None`)\n* `tf.math.logical_not`(**x**, name=`None`)\n* `tf.math.logical_or`(**x**, **y**, name=`None`)\n* `tf.math.logical_xor`(**x**, **y**, name=`'LogicalXor'`)\n* `tf.math.maximum`(**x**, **y**, name=`None`)\n* `tf.math.minimum`(**x**, **y**, name=`None`)\n* `tf.math.multiply_no_nan`(**x**, **y**, name=`None`)\n* `tf.math.multiply`(**x**, **y**, name=`None`)\n* `tf.math.ndtri`(**x**, name=`None`)\n* `tf.math.negative`(**x**, name=`None`)\n* `tf.math.nextafter`(**x1**, x2, name=`None`)\n* `tf.math.not_equal`(**x**, **y**, name=`None`)\n* `tf.math.pow`(**x**, **y**, name=`None`)\n* `tf.math.real`(**input**, name=`None`)\n* `tf.math.reciprocal_no_nan`(**x**, name=`None`)\n* `tf.math.reciprocal`(**x**, name=`None`)\n* `tf.math.reduce_all`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_any`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_max`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_mean`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_min`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_prod`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_std`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_sum`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.reduce_variance`(**input_tensor**, axis=`None`, keepdims=`False`, name=`None`)\n* `tf.math.rint`(**x**, name=`None`)\n* `tf.math.round`(**x**, name=`None`)\n* `tf.math.rsqrt`(**x**, name=`None`)\n* `tf.math.scalar_mul`(**scalar**, **x**, name=`None`)\n* `tf.math.sigmoid`(**x**, name=`None`)\n* `tf.math.sign`(**x**, name=`None`)\n* `tf.math.sin`(**x**, name=`None`)\n* `tf.math.sinh`(**x**, name=`None`)\n* `tf.math.softplus`(**features**, name=`None`)\n* `tf.math.special.bessel_j0`(**x**, name=`None`)\n* `tf.math.special.bessel_j1`(**x**, name=`None`)\n* `tf.math.special.bessel_k0`(**x**, name=`None`)\n* `tf.math.special.bessel_k0e`(**x**, name=`None`)\n* `tf.math.special.bessel_k1`(**x**, name=`None`)\n* `tf.math.special.bessel_k1e`(**x**, name=`None`)\n* `tf.math.special.bessel_y0`(**x**, name=`None`)\n* `tf.math.special.bessel_y1`(**x**, name=`None`)\n* `tf.math.special.dawsn`(**x**, name=`None`)\n* `tf.math.special.expint`(**x**, name=`None`)\n* `tf.math.special.fresnel_cos`(**x**, name=`None`)\n* `tf.math.special.fresnel_sin`(**x**, name=`None`)\n* `tf.math.special.spence`(**x**, name=`None`)\n* `tf.math.sqrt`(**x**, name=`None`)\n* `tf.math.square`(**x**, name=`None`)\n* `tf.math.squared_difference`(**x**, **y**, name=`None`)\n* `tf.math.subtract`(**x**, **y**, name=`None`)\n* `tf.math.tan`(**x**, name=`None`)\n* `tf.math.tanh`(**x**, name=`None`)\n* `tf.math.truediv`(**x**, **y**, name=`None`)\n* `tf.math.unsorted_segment_max`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_mean`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_min`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_prod`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_sqrt_n`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.unsorted_segment_sum`(**data**, **segment_ids**, num_segments, name=`None`)\n* `tf.math.xdivy`(**x**, **y**, name=`None`)\n* `tf.math.xlog1py`(**x**, **y**, name=`None`)\n* `tf.math.xlogy`(**x**, **y**, name=`None`)\n* `tf.math.zeta`(**x**, **q**, name=`None`)\n* `tf.nn.dropout`(**x**, rate, noise_shape=`None`, seed=`None`, name=`None`)\n* `tf.nn.elu`(**features**, name=`None`)\n* `tf.nn.gelu`(**features**, approximate=`False`, name=`None`)\n* `tf.nn.leaky_relu`(**features**, alpha=`0.2`, name=`None`)\n* `tf.nn.relu6`(**features**, name=`None`)\n* `tf.nn.relu`(**features**, name=`None`)\n* `tf.nn.selu`(**features**, name=`None`)\n* `tf.nn.sigmoid_cross_entropy_with_logits`(**labels**=`None`, **logits**=`None`, name=`None`)\n* `tf.nn.silu`(**features**, beta=`1.0`)\n* `tf.nn.softmax`(**logits**, axis=`None`, name=`None`)\n* `tf.nn.softsign`(**features**, name=`None`)\n* `tf.one_hot`(**indices**, depth, on_value=`None`, off_value=`None`, axis=`None`, dtype=`None`, name=`None`)\n* `tf.ones_like`(**input**, dtype=`None`, name=`None`)\n* `tf.print`(***inputs**, **kwargs)\n* `tf.rank`(**input**, name=`None`)\n* `tf.realdiv`(**x**, **y**, name=`None`)\n* `tf.reshape`(**tensor**, **shape**, name=`None`)\n* `tf.reverse`(**tensor**, axis, name=`None`)\n* `tf.size`(**input**, out_type=`tf.int32`, name=`None`)\n* `tf.split`(**value**, num_or_size_splits, axis=`0`, num=`None`, name=`'split'`)\n* `tf.squeeze`(**input**, axis=`None`, name=`None`)\n* `tf.stack`(**values**, axis=`0`, name=`'stack'`)\n* `tf.strings.as_string`(**input**, precision=`-1`, scientific=`False`, shortest=`False`, width=`-1`, fill=`''`, name=`None`)\n* `tf.strings.format`(**template**, **inputs**, placeholder=`'{}'`, summarize=`3`, name=`None`)\n* `tf.strings.join`(**inputs**, separator=`''`, name=`None`)\n* `tf.strings.length`(**input**, unit=`'BYTE'`, name=`None`)\n* `tf.strings.lower`(**input**, encoding=`''`, name=`None`)\n* `tf.strings.reduce_join`(**inputs**, axis=`None`, keepdims=`False`, separator=`''`, name=`None`)\n* `tf.strings.regex_full_match`(**input**, pattern, name=`None`)\n* `tf.strings.regex_replace`(**input**, pattern, rewrite, replace_global=`True`, name=`None`)\n* `tf.strings.strip`(**input**, name=`None`)\n* `tf.strings.substr`(**input**, pos, len, unit=`'BYTE'`, name=`None`)\n* `tf.strings.to_hash_bucket_fast`(**input**, num_buckets, name=`None`)\n* `tf.strings.to_hash_bucket_strong`(**input**, num_buckets, key, name=`None`)\n* `tf.strings.to_hash_bucket`(**input**, num_buckets, name=`None`)\n* `tf.strings.to_number`(**input**, out_type=`tf.float32`, name=`None`)\n* `tf.strings.unicode_script`(**input**, name=`None`)\n* `tf.strings.unicode_transcode`(**input**, input_encoding, output_encoding, errors=`'replace'`, replacement_char=`65533`, replace_control_characters=`False`, name=`None`)\n* `tf.strings.upper`(**input**, encoding=`''`, name=`None`)\n* `tf.tile`(**input**, multiples, name=`None`)\n* `tf.truncatediv`(**x**, **y**, name=`None`)\n* `tf.truncatemod`(**x**, **y**, name=`None`)\n* `tf.where`(**condition**, **x**=`None`, **y**=`None`, name=`None`)\n* `tf.zeros_like`(**input**, dtype=`None`, name=`None`)n\n", "desc": "Ragged Tensors.", "type": "API"}, {"name": "tf.ragged.boolean_mask", "docs": "Applies a boolean mask to `data` without flattening the mask dimensions.\n\n Returns a potentially ragged tensor that is formed by retaining the elements\n in `data` where the corresponding value in `mask` is `True`.\n\n * `output[a1...aA, i, b1...bB] = data[a1...aA, j, b1...bB]`\n\n Where `j` is the `i`th `True` entry of `mask[a1...aA]`.\n\n Note that `output` preserves the mask dimensions `a1...aA`; this differs\n from `tf.boolean_mask`, which flattens those dimensions.\n\n Args:\n data: A potentially ragged tensor.\n mask: A potentially ragged boolean tensor. `mask`'s shape must be a prefix\n of `data`'s shape. `rank(mask)` must be known statically.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A potentially ragged tensor that is formed by retaining the elements in\n `data` where the corresponding value in `mask` is `True`.\n\n * `rank(output) = rank(data)`.\n * `output.ragged_rank = max(data.ragged_rank, rank(mask) - 1)`.\n\n Raises:\n ValueError: if `rank(mask)` is not known statically; or if `mask.shape` is\n not a prefix of `data.shape`.\n\n #### Examples:\n\n >>> # Aliases for True & False so data and mask line up.\n >>> T, F = (True, False)\n\n >>> tf.ragged.boolean_mask( # Mask a 2D Tensor.\n ... data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]],\n ... mask=[[T, F, T], [F, F, F], [T, F, F]]).to_list()\n [[1, 3], [], [7]]\n\n >>> tf.ragged.boolean_mask( # Mask a 2D RaggedTensor.\n ... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),\n ... tf.ragged.constant([[F, F, T], [F], [T, T]])).to_list()\n [[3], [], [5, 6]]\n\n >>> tf.ragged.boolean_mask( # Mask rows of a 2D RaggedTensor.\n ... tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),\n ... tf.ragged.constant([True, False, True])).to_list()\n [[1, 2, 3], [5, 6]]\n ", "desc": "Applies a boolean mask to `data` without flattening the mask dimensions.", "type": "API"}, {"name": "tf.ragged.constant", "docs": "Constructs a constant RaggedTensor from a nested Python list.\n\n Example:\n\n >>> tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n \n\n All scalar values in `pylist` must have the same nesting depth `K`, and the\n returned `RaggedTensor` will have rank `K`. If `pylist` contains no scalar\n values, then `K` is one greater than the maximum depth of empty lists in\n `pylist`. All scalar values in `pylist` must be compatible with `dtype`.\n\n Args:\n pylist: A nested `list`, `tuple` or `np.ndarray`. Any nested element that\n is not a `list`, `tuple` or `np.ndarray` must be a scalar value\n compatible with `dtype`.\n dtype: The type of elements for the returned `RaggedTensor`. If not\n specified, then a default is chosen based on the scalar values in\n `pylist`.\n ragged_rank: An integer specifying the ragged rank of the returned\n `RaggedTensor`. Must be nonnegative and less than `K`. Defaults to\n `max(0, K - 1)` if `inner_shape` is not specified. Defaults to\n `max(0, K - 1 - len(inner_shape))` if `inner_shape` is specified.\n inner_shape: A tuple of integers specifying the shape for individual inner\n values in the returned `RaggedTensor`. Defaults to `()` if `ragged_rank`\n is not specified. If `ragged_rank` is specified, then a default is chosen\n based on the contents of `pylist`.\n name: A name prefix for the returned tensor (optional).\n row_splits_dtype: data type for the constructed `RaggedTensor`'s row_splits.\n One of `tf.int32` or `tf.int64`.\n\n Returns:\n A potentially ragged tensor with rank `K` and the specified `ragged_rank`,\n containing the values from `pylist`.\n\n Raises:\n ValueError: If the scalar values in `pylist` have inconsistent nesting\n depth; or if ragged_rank or inner_shape are incompatible with `pylist`.\n ", "desc": "Constructs a constant RaggedTensor from a nested Python list.", "type": "API"}, {"name": "tf.ragged.cross", "docs": "Generates feature cross from a list of tensors.\n\n The input tensors must have `rank=2`, and must all have the same number of\n rows. The result is a `RaggedTensor` with the same number of rows as the\n inputs, where `result[row]` contains a list of all combinations of values\n formed by taking a single value from each input's corresponding row\n (`inputs[i][row]`). Values are combined by joining their strings with '_X_'.\n E.g.:\n\n >>> tf.ragged.cross([tf.ragged.constant([['a'], ['b', 'c']]),\n ... tf.ragged.constant([['d'], ['e']]),\n ... tf.ragged.constant([['f'], ['g']])])\n \n\n Args:\n inputs: A list of `RaggedTensor` or `Tensor` or `SparseTensor`.\n name: Optional name for the op.\n\n Returns:\n A 2D `RaggedTensor` of type `string`.\n ", "desc": "Generates feature cross from a list of tensors.", "type": "API"}, {"name": "tf.ragged.cross_hashed", "docs": "Generates hashed feature cross from a list of tensors.\n\n The input tensors must have `rank=2`, and must all have the same number of\n rows. The result is a `RaggedTensor` with the same number of rows as the\n inputs, where `result[row]` contains a list of all combinations of values\n formed by taking a single value from each input's corresponding row\n (`inputs[i][row]`). Values are combined by hashing together their\n fingerprints. E.g.:\n\n >>> tf.ragged.cross_hashed([tf.ragged.constant([['a'], ['b', 'c']]),\n ... tf.ragged.constant([['d'], ['e']]),\n ... tf.ragged.constant([['f'], ['g']])],\n ... num_buckets=100)\n \n\n Args:\n inputs: A list of `RaggedTensor` or `Tensor` or `SparseTensor`.\n num_buckets: A non-negative `int` that used to bucket the hashed values. If\n `num_buckets != 0`, then `output = hashed_value % num_buckets`.\n hash_key: Integer hash_key that will be used by the `FingerprintCat64`\n function. If not given, a default key is used.\n name: Optional name for the op.\n\n Returns:\n A 2D `RaggedTensor` of type `int64`.\n ", "desc": "Generates hashed feature cross from a list of tensors.", "type": "API"}, {"name": "tf.ragged.map_flat_values", "docs": "Applies `op` to the `flat_values` of one or more RaggedTensors.\n\n Replaces any `RaggedTensor` in `args` or `kwargs` with its `flat_values`\n tensor (which collapses all ragged dimensions), and then calls `op`. Returns\n a `RaggedTensor` that is constructed from the input `RaggedTensor`s'\n `nested_row_splits` and the value returned by the `op`.\n\n If the input arguments contain multiple `RaggedTensor`s, then they must have\n identical `nested_row_splits`.\n\n This operation is generally used to apply elementwise operations to each value\n in a `RaggedTensor`.\n\n Warning: `tf.ragged.map_flat_values` does *not* apply `op` to each row of a\n ragged tensor. This difference is important for non-elementwise operations,\n such as `tf.reduce_sum`. If you wish to apply a non-elementwise operation to\n each row of a ragged tensor, use `tf.map_fn` instead. (You may need to\n specify an `output_signature` when using `tf.map_fn` with ragged tensors.)\n\n Examples:\n\n >>> rt = tf.ragged.constant([[1, 2, 3], [], [4, 5], [6]])\n >>> tf.ragged.map_flat_values(tf.ones_like, rt)\n \n >>> tf.ragged.map_flat_values(tf.multiply, rt, rt)\n \n >>> tf.ragged.map_flat_values(tf.add, rt, 5)\n \n\n Example with a non-elementwise operation (note that `map_flat_values` and\n `map_fn` return different results):\n\n >>> rt = tf.ragged.constant([[1.0, 3.0], [], [3.0, 6.0, 3.0]])\n >>> def normalized(x):\n ... return x / tf.reduce_sum(x)\n >>> tf.ragged.map_flat_values(normalized, rt)\n \n >>> tf.map_fn(normalized, rt)\n \n\n Args:\n op: The operation that should be applied to the RaggedTensor `flat_values`.\n `op` is typically an element-wise operation (such as math_ops.add), but\n any operation that preserves the size of the outermost dimension can be\n used. I.e., `shape[0]` of the value returned by `op` must match\n `shape[0]` of the `RaggedTensor`s' `flat_values` tensors.\n *args: Arguments for `op`.\n **kwargs: Keyword arguments for `op`.\n\n Returns:\n A `RaggedTensor` whose `ragged_rank` matches the `ragged_rank` of all\n input `RaggedTensor`s.\n Raises:\n ValueError: If args contains no `RaggedTensors`, or if the `nested_splits`\n of the input `RaggedTensor`s are not identical.\n ", "desc": "Applies `op` to the `flat_values` of one or more RaggedTensors.", "type": "API"}, {"name": "tf.ragged.range", "docs": "Returns a `RaggedTensor` containing the specified sequences of numbers.\n\n Each row of the returned `RaggedTensor` contains a single sequence:\n\n ```python\n ragged.range(starts, limits, deltas)[i] ==\n tf.range(starts[i], limits[i], deltas[i])\n ```\n\n If `start[i] < limits[i] and deltas[i] > 0`, then `output[i]` will be an\n empty list. Similarly, if `start[i] > limits[i] and deltas[i] < 0`, then\n `output[i]` will be an empty list. This behavior is consistent with the\n Python `range` function, but differs from the `tf.range` op, which returns\n an error for these cases.\n\n Examples:\n\n >>> tf.ragged.range([3, 5, 2]).to_list()\n [[0, 1, 2], [0, 1, 2, 3, 4], [0, 1]]\n >>> tf.ragged.range([0, 5, 8], [3, 3, 12]).to_list()\n [[0, 1, 2], [], [8, 9, 10, 11]]\n >>> tf.ragged.range([0, 5, 8], [3, 3, 12], 2).to_list()\n [[0, 2], [], [8, 10]]\n\n The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors.\n The vector inputs must all have the same size. Scalar inputs are broadcast\n to match the size of the vector inputs.\n\n Args:\n starts: Vector or scalar `Tensor`. Specifies the first entry for each range\n if `limits` is not `None`; otherwise, specifies the range limits, and the\n first entries default to `0`.\n limits: Vector or scalar `Tensor`. Specifies the exclusive upper limits for\n each range.\n deltas: Vector or scalar `Tensor`. Specifies the increment for each range.\n Defaults to `1`.\n dtype: The type of the elements of the resulting tensor. If not specified,\n then a value is chosen based on the other args.\n name: A name for the operation.\n row_splits_dtype: `dtype` for the returned `RaggedTensor`'s `row_splits`\n tensor. One of `tf.int32` or `tf.int64`.\n\n Returns:\n A `RaggedTensor` of type `dtype` with `ragged_rank=1`.\n ", "desc": "Returns a `RaggedTensor` containing the specified sequences of numbers.", "type": "API"}, {"name": "tf.ragged.row_splits_to_segment_ids", "docs": "Generates the segmentation corresponding to a RaggedTensor `row_splits`.\n\n Returns an integer vector `segment_ids`, where `segment_ids[i] == j` if\n `splits[j] <= i < splits[j+1]`. Example:\n\n >>> print(tf.ragged.row_splits_to_segment_ids([0, 3, 3, 5, 6, 9]))\n tf.Tensor([0 0 0 2 2 3 4 4 4], shape=(9,), dtype=int64)\n\n Args:\n splits: A sorted 1-D integer Tensor. `splits[0]` must be zero.\n name: A name prefix for the returned tensor (optional).\n out_type: The dtype for the return value. Defaults to `splits.dtype`,\n or `tf.int64` if `splits` does not have a dtype.\n\n Returns:\n A sorted 1-D integer Tensor, with `shape=[splits[-1]]`\n\n Raises:\n ValueError: If `splits` is invalid.\n ", "desc": "Generates the segmentation corresponding to a RaggedTensor `row_splits`.", "type": "API"}, {"name": "tf.ragged.segment_ids_to_row_splits", "docs": "Generates the RaggedTensor `row_splits` corresponding to a segmentation.\n\n Returns an integer vector `splits`, where `splits[0] = 0` and\n `splits[i] = splits[i-1] + count(segment_ids==i)`. Example:\n\n >>> print(tf.ragged.segment_ids_to_row_splits([0, 0, 0, 2, 2, 3, 4, 4, 4]))\n tf.Tensor([0 3 3 5 6 9], shape=(6,), dtype=int64)\n\n Args:\n segment_ids: A 1-D integer Tensor.\n num_segments: A scalar integer indicating the number of segments. Defaults\n to `max(segment_ids) + 1` (or zero if `segment_ids` is empty).\n out_type: The dtype for the return value. Defaults to `segment_ids.dtype`,\n or `tf.int64` if `segment_ids` does not have a dtype.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A sorted 1-D integer Tensor, with `shape=[num_segments + 1]`.\n ", "desc": "Generates the RaggedTensor `row_splits` corresponding to a segmentation.", "type": "API"}, {"name": "tf.ragged.stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`.\n\n Given a list of tensors or ragged tensors with the same rank `R`\n (`R >= axis`), returns a rank-`R+1` `RaggedTensor` `result` such that\n `result[i0...iaxis]` is `[value[i0...iaxis] for value in values]`.\n\n #### Examples:\n\n >>> # Stacking two ragged tensors.\n >>> t1 = tf.ragged.constant([[1, 2], [3, 4, 5]])\n >>> t2 = tf.ragged.constant([[6], [7, 8, 9]])\n >>> tf.ragged.stack([t1, t2], axis=0)\n \n >>> tf.ragged.stack([t1, t2], axis=1)\n \n\n >>> # Stacking two dense tensors with different sizes.\n >>> t3 = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> t4 = tf.constant([[5], [6], [7]])\n >>> tf.ragged.stack([t3, t4], axis=0)\n \n\n Args:\n values: A list of `tf.Tensor` or `tf.RaggedTensor`. May not be empty. All\n `values` must have the same rank and the same dtype; but unlike\n `tf.stack`, they can have arbitrary dimension sizes.\n axis: A python integer, indicating the dimension along which to stack.\n (Note: Unlike `tf.stack`, the `axis` parameter must be statically known.)\n Negative values are supported only if the rank of at least one\n `values` value is statically known.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A `RaggedTensor` with rank `R+1` (if `R>0`).\n If `R==0`, then the result will be returned as a 1D `Tensor`, since\n `RaggedTensor` can only be used when `rank>1`.\n `result.ragged_rank=1+max(axis, max(rt.ragged_rank for rt in values]))`.\n\n Raises:\n ValueError: If `values` is empty, if `axis` is out of bounds or if\n the input tensors have different ranks.\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` `RaggedTensor`.", "type": "API"}, {"name": "tf.ragged.stack_dynamic_partitions", "docs": "Stacks dynamic partitions of a Tensor or RaggedTensor.\n\n Returns a RaggedTensor `output` with `num_partitions` rows, where the row\n `output[i]` is formed by stacking all slices `data[j1...jN]` such that\n `partitions[j1...jN] = i`. Slices of `data` are stacked in row-major\n order.\n\n If `num_partitions` is an `int` (not a `Tensor`), then this is equivalent to\n `tf.ragged.stack(tf.dynamic_partition(data, partitions, num_partitions))`.\n\n #### Example:\n\n >>> data = ['a', 'b', 'c', 'd', 'e']\n >>> partitions = [ 3, 0, 2, 2, 3]\n >>> num_partitions = 5\n >>> tf.ragged.stack_dynamic_partitions(data, partitions, num_partitions)\n \n\n Args:\n data: A `Tensor` or `RaggedTensor` containing the values to stack.\n partitions: An `int32` or `int64` `Tensor` or `RaggedTensor` specifying the\n partition that each slice of `data` should be added to. `partitions.shape`\n must be a prefix of `data.shape`. Values must be greater than or equal to\n zero, and less than `num_partitions`. `partitions` is not required to be\n sorted.\n num_partitions: An `int32` or `int64` scalar specifying the number of\n partitions to output. This determines the number of rows in `output`.\n name: A name prefix for the returned tensor (optional).\n\n Returns:\n A `RaggedTensor` containing the stacked partitions. The returned tensor\n has the same dtype as `data`, and its shape is\n `[num_partitions, (D)] + data.shape[partitions.rank:]`, where `(D)` is a\n ragged dimension whose length is the number of data slices stacked for\n each `partition`.\n ", "desc": "Stacks dynamic partitions of a Tensor or RaggedTensor.", "type": "API"}, {"name": "tf.RaggedTensor", "docs": "Represents a ragged tensor.\n\n A `RaggedTensor` is a tensor with one or more *ragged dimensions*, which are\n dimensions whose slices may have different lengths. For example, the inner\n (column) dimension of `rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged,\n since the column slices (`rt[0, :]`, ..., `rt[4, :]`) have different lengths.\n Dimensions whose slices all have the same length are called *uniform\n dimensions*. The outermost dimension of a `RaggedTensor` is always uniform,\n since it consists of a single slice (and so there is no possibility for\n differing slice lengths).\n\n The total number of dimensions in a `RaggedTensor` is called its *rank*,\n and the number of ragged dimensions in a `RaggedTensor` is called its\n *ragged-rank*. A `RaggedTensor`'s ragged-rank is fixed at graph creation\n time: it can't depend on the runtime values of `Tensor`s, and can't vary\n dynamically for different session runs.\n\n Note that the `__init__` constructor is private. Please use one of the\n following methods to construct a `RaggedTensor`:\n\n * `tf.RaggedTensor.from_row_lengths`\n * `tf.RaggedTensor.from_value_rowids`\n * `tf.RaggedTensor.from_row_splits`\n * `tf.RaggedTensor.from_row_starts`\n * `tf.RaggedTensor.from_row_limits`\n * `tf.RaggedTensor.from_nested_row_splits`\n * `tf.RaggedTensor.from_nested_row_lengths`\n * `tf.RaggedTensor.from_nested_value_rowids`\n\n ### Potentially Ragged Tensors\n\n Many ops support both `Tensor`s and `RaggedTensor`s\n (see [tf.ragged](https://www.tensorflow.org/api_docs/python/tf/ragged) for a\n full listing). The term \"potentially ragged tensor\" may be used to refer to a\n tensor that might be either a `Tensor` or a `RaggedTensor`. The ragged-rank\n of a `Tensor` is zero.\n\n ### Documenting RaggedTensor Shapes\n\n When documenting the shape of a RaggedTensor, ragged dimensions can be\n indicated by enclosing them in parentheses. For example, the shape of\n a 3-D `RaggedTensor` that stores the fixed-size word embedding for each\n word in a sentence, for each sentence in a batch, could be written as\n `[num_sentences, (num_words), embedding_size]`. The parentheses around\n `(num_words)` indicate that dimension is ragged, and that the length\n of each element list in that dimension may vary for each item.\n\n ### Component Tensors\n\n Internally, a `RaggedTensor` consists of a concatenated list of values that\n are partitioned into variable-length rows. In particular, each `RaggedTensor`\n consists of:\n\n * A `values` tensor, which concatenates the variable-length rows into a\n flattened list. For example, the `values` tensor for\n `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is `[3, 1, 4, 1, 5, 9, 2, 6]`.\n\n * A `row_splits` vector, which indicates how those flattened values are\n divided into rows. In particular, the values for row `rt[i]` are stored\n in the slice `rt.values[rt.row_splits[i]:rt.row_splits[i+1]]`.\n\n Example:\n\n >>> print(tf.RaggedTensor.from_row_splits(\n ... values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... row_splits=[0, 4, 4, 7, 8, 8]))\n \n\n ### Alternative Row-Partitioning Schemes\n\n In addition to `row_splits`, ragged tensors provide support for five other\n row-partitioning schemes:\n\n * `row_lengths`: a vector with shape `[nrows]`, which specifies the length\n of each row.\n\n * `value_rowids` and `nrows`: `value_rowids` is a vector with shape\n `[nvals]`, corresponding one-to-one with `values`, which specifies\n each value's row index. In particular, the row `rt[row]` consists of the\n values `rt.values[j]` where `value_rowids[j]==row`. `nrows` is an\n integer scalar that specifies the number of rows in the\n `RaggedTensor`. (`nrows` is used to indicate trailing empty rows.)\n\n * `row_starts`: a vector with shape `[nrows]`, which specifies the start\n offset of each row. Equivalent to `row_splits[:-1]`.\n\n * `row_limits`: a vector with shape `[nrows]`, which specifies the stop\n offset of each row. Equivalent to `row_splits[1:]`.\n\n * `uniform_row_length`: A scalar tensor, specifying the length of every\n row. This row-partitioning scheme may only be used if all rows have\n the same length.\n\n Example: The following ragged tensors are equivalent, and all represent the\n nested list `[[3, 1, 4, 1], [], [5, 9, 2], [6], []]`.\n\n >>> values = [3, 1, 4, 1, 5, 9, 2, 6]\n >>> RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])\n \n >>> RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])\n \n >>> RaggedTensor.from_value_rowids(\n ... values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)\n \n >>> RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])\n \n >>> RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])\n \n >>> RaggedTensor.from_uniform_row_length(values, uniform_row_length=2)\n \n\n ### Multiple Ragged Dimensions\n\n `RaggedTensor`s with multiple ragged dimensions can be defined by using\n a nested `RaggedTensor` for the `values` tensor. Each nested `RaggedTensor`\n adds a single ragged dimension.\n\n >>> inner_rt = RaggedTensor.from_row_splits( # =rt1 from above\n ... values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])\n >>> outer_rt = RaggedTensor.from_row_splits(\n ... values=inner_rt, row_splits=[0, 3, 3, 5])\n >>> print(outer_rt.to_list())\n [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]\n >>> print(outer_rt.ragged_rank)\n 2\n\n The factory function `RaggedTensor.from_nested_row_splits` may be used to\n construct a `RaggedTensor` with multiple ragged dimensions directly, by\n providing a list of `row_splits` tensors:\n\n >>> RaggedTensor.from_nested_row_splits(\n ... flat_values=[3, 1, 4, 1, 5, 9, 2, 6],\n ... nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list()\n [[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]\n\n ### Uniform Inner Dimensions\n\n `RaggedTensor`s with uniform inner dimensions can be defined\n by using a multidimensional `Tensor` for `values`.\n\n >>> rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32),\n ... row_splits=[0, 2, 5])\n >>> print(rt.to_list())\n [[[1, 1, 1], [1, 1, 1]],\n [[1, 1, 1], [1, 1, 1], [1, 1, 1]]]\n >>> print(rt.shape)\n (2, None, 3)\n\n ### Uniform Outer Dimensions\n\n `RaggedTensor`s with uniform outer dimensions can be defined by using\n one or more `RaggedTensor` with a `uniform_row_length` row-partitioning\n tensor. For example, a `RaggedTensor` with shape `[2, 2, None]` can be\n constructed with this method from a `RaggedTensor` values with shape\n `[4, None]`:\n\n >>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])\n >>> print(values.shape)\n (4, None)\n >>> rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2)\n >>> print(rt6)\n \n >>> print(rt6.shape)\n (2, 2, None)\n\n Note that `rt6` only contains one ragged dimension (the innermost\n dimension). In contrast, if `from_row_splits` is used to construct a similar\n `RaggedTensor`, then that `RaggedTensor` will have two ragged dimensions:\n\n >>> rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])\n >>> print(rt7.shape)\n (2, None, None)\n\n Uniform and ragged outer dimensions may be interleaved, meaning that a\n tensor with any combination of ragged and uniform dimensions may be created.\n For example, a RaggedTensor `t4` with shape `[3, None, 4, 8, None, 2]` could\n be constructed as follows:\n\n ```python\n t0 = tf.zeros([1000, 2]) # Shape: [1000, 2]\n t1 = RaggedTensor.from_row_lengths(t0, [...]) # [160, None, 2]\n t2 = RaggedTensor.from_uniform_row_length(t1, 8) # [20, 8, None, 2]\n t3 = RaggedTensor.from_uniform_row_length(t2, 4) # [5, 4, 8, None, 2]\n t4 = RaggedTensor.from_row_lengths(t3, [...]) # [3, None, 4, 8, None, 2]\n ```\n\n ", "desc": "Represents a ragged tensor.", "type": "API"}, {"name": "tf.RaggedTensorSpec", "docs": "Type specification for a `tf.RaggedTensor`.", "desc": "Type specification for a `tf.RaggedTensor`.", "type": "API"}, {"name": "tf.random", "docs": "Public API for tf.random namespace.\n", "desc": "Public API for tf.random namespace.", "type": "API"}, {"name": "tf.random.Algorithm", "docs": "An enumeration.", "desc": "An enumeration.", "type": "API"}, {"name": "tf.random.all_candidate_sampler", "docs": "Generate the set of all classes.\n\n Deterministically generates and returns the set of all possible classes.\n For testing purposes. There is no need to use this, since you might as\n well use full softmax or full logistic regression.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of possible classes.\n unique: A `bool`. Ignored.\n unique.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n This operation deterministically returns the entire range\n `[0, num_sampled]`.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`. All returned values are 1.0.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`. All returned values are 1.0.\n ", "desc": "Generate the set of all classes.", "type": "API"}, {"name": "tf.random.categorical", "docs": "Draws samples from a categorical distribution.\n\n Example:\n\n ```python\n # samples has shape [1, 5], where each value is either 0 or 1 with equal\n # probability.\n samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5)\n ```\n\n Args:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n dtype: The integer type of the output: `int32` or `int64`. Defaults to\n `int64`.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for behavior.\n name: Optional name for the operation.\n\n Returns:\n The drawn samples of shape `[batch_size, num_samples]`.\n ", "desc": "Draws samples from a categorical distribution.", "type": "API"}, {"name": "tf.random.create_rng_state", "docs": "Creates a RNG state from an integer or a vector.\n\n Example:\n\n >>> tf.random.create_rng_state(\n ... 1234, \"philox\")\n \n >>> tf.random.create_rng_state(\n ... [12, 34], \"threefry\")\n \n\n Args:\n seed: an integer or 1-D numpy array.\n alg: the RNG algorithm. Can be a string, an `Algorithm` or an integer.\n\n Returns:\n a 1-D numpy array whose size depends on the algorithm.\n ", "desc": "Creates a RNG state from an integer or a vector.", "type": "API"}, {"name": "tf.random.experimental", "docs": "Public API for tf.random.experimental namespace.\n", "desc": "Public API for tf.random.experimental namespace.", "type": "API"}, {"name": "tf.random.experimental.Algorithm", "docs": "An enumeration.", "desc": "An enumeration.", "type": "API"}, {"name": "tf.random.experimental.create_rng_state", "docs": "Creates a RNG state from an integer or a vector.\n\n Example:\n\n >>> tf.random.create_rng_state(\n ... 1234, \"philox\")\n \n >>> tf.random.create_rng_state(\n ... [12, 34], \"threefry\")\n \n\n Args:\n seed: an integer or 1-D numpy array.\n alg: the RNG algorithm. Can be a string, an `Algorithm` or an integer.\n\n Returns:\n a 1-D numpy array whose size depends on the algorithm.\n ", "desc": "Creates a RNG state from an integer or a vector.", "type": "API"}, {"name": "tf.random.experimental.Generator", "docs": "Random-number generator.\n\n Example:\n\n Creating a generator from a seed:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.normal(shape=(2, 3))\n \n\n Creating a generator from a non-deterministic state:\n\n >>> g = tf.random.Generator.from_non_deterministic_state()\n >>> g.normal(shape=(2, 3))\n \n\n All the constructors allow explicitly choosing an Random-Number-Generation\n (RNG) algorithm. Supported algorithms are `\"philox\"` and `\"threefry\"`. For\n example:\n\n >>> g = tf.random.Generator.from_seed(123, alg=\"philox\")\n >>> g.normal(shape=(2, 3))\n \n\n CPU, GPU and TPU with the same algorithm and seed will generate the same\n integer random numbers. Float-point results (such as the output of `normal`)\n may have small numerical discrepancies between different devices.\n\n This class uses a `tf.Variable` to manage its internal state. Every time\n random numbers are generated, the state of the generator will change. For\n example:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.state\n \n >>> g.normal(shape=(2, 3))\n <...>\n >>> g.state\n \n\n The shape of the state is algorithm-specific.\n\n There is also a global generator:\n\n >>> g = tf.random.get_global_generator()\n >>> g.normal(shape=(2, 3))\n \n\n When creating a generator inside a `tf.distribute.Strategy` scope, each\n replica will get a different stream of random numbers.\n\n For example, in this code:\n\n ```\n strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n with strat.scope():\n g = tf.random.Generator.from_seed(1)\n def f():\n return g.normal([])\n results = strat.run(f).values\n ```\n\n `results[0]` and `results[1]` will have different values.\n\n If the generator is seeded (e.g. created via `Generator.from_seed`), the\n random numbers will be determined by the seed, even though different replicas\n get different numbers. One can think of a random number generated on a\n replica as a hash of the replica ID and a \"master\" random number that may be\n common to all replicas. Hence, the whole system is still deterministic.\n\n (Note that the random numbers on different replicas are not correlated, even\n if they are deterministically determined by the same seed. They are not\n correlated in the sense that no matter what statistics one calculates on them,\n there won't be any discernable correlation.)\n\n Generators can be freely saved and restored using `tf.train.Checkpoint`. The\n checkpoint can be restored in a distribution strategy with a different number\n of replicas than the original strategy. If a replica ID is present in both the\n original and the new distribution strategy, its state will be properly\n restored (i.e. the random-number stream from the restored point will be the\n same as that from the saving point) unless the replicas have already diverged\n in their RNG call traces before saving (e.g. one replica has made one RNG call\n while another has made two RNG calls). We don't have such guarantee if the\n generator is saved in a strategy scope and restored outside of any strategy\n scope, or vice versa.\n\n When a generator is created within the scope of\n `tf.distribute.experimental.ParameterServerStrategy`, the workers\n will share the generator's state (placed on one of the parameter\n servers). In this way the workers will still get different\n random-number streams, as stated above. (This is similar to replicas\n in a `tf.distribute.MirroredStrategy` sequentially accessing a\n generator created outside the strategy.) Each RNG call on a worker\n will incur a round-trip to a parameter server, which may have\n performance impacts. When creating a\n `tf.distribute.experimental.ParameterServerStrategy`, please make\n sure that the `variable_partitioner` argument won't shard small\n variables of shape `[2]` or `[3]` (because generator states must not\n be sharded). Ways to avoid sharding small variables include setting\n `variable_partitioner` to `None` or to\n `tf.distribute.experimental.partitioners.MinSizePartitioner` with a\n large enough `min_shard_bytes` (see\n `tf.distribute.experimental.ParameterServerStrategy`'s documentation\n for more details).\n ", "desc": "Random-number generator.", "type": "API"}, {"name": "tf.random.experimental.get_global_generator", "docs": "Retrieves the global generator.\n\n This function will create the global generator the first time it is called,\n and the generator will be placed at the default device at that time, so one\n needs to be careful when this function is first called. Using a generator\n placed on a less-ideal device will incur performance regression.\n\n Returns:\n The global `tf.random.Generator` object.\n ", "desc": "Retrieves the global generator.", "type": "API"}, {"name": "tf.random.experimental.set_global_generator", "docs": "Replaces the global generator with another `Generator` object.\n\n This function replaces the global generator with the provided `generator`\n object.\n A random number generator utilizes a `tf.Variable` object to store its state.\n The user shall be aware of caveats how `set_global_generator` interacts with\n `tf.function`:\n\n - tf.function puts restrictions on Variable creation thus one cannot freely\n create a new random generator instance inside `tf.function`.\n To call `set_global_generator` inside `tf.function`, the generator instance\n must have already been created eagerly.\n - tf.function captures the Variable during trace-compilation, thus a compiled\n f.function will not be affected `set_global_generator` as demonstrated by\n random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun .\n\n For most use cases, avoid calling `set_global_generator` after program\n initialization, and prefer to reset the state of the existing global generator\n instead, such as,\n\n >>> rng = tf.random.get_global_generator()\n >>> rng.reset_from_seed(30)\n\n\n Args:\n generator: the new `Generator` object.\n ", "desc": "Replaces the global generator with another `Generator` object.", "type": "API"}, {"name": "tf.random.experimental.stateless_fold_in", "docs": "Folds in data to an RNG seed to form a new RNG seed.\n\n For example, in a distributed-training setting, suppose we have a master seed\n and a replica ID. We want to fold the replica ID into the master seed to\n form a \"replica seed\" to be used by that replica later on, so that different\n replicas will generate different random numbers but the reproducibility of the\n whole system can still be controlled by the master seed:\n\n >>> master_seed = [1, 2]\n >>> replica_id = 3\n >>> replica_seed = tf.random.experimental.stateless_fold_in(\n ... master_seed, replica_id)\n >>> print(replica_seed)\n tf.Tensor([1105988140 3], shape=(2,), dtype=int32)\n >>> tf.random.stateless_normal(shape=[3], seed=replica_seed)\n \n\n Args:\n seed: an RNG seed (a tensor with shape [2] and dtype `int32` or\n `int64`). (When using XLA, only `int32` is allowed.)\n data: an `int32` or `int64` scalar representing data to be folded in to the\n seed.\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A new RNG seed that is a deterministic function of the inputs and is\n statistically safe for producing a stream of new pseudo-random values. It\n will have the same dtype as `data` (if `data` doesn't have an explict dtype,\n the dtype will be determined by `tf.convert_to_tensor`).\n ", "desc": "Folds in data to an RNG seed to form a new RNG seed.", "type": "API"}, {"name": "tf.random.experimental.stateless_split", "docs": "Splits an RNG seed into `num` new seeds by adding a leading axis.\n\n Example:\n\n >>> seed = [1, 2]\n >>> new_seeds = tf.random.experimental.stateless_split(seed, num=3)\n >>> print(new_seeds)\n tf.Tensor(\n [[1105988140 1738052849]\n [-335576002 370444179]\n [ 10670227 -246211131]], shape=(3, 2), dtype=int32)\n >>> tf.random.stateless_normal(shape=[3], seed=new_seeds[0, :])\n \n\n Args:\n seed: an RNG seed (a tensor with shape [2] and dtype `int32` or\n `int64`). (When using XLA, only `int32` is allowed.)\n num: optional, a positive integer or scalar tensor indicating the number of\n seeds to produce (default 2).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor with shape [num, 2] representing `num` new seeds. It will have the\n same dtype as `seed` (if `seed` doesn't have an explict dtype, the dtype\n will be determined by `tf.convert_to_tensor`).\n ", "desc": "Splits an RNG seed into `num` new seeds by adding a leading axis.", "type": "API"}, {"name": "tf.random.fixed_unigram_candidate_sampler", "docs": "Samples a set of classes using the provided (fixed) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution is read from a file or passed in as an\n in-memory array. There is also an option to skew the distribution by\n applying a distortion power to the weights.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n vocab_file: Each valid line in this file (which should have a CSV-like\n format) corresponds to a valid word ID. IDs are in sequential order,\n starting from num_reserved_ids. The last entry in each line is expected\n to be a value corresponding to the count or relative probability. Exactly\n one of `vocab_file` and `unigrams` needs to be passed to this operation.\n distortion: The distortion is used to skew the unigram probability\n distribution. Each weight is first raised to the distortion's power\n before adding to the internal unigram distribution. As a result,\n `distortion = 1.0` gives regular unigram sampling (as defined by the vocab\n file), and `distortion = 0.0` gives a uniform distribution.\n num_reserved_ids: Optionally some reserved IDs can be added in the range\n `[0, num_reserved_ids)` by the users. One use case is that a special\n unknown word token is used as ID 0. These IDs will have a sampling\n probability of 0.\n num_shards: A sampler can be used to sample from a subset of the original\n range in order to speed up the whole computation through parallelism. This\n parameter (together with `shard`) indicates the number of partitions that\n are being used in the overall computation.\n shard: A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This\n parameter (together with `num_shards`) indicates the particular partition\n number of the operation, when partitioning is being used.\n unigrams: A list of unigram counts or probabilities, one per ID in\n sequential order. Exactly one of `vocab_file` and `unigrams` should be\n passed to this operation.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes using the provided (fixed) base distribution.", "type": "API"}, {"name": "tf.random.gamma", "docs": "Draws `shape` samples from each of the given Gamma distribution(s).\n\n `alpha` is the shape parameter describing the distribution(s), and `beta` is\n the inverse scale parameter(s).\n\n Note: Because internal calculations are done using `float64` and casting has\n `floor` semantics, we must manually map zero outcomes to the smallest\n possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This\n means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise\n should. This bias can only happen for small values of `alpha`, i.e.,\n `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.\n\n The samples are differentiable w.r.t. alpha and beta.\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n Example:\n\n ```python\n samples = tf.random.gamma([10], [0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.gamma([7, 5], [0.5, 1.5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n alpha = tf.constant([[1.],[3.],[5.]])\n beta = tf.constant([[3., 4.]])\n samples = tf.random.gamma([30], alpha=alpha, beta=beta)\n # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.\n\n loss = tf.reduce_mean(tf.square(samples))\n dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta])\n # unbiased stochastic derivatives of the loss function\n alpha.shape == dloss_dalpha.shape # True\n beta.shape == dloss_dbeta.shape # True\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per alpha/beta-parameterized distribution.\n alpha: A Tensor or Python value or N-D array of type `dtype`. `alpha`\n provides the shape parameter(s) describing the gamma distribution(s) to\n sample. Must be broadcastable with `beta`.\n beta: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.\n `beta` provides the inverse scale parameter(s) of the gamma\n distribution(s) to sample. Must be broadcastable with `alpha`.\n dtype: The type of alpha, beta, and the output: `float16`, `float32`, or\n `float64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape\n `tf.concat([shape, tf.shape(alpha + beta)], axis=0)` with values of type\n `dtype`.\n\n References:\n Implicit Reparameterization Gradients:\n [Figurnov et al., 2018]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients)\n ([pdf]\n (http://papers.nips.cc/paper/7326-implicit-reparameterization-gradients.pdf))\n ", "desc": "Draws `shape` samples from each of the given Gamma distribution(s).", "type": "API"}, {"name": "tf.random.Generator", "docs": "Random-number generator.\n\n Example:\n\n Creating a generator from a seed:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.normal(shape=(2, 3))\n \n\n Creating a generator from a non-deterministic state:\n\n >>> g = tf.random.Generator.from_non_deterministic_state()\n >>> g.normal(shape=(2, 3))\n \n\n All the constructors allow explicitly choosing an Random-Number-Generation\n (RNG) algorithm. Supported algorithms are `\"philox\"` and `\"threefry\"`. For\n example:\n\n >>> g = tf.random.Generator.from_seed(123, alg=\"philox\")\n >>> g.normal(shape=(2, 3))\n \n\n CPU, GPU and TPU with the same algorithm and seed will generate the same\n integer random numbers. Float-point results (such as the output of `normal`)\n may have small numerical discrepancies between different devices.\n\n This class uses a `tf.Variable` to manage its internal state. Every time\n random numbers are generated, the state of the generator will change. For\n example:\n\n >>> g = tf.random.Generator.from_seed(1234)\n >>> g.state\n \n >>> g.normal(shape=(2, 3))\n <...>\n >>> g.state\n \n\n The shape of the state is algorithm-specific.\n\n There is also a global generator:\n\n >>> g = tf.random.get_global_generator()\n >>> g.normal(shape=(2, 3))\n \n\n When creating a generator inside a `tf.distribute.Strategy` scope, each\n replica will get a different stream of random numbers.\n\n For example, in this code:\n\n ```\n strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n with strat.scope():\n g = tf.random.Generator.from_seed(1)\n def f():\n return g.normal([])\n results = strat.run(f).values\n ```\n\n `results[0]` and `results[1]` will have different values.\n\n If the generator is seeded (e.g. created via `Generator.from_seed`), the\n random numbers will be determined by the seed, even though different replicas\n get different numbers. One can think of a random number generated on a\n replica as a hash of the replica ID and a \"master\" random number that may be\n common to all replicas. Hence, the whole system is still deterministic.\n\n (Note that the random numbers on different replicas are not correlated, even\n if they are deterministically determined by the same seed. They are not\n correlated in the sense that no matter what statistics one calculates on them,\n there won't be any discernable correlation.)\n\n Generators can be freely saved and restored using `tf.train.Checkpoint`. The\n checkpoint can be restored in a distribution strategy with a different number\n of replicas than the original strategy. If a replica ID is present in both the\n original and the new distribution strategy, its state will be properly\n restored (i.e. the random-number stream from the restored point will be the\n same as that from the saving point) unless the replicas have already diverged\n in their RNG call traces before saving (e.g. one replica has made one RNG call\n while another has made two RNG calls). We don't have such guarantee if the\n generator is saved in a strategy scope and restored outside of any strategy\n scope, or vice versa.\n\n When a generator is created within the scope of\n `tf.distribute.experimental.ParameterServerStrategy`, the workers\n will share the generator's state (placed on one of the parameter\n servers). In this way the workers will still get different\n random-number streams, as stated above. (This is similar to replicas\n in a `tf.distribute.MirroredStrategy` sequentially accessing a\n generator created outside the strategy.) Each RNG call on a worker\n will incur a round-trip to a parameter server, which may have\n performance impacts. When creating a\n `tf.distribute.experimental.ParameterServerStrategy`, please make\n sure that the `variable_partitioner` argument won't shard small\n variables of shape `[2]` or `[3]` (because generator states must not\n be sharded). Ways to avoid sharding small variables include setting\n `variable_partitioner` to `None` or to\n `tf.distribute.experimental.partitioners.MinSizePartitioner` with a\n large enough `min_shard_bytes` (see\n `tf.distribute.experimental.ParameterServerStrategy`'s documentation\n for more details).\n ", "desc": "Random-number generator.", "type": "API"}, {"name": "tf.random.get_global_generator", "docs": "Retrieves the global generator.\n\n This function will create the global generator the first time it is called,\n and the generator will be placed at the default device at that time, so one\n needs to be careful when this function is first called. Using a generator\n placed on a less-ideal device will incur performance regression.\n\n Returns:\n The global `tf.random.Generator` object.\n ", "desc": "Retrieves the global generator.", "type": "API"}, {"name": "tf.random.learned_unigram_candidate_sampler", "docs": "Samples a set of classes from a distribution learned during training.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is constructed on the fly\n during training. It is a unigram distribution over the target\n classes seen so far during training. Every integer in `[0, range_max)`\n begins with a weight of 1, and is incremented by 1 each time it is\n seen as a target class. The base distribution is not saved to checkpoints,\n so it is reset when the model is reloaded.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n\n ", "desc": "Samples a set of classes from a distribution learned during training.", "type": "API"}, {"name": "tf.random.log_uniform_candidate_sampler", "docs": "Samples a set of classes using a log-uniform (Zipfian) base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is an approximately log-uniform\n or Zipfian distribution:\n\n `P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`\n\n This sampler is useful when the target classes approximately follow such\n a distribution - for example, if the classes represent words in a lexicon\n sorted in decreasing order of frequency. If your classes are not ordered by\n decreasing frequency, do not use this op.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`.\n The sampled classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a log-uniform (Zipfian) base distribution.", "type": "API"}, {"name": "tf.random.normal", "docs": "Outputs random values from a normal distribution.\n\n Example that generates a new set of random values every time:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([4], 0, 1, tf.float32)\n \n\n Example that outputs a reproducible result:\n\n >>> tf.random.set_seed(5);\n >>> tf.random.normal([2,2], 0, 1, tf.float32, seed=1)\n \n\n In this case, we are setting both the global and operation-level seed to\n ensure this result is reproducible. See `tf.random.set_seed` for more\n information.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A Tensor or Python value of type `dtype`, broadcastable with `stddev`.\n The mean of the normal distribution.\n stddev: A Tensor or Python value of type `dtype`, broadcastable with `mean`.\n The standard deviation of the normal distribution.\n dtype: The float type of the output: `float16`, `bfloat16`, `float32`,\n `float64`. Defaults to `float32`.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random normal values.\n ", "desc": "Outputs random values from a normal distribution.", "type": "API"}, {"name": "tf.random.poisson", "docs": "Draws `shape` samples from each of the given Poisson distribution(s).\n\n `lam` is the rate parameter describing the distribution(s).\n\n Example:\n\n ```python\n samples = tf.random.poisson([10], [0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.poisson([7, 5], [12.2, 3.3])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output samples\n to be drawn per \"rate\"-parameterized distribution.\n lam: A Tensor or Python value or N-D array of type `dtype`.\n `lam` provides the rate parameter(s) describing the poisson\n distribution(s) to sample.\n dtype: The type of the output: `float16`, `float32`, `float64`, `int32` or\n `int64`.\n seed: A Python integer. Used to create a random seed for the distributions.\n See\n `tf.random.set_seed`\n for behavior.\n name: Optional name for the operation.\n\n Returns:\n samples: a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], axis=0)`\n with values of type `dtype`.\n ", "desc": "Draws `shape` samples from each of the given Poisson distribution(s).", "type": "API"}, {"name": "tf.random.set_global_generator", "docs": "Replaces the global generator with another `Generator` object.\n\n This function replaces the global generator with the provided `generator`\n object.\n A random number generator utilizes a `tf.Variable` object to store its state.\n The user shall be aware of caveats how `set_global_generator` interacts with\n `tf.function`:\n\n - tf.function puts restrictions on Variable creation thus one cannot freely\n create a new random generator instance inside `tf.function`.\n To call `set_global_generator` inside `tf.function`, the generator instance\n must have already been created eagerly.\n - tf.function captures the Variable during trace-compilation, thus a compiled\n f.function will not be affected `set_global_generator` as demonstrated by\n random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun .\n\n For most use cases, avoid calling `set_global_generator` after program\n initialization, and prefer to reset the state of the existing global generator\n instead, such as,\n\n >>> rng = tf.random.get_global_generator()\n >>> rng.reset_from_seed(30)\n\n\n Args:\n generator: the new `Generator` object.\n ", "desc": "Replaces the global generator with another `Generator` object.", "type": "API"}, {"name": "tf.random.set_seed", "docs": "Sets the global random seed.\n\n Operations that rely on a random seed actually derive it from two seeds:\n the global and operation-level seeds. This sets the global seed.\n\n Its interactions with operation-level seeds is as follows:\n\n 1. If neither the global seed nor the operation seed is set: A randomly\n picked seed is used for this op.\n 2. If the global seed is set, but the operation seed is not:\n The system deterministically picks an operation seed in conjunction with\n the global seed so that it gets a unique random sequence. Within the\n same version of tensorflow and user code, this sequence is deterministic.\n However across different versions, this sequence might change. If the\n code depends on particular seeds to work, specify both global\n and operation-level seeds explicitly.\n 3. If the operation seed is set, but the global seed is not set:\n A default global seed and the specified operation seed are used to\n determine the random sequence.\n 4. If both the global and the operation seed are set:\n Both seeds are used in conjunction to determine the random sequence.\n\n To illustrate the user-visible effects, consider these examples:\n\n If neither the global seed nor the operation seed is set, we get different\n results for every call to the random op and every re-run of the program:\n\n ```python\n print(tf.random.uniform([1])) # generates 'A1'\n print(tf.random.uniform([1])) # generates 'A2'\n ```\n\n (now close the program and run it again)\n\n ```python\n print(tf.random.uniform([1])) # generates 'A3'\n print(tf.random.uniform([1])) # generates 'A4'\n ```\n\n If the global seed is set but the operation seed is not set, we get different\n results for every call to the random op, but the same sequence for every\n re-run of the program:\n\n ```python\n tf.random.set_seed(1234)\n print(tf.random.uniform([1])) # generates 'A1'\n print(tf.random.uniform([1])) # generates 'A2'\n ```\n\n (now close the program and run it again)\n\n ```python\n tf.random.set_seed(1234)\n print(tf.random.uniform([1])) # generates 'A1'\n print(tf.random.uniform([1])) # generates 'A2'\n ```\n\n The reason we get 'A2' instead 'A1' on the second call of `tf.random.uniform`\n above is because the second call uses a different operation seed.\n\n Note that `tf.function` acts like a re-run of a program in this case. When\n the global seed is set but operation seeds are not set, the sequence of random\n numbers are the same for each `tf.function`. For example:\n\n ```python\n tf.random.set_seed(1234)\n\n @tf.function\n def f():\n a = tf.random.uniform([1])\n b = tf.random.uniform([1])\n return a, b\n\n @tf.function\n def g():\n a = tf.random.uniform([1])\n b = tf.random.uniform([1])\n return a, b\n\n print(f()) # prints '(A1, A2)'\n print(g()) # prints '(A1, A2)'\n ```\n\n If the operation seed is set, we get different results for every call to the\n random op, but the same sequence for every re-run of the program:\n\n ```python\n print(tf.random.uniform([1], seed=1)) # generates 'A1'\n print(tf.random.uniform([1], seed=1)) # generates 'A2'\n ```\n\n (now close the program and run it again)\n\n ```python\n print(tf.random.uniform([1], seed=1)) # generates 'A1'\n print(tf.random.uniform([1], seed=1)) # generates 'A2'\n ```\n\n The reason we get 'A2' instead 'A1' on the second call of `tf.random.uniform`\n above is because the same `tf.random.uniform` kernel (i.e. internal\n representation) is used by TensorFlow for all calls of it with the same\n arguments, and the kernel maintains an internal counter which is incremented\n every time it is executed, generating different results.\n\n Calling `tf.random.set_seed` will reset any such counters:\n\n ```python\n tf.random.set_seed(1234)\n print(tf.random.uniform([1], seed=1)) # generates 'A1'\n print(tf.random.uniform([1], seed=1)) # generates 'A2'\n tf.random.set_seed(1234)\n print(tf.random.uniform([1], seed=1)) # generates 'A1'\n print(tf.random.uniform([1], seed=1)) # generates 'A2'\n ```\n\n When multiple identical random ops are wrapped in a `tf.function`, their\n behaviors change because the ops no long share the same counter. For example:\n\n ```python\n @tf.function\n def foo():\n a = tf.random.uniform([1], seed=1)\n b = tf.random.uniform([1], seed=1)\n return a, b\n print(foo()) # prints '(A1, A1)'\n print(foo()) # prints '(A2, A2)'\n\n @tf.function\n def bar():\n a = tf.random.uniform([1])\n b = tf.random.uniform([1])\n return a, b\n print(bar()) # prints '(A1, A2)'\n print(bar()) # prints '(A3, A4)'\n ```\n\n The second call of `foo` returns '(A2, A2)' instead of '(A1, A1)' because\n `tf.random.uniform` maintains an internal counter. If you want `foo` to return\n '(A1, A1)' every time, use the stateless random ops such as\n `tf.random.stateless_uniform`. Also see `tf.random.experimental.Generator` for\n a new set of stateful random ops that use external variables to manage their\n states.\n\n Args:\n seed: integer.\n ", "desc": "Sets the global random seed.", "type": "API"}, {"name": "tf.random.shuffle", "docs": "Randomly shuffles a tensor along its first dimension.\n\n The tensor is shuffled along dimension 0, such that each `value[j]` is mapped\n to one and only one `output[i]`. For example, a mapping that might occur for a\n 3x2 tensor is:\n\n ```python\n [[1, 2], [[5, 6],\n [3, 4], ==> [1, 2],\n [5, 6]] [3, 4]]\n ```\n\n Args:\n value: A Tensor to be shuffled.\n seed: A Python integer. Used to create a random seed for the distribution.\n See\n `tf.random.set_seed`\n for behavior.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of same shape and type as `value`, shuffled along its first\n dimension.\n ", "desc": "Randomly shuffles a tensor along its first dimension.", "type": "API"}, {"name": "tf.random.stateless_binomial", "docs": "Outputs deterministic pseudorandom values from a binomial distribution.\n\n The generated values follow a binomial distribution with specified count and\n probability of success parameters.\n\n This is a stateless version of `tf.random.Generator.binomial`: if run twice\n with the same seeds and shapes, it will produce the same pseudorandom numbers.\n The output is consistent across multiple runs on the same hardware (and\n between CPU and GPU), but may change between versions of TensorFlow or on\n non-CPU/GPU hardware.\n\n Example:\n\n ```python\n counts = [10., 20.]\n # Probability of success.\n probs = [0.8]\n\n binomial_samples = tf.random.stateless_binomial(\n shape=[2], seed=[123, 456], counts=counts, probs=probs)\n\n counts = ... # Shape [3, 1, 2]\n probs = ... # Shape [1, 4, 2]\n shape = [3, 4, 3, 4, 2]\n # Sample shape will be [3, 4, 3, 4, 2]\n binomial_samples = tf.random.stateless_binomial(\n shape=shape, seed=[123, 456], counts=counts, probs=probs)\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n counts: Tensor. The counts of the binomial distribution. Must be\n broadcastable with `probs`, and broadcastable with the rightmost\n dimensions of `shape`.\n probs: Tensor. The probability of success for the binomial distribution.\n Must be broadcastable with `counts` and broadcastable with the rightmost\n dimensions of `shape`.\n output_dtype: The type of the output. Default: tf.int32\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random binomial\n values. For each i, each samples[..., i] is an independent draw from\n the binomial distribution on counts[i] trials with probability of\n success probs[i].\n\n ", "desc": "Outputs deterministic pseudorandom values from a binomial distribution.", "type": "API"}, {"name": "tf.random.stateless_categorical", "docs": "Draws deterministic pseudorandom samples from a categorical distribution.\n\n This is a stateless version of `tf.categorical`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n\n Example:\n\n ```python\n # samples has shape [1, 5], where each value is either 0 or 1 with equal\n # probability.\n samples = tf.random.stateless_categorical(\n tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])\n ```\n\n Args:\n logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice\n `[i, :]` represents the unnormalized log-probabilities for all classes.\n num_samples: 0-D. Number of independent samples to draw for each row slice.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n dtype: The integer type of the output: `int32` or `int64`. Defaults to\n `int64`.\n name: Optional name for the operation.\n\n Returns:\n The drawn samples of shape `[batch_size, num_samples]`.\n ", "desc": "Draws deterministic pseudorandom samples from a categorical distribution.", "type": "API"}, {"name": "tf.random.stateless_gamma", "docs": "Outputs deterministic pseudorandom values from a gamma distribution.\n\n The generated values follow a gamma distribution with specified concentration\n (`alpha`) and inverse scale (`beta`) parameters.\n\n This is a stateless version of `tf.random.gamma`: if run twice with the same\n seeds and shapes, it will produce the same pseudorandom numbers. The output is\n consistent across multiple runs on the same hardware (and between CPU and\n GPU),\n but may change between versions of TensorFlow or on non-CPU/GPU hardware.\n\n A slight difference exists in the interpretation of the `shape` parameter\n between `stateless_gamma` and `gamma`: in `gamma`, the `shape` is always\n prepended to the shape of the broadcast of `alpha` with `beta`; whereas in\n `stateless_gamma` the `shape` parameter must always encompass the shapes of\n each of `alpha` and `beta` (which must broadcast together to match the\n trailing dimensions of `shape`).\n\n Note: Because internal calculations are done using `float64` and casting has\n `floor` semantics, we must manually map zero outcomes to the smallest\n possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This\n means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise\n should. This bias can only happen for small values of `alpha`, i.e.,\n `alpha << 1` or large values of `beta`, i.e., `beta >> 1`.\n\n The samples are differentiable w.r.t. alpha and beta.\n The derivatives are computed using the approach described in\n (Figurnov et al., 2018).\n\n Example:\n\n ```python\n samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n alpha = tf.constant([[1.], [3.], [5.]])\n beta = tf.constant([[3., 4.]])\n samples = tf.random.stateless_gamma(\n [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)\n # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.\n\n with tf.GradientTape() as tape:\n tape.watch([alpha, beta])\n loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma(\n [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)))\n dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta])\n # unbiased stochastic derivatives of the loss function\n alpha.shape == dloss_dalpha.shape # True\n beta.shape == dloss_dbeta.shape # True\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n alpha: Tensor. The concentration parameter of the gamma distribution. Must\n be broadcastable with `beta`, and broadcastable with the rightmost\n dimensions of `shape`.\n beta: Tensor. The inverse scale parameter of the gamma distribution. Must be\n broadcastable with `alpha` and broadcastable with the rightmost dimensions\n of `shape`.\n dtype: Floating point dtype of `alpha`, `beta`, and the output.\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random gamma values.\n For each i, each `samples[..., i] is an independent draw from the gamma\n distribution with concentration alpha[i] and scale beta[i].\n\n ", "desc": "Outputs deterministic pseudorandom values from a gamma distribution.", "type": "API"}, {"name": "tf.random.stateless_normal", "docs": "Outputs deterministic pseudorandom values from a normal distribution.\n\n This is a stateless version of `tf.random.normal`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal\n distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution.\n dtype: The float type of the output: `float16`, `bfloat16`, `float32`,\n `float64`. Defaults to `float32`.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor of the specified shape filled with random normal values.\n ", "desc": "Outputs deterministic pseudorandom values from a normal distribution.", "type": "API"}, {"name": "tf.random.stateless_parameterized_truncated_normal", "docs": "Outputs random values from a truncated normal distribution.\n\n The generated values follow a normal distribution with specified mean and\n standard deviation, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n\n Examples:\n\n Sample from a Truncated normal, with deferring shape parameters that\n broadcast.\n\n >>> means = 0.\n >>> stddevs = tf.math.exp(tf.random.uniform(shape=[2, 3]))\n >>> minvals = [-1., -2., -1000.]\n >>> maxvals = [[10000.], [1.]]\n >>> y = tf.random.stateless_parameterized_truncated_normal(\n ... shape=[10, 2, 3], seed=[7, 17],\n ... means=means, stddevs=stddevs, minvals=minvals, maxvals=maxvals)\n >>> y.shape\n TensorShape([10, 2, 3])\n\n Args:\n shape: A 1-D integer `Tensor` or Python array. The shape of the output\n tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n means: A `Tensor` or Python value of type `dtype`. The mean of the truncated\n normal distribution. This must broadcast with `stddevs`, `minvals` and\n `maxvals`, and the broadcasted shape must be dominated by `shape`.\n stddevs: A `Tensor` or Python value of type `dtype`. The standard deviation\n of the truncated normal distribution. This must broadcast with `means`,\n `minvals` and `maxvals`, and the broadcasted shape must be dominated by\n `shape`.\n minvals: A `Tensor` or Python value of type `dtype`. The minimum value of\n the truncated normal distribution. This must broadcast with `means`,\n `stddevs` and `maxvals`, and the broadcasted shape must be dominated by\n `shape`.\n maxvals: A `Tensor` or Python value of type `dtype`. The maximum value of\n the truncated normal distribution. This must broadcast with `means`,\n `stddevs` and `minvals`, and the broadcasted shape must be dominated by\n `shape`.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.random.stateless_poisson", "docs": "Outputs deterministic pseudorandom values from a Poisson distribution.\n\n The generated values follow a Poisson distribution with specified rate\n parameter.\n\n This is a stateless version of `tf.random.poisson`: if run twice with the same\n seeds and shapes, it will produce the same pseudorandom numbers. The output is\n consistent across multiple runs on the same hardware, but may change between\n versions of TensorFlow or on non-CPU/GPU hardware.\n\n A slight difference exists in the interpretation of the `shape` parameter\n between `stateless_poisson` and `poisson`: in `poisson`, the `shape` is always\n prepended to the shape of `lam`; whereas in `stateless_poisson` the shape of\n `lam` must match the trailing dimensions of `shape`.\n\n Example:\n\n ```python\n samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15])\n # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents\n # the samples drawn from each distribution\n\n samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15])\n # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]\n # represents the 7x5 samples drawn from each of the two distributions\n\n rate = tf.constant([[1.], [3.], [5.]])\n samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate)\n # samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions.\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n lam: Tensor. The rate parameter \"lambda\" of the Poisson distribution. Shape\n must match the rightmost dimensions of `shape`.\n dtype: Dtype of the samples (int or float dtypes are permissible, as samples\n are discrete). Default: int32.\n name: A name for the operation (optional).\n\n Returns:\n samples: A Tensor of the specified shape filled with random Poisson values.\n For each i, each `samples[..., i]` is an independent draw from the Poisson\n distribution with rate `lam[i]`.\n\n ", "desc": "Outputs deterministic pseudorandom values from a Poisson distribution.", "type": "API"}, {"name": "tf.random.stateless_truncated_normal", "docs": "Outputs deterministic pseudorandom values, truncated normally distributed.\n\n This is a stateless version of `tf.random.truncated_normal`: if run twice with\n the same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n The generated values follow a normal distribution with specified mean and\n standard deviation, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the\n truncated normal distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution, before truncation.\n dtype: The type of the output.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. See\n `tf.random.stateless_uniform` for a detailed explanation.\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs deterministic pseudorandom values, truncated normally distributed.", "type": "API"}, {"name": "tf.random.stateless_uniform", "docs": "Outputs deterministic pseudorandom values from a uniform distribution.\n\n This is a stateless version of `tf.random.uniform`: if run twice with the\n same seeds and shapes, it will produce the same pseudorandom numbers. The\n output is consistent across multiple runs on the same hardware (and between\n CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU\n hardware.\n\n The generated values follow a uniform distribution in the range\n `[minval, maxval)`. The lower bound `minval` is included in the range, while\n the upper bound `maxval` is excluded.\n\n For floats, the default range is `[0, 1)`. For ints, at least `maxval` must\n be specified explicitly.\n\n In the integer case, the random integers are slightly biased unless\n `maxval - minval` is an exact power of two. The bias is small for values of\n `maxval - minval` significantly smaller than the range of the output (either\n `2**32` or `2**64`).\n\n For full-range (i.e. inclusive of both max and min) random integers, pass\n `minval=None` and `maxval=None` with an integer `dtype`. For an integer dtype\n either both `minval` and `maxval` must be `None` or neither may be `None`. For\n example:\n ```python\n ints = tf.random.stateless_uniform(\n [10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32)\n ```\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n seed: A shape [2] Tensor, the seed to the random number generator. Must have\n dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n minval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The lower bound on the range of random values to\n generate. Pass `None` for full-range integers. Defaults to 0.\n maxval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The upper bound on the range of random values to generate.\n Defaults to 1 if `dtype` is floating point. Pass `None` for full-range\n integers.\n dtype: The type of the output: `float16`, `bfloat16`, `float32`, `float64`,\n `int32`, or `int64`. For unbounded uniform ints (`minval`, `maxval` both\n `None`), `uint32` and `uint64` may be used. Defaults to `float32`.\n name: A name for the operation (optional).\n alg: The RNG algorithm used to generate the random numbers. Valid\n choices are `\"philox\"` for [the Philox\n algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf),\n `\"threefry\"` for [the ThreeFry\n algorithm](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf),\n and `\"auto_select\"` (default) for the system to automatically\n select an algorithm based the device type. Values of\n `tf.random.Algorithm` can also be used. Note that with\n `\"auto_select\"`, the outputs of this function may change when\n it is running on a different device.\n\n Returns:\n A tensor of the specified shape filled with random uniform values.\n\n Raises:\n ValueError: If `dtype` is integral and only one of `minval` or `maxval` is\n specified.\n ", "desc": "Outputs deterministic pseudorandom values from a uniform distribution.", "type": "API"}, {"name": "tf.random.truncated_normal", "docs": "Outputs random values from a truncated normal distribution.\n\n The values are drawn from a normal distribution with specified mean and\n standard deviation, discarding and re-drawing any samples that are more than\n two standard deviations from the mean.\n\n Examples:\n\n >>> tf.random.truncated_normal(shape=[2])\n \n\n >>> tf.random.truncated_normal(shape=[2], mean=3, stddev=1, dtype=tf.float32)\n \n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n mean: A 0-D Tensor or Python value of type `dtype`. The mean of the\n truncated normal distribution.\n stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation\n of the normal distribution, before truncation.\n dtype: The type of the output. Restricted to floating-point types:\n `tf.half`, `tf.float`, `tf.double`, etc.\n seed: A Python integer. Used to create a random seed for the distribution.\n See `tf.random.set_seed` for more information.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random truncated normal values.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.random.uniform", "docs": "Outputs random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range\n `[minval, maxval)`. The lower bound `minval` is included in the range, while\n the upper bound `maxval` is excluded.\n\n For floats, the default range is `[0, 1)`. For ints, at least `maxval` must\n be specified explicitly.\n\n In the integer case, the random integers are slightly biased unless\n `maxval - minval` is an exact power of two. The bias is small for values of\n `maxval - minval` significantly smaller than the range of the output (either\n `2**32` or `2**64`).\n\n Examples:\n\n >>> tf.random.uniform(shape=[2])\n \n >>> tf.random.uniform(shape=[], minval=-1., maxval=0.)\n \n >>> tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64)\n \n\n The `seed` argument produces a deterministic sequence of tensors across\n multiple calls. To repeat that sequence, use `tf.random.set_seed`:\n\n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.set_seed(5)\n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n >>> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10)\n \n\n Without `tf.random.set_seed` but with a `seed` argument is specified, small\n changes to function graphs or previously executed operations will change the\n returned value. See `tf.random.set_seed` for details.\n\n Args:\n shape: A 1-D integer Tensor or Python array. The shape of the output tensor.\n minval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The lower bound on the range of random values to generate\n (inclusive). Defaults to 0.\n maxval: A Tensor or Python value of type `dtype`, broadcastable with\n `shape` (for integer types, broadcasting is not supported, so it needs to\n be a scalar). The upper bound on the range of random values to generate\n (exclusive). Defaults to 1 if `dtype` is floating point.\n dtype: The type of the output: `float16`, `bfloat16`, `float32`, `float64`,\n `int32`, or `int64`. Defaults to `float32`.\n seed: A Python integer. Used in combination with `tf.random.set_seed` to\n create a reproducible sequence of tensors across multiple calls.\n name: A name for the operation (optional).\n\n Returns:\n A tensor of the specified shape filled with random uniform values.\n\n Raises:\n ValueError: If `dtype` is integral and `maxval` is not specified.\n ", "desc": "Outputs random values from a uniform distribution.", "type": "API"}, {"name": "tf.random.uniform_candidate_sampler", "docs": "Samples a set of classes using a uniform base distribution.\n\n This operation randomly samples a tensor of sampled classes\n (`sampled_candidates`) from the range of integers `[0, range_max)`.\n\n The elements of `sampled_candidates` are drawn without replacement\n (if `unique=True`) or with replacement (if `unique=False`) from\n the base distribution.\n\n The base distribution for this operation is the uniform distribution\n over the range of integers `[0, range_max)`.\n\n In addition, this operation returns tensors `true_expected_count`\n and `sampled_expected_count` representing the number of times each\n of the target classes (`true_classes`) and the sampled\n classes (`sampled_candidates`) is expected to occur in an average\n tensor of sampled classes. These values correspond to `Q(y|x)`\n defined in [this\n document](http://www.tensorflow.org/extras/candidate_sampling.pdf).\n If `unique=True`, then these are post-rejection probabilities and we\n compute them approximately.\n\n Args:\n true_classes: A `Tensor` of type `int64` and shape `[batch_size,\n num_true]`. The target classes.\n num_true: An `int`. The number of target classes per training example.\n num_sampled: An `int`. The number of classes to randomly sample. The\n `sampled_candidates` return value will have shape `[num_sampled]`. If\n `unique=True`, `num_sampled` must be less than or equal to `range_max`.\n unique: A `bool`. Determines whether all sampled classes in a batch are\n unique.\n range_max: An `int`. The number of possible classes.\n seed: An `int`. An operation-specific seed. Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n sampled_candidates: A tensor of type `int64` and shape `[num_sampled]`. The\n sampled classes, either with possible duplicates (`unique=False`) or all\n unique (`unique=True`). In either case, `sampled_candidates` is\n independent of the true classes.\n true_expected_count: A tensor of type `float`. Same shape as\n `true_classes`. The expected counts under the sampling distribution\n of each of `true_classes`.\n sampled_expected_count: A tensor of type `float`. Same shape as\n `sampled_candidates`. The expected counts under the sampling distribution\n of each of `sampled_candidates`.\n ", "desc": "Samples a set of classes using a uniform base distribution.", "type": "API"}, {"name": "tf.random_normal_initializer", "docs": "Initializer that generates tensors with a normal distribution.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Examples:\n\n >>> def make_variables(k, initializer):\n ... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),\n ... tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))\n >>> v1, v2 = make_variables(3,\n ... tf.random_normal_initializer(mean=1., stddev=2.))\n >>> v1\n \n >>> v2\n >> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))\n (, >> def make_variables(k, initializer):\n ... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),\n ... tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))\n >>> v1, v2 = make_variables(3, tf.ones_initializer())\n >>> v1\n \n >>> v2\n \n >>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))\n (, >> start = 3\n >>> limit = 18\n >>> delta = 3\n >>> tf.range(start, limit, delta)\n \n\n >>> start = 3\n >>> limit = 1\n >>> delta = -0.5\n >>> tf.range(start, limit, delta)\n \n\n >>> limit = 5\n >>> tf.range(limit)\n \n\n Args:\n start: A 0-D `Tensor` (scalar). Acts as first entry in the range if `limit`\n is not None; otherwise, acts as range limit and first entry defaults to 0.\n limit: A 0-D `Tensor` (scalar). Upper limit of sequence, exclusive. If None,\n defaults to the value of `start` while the first entry of the range\n defaults to 0.\n delta: A 0-D `Tensor` (scalar). Number that increments `start`. Defaults to\n 1.\n dtype: The type of the elements of the resulting tensor.\n name: A name for the operation. Defaults to \"range\".\n\n Returns:\n An 1-D `Tensor` of type `dtype`.\n\n @compatibility(numpy)\n Equivalent to np.arange\n @end_compatibility\n ", "desc": "Creates a sequence of numbers.", "type": "API"}, {"name": "tf.rank", "docs": "Returns the rank of a tensor.\n\n See also `tf.shape`.\n\n Returns a 0-D `int32` `Tensor` representing the rank of `input`.\n\n For example:\n\n ```python\n # shape of tensor 't' is [2, 2, 3]\n t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n tf.rank(t) # 3\n ```\n\n **Note**: The rank of a tensor is not the same as the rank of a matrix. The\n rank of a tensor is the number of indices required to uniquely select each\n element of the tensor. Rank is also known as \"order\", \"degree\", or \"ndims.\"\n\n Args:\n input: A `Tensor` or `SparseTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n\n @compatibility(numpy)\n Equivalent to np.ndim\n @end_compatibility\n ", "desc": "Returns the rank of a tensor.", "type": "API"}, {"name": "tf.raw_ops", "docs": "Public API for tf.raw_ops namespace.\n", "desc": "Public API for tf.raw_ops namespace.", "type": "API"}, {"name": "tf.raw_ops.Abort", "docs": "Raise a exception to abort the process when called.\n\n If exit_without_error is true, the process will exit normally,\n otherwise it will exit with a SIGABORT signal.\n\n Returns nothing but an exception.\n\n Args:\n error_msg: An optional `string`. Defaults to `\"\"`.\n A string which is the message associated with the exception.\n exit_without_error: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Raise a exception to abort the process when called.", "type": "API"}, {"name": "tf.raw_ops.Abs", "docs": "Computes the absolute value of a tensor.\n\n Given a tensor `x`, this operation returns a tensor containing the absolute\n value of each element in `x`. For example, if x is an input element and y is\n an output element, this operation computes \\\\(y = |x|\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the absolute value of a tensor.", "type": "API"}, {"name": "tf.raw_ops.AccumulateNV2", "docs": "Returns the element-wise sum of a list of tensors.\n\n `tf.accumulate_n_v2` performs the same operation as `tf.add_n`, but does not\n wait for all of its inputs to be ready before beginning to sum. This can\n save memory if inputs are ready at different times, since minimum temporary\n storage is proportional to the output size rather than the inputs size.\n\n Unlike the original `accumulate_n`, `accumulate_n_v2` is differentiable.\n\n Returns a `Tensor` of same shape and type as the elements of `inputs`.\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with the same type in: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A list of `Tensor` objects, each with same shape and type.\n shape: A `tf.TensorShape` or list of `ints`.\n Shape of elements of `inputs`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Returns the element-wise sum of a list of tensors.", "type": "API"}, {"name": "tf.raw_ops.AccumulatorApplyGradient", "docs": "Applies a gradient to a given accumulator.\n\n Does not add if local_step is lesser than the accumulator's global_step.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a accumulator.\n local_step: A `Tensor` of type `int64`.\n The local_step value at which the gradient was computed.\n gradient: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of the gradient to be accumulated.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies a gradient to a given accumulator.", "type": "API"}, {"name": "tf.raw_ops.AccumulatorNumAccumulated", "docs": "Returns the number of gradients aggregated in the given accumulators.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to an accumulator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Returns the number of gradients aggregated in the given accumulators.", "type": "API"}, {"name": "tf.raw_ops.AccumulatorSetGlobalStep", "docs": "Updates the accumulator with a new value for global_step.\n\n Logs warning if the accumulator's value is already higher than\n new_global_step.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to an accumulator.\n new_global_step: A `Tensor` of type `int64`.\n The new global_step value to set.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the accumulator with a new value for global_step.", "type": "API"}, {"name": "tf.raw_ops.AccumulatorTakeGradient", "docs": "Extracts the average gradient in the given ConditionalAccumulator.\n\n The op blocks until sufficient (i.e., more than num_required)\n gradients have been accumulated. If the accumulator has already\n aggregated more than num_required gradients, it returns the average of\n the accumulated gradients. Also automatically increments the recorded\n global_step in the accumulator by 1, and resets the aggregate to 0.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to an accumulator.\n num_required: A `Tensor` of type `int32`.\n Number of gradients required before we return an aggregate.\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The data type of accumulated gradients. Needs to correspond to the type\n of the accumulator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Extracts the average gradient in the given ConditionalAccumulator.", "type": "API"}, {"name": "tf.raw_ops.Acos", "docs": "Computes acos of x element-wise.\n\n \n Provided an input tensor, the `tf.math.acos` operation returns the inverse cosine of each element of the tensor. If `y = tf.math.cos(x)` then, `x = tf.math.acos(y)`.\n\n Input range is `[-1, 1]` and the output has a range of `[0, pi]`.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes acos of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Acosh", "docs": "Computes inverse hyperbolic cosine of x element-wise.\n\n Given an input tensor, the function computes inverse hyperbolic cosine of every element.\n Input range is `[1, inf]`. It returns `nan` if the input lies outside the range.\n\n ```python\n x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Add", "docs": "Returns x + y element-wise.\n\n *NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Given two input tensors, the `tf.add` operation computes the sum for every element in the tensor.\n\n Both input and output have a range `(-inf, inf)`.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.raw_ops.AddManySparseToTensorsMap", "docs": "Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles.\n\n A `SparseTensor` of rank `R` is represented by three tensors: `sparse_indices`,\n `sparse_values`, and `sparse_shape`, where\n\n ```sparse_indices.shape[1] == sparse_shape.shape[0] == R```\n\n An `N`-minibatch of `SparseTensor` objects is represented as a `SparseTensor`\n having a first `sparse_indices` column taking values between `[0, N)`, where\n the minibatch size `N == sparse_shape[0]`.\n\n The input `SparseTensor` must have rank `R` greater than 1, and the first\n dimension is treated as the minibatch dimension. Elements of the `SparseTensor`\n must be sorted in increasing order of this first dimension. The stored\n `SparseTensor` objects pointed to by each row of the output `sparse_handles`\n will have rank `R-1`.\n\n The `SparseTensor` values can then be read out as part of a minibatch by passing\n the given keys as vector elements to `TakeManySparseFromTensorsMap`. To ensure\n the correct `SparseTensorsMap` is accessed, ensure that the same\n `container` and `shared_name` are passed to that Op. If no `shared_name`\n is provided here, instead use the *name* of the Operation created by calling\n `AddManySparseToTensorsMap` as the `shared_name` passed to\n `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.\n\n Args:\n sparse_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the minibatch `SparseTensor`.\n `sparse_indices[:, 0]` must be ordered values in `[0, N)`.\n sparse_values: A `Tensor`.\n 1-D. The `values` of the minibatch `SparseTensor`.\n sparse_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the minibatch `SparseTensor`.\n The minibatch size `N == sparse_shape[0]`.\n container: An optional `string`. Defaults to `\"\"`.\n The container name for the `SparseTensorsMap` created by this op.\n shared_name: An optional `string`. Defaults to `\"\"`.\n The shared name for the `SparseTensorsMap` created by this op.\n If blank, the new Operation's unique name is used.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles.", "type": "API"}, {"name": "tf.raw_ops.AddN", "docs": "Add all input tensors element wise.\n\n Inputs must be of same size and shape.\n\n ```python\n x = [9, 7, 10]\n tf.math.add_n(x) ==> 26\n ```\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with the same type in: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Add all input tensors element wise.", "type": "API"}, {"name": "tf.raw_ops.AddSparseToTensorsMap", "docs": "Add a `SparseTensor` to a `SparseTensorsMap` return its handle.\n\n A `SparseTensor` is represented by three tensors: `sparse_indices`,\n `sparse_values`, and `sparse_shape`.\n\n This operator takes the given `SparseTensor` and adds it to a container\n object (a `SparseTensorsMap`). A unique key within this container is generated\n in the form of an `int64`, and this is the value that is returned.\n\n The `SparseTensor` can then be read out as part of a minibatch by passing\n the key as a vector element to `TakeManySparseFromTensorsMap`. To ensure\n the correct `SparseTensorsMap` is accessed, ensure that the same\n `container` and `shared_name` are passed to that Op. If no `shared_name`\n is provided here, instead use the *name* of the Operation created by calling\n `AddSparseToTensorsMap` as the `shared_name` passed to\n `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.\n\n Args:\n sparse_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the `SparseTensor`.\n sparse_values: A `Tensor`. 1-D. The `values` of the `SparseTensor`.\n sparse_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the `SparseTensor`.\n container: An optional `string`. Defaults to `\"\"`.\n The container name for the `SparseTensorsMap` created by this op.\n shared_name: An optional `string`. Defaults to `\"\"`.\n The shared name for the `SparseTensorsMap` created by this op.\n If blank, the new Operation's unique name is used.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Add a `SparseTensor` to a `SparseTensorsMap` return its handle.", "type": "API"}, {"name": "tf.raw_ops.AddV2", "docs": "Returns x + y element-wise.\n\n *NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x + y element-wise.", "type": "API"}, {"name": "tf.raw_ops.AdjustContrast", "docs": "Deprecated. Disallowed in GraphDef version >= 2.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `float32`, `float64`.\n contrast_factor: A `Tensor` of type `float32`.\n min_value: A `Tensor` of type `float32`.\n max_value: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Deprecated. Disallowed in GraphDef version >= 2.", "type": "API"}, {"name": "tf.raw_ops.AdjustContrastv2", "docs": "Adjust the contrast of one or more images.\n\n `images` is a tensor of at least 3 dimensions. The last 3 dimensions are\n interpreted as `[height, width, channels]`. The other dimensions only\n represent a collection of images, such as `[batch, height, width, channels].`\n\n Contrast is adjusted independently for each channel of each image.\n\n For each channel, the Op first computes the mean of the image pixels in the\n channel and then adjusts each component of each pixel to\n `(x - mean) * contrast_factor + mean`.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `float32`.\n Images to adjust. At least 3-D.\n contrast_factor: A `Tensor` of type `float32`.\n A float multiplier for adjusting contrast.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Adjust the contrast of one or more images.", "type": "API"}, {"name": "tf.raw_ops.AdjustHue", "docs": "Adjust the hue of one or more images.\n\n `images` is a tensor of at least 3 dimensions. The last dimension is\n interpreted as channels, and must be three.\n\n The input image is considered in the RGB colorspace. Conceptually, the RGB\n colors are first mapped into HSV. A delta is then applied all the hue values,\n and then remapped back to RGB colorspace.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `float32`.\n Images to adjust. At least 3-D.\n delta: A `Tensor` of type `float32`. A float delta to add to the hue.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Adjust the hue of one or more images.", "type": "API"}, {"name": "tf.raw_ops.AdjustSaturation", "docs": "Adjust the saturation of one or more images.\n\n `images` is a tensor of at least 3 dimensions. The last dimension is\n interpreted as channels, and must be three.\n\n The input image is considered in the RGB colorspace. Conceptually, the RGB\n colors are first mapped into HSV. A scale is then applied all the saturation\n values, and then remapped back to RGB colorspace.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `float32`.\n Images to adjust. At least 3-D.\n scale: A `Tensor` of type `float32`.\n A float scale to add to the saturation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Adjust the saturation of one or more images.", "type": "API"}, {"name": "tf.raw_ops.All", "docs": "Computes the \"logical and\" of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor` of type `bool`. The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Computes the \"logical and\" of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.AllCandidateSampler", "docs": "Generates labels for candidate sampling with a learned unigram distribution.\n\n See explanations of candidate sampling and the data formats at\n go/candidate-sampling.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`. Number of candidates to produce.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a learned unigram distribution.", "type": "API"}, {"name": "tf.raw_ops.AllToAll", "docs": "An Op to exchange data across TPU replicas.\n\n On each replica, the input is split into `split_count` blocks along\n `split_dimension` and send to the other replicas given group_assignment. After\n receiving `split_count` - 1 blocks from other replicas, we concatenate the\n blocks along `concat_dimension` as the output.\n\n For example, suppose there are 2 TPU replicas:\n replica 0 receives input: `[[A, B]]`\n replica 1 receives input: `[[C, D]]`\n\n group_assignment=`[[0, 1]]`\n concat_dimension=0\n split_dimension=1\n split_count=2\n\n replica 0's output: `[[A], [C]]`\n replica 1's output: `[[B], [D]]`\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n The local input to the sum.\n group_assignment: A `Tensor` of type `int32`. An int32 tensor with shape\n [num_groups, num_replicas_per_group]. `group_assignment[i]` represents the\n replica ids in the ith subgroup.\n concat_dimension: An `int`. The dimension number to concatenate.\n split_dimension: An `int`. The dimension number to split.\n split_count: An `int`.\n The number of splits, this number must equal to the sub-group\n size(group_assignment.get_shape()[1])\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "An Op to exchange data across TPU replicas.", "type": "API"}, {"name": "tf.raw_ops.Angle", "docs": "Returns the argument of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n type `float` that is the argument of each element in `input`. All elements in\n `input` must be complex numbers of the form \\\\(a + bj\\\\), where *a*\n is the real part and *b* is the imaginary part.\n\n The argument returned by this operation is of the form \\\\(atan2(b, a)\\\\).\n\n For example:\n\n ```\n # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]\n tf.angle(input) ==> [2.0132, 1.056]\n ```\n\n @compatibility(numpy)\n Equivalent to np.angle.\n @end_compatibility\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Tout: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tout`.\n ", "desc": "Returns the argument of a complex number.", "type": "API"}, {"name": "tf.raw_ops.AnonymousIterator", "docs": "A container for an iterator resource.\n\n Args:\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A container for an iterator resource.", "type": "API"}, {"name": "tf.raw_ops.AnonymousIteratorV2", "docs": "A container for an iterator resource.\n\n Args:\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, deleter).\n\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n ", "desc": "A container for an iterator resource.", "type": "API"}, {"name": "tf.raw_ops.AnonymousMemoryCache", "docs": "TODO: add doc.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, deleter).\n\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.AnonymousMultiDeviceIterator", "docs": "A container for a multi device iterator resource.\n\n Args:\n devices: A list of `strings` that has length `>= 1`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, deleter).\n\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n ", "desc": "A container for a multi device iterator resource.", "type": "API"}, {"name": "tf.raw_ops.AnonymousRandomSeedGenerator", "docs": "TODO: add doc.\n\n Args:\n seed: A `Tensor` of type `int64`.\n seed2: A `Tensor` of type `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, deleter).\n\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.AnonymousSeedGenerator", "docs": "TODO: add doc.\n\n Args:\n seed: A `Tensor` of type `int64`.\n seed2: A `Tensor` of type `int64`.\n reshuffle: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, deleter).\n\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Any", "docs": "Computes the \"logical or\" of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor` of type `bool`. The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Computes the \"logical or\" of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdadelta", "docs": "Update '*var' according to the adadelta scheme.\n\n accum = rho() * accum + (1 - rho()) * grad.square();\n update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad;\n update_accum = rho() * update_accum + (1 - rho()) * update.square();\n var -= update;\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n accum_update: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var, accum and update_accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the adadelta scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdagrad", "docs": "Update '*var' according to the adagrad scheme.\n\n accum += grad * grad\n var -= lr * grad * (1 / sqrt(accum))\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdagradDA", "docs": "Update '*var' according to the proximal adagrad scheme.\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n gradient_accumulator: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n gradient_squared_accumulator: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n global_step: A `Tensor` of type `int64`.\n Training step number. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the proximal adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdagradV2", "docs": "Update '*var' according to the adagrad scheme.\n\n accum += grad * grad\n var -= lr * grad * (1 / sqrt(accum))\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdam", "docs": "Update '*var' according to the Adam algorithm.\n\n $$\\text{lr}_t := \\mathrm{lr} \\cdot \\frac{\\sqrt{1 - \\beta_2^t}}{1 - \\beta_1^t}$$\n $$m_t := \\beta_1 \\cdot m_{t-1} + (1 - \\beta_1) \\cdot g$$\n $$v_t := \\beta_2 \\cdot v_{t-1} + (1 - \\beta_2) \\cdot g^2$$\n $$\\text{var} := \\begin{cases} \\text{var} - (m_t \\beta_1 + g \\cdot (1 - \\beta_1))\\cdot\\text{lr}_t/(\\sqrt{v_t} + \\epsilon), &\\text{if use_nesterov}\\\\\\\\ \\text{var} - m_t \\cdot \\text{lr}_t /(\\sqrt{v_t} + \\epsilon), &\\text{otherwise} \\end{cases}$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n m: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n v: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n beta1_power: A `Tensor`. Must have the same type as `var`.\n Must be a scalar.\n beta2_power: A `Tensor`. Must have the same type as `var`.\n Must be a scalar.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n beta1: A `Tensor`. Must have the same type as `var`.\n Momentum factor. Must be a scalar.\n beta2: A `Tensor`. Must have the same type as `var`.\n Momentum factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, m, and v tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, uses the nesterov update.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the Adam algorithm.", "type": "API"}, {"name": "tf.raw_ops.ApplyAdaMax", "docs": "Update '*var' according to the AdaMax algorithm.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n v_t <- max(beta2 * v_{t-1}, abs(g))\n variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n m: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n v: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n beta1_power: A `Tensor`. Must have the same type as `var`.\n Must be a scalar.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n beta1: A `Tensor`. Must have the same type as `var`.\n Momentum factor. Must be a scalar.\n beta2: A `Tensor`. Must have the same type as `var`.\n Momentum factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, m, and v tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the AdaMax algorithm.", "type": "API"}, {"name": "tf.raw_ops.ApplyAddSign", "docs": "Update '*var' according to the AddSign update.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n update <- (alpha + sign_decay * sign(g) *sign(m)) * g\n variable <- variable - lr_t * update\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n m: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n alpha: A `Tensor`. Must have the same type as `var`. Must be a scalar.\n sign_decay: A `Tensor`. Must have the same type as `var`.\n Must be a scalar.\n beta: A `Tensor`. Must have the same type as `var`. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and m tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the AddSign update.", "type": "API"}, {"name": "tf.raw_ops.ApplyCenteredRMSProp", "docs": "Update '*var' according to the centered RMSProp algorithm.\n\n The centered RMSProp algorithm uses an estimate of the centered second moment\n (i.e., the variance) for normalization, as opposed to regular RMSProp, which\n uses the (uncentered) second moment. This often helps with training, but is\n slightly more expensive in terms of computation and memory.\n\n Note that in dense implementation of this algorithm, mg, ms, and mom will\n update even if the grad is zero, but in this sparse implementation, mg, ms,\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n mean_grad = decay * mean_grad + (1-decay) * gradient\n\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)\n\n mg <- rho * mg_{t-1} + (1-rho) * grad\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)\n var <- var - mom\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n mg: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n ms: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n mom: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `var`.\n Momentum Scale. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, mg, ms, and mom tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the centered RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ApplyFtrl", "docs": "Update '*var' according to the Ftrl-proximal scheme.\n\n accum_new = accum + grad * grad\n linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n linear: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n lr_power: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyFtrlV2", "docs": "Update '*var' according to the Ftrl-proximal scheme.\n\n grad_with_shrinkage = grad + 2 * l2_shrinkage * var\n accum_new = accum + grad * grad\n linear += grad_with_shrinkage -\n (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n linear: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 shrinkage regularization. Must be a scalar.\n l2_shrinkage: A `Tensor`. Must have the same type as `var`.\n lr_power: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyGradientDescent", "docs": "Update '*var' by subtracting 'alpha' * 'delta' from it.\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n alpha: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n delta: A `Tensor`. Must have the same type as `var`. The change.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' by subtracting 'alpha' * 'delta' from it.", "type": "API"}, {"name": "tf.raw_ops.ApplyMomentum", "docs": "Update '*var' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n accum = accum * momentum + grad\n var -= lr * accum\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n momentum: A `Tensor`. Must have the same type as `var`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var - lr * momentum * accum, so in the end, the var you get is actually\n var - lr * momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.ApplyPowerSign", "docs": "Update '*var' according to the AddSign update.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g\n variable <- variable - lr_t * update\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n m: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n logbase: A `Tensor`. Must have the same type as `var`. Must be a scalar.\n sign_decay: A `Tensor`. Must have the same type as `var`.\n Must be a scalar.\n beta: A `Tensor`. Must have the same type as `var`. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and m tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the AddSign update.", "type": "API"}, {"name": "tf.raw_ops.ApplyProximalAdagrad", "docs": "Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.\n\n accum += grad * grad\n prox_v = var - lr * grad * (1 / sqrt(accum))\n var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.", "type": "API"}, {"name": "tf.raw_ops.ApplyProximalGradientDescent", "docs": "Update '*var' as FOBOS algorithm with fixed learning rate.\n\n prox_v = var - alpha * delta\n var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n alpha: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n delta: A `Tensor`. Must have the same type as `var`. The change.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' as FOBOS algorithm with fixed learning rate.", "type": "API"}, {"name": "tf.raw_ops.ApplyRMSProp", "docs": "Update '*var' according to the RMSProp algorithm.\n\n Note that in dense implementation of this algorithm, ms and mom will\n update even if the grad is zero, but in this sparse implementation, ms\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon)\n\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)\n var <- var - mom\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n ms: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n mom: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `var`.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, ms, and mom tensors is protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ApproximateEqual", "docs": "Returns the truth value of abs(x-y) < tolerance element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n tolerance: An optional `float`. Defaults to `1e-05`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of abs(x-y) < tolerance element-wise.", "type": "API"}, {"name": "tf.raw_ops.ArgMax", "docs": "Returns the index with the largest value across dimensions of a tensor.\n\n Note that in case of ties the identity of the return value is not guaranteed.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmax(input = a)\n c = tf.keras.backend.eval(b)\n # c = 4\n # here a[4] = 166.32 which is the largest element of a across axis 0\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n dimension: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n int16, int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which dimension of the input Tensor to reduce across. For vectors,\n use dimension = 0.\n output_type: An optional `tf.DType` from: `tf.int16, tf.uint16, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the largest value across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.ArgMin", "docs": "Returns the index with the smallest value across dimensions of a tensor.\n\n Note that in case of ties the identity of the return value is not guaranteed.\n\n Usage:\n ```python\n import tensorflow as tf\n a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n b = tf.math.argmin(input = a)\n c = tf.keras.backend.eval(b)\n # c = 0\n # here a[0] = 1 which is the smallest element of a across axis 0\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n dimension: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n int32 or int64, must be in the range `[-rank(input), rank(input))`.\n Describes which dimension of the input Tensor to reduce across. For vectors,\n use dimension = 0.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Returns the index with the smallest value across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Asin", "docs": "Computes the trignometric inverse sine of x element-wise.\n\n The `tf.math.asin` operation returns the inverse of `tf.math.sin`, such that\n if `y = tf.math.sin(x)` then, `x = tf.math.asin(y)`.\n\n **Note**: The output of `tf.math.asin` will lie within the invertible range\n of sine, i.e [-pi/2, pi/2].\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.sin(x) # [0.8659266, 0.7068252]\n\n tf.math.asin(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse sine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Asinh", "docs": "Computes inverse hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic sine\n for every element in the tensor. Both input and output has a range of\n `[-inf, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -2, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Assert", "docs": "Asserts that the given condition is true.\n\n If `condition` evaluates to false, print the list of tensors in `data`.\n `summarize` determines how many entries of the tensors to print.\n\n Args:\n condition: A `Tensor` of type `bool`. The condition to evaluate.\n data: A list of `Tensor` objects.\n The tensors to print out when condition is false.\n summarize: An optional `int`. Defaults to `3`.\n Print this many entries of each tensor.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Asserts that the given condition is true.", "type": "API"}, {"name": "tf.raw_ops.AssertCardinalityDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n cardinality: A `Tensor` of type `int64`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.AssertNextDataset", "docs": "A transformation that asserts which transformations happen next.\n\n This transformation checks whether the camel-case names (i.e. \"FlatMap\", not\n \"flat_map\") of the transformations following this transformation match the list\n of names in the `transformations` argument. If there is a mismatch, the\n transformation raises an exception.\n\n The check occurs when iterating over the contents of the dataset, which\n means that the check happens *after* any static optimizations are applied\n to the dataset graph.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n `AssertNextDataset` passes through the outputs of its input dataset.\n transformations: A `Tensor` of type `string`.\n A `tf.string` vector `tf.Tensor` identifying the transformations that are\n expected to happen next.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "A transformation that asserts which transformations happen next.", "type": "API"}, {"name": "tf.raw_ops.Assign", "docs": "Update 'ref' by assigning 'value' to it.\n\n This operation outputs \"ref\" after the assignment is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Args:\n ref: A mutable `Tensor`.\n Should be from a `Variable` node. May be uninitialized.\n value: A `Tensor`. Must have the same type as `ref`.\n The value to be assigned to the variable.\n validate_shape: An optional `bool`. Defaults to `True`.\n If true, the operation will validate that the shape\n of 'value' matches the shape of the Tensor being assigned to. If false,\n 'ref' will take on the shape of 'value'.\n use_locking: An optional `bool`. Defaults to `True`.\n If True, the assignment will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Update 'ref' by assigning 'value' to it.", "type": "API"}, {"name": "tf.raw_ops.AssignAdd", "docs": "Update 'ref' by adding 'value' to it.\n\n This operation outputs \"ref\" after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n value: A `Tensor`. Must have the same type as `ref`.\n The value to be added to the variable.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the addition will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Update 'ref' by adding 'value' to it.", "type": "API"}, {"name": "tf.raw_ops.AssignAddVariableOp", "docs": "Adds a value to the current value of a variable.\n\n Any ReadVariableOp with a control dependency on this op is guaranteed to\n see the incremented value or a subsequent newer one.\n\n Args:\n resource: A `Tensor` of type `resource`.\n handle to the resource in which to store the variable.\n value: A `Tensor`. the value by which the variable will be incremented.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Adds a value to the current value of a variable.", "type": "API"}, {"name": "tf.raw_ops.AssignSub", "docs": "Update 'ref' by subtracting 'value' from it.\n\n This operation outputs \"ref\" after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n value: A `Tensor`. Must have the same type as `ref`.\n The value to be subtracted to the variable.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Update 'ref' by subtracting 'value' from it.", "type": "API"}, {"name": "tf.raw_ops.AssignSubVariableOp", "docs": "Subtracts a value from the current value of a variable.\n\n Any ReadVariableOp with a control dependency on this op is guaranteed to\n see the decremented value or a subsequent newer one.\n\n Args:\n resource: A `Tensor` of type `resource`.\n handle to the resource in which to store the variable.\n value: A `Tensor`. the value by which the variable will be incremented.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Subtracts a value from the current value of a variable.", "type": "API"}, {"name": "tf.raw_ops.AssignVariableOp", "docs": "Assigns a new value to a variable.\n\n Any ReadVariableOp with a control dependency on this op is guaranteed to return\n this value or a subsequent newer value of the variable.\n\n Args:\n resource: A `Tensor` of type `resource`.\n handle to the resource in which to store the variable.\n value: A `Tensor`. the value to set the new tensor to use.\n validate_shape: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Assigns a new value to a variable.", "type": "API"}, {"name": "tf.raw_ops.AsString", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.raw_ops.Atan", "docs": "Computes the trignometric inverse tangent of x element-wise.\n\n The `tf.math.atan` operation returns the inverse of `tf.math.tan`, such that\n if `y = tf.math.tan(x)` then, `x = tf.math.atan(y)`.\n\n **Note**: The output of `tf.math.atan` will lie within the invertible range\n of tan, i.e (-pi/2, pi/2).\n\n For example:\n\n ```python\n # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]\n x = tf.constant([1.047, 0.785])\n y = tf.math.tan(x) # [1.731261, 0.99920404]\n\n tf.math.atan(y) # [1.047, 0.785] = x\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the trignometric inverse tangent of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Atan2", "docs": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.\n\n This is the angle \\\\( \\theta \\in [-\\pi, \\pi] \\\\) such that\n \\\\[ x = r \\cos(\\theta) \\\\]\n and\n \\\\[ y = r \\sin(\\theta) \\\\]\n where \\\\(r = \\sqrt{x^2 + y^2} \\\\).\n\n For example:\n\n >>> x = [1., 1.]\n >>> y = [1., -1.]\n >>> print((tf.math.atan2(y,x) * (180 / np.pi)).numpy())\n [ 45. -45.]\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes arctangent of `y/x` element-wise, respecting signs of the arguments.", "type": "API"}, {"name": "tf.raw_ops.Atanh", "docs": "Computes inverse hyperbolic tangent of x element-wise.\n\n Given an input tensor, this function computes inverse hyperbolic tangent\n for every element in the tensor. Input range is `[-1,1]` and output range is\n `[-inf, inf]`. If input is `-1`, output will be `-inf` and if the\n input is `1`, output will be `inf`. Values outside the range will have\n `nan` as output.\n\n ```python\n x = tf.constant([-float(\"inf\"), -1, -0.5, 1, 0, 0.5, 10, float(\"inf\")])\n tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes inverse hyperbolic tangent of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.AudioSpectrogram", "docs": "Produces a visualization of audio data over time.\n\n Spectrograms are a standard way of representing audio information as a series of\n slices of frequency information, one slice for each window of time. By joining\n these together into a sequence, they form a distinctive fingerprint of the sound\n over time.\n\n This op expects to receive audio data as an input, stored as floats in the range\n -1 to 1, together with a window width in samples, and a stride specifying how\n far to move the window between slices. From this it generates a three\n dimensional output. The first dimension is for the channels in the input, so a\n stereo audio input would have two here for example. The second dimension is time,\n with successive frequency slices. The third dimension has an amplitude value for\n each frequency during that time slice.\n\n This means the layout when converted and saved as an image is rotated 90 degrees\n clockwise from a typical spectrogram. Time is descending down the Y axis, and\n the frequency decreases from left to right.\n\n Each value in the result represents the square root of the sum of the real and\n imaginary parts of an FFT on the current window of samples. In this way, the\n lowest dimension represents the power of each frequency in the current window,\n and adjacent windows are concatenated in the next dimension.\n\n To get a more intuitive and visual look at what this operation does, you can run\n tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the\n resulting spectrogram as a PNG image.\n\n Args:\n input: A `Tensor` of type `float32`. Float representation of audio data.\n window_size: An `int`.\n How wide the input window is in samples. For the highest efficiency\n this should be a power of two, but other values are accepted.\n stride: An `int`.\n How widely apart the center of adjacent sample windows should be.\n magnitude_squared: An optional `bool`. Defaults to `False`.\n Whether to return the squared magnitude or just the\n magnitude. Using squared magnitude can avoid extra calculations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Produces a visualization of audio data over time.", "type": "API"}, {"name": "tf.raw_ops.AudioSummary", "docs": "Outputs a `Summary` protocol buffer with audio.\n\n The summary has up to `max_outputs` summary values containing audio. The\n audio is built from `tensor` which must be 3-D with shape `[batch_size,\n frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are\n assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.\n\n The `tag` argument is a scalar `Tensor` of type `string`. It is used to\n build the `tag` of the summary values:\n\n * If `max_outputs` is 1, the summary value tag is '*tag*/audio'.\n * If `max_outputs` is greater than 1, the summary value tags are\n generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.\n\n Args:\n tag: A `Tensor` of type `string`.\n Scalar. Used to build the `tag` attribute of the summary values.\n tensor: A `Tensor` of type `float32`. 2-D of shape `[batch_size, frames]`.\n sample_rate: A `float`. The sample rate of the signal in hertz.\n max_outputs: An optional `int` that is `>= 1`. Defaults to `3`.\n Max number of batch elements to generate audio for.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with audio.", "type": "API"}, {"name": "tf.raw_ops.AudioSummaryV2", "docs": "Outputs a `Summary` protocol buffer with audio.\n\n The summary has up to `max_outputs` summary values containing audio. The\n audio is built from `tensor` which must be 3-D with shape `[batch_size,\n frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are\n assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.\n\n The `tag` argument is a scalar `Tensor` of type `string`. It is used to\n build the `tag` of the summary values:\n\n * If `max_outputs` is 1, the summary value tag is '*tag*/audio'.\n * If `max_outputs` is greater than 1, the summary value tags are\n generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.\n\n Args:\n tag: A `Tensor` of type `string`.\n Scalar. Used to build the `tag` attribute of the summary values.\n tensor: A `Tensor` of type `float32`. 2-D of shape `[batch_size, frames]`.\n sample_rate: A `Tensor` of type `float32`.\n The sample rate of the signal in hertz.\n max_outputs: An optional `int` that is `>= 1`. Defaults to `3`.\n Max number of batch elements to generate audio for.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with audio.", "type": "API"}, {"name": "tf.raw_ops.AutoShardDataset", "docs": "Creates a dataset that shards the input dataset.\n\n Creates a dataset that shards the input dataset by num_workers, returning a\n sharded dataset for the index-th worker. This attempts to automatically shard\n a dataset by examining the Dataset graph and inserting a shard op before the\n inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset).\n\n This dataset will throw a NotFound error if we cannot shard the dataset\n automatically.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n num_workers: A `Tensor` of type `int64`.\n A scalar representing the number of workers to distribute this dataset across.\n index: A `Tensor` of type `int64`.\n A scalar representing the index of the current worker out of num_workers.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n auto_shard_policy: An optional `int`. Defaults to `0`.\n num_replicas: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that shards the input dataset.", "type": "API"}, {"name": "tf.raw_ops.AvgPool", "docs": "Performs average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize`\n window in `value`.\n\n Args:\n value: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape `[batch, height, width, channels]`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the sliding window for each dimension of `value`.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of `value`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n ", "desc": "Performs average pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.AvgPool3D", "docs": "Performs 3D average pooling on the input.\n\n Each entry in `output` is the mean of the corresponding size `ksize` window in\n `value`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, channels]` tensor to pool over.\n ksize: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The size of the window for each dimension of\n the input tensor. Must have `ksize[0] = ksize[4] = 1`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs 3D average pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.AvgPool3DGrad", "docs": "Computes gradients of average pooling function.\n\n Args:\n orig_input_shape: A `Tensor` of type `int32`.\n The original input dimensions.\n grad: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Output backprop of shape `[batch, depth, rows, cols, channels]`.\n ksize: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The size of the window for each dimension of\n the input tensor. Must have `ksize[0] = ksize[4] = 1`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grad`.\n ", "desc": "Computes gradients of average pooling function.", "type": "API"}, {"name": "tf.raw_ops.AvgPoolGrad", "docs": "Computes gradients of the average pooling function.\n\n Args:\n orig_input_shape: A `Tensor` of type `int32`.\n 1-D. Shape of the original input to `avg_pool`.\n grad: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t.\n the output of `avg_pool`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the sliding window for each dimension of the input.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the input.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grad`.\n ", "desc": "Computes gradients of the average pooling function.", "type": "API"}, {"name": "tf.raw_ops.BandedTriangularSolve", "docs": "TODO: add doc.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n lower: An optional `bool`. Defaults to `True`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Barrier", "docs": "Defines a barrier that persists across different graph executions.\n\n A barrier represents a key-value map, where each key is a string, and\n each value is a tuple of tensors.\n\n At runtime, the barrier contains 'complete' and 'incomplete'\n elements. A complete element has defined tensors for all components of\n its value tuple, and may be accessed using BarrierTakeMany. An\n incomplete element has some undefined components in its value tuple,\n and may be updated using BarrierInsertMany.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. Each shape must be 1 in the\n first dimension. The length of this attr must be the same as the length of\n component_types.\n capacity: An optional `int`. Defaults to `-1`.\n The capacity of the barrier. The default capacity is MAX_INT32,\n which is the largest capacity of the underlying queue.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this barrier is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this barrier will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Defines a barrier that persists across different graph executions.", "type": "API"}, {"name": "tf.raw_ops.BarrierClose", "docs": "Closes the given barrier.\n\n This operation signals that no more new elements will be inserted in the\n given barrier. Subsequent InsertMany that try to introduce a new key will fail.\n Subsequent InsertMany operations that just add missing components to already\n existing elements will continue to succeed. Subsequent TakeMany operations will\n continue to succeed if sufficient completed elements remain in the barrier.\n Subsequent TakeMany operations that would block will fail immediately.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a barrier.\n cancel_pending_enqueues: An optional `bool`. Defaults to `False`.\n If true, all pending enqueue requests that are\n blocked on the barrier's queue will be canceled. InsertMany will fail, even\n if no new key is introduced.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Closes the given barrier.", "type": "API"}, {"name": "tf.raw_ops.BarrierIncompleteSize", "docs": "Computes the number of incomplete elements in the given barrier.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a barrier.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Computes the number of incomplete elements in the given barrier.", "type": "API"}, {"name": "tf.raw_ops.BarrierInsertMany", "docs": "For each key, assigns the respective value to the specified component.\n\n If a key is not found in the barrier, this operation will create a new\n incomplete element. If a key is found in the barrier, and the element\n already has a value at component_index, this operation will fail with\n INVALID_ARGUMENT, and leave the barrier in an undefined state.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a barrier.\n keys: A `Tensor` of type `string`.\n A one-dimensional tensor of keys, with length n.\n values: A `Tensor`.\n An any-dimensional tensor of values, which are associated with the\n respective keys. The 0th dimension must have length n.\n component_index: An `int`.\n The component of the barrier elements that is being assigned.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "For each key, assigns the respective value to the specified component.", "type": "API"}, {"name": "tf.raw_ops.BarrierReadySize", "docs": "Computes the number of complete elements in the given barrier.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a barrier.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Computes the number of complete elements in the given barrier.", "type": "API"}, {"name": "tf.raw_ops.BarrierTakeMany", "docs": "Takes the given number of completed elements from a barrier.\n\n This operation concatenates completed-element component tensors along\n the 0th dimension to make a single component tensor.\n\n Elements come out of the barrier when they are complete, and in the order\n in which they were placed into the barrier. The indices output provides\n information about the batch in which each element was originally inserted\n into the barrier.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a barrier.\n num_elements: A `Tensor` of type `int32`.\n A single-element tensor containing the number of elements to\n take.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n allow_small_batch: An optional `bool`. Defaults to `False`.\n Allow to return less than num_elements items if barrier is\n already closed.\n wait_for_incomplete: An optional `bool`. Defaults to `False`.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is empty, this operation will block for up to\n timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, keys, values).\n\n indices: A `Tensor` of type `int64`.\n keys: A `Tensor` of type `string`.\n values: A list of `Tensor` objects of type `component_types`.\n ", "desc": "Takes the given number of completed elements from a barrier.", "type": "API"}, {"name": "tf.raw_ops.Batch", "docs": "Batches all input tensors nondeterministically.\n\n When many instances of this Op are being run concurrently with the same\n container/shared_name in the same device, some will output zero-shaped Tensors\n and others will output Tensors of size up to max_batch_size.\n\n All Tensors in in_tensors are batched together (so, for example, labels and\n features should be batched with a single instance of this operation.\n\n Each invocation of batch emits an `id` scalar which will be used to identify\n this particular invocation when doing unbatch or its gradient.\n\n Each op which emits a non-empty batch will also emit a non-empty batch_index\n Tensor, which, is a [K, 3] matrix where each row contains the invocation's id,\n start, and length of elements of each set of Tensors present in batched_tensors.\n\n Batched tensors are concatenated along the first dimension, and all tensors in\n in_tensors must have the first dimension of the same size.\n\n in_tensors: The tensors to be batched.\n num_batch_threads: Number of scheduling threads for processing batches of work.\n Determines the number of batches processed in parallel.\n max_batch_size: Batch sizes will never be bigger than this.\n batch_timeout_micros: Maximum number of microseconds to wait before outputting\n an incomplete batch.\n allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does\n nothing. Otherwise, supplies a list of batch sizes, causing the op to pad\n batches up to one of those sizes. The entries must increase monotonically, and\n the final entry must equal max_batch_size.\n grad_timeout_micros: The timeout to use for the gradient. See Unbatch.\n batched_tensors: Either empty tensors or a batch of concatenated Tensors.\n batch_index: If out_tensors is non-empty, has information to invert it.\n container: Controls the scope of sharing of this batch.\n id: always contains a scalar with a unique ID for this invocation of Batch.\n shared_name: Concurrently running instances of batch in the same device with the\n same container and shared_name will batch their elements together. If left\n empty, the op name will be used as the shared name.\n T: the types of tensors to be batched.\n\n Args:\n in_tensors: A list of `Tensor` objects.\n num_batch_threads: An `int`.\n max_batch_size: An `int`.\n batch_timeout_micros: An `int`.\n grad_timeout_micros: An `int`.\n max_enqueued_batches: An optional `int`. Defaults to `10`.\n allowed_batch_sizes: An optional list of `ints`. Defaults to `[]`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n batching_queue: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (batched_tensors, batch_index, id).\n\n batched_tensors: A list of `Tensor` objects. Has the same type as `in_tensors`.\n batch_index: A `Tensor` of type `int64`.\n id: A `Tensor` of type `int64`.\n ", "desc": "Batches all input tensors nondeterministically.", "type": "API"}, {"name": "tf.raw_ops.BatchCholesky", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchCholeskyGrad", "docs": "TODO: add doc.\n\n Args:\n l: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n grad: A `Tensor`. Must have the same type as `l`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `l`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchDataset", "docs": "Creates a dataset that batches `batch_size` elements from `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches `batch_size` elements from `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.BatchDatasetV2", "docs": "Creates a dataset that batches `batch_size` elements from `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a batch.\n drop_remainder: A `Tensor` of type `bool`.\n A scalar representing whether the last batch should be dropped in case its size\n is smaller than desired.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n parallel_copy: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches `batch_size` elements from `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.BatchFFT", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchFFT2D", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchFFT3D", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchFunction", "docs": "Batches all the inputs tensors to the computation done by the function.\n\n So, for example, in the following code\n\n ```python\n\n # This input will be captured.\n y = tf.placeholder_with_default(1.0, shape=[])\n\n @tf.Defun(tf.float32)\n def computation(a):\n return tf.matmul(a, a) + y\n\n b = gen_batch_ops.batch_function(\n f=computation\n in_tensors=[a],\n captured_tensors=computation.captured_inputs,\n Tout=[o.type for o in computation.definition.signature.output_arg],\n num_batch_threads=1,\n max_batch_size=10,\n batch_timeout_micros=100000, # 100ms\n allowed_batch_sizes=[3, 10],\n batching_queue=\"\")\n ```\n\n If more than one session.run call is simultaneously trying to compute `b`\n the values of `a` will be gathered, non-deterministically concatenated\n along the first axis, and only one thread will run the computation.\n\n Assumes that all arguments of the function are Tensors which will be batched\n along their first dimension.\n\n Arguments that are captured, are not batched. The session.run call which does\n the concatenation, will use the values of the captured tensors available to it.\n Therefore, typical uses of captured tensors should involve values which remain\n unchanged across session.run calls. Inference is a good example of this.\n\n SparseTensor is not supported. The return value of the decorated function\n must be a Tensor or a list/tuple of Tensors.\n\n Args:\n in_tensors: A list of `Tensor` objects. The tensors to be batched.\n captured_tensors: A list of `Tensor` objects.\n The tensors which are captured in the function, and don't need\n to be batched.\n f: A function decorated with @Defun.\n num_batch_threads: An `int`.\n Number of scheduling threads for processing batches of work.\n Determines the number of batches processed in parallel.\n max_batch_size: An `int`. Batch sizes will never be bigger than this.\n batch_timeout_micros: An `int`.\n Maximum number of microseconds to wait before outputting\n an incomplete batch.\n Tout: A list of `tf.DTypes` that has length `>= 1`.\n the types of the output tensors.\n max_enqueued_batches: An optional `int`. Defaults to `10`.\n Maximum number of batches enqueued. Default: 10.\n allowed_batch_sizes: An optional list of `ints`. Defaults to `[]`.\n Optional list of allowed batch sizes. If left empty, does\n nothing. Otherwise, supplies a list of batch sizes, causing the op to pad\n batches up to one of those sizes. The entries must increase monotonically.\n If enable_large_batch_splitting is false (i.e., large-input-split is not\n enabled) the final entry must equal max_batch_size.\n container: An optional `string`. Defaults to `\"\"`.\n Controls the scope of sharing of this batch.\n shared_name: An optional `string`. Defaults to `\"\"`.\n Concurrently running instances of batch in the same device with the\n same container and shared_name will batch their elements together. If left\n empty, the op name will be used as the shared name.\n batching_queue: An optional `string`. Defaults to `\"\"`.\n enable_large_batch_splitting: An optional `bool`. Defaults to `False`.\n input with a large size (i.e., larger than the largest value of\n `allowed_batch_sizes`) will be splitted into multiple batches with batch size.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Batches all the inputs tensors to the computation done by the function.", "type": "API"}, {"name": "tf.raw_ops.BatchIFFT", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchIFFT2D", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchIFFT3D", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor` of type `complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `complex64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatMul", "docs": "Multiplies slices of two tensors in batches.\n\n Multiplies all slices of `Tensor` `x` and `y` (each slice can be\n viewed as an element of a batch), and arranges the individual results\n in a single output tensor of the same batch size. Each of the\n individual slices can optionally be adjointed (to adjoint a matrix\n means to transpose and conjugate it) before multiplication by setting\n the `adj_x` or `adj_y` flag to `True`, which are by default `False`.\n\n The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]`\n and `[..., r_y, c_y]`.\n\n The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where:\n\n r_o = c_x if adj_x else r_x\n c_o = r_y if adj_y else c_y\n\n It is computed as:\n\n output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n 2-D or higher with shape `[..., r_x, c_x]`.\n y: A `Tensor`. Must have the same type as `x`.\n 2-D or higher with shape `[..., r_y, c_y]`.\n adj_x: An optional `bool`. Defaults to `False`.\n If `True`, adjoint the slices of `x`. Defaults to `False`.\n adj_y: An optional `bool`. Defaults to `False`.\n If `True`, adjoint the slices of `y`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Multiplies slices of two tensors in batches.", "type": "API"}, {"name": "tf.raw_ops.BatchMatMulV2", "docs": "Multiplies slices of two tensors in batches.\n\n Multiplies all slices of `Tensor` `x` and `y` (each slice can be\n viewed as an element of a batch), and arranges the individual results\n in a single output tensor of the same batch size. Each of the\n individual slices can optionally be adjointed (to adjoint a matrix\n means to transpose and conjugate it) before multiplication by setting\n the `adj_x` or `adj_y` flag to `True`, which are by default `False`.\n\n The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]`\n and `[..., r_y, c_y]`.\n\n The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where:\n\n r_o = c_x if adj_x else r_x\n c_o = r_y if adj_y else c_y\n\n It is computed as:\n\n output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])\n\n *NOTE*: `BatchMatMulV2` supports broadcasting in the batch dimensions. More\n about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n 2-D or higher with shape `[..., r_x, c_x]`.\n y: A `Tensor`. Must have the same type as `x`.\n 2-D or higher with shape `[..., r_y, c_y]`.\n adj_x: An optional `bool`. Defaults to `False`.\n If `True`, adjoint the slices of `x`. Defaults to `False`.\n adj_y: An optional `bool`. Defaults to `False`.\n If `True`, adjoint the slices of `y`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Multiplies slices of two tensors in batches.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixBandPart", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`.\n num_lower: A `Tensor` of type `int64`.\n num_upper: A `Tensor` of type `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixDeterminant", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixDiag", "docs": "TODO: add doc.\n\n Args:\n diagonal: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixDiagPart", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixInverse", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixSetDiag", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`.\n diagonal: A `Tensor`. Must have the same type as `input`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixSolve", "docs": "TODO: add doc.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixSolveLs", "docs": "TODO: add doc.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n l2_regularizer: A `Tensor` of type `float64`.\n fast: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchMatrixTriangularSolve", "docs": "TODO: add doc.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n lower: An optional `bool`. Defaults to `True`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchNormWithGlobalNormalization", "docs": "Batch normalization.\n\n This op is deprecated. Prefer `tf.nn.batch_normalization`.\n\n Args:\n t: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A 4D input Tensor.\n m: A `Tensor`. Must have the same type as `t`.\n A 1D mean Tensor with size matching the last dimension of t.\n This is the first output from tf.nn.moments,\n or a saved moving average thereof.\n v: A `Tensor`. Must have the same type as `t`.\n A 1D variance Tensor with size matching the last dimension of t.\n This is the second output from tf.nn.moments,\n or a saved moving average thereof.\n beta: A `Tensor`. Must have the same type as `t`.\n A 1D beta Tensor with size matching the last dimension of t.\n An offset to be added to the normalized tensor.\n gamma: A `Tensor`. Must have the same type as `t`.\n A 1D gamma Tensor with size matching the last dimension of t.\n If \"scale_after_normalization\" is true, this tensor will be multiplied\n with the normalized tensor.\n variance_epsilon: A `float`. A small float number to avoid dividing by 0.\n scale_after_normalization: A `bool`.\n A bool indicating whether the resulted tensor\n needs to be multiplied with gamma.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.raw_ops.BatchNormWithGlobalNormalizationGrad", "docs": "Gradients for batch normalization.\n\n This op is deprecated. See `tf.nn.batch_normalization`.\n\n Args:\n t: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A 4D input Tensor.\n m: A `Tensor`. Must have the same type as `t`.\n A 1D mean Tensor with size matching the last dimension of t.\n This is the first output from tf.nn.moments,\n or a saved moving average thereof.\n v: A `Tensor`. Must have the same type as `t`.\n A 1D variance Tensor with size matching the last dimension of t.\n This is the second output from tf.nn.moments,\n or a saved moving average thereof.\n gamma: A `Tensor`. Must have the same type as `t`.\n A 1D gamma Tensor with size matching the last dimension of t.\n If \"scale_after_normalization\" is true, this Tensor will be multiplied\n with the normalized Tensor.\n backprop: A `Tensor`. Must have the same type as `t`. 4D backprop Tensor.\n variance_epsilon: A `float`. A small float number to avoid dividing by 0.\n scale_after_normalization: A `bool`.\n A bool indicating whether the resulted tensor\n needs to be multiplied with gamma.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (dx, dm, dv, db, dg).\n\n dx: A `Tensor`. Has the same type as `t`.\n dm: A `Tensor`. Has the same type as `t`.\n dv: A `Tensor`. Has the same type as `t`.\n db: A `Tensor`. Has the same type as `t`.\n dg: A `Tensor`. Has the same type as `t`.\n ", "desc": "Gradients for batch normalization.", "type": "API"}, {"name": "tf.raw_ops.BatchSelfAdjointEig", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchSelfAdjointEigV2", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n compute_v: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (e, v).\n\n e: A `Tensor`. Has the same type as `input`.\n v: A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchSvd", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.\n compute_uv: An optional `bool`. Defaults to `True`.\n full_matrices: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (s, u, v).\n\n s: A `Tensor`. Has the same type as `input`.\n u: A `Tensor`. Has the same type as `input`.\n v: A `Tensor`. Has the same type as `input`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BatchToSpace", "docs": "BatchToSpace for 4-D tensors of type T.\n\n This is a legacy version of the more general BatchToSpaceND.\n\n Rearranges (permutes) data from batch into blocks of spatial data, followed by\n cropping. This is the reverse transformation of SpaceToBatch. More specifically,\n this op outputs a copy of the input tensor where values from the `batch`\n dimension are moved in spatial blocks to the `height` and `width` dimensions,\n followed by cropping along the `height` and `width` dimensions.\n\n Args:\n input: A `Tensor`. 4-D tensor with shape\n `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size,\n depth]`. Note that the batch size of the input tensor must be divisible by\n `block_size * block_size`.\n crops: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies\n how many elements to crop from the intermediate result across the spatial\n dimensions as follows:\n\n crops = [[crop_top, crop_bottom], [crop_left, crop_right]]\n block_size: An `int` that is `>= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for 4-D tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.BatchToSpaceND", "docs": "BatchToSpace for N-D tensors of type T.\n\n This operation reshapes the \"batch\" dimension 0 into `M + 1` dimensions of shape\n `block_shape + [batch]`, interleaves these blocks back into the grid defined by\n the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as\n the input. The spatial dimensions of this intermediate result are then\n optionally cropped according to `crops` to produce the output. This is the\n reverse of SpaceToBatch. See below for a precise description.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has M dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n crops: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input\n dimension `i + 1`, which corresponds to spatial dimension `i`. It is\n required that\n `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.\n\n This operation is equivalent to the following steps:\n\n 1. Reshape `input` to `reshaped` of shape:\n [block_shape[0], ..., block_shape[M-1],\n batch / prod(block_shape),\n input_shape[1], ..., input_shape[N-1]]\n\n 2. Permute dimensions of `reshaped` to produce `permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1], block_shape[0],\n ...,\n input_shape[M], block_shape[M-1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n 3. Reshape `permuted` to produce `reshaped_permuted` of shape\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0],\n ...,\n input_shape[M] * block_shape[M-1],\n\n input_shape[M+1],\n ...,\n input_shape[N-1]]\n\n 4. Crop the start and end of dimensions `[1, ..., M]` of\n `reshaped_permuted` according to `crops` to produce the output of shape:\n [batch / prod(block_shape),\n\n input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],\n ...,\n input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],\n\n input_shape[M+1], ..., input_shape[N-1]]\n\n Some examples:\n\n (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n The output tensor has shape `[1, 2, 2, 3]` and value:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n The output tensor has shape `[1, 4, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and\n `crops = [[0, 0], [2, 0]]`:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n The output tensor has shape `[2, 2, 4, 1]` and value:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "BatchToSpace for N-D tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.BesselI0", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselI0e", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselI1", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselI1e", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselJ0", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselJ1", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselK0", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselK0e", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselK1", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselK1e", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselY0", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.BesselY1", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Betainc", "docs": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).\n\n The regularized incomplete beta integral is defined as:\n\n\n \\\\(I_x(a, b) = \\frac{B(x; a, b)}{B(a, b)}\\\\)\n\n where\n\n\n \\\\(B(x; a, b) = \\int_0^x t^{a-1} (1 - t)^{b-1} dt\\\\)\n\n\n is the incomplete beta function and \\\\(B(a, b)\\\\) is the *complete*\n beta function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n b: A `Tensor`. Must have the same type as `a`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the regularized incomplete beta integral \\\\(I_x(a, b)\\\\).", "type": "API"}, {"name": "tf.raw_ops.BiasAdd", "docs": "Adds `bias` to `value`.\n\n This is a special case of `tf.add` where `bias` is restricted to be 1-D.\n Broadcasting is supported, so `value` may have any number of dimensions.\n\n Args:\n value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Any number of dimensions.\n bias: A `Tensor`. Must have the same type as `value`.\n 1-D with size the last dimension of `value`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the bias tensor will be added to the last dimension\n of the value tensor.\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n The tensor will be added to \"in_channels\", the third-to-the-last\n dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n ", "desc": "Adds `bias` to `value`.", "type": "API"}, {"name": "tf.raw_ops.BiasAddGrad", "docs": "The backward operation for \"BiasAdd\" on the \"bias\" tensor.\n\n It accumulates all the values from out_backprop into the feature dimension.\n For NHWC data format, the feature dimension is the last. For NCHW data format,\n the feature dimension is the third-to-last.\n\n Args:\n out_backprop: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Any number of dimensions.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the bias tensor will be added to the last dimension\n of the value tensor.\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n The tensor will be added to \"in_channels\", the third-to-the-last\n dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `out_backprop`.\n ", "desc": "The backward operation for \"BiasAdd\" on the \"bias\" tensor.", "type": "API"}, {"name": "tf.raw_ops.BiasAddV1", "docs": "Adds `bias` to `value`.\n\n This is a deprecated version of BiasAdd and will be soon removed.\n\n This is a special case of `tf.add` where `bias` is restricted to be 1-D.\n Broadcasting is supported, so `value` may have any number of dimensions.\n\n Args:\n value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Any number of dimensions.\n bias: A `Tensor`. Must have the same type as `value`.\n 1-D with size the last dimension of `value`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n ", "desc": "Adds `bias` to `value`.", "type": "API"}, {"name": "tf.raw_ops.Bincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n Outputs a vector with length `size` and the same dtype as `weights`. If\n `weights` are empty, then index `i` stores the number of times the value `i` is\n counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of\n the value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Values in `arr` outside of the range [0, size) are ignored.\n\n Args:\n arr: A `Tensor` of type `int32`. int32 `Tensor`.\n size: A `Tensor` of type `int32`. non-negative int32 scalar `Tensor`.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n is an int32, int64, float32, or float64 `Tensor` with the same\n shape as `arr`, or a length-0 `Tensor`, in which case it acts as all weights\n equal to 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.raw_ops.Bitcast", "docs": "Bitcasts a tensor from one type to another without copying data.\n\n Given a tensor `input`, this operation returns a tensor that has the same buffer\n data as `input` with datatype `type`.\n\n If the input datatype `T` is larger than the output datatype `type` then the\n shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].\n\n If `T` is smaller than `type`, the operator requires that the rightmost\n dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from\n [..., sizeof(`type`)/sizeof(`T`)] to [...].\n\n tf.bitcast() and tf.cast() work differently when real dtype is casted as a complex dtype\n (e.g. tf.complex64 or tf.complex128) as tf.cast() make imaginary part 0 while tf.bitcast()\n gives module error.\n For example,\n\n Example 1:\n\n >>> a = [1., 2., 3.]\n >>> equality_bitcast = tf.bitcast(a, tf.complex128)\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]\n >>> equality_cast = tf.cast(a, tf.complex128)\n >>> print(equality_cast)\n tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)\n\n Example 2:\n\n >>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)\n \n\n Example 3:\n\n >>> x = [1., 2., 3.]\n >>> y = [0., 2., 3.]\n >>> equality= tf.equal(x,y)\n >>> equality_cast = tf.cast(equality,tf.float32)\n >>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8)\n >>> print(equality)\n tf.Tensor([False True True], shape=(3,), dtype=bool)\n >>> print(equality_cast)\n tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)\n >>> print(equality_bitcast)\n tf.Tensor(\n [[ 0 0 0 0]\n [ 0 0 128 63]\n [ 0 0 128 63]], shape=(3, 4), dtype=uint8)\n\n *NOTE*: Bitcast is implemented as a low-level cast, so machines with different\n endian orderings will give different results.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `uint32`, `uint64`, `int8`, `int16`, `complex64`, `complex128`, `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.\n type: A `tf.DType` from: `tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `type`.\n ", "desc": "Bitcasts a tensor from one type to another without copying data.", "type": "API"}, {"name": "tf.raw_ops.BitwiseAnd", "docs": "Elementwise computes the bitwise AND of `x` and `y`.\n\n The result will have those bits set, that are set in both `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_and(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise AND of `x` and `y`.", "type": "API"}, {"name": "tf.raw_ops.BitwiseOr", "docs": "Elementwise computes the bitwise OR of `x` and `y`.\n\n The result will have those bits set, that are set in `x`, `y` or both. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_or(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise OR of `x` and `y`.", "type": "API"}, {"name": "tf.raw_ops.BitwiseXor", "docs": "Elementwise computes the bitwise XOR of `x` and `y`.\n\n The result will have those bits set, that are different in `x` and `y`. The\n computation is performed on the underlying representations of `x` and `y`.\n\n For example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,\n tf.uint8, tf.uint16, tf.uint32, tf.uint64]\n\n for dtype in dtype_list:\n lhs = tf.constant([0, 5, 3, 14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n exp = tf.constant([5, 5, 4, 5], dtype=tf.float32)\n\n res = bitwise_ops.bitwise_xor(lhs, rhs)\n tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise XOR of `x` and `y`.", "type": "API"}, {"name": "tf.raw_ops.BlockLSTM", "docs": "Computes the LSTM cell forward propagation for all the time steps.\n\n This is equivalent to applying LSTMBlockCell in a loop, like so:\n\n ```python\n for x1 in unpack(x):\n i1, cs1, f1, o1, ci1, co1, h1 = LSTMBlock(\n x1, cs_prev, h_prev, w, wci, wcf, wco, b)\n cs_prev = cs1\n h_prev = h1\n i.append(i1)\n cs.append(cs1)\n f.append(f1)\n o.append(o1)\n ci.append(ci1)\n co.append(co1)\n h.append(h1)\n return pack(i), pack(cs), pack(f), pack(o), pack(ci), pack(ch), pack(h)\n ```\n\n Args:\n seq_len_max: A `Tensor` of type `int64`.\n Maximum time length actually used by this input. Outputs are padded\n with zeros beyond this length.\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The sequence input to the LSTM, shape (timelen, batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n Value of the initial cell state.\n h_prev: A `Tensor`. Must have the same type as `x`.\n Initial output of cell (to be used for peephole).\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n forget_bias: An optional `float`. Defaults to `1`. The forget gate bias.\n cell_clip: An optional `float`. Defaults to `3`.\n Value to clip the 'cs' value to.\n use_peephole: An optional `bool`. Defaults to `False`.\n Whether to use peephole weights.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (i, cs, f, o, ci, co, h).\n\n i: A `Tensor`. Has the same type as `x`.\n cs: A `Tensor`. Has the same type as `x`.\n f: A `Tensor`. Has the same type as `x`.\n o: A `Tensor`. Has the same type as `x`.\n ci: A `Tensor`. Has the same type as `x`.\n co: A `Tensor`. Has the same type as `x`.\n h: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell forward propagation for all the time steps.", "type": "API"}, {"name": "tf.raw_ops.BlockLSTMGrad", "docs": "Computes the LSTM cell backward propagation for the entire time sequence.\n\n This implementation is to be used in conjunction of LSTMBlock.\n\n Args:\n seq_len_max: A `Tensor` of type `int64`.\n Maximum time length actually used by this input. Outputs are padded\n with zeros beyond this length.\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The sequence input to the LSTM, shape (timelen, batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n Value of the initial cell state.\n h_prev: A `Tensor`. Must have the same type as `x`.\n Initial output of cell (to be used for peephole).\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n i: A `Tensor`. Must have the same type as `x`.\n The input gate over the whole time sequence.\n cs: A `Tensor`. Must have the same type as `x`.\n The cell state before the tanh over the whole time sequence.\n f: A `Tensor`. Must have the same type as `x`.\n The forget gate over the whole time sequence.\n o: A `Tensor`. Must have the same type as `x`.\n The output gate over the whole time sequence.\n ci: A `Tensor`. Must have the same type as `x`.\n The cell input over the whole time sequence.\n co: A `Tensor`. Must have the same type as `x`.\n The cell after the tanh over the whole time sequence.\n h: A `Tensor`. Must have the same type as `x`.\n The output h vector over the whole time sequence.\n cs_grad: A `Tensor`. Must have the same type as `x`.\n The current gradient of cs.\n h_grad: A `Tensor`. Must have the same type as `x`.\n The gradient of h vector.\n use_peephole: A `bool`. Whether to use peephole weights.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (x_grad, cs_prev_grad, h_prev_grad, w_grad, wci_grad, wcf_grad, wco_grad, b_grad).\n\n x_grad: A `Tensor`. Has the same type as `x`.\n cs_prev_grad: A `Tensor`. Has the same type as `x`.\n h_prev_grad: A `Tensor`. Has the same type as `x`.\n w_grad: A `Tensor`. Has the same type as `x`.\n wci_grad: A `Tensor`. Has the same type as `x`.\n wcf_grad: A `Tensor`. Has the same type as `x`.\n wco_grad: A `Tensor`. Has the same type as `x`.\n b_grad: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell backward propagation for the entire time sequence.", "type": "API"}, {"name": "tf.raw_ops.BlockLSTMGradV2", "docs": "Computes the LSTM cell backward propagation for the entire time sequence.\n\n This implementation is to be used in conjunction of BlockLSTMV2.\n\n Args:\n seq_len_max: A `Tensor` of type `int64`.\n Maximum time length actually used by this input. Outputs are padded\n with zeros beyond this length.\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The sequence input to the LSTM, shape (timelen, batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n Value of the initial cell state.\n h_prev: A `Tensor`. Must have the same type as `x`.\n Initial output of cell (to be used for peephole).\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n i: A `Tensor`. Must have the same type as `x`.\n The input gate over the whole time sequence.\n cs: A `Tensor`. Must have the same type as `x`.\n The cell state before the tanh over the whole time sequence.\n f: A `Tensor`. Must have the same type as `x`.\n The forget gate over the whole time sequence.\n o: A `Tensor`. Must have the same type as `x`.\n The output gate over the whole time sequence.\n ci: A `Tensor`. Must have the same type as `x`.\n The cell input over the whole time sequence.\n co: A `Tensor`. Must have the same type as `x`.\n The cell after the tanh over the whole time sequence.\n h: A `Tensor`. Must have the same type as `x`.\n The output h vector over the whole time sequence.\n cs_grad: A `Tensor`. Must have the same type as `x`.\n The current gradient of cs.\n h_grad: A `Tensor`. Must have the same type as `x`.\n The gradient of h vector.\n use_peephole: A `bool`. Whether to use peephole weights.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (x_grad, cs_prev_grad, h_prev_grad, w_grad, wci_grad, wcf_grad, wco_grad, b_grad).\n\n x_grad: A `Tensor`. Has the same type as `x`.\n cs_prev_grad: A `Tensor`. Has the same type as `x`.\n h_prev_grad: A `Tensor`. Has the same type as `x`.\n w_grad: A `Tensor`. Has the same type as `x`.\n wci_grad: A `Tensor`. Has the same type as `x`.\n wcf_grad: A `Tensor`. Has the same type as `x`.\n wco_grad: A `Tensor`. Has the same type as `x`.\n b_grad: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell backward propagation for the entire time sequence.", "type": "API"}, {"name": "tf.raw_ops.BlockLSTMV2", "docs": "Computes the LSTM cell forward propagation for all the time steps.\n\n This is equivalent to applying LSTMBlockCell in a loop, like so:\n\n ```python\n for x1 in unpack(x):\n i1, cs1, f1, o1, ci1, co1, h1 = LSTMBlock(\n x1, cs_prev, h_prev, w, wci, wcf, wco, b)\n cs_prev = cs1\n h_prev = h1\n i.append(i1)\n cs.append(cs1)\n f.append(f1)\n o.append(o1)\n ci.append(ci1)\n co.append(co1)\n h.append(h1)\n return pack(i), pack(cs), pack(f), pack(o), pack(ci), pack(ch), pack(h)\n\n Note that unlike LSTMBlockCell (and BlockLSTM) which uses ICFO gate layout,\n this op uses IFCO. So in order for the following snippet to be equivalent\n all gate-related outputs should be reordered.\n ```\n\n Args:\n seq_len_max: A `Tensor` of type `int64`.\n Maximum time length actually used by this input. Outputs are padded\n with zeros beyond this length.\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The sequence input to the LSTM, shape (timelen, batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n Value of the initial cell state.\n h_prev: A `Tensor`. Must have the same type as `x`.\n Initial output of cell (to be used for peephole).\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n cell_clip: An optional `float`. Defaults to `0`.\n Value to clip the 'cs' value to.\n use_peephole: An optional `bool`. Defaults to `False`.\n Whether to use peephole weights.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (i, cs, f, o, ci, co, h).\n\n i: A `Tensor`. Has the same type as `x`.\n cs: A `Tensor`. Has the same type as `x`.\n f: A `Tensor`. Has the same type as `x`.\n o: A `Tensor`. Has the same type as `x`.\n ci: A `Tensor`. Has the same type as `x`.\n co: A `Tensor`. Has the same type as `x`.\n h: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell forward propagation for all the time steps.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesAggregateStats", "docs": "Aggregates the summary of accumulated stats for the batch.\n\n The summary stats contains gradients and hessians accumulated for each node, feature dimension id and bucket.\n\n Args:\n node_ids: A `Tensor` of type `int32`.\n int32; Rank 1 Tensor containing node ids for each example, shape [batch_size].\n gradients: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[batch_size, logits_dimension]) with gradients for each example.\n hessians: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[batch_size, hessian_dimension]) with hessians for each example.\n feature: A `Tensor` of type `int32`.\n int32; Rank 2 feature Tensors (shape=[batch_size, feature_dimension]).\n max_splits: An `int` that is `>= 1`.\n int; the maximum number of splits possible in the whole tree.\n num_buckets: An `int` that is `>= 1`.\n int; equals to the maximum possible value of bucketized feature.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Aggregates the summary of accumulated stats for the batch.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesBucketize", "docs": "Bucketize each feature based on bucket boundaries.\n\n An op that returns a list of float tensors, where each tensor represents the\n bucketized values for a single feature.\n\n Args:\n float_values: A list of `Tensor` objects with type `float32`.\n float; List of Rank 1 Tensor each containing float values for a single feature.\n bucket_boundaries: A list with the same length as `float_values` of `Tensor` objects with type `float32`.\n float; List of Rank 1 Tensors each containing the bucket boundaries for a single\n feature.\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `float_values` of `Tensor` objects with type `int32`.\n ", "desc": "Bucketize each feature based on bucket boundaries.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCalculateBestFeatureSplit", "docs": "Calculates gains for each feature and returns the best possible split information for the feature.\n\n The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.\n\n It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.\n\n In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).\n\n The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature.\n\n Args:\n node_id_range: A `Tensor` of type `int32`.\n A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive).\n stats_summary: A `Tensor` of type `float32`.\n A Rank 4 tensor (#shape=[max_splits, feature_dims, bucket, stats_dims]) for accumulated stats summary (gradient/hessian) per node, per dimension, per buckets for each feature.\n The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used.\n l1: A `Tensor` of type `float32`.\n l1 regularization factor on leaf weights, per instance based.\n l2: A `Tensor` of type `float32`.\n l2 regularization factor on leaf weights, per instance based.\n tree_complexity: A `Tensor` of type `float32`.\n adjustment to the gain, per leaf based.\n min_node_weight: A `Tensor` of type `float32`.\n minimum avg of hessians in a node before required for the node to be considered for splitting.\n logits_dimension: An `int` that is `>= 1`.\n The dimension of logit, i.e., number of classes.\n split_type: An optional `string` from: `\"inequality\", \"equality\"`. Defaults to `\"inequality\"`.\n A string indicating if this Op should perform inequality split or equality split.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (node_ids, gains, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions).\n\n node_ids: A `Tensor` of type `int32`.\n gains: A `Tensor` of type `float32`.\n feature_dimensions: A `Tensor` of type `int32`.\n thresholds: A `Tensor` of type `int32`.\n left_node_contribs: A `Tensor` of type `float32`.\n right_node_contribs: A `Tensor` of type `float32`.\n split_with_default_directions: A `Tensor` of type `string`.\n ", "desc": "Calculates gains for each feature and returns the best possible split information for the feature.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCalculateBestFeatureSplitV2", "docs": "Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node.\n\n The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.\n\n It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.\n\n In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).\n\n The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature.\n\n Args:\n node_id_range: A `Tensor` of type `int32`.\n A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive).\n stats_summaries_list: A list of at least 1 `Tensor` objects with type `float32`.\n A list of Rank 4 tensor (#shape=[max_splits, feature_dims, bucket, stats_dims]) for accumulated stats summary (gradient/hessian) per node, per dimension, per buckets for each feature.\n The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used.\n split_types: A `Tensor` of type `string`.\n A Rank 1 tensor indicating if this Op should perform inequality split or equality split per feature.\n candidate_feature_ids: A `Tensor` of type `int32`.\n Rank 1 tensor with ids for each feature. This is the real id of the feature.\n l1: A `Tensor` of type `float32`.\n l1 regularization factor on leaf weights, per instance based.\n l2: A `Tensor` of type `float32`.\n l2 regularization factor on leaf weights, per instance based.\n tree_complexity: A `Tensor` of type `float32`.\n adjustment to the gain, per leaf based.\n min_node_weight: A `Tensor` of type `float32`.\n minimum avg of hessians in a node before required for the node to be considered for splitting.\n logits_dimension: An `int` that is `>= 1`.\n The dimension of logit, i.e., number of classes.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (node_ids, gains, feature_ids, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions).\n\n node_ids: A `Tensor` of type `int32`.\n gains: A `Tensor` of type `float32`.\n feature_ids: A `Tensor` of type `int32`.\n feature_dimensions: A `Tensor` of type `int32`.\n thresholds: A `Tensor` of type `int32`.\n left_node_contribs: A `Tensor` of type `float32`.\n right_node_contribs: A `Tensor` of type `float32`.\n split_with_default_directions: A `Tensor` of type `string`.\n ", "desc": "Calculates gains for each feature and returns the best possible split information for each node. However, if no split is found, then no split information is returned for that node.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCalculateBestGainsPerFeature", "docs": "Calculates gains for each feature and returns the best possible split information for the feature.\n\n The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.\n\n It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.\n\n In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).\n\n The length of output lists are all of the same length, `num_features`.\n The output shapes are compatible in a way that the first dimension of all tensors of all lists are the same and equal to the number of possible split nodes for each feature.\n\n Args:\n node_id_range: A `Tensor` of type `int32`.\n A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive).\n stats_summary_list: A list of at least 1 `Tensor` objects with type `float32`.\n A list of Rank 3 tensor (#shape=[max_splits, bucket, 2]) for accumulated stats summary (gradient/hessian) per node per buckets for each feature. The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used.\n l1: A `Tensor` of type `float32`.\n l1 regularization factor on leaf weights, per instance based.\n l2: A `Tensor` of type `float32`.\n l2 regularization factor on leaf weights, per instance based.\n tree_complexity: A `Tensor` of type `float32`.\n adjustment to the gain, per leaf based.\n min_node_weight: A `Tensor` of type `float32`.\n minimum avg of hessians in a node before required for the node to be considered for splitting.\n max_splits: An `int` that is `>= 1`.\n the number of nodes that can be split in the whole tree. Used as a dimension of output tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (node_ids_list, gains_list, thresholds_list, left_node_contribs_list, right_node_contribs_list).\n\n node_ids_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`.\n gains_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.\n thresholds_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`.\n left_node_contribs_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.\n right_node_contribs_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.\n ", "desc": "Calculates gains for each feature and returns the best possible split information for the feature.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCenterBias", "docs": "Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble.\n mean_gradients: A `Tensor` of type `float32`.\n A tensor with shape=[logits_dimension] with mean of gradients for a first node.\n mean_hessians: A `Tensor` of type `float32`.\n A tensor with shape=[logits_dimension] mean of hessians for a first node.\n l1: A `Tensor` of type `float32`.\n l1 regularization factor on leaf weights, per instance based.\n l2: A `Tensor` of type `float32`.\n l2 regularization factor on leaf weights, per instance based.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCreateEnsemble", "docs": "Creates a tree ensemble model and returns a handle to it.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble resource to be created.\n stamp_token: A `Tensor` of type `int64`.\n Token to use as the initial value of the resource stamp.\n tree_ensemble_serialized: A `Tensor` of type `string`.\n Serialized proto of the tree ensemble.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Creates a tree ensemble model and returns a handle to it.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesCreateQuantileStreamResource", "docs": "Create the Resource for Quantile Streams.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource; Handle to quantile stream resource.\n epsilon: A `Tensor` of type `float32`.\n float; The required approximation error of the stream resource.\n num_streams: A `Tensor` of type `int64`.\n int; The number of streams managed by the resource that shares the same epsilon.\n max_elements: An optional `int`. Defaults to `1099511627776`.\n int; The maximum number of data points that can be fed to the stream.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Create the Resource for Quantile Streams.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesDeserializeEnsemble", "docs": "Deserializes a serialized tree ensemble config and replaces current tree\n\n ensemble.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble.\n stamp_token: A `Tensor` of type `int64`.\n Token to use as the new value of the resource stamp.\n tree_ensemble_serialized: A `Tensor` of type `string`.\n Serialized proto of the ensemble.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Deserializes a serialized tree ensemble config and replaces current tree", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesEnsembleResourceHandleOp", "docs": "Creates a handle to a BoostedTreesEnsembleResource\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a handle to a BoostedTreesEnsembleResource", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesExampleDebugOutputs", "docs": "Debugging/model interpretability outputs for each example.\n\n It traverses all the trees and computes debug metrics for individual examples,\n such as getting split feature ids and logits after each split along the decision\n path used to compute directional feature contributions.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n bucketized_features: A list of at least 1 `Tensor` objects with type `int32`.\n A list of rank 1 Tensors containing bucket id for each\n feature.\n logits_dimension: An `int`.\n scalar, dimension of the logits, to be used for constructing the protos in\n examples_debug_outputs_serialized.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Debugging/model interpretability outputs for each example.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesFlushQuantileSummaries", "docs": "Flush the quantile summaries from each quantile stream resource.\n\n An op that outputs a list of quantile summaries of a quantile stream resource.\n Each summary Tensor is rank 2, containing summaries (value, weight, min_rank,\n max_rank) for a single feature.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource handle referring to a QuantileStreamResource.\n num_features: An `int` that is `>= 0`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_features` `Tensor` objects with type `float32`.\n ", "desc": "Flush the quantile summaries from each quantile stream resource.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesGetEnsembleStates", "docs": "Retrieves the tree ensemble resource stamp token, number of trees and growing statistics.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (stamp_token, num_trees, num_finalized_trees, num_attempted_layers, last_layer_nodes_range).\n\n stamp_token: A `Tensor` of type `int64`.\n num_trees: A `Tensor` of type `int32`.\n num_finalized_trees: A `Tensor` of type `int32`.\n num_attempted_layers: A `Tensor` of type `int32`.\n last_layer_nodes_range: A `Tensor` of type `int32`.\n ", "desc": "Retrieves the tree ensemble resource stamp token, number of trees and growing statistics.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesMakeQuantileSummaries", "docs": "Makes the summary of quantiles for the batch.\n\n An op that takes a list of tensors (one tensor per feature) and outputs the\n quantile summaries for each tensor.\n\n Args:\n float_values: A list of `Tensor` objects with type `float32`.\n float; List of Rank 1 Tensors each containing values for a single feature.\n example_weights: A `Tensor` of type `float32`.\n float; Rank 1 Tensor with weights per instance.\n epsilon: A `Tensor` of type `float32`.\n float; The required maximum approximation error.\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `float_values` of `Tensor` objects with type `float32`.\n ", "desc": "Makes the summary of quantiles for the batch.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesMakeStatsSummary", "docs": "Makes the summary of accumulated stats for the batch.\n\n The summary stats contains gradients and hessians accumulated into the corresponding node and bucket for each example.\n\n Args:\n node_ids: A `Tensor` of type `int32`.\n int32 Rank 1 Tensor containing node ids, which each example falls into for the requested layer.\n gradients: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[#examples, 1]) for gradients.\n hessians: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[#examples, 1]) for hessians.\n bucketized_features_list: A list of at least 1 `Tensor` objects with type `int32`.\n int32 list of Rank 1 Tensors, each containing the bucketized feature (for each feature column).\n max_splits: An `int` that is `>= 1`.\n int; the maximum number of splits possible in the whole tree.\n num_buckets: An `int` that is `>= 1`.\n int; equals to the maximum possible value of bucketized feature.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Makes the summary of accumulated stats for the batch.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesPredict", "docs": "Runs multiple additive regression ensemble predictors on input instances and\n\n computes the logits. It is designed to be used during prediction.\n It traverses all the trees and calculates the final score for each instance.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n bucketized_features: A list of at least 1 `Tensor` objects with type `int32`.\n A list of rank 1 Tensors containing bucket id for each\n feature.\n logits_dimension: An `int`.\n scalar, dimension of the logits, to be used for partial logits\n shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Runs multiple additive regression ensemble predictors on input instances and", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesQuantileStreamResourceAddSummaries", "docs": "Add the quantile summaries to each quantile stream resource.\n\n An op that adds a list of quantile summaries to a quantile stream resource. Each\n summary Tensor is rank 2, containing summaries (value, weight, min_rank, max_rank)\n for a single feature.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource handle referring to a QuantileStreamResource.\n summaries: A list of `Tensor` objects with type `float32`.\n string; List of Rank 2 Tensor each containing the summaries for a single feature.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Add the quantile summaries to each quantile stream resource.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesQuantileStreamResourceDeserialize", "docs": "Deserialize bucket boundaries and ready flag into current QuantileAccumulator.\n\n An op that deserializes bucket boundaries and are boundaries ready flag into current QuantileAccumulator.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource handle referring to a QuantileStreamResource.\n bucket_boundaries: A list of at least 1 `Tensor` objects with type `float32`.\n float; List of Rank 1 Tensors each containing the bucket boundaries for a feature.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Deserialize bucket boundaries and ready flag into current QuantileAccumulator.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesQuantileStreamResourceFlush", "docs": "Flush the summaries for a quantile stream resource.\n\n An op that flushes the summaries for a quantile stream resource.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource handle referring to a QuantileStreamResource.\n num_buckets: A `Tensor` of type `int64`.\n int; approximate number of buckets unless using generate_quantiles.\n generate_quantiles: An optional `bool`. Defaults to `False`.\n bool; If True, the output will be the num_quantiles for each stream where the ith\n entry is the ith quantile of the input with an approximation error of epsilon.\n Duplicate values may be present.\n If False, the output will be the points in the histogram that we got which roughly\n translates to 1/epsilon boundaries and without any duplicates.\n Default to False.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Flush the summaries for a quantile stream resource.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesQuantileStreamResourceGetBucketBoundaries", "docs": "Generate the bucket boundaries for each feature based on accumulated summaries.\n\n An op that returns a list of float tensors for a quantile stream resource. Each\n tensor is Rank 1 containing bucket boundaries for a single feature.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource handle referring to a QuantileStreamResource.\n num_features: An `int` that is `>= 0`.\n inferred int; number of features to get bucket boundaries for.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_features` `Tensor` objects with type `float32`.\n ", "desc": "Generate the bucket boundaries for each feature based on accumulated summaries.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesQuantileStreamResourceHandleOp", "docs": "Creates a handle to a BoostedTreesQuantileStreamResource.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a handle to a BoostedTreesQuantileStreamResource.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesSerializeEnsemble", "docs": "Serializes the tree ensemble to a proto.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (stamp_token, tree_ensemble_serialized).\n\n stamp_token: A `Tensor` of type `int64`.\n tree_ensemble_serialized: A `Tensor` of type `string`.\n ", "desc": "Serializes the tree ensemble to a proto.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesSparseAggregateStats", "docs": "Aggregates the summary of accumulated stats for the batch.\n\n The summary stats contains gradients and hessians accumulated for each node, bucket and dimension id.\n\n Args:\n node_ids: A `Tensor` of type `int32`.\n int32; Rank 1 Tensor containing node ids for each example, shape [batch_size].\n gradients: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[batch_size, logits_dimension]) with gradients for each example.\n hessians: A `Tensor` of type `float32`.\n float32; Rank 2 Tensor (shape=[batch_size, hessian_dimension]) with hessians for each example.\n feature_indices: A `Tensor` of type `int32`.\n int32; Rank 2 indices of feature sparse Tensors (shape=[number of sparse entries, 2]).\n Number of sparse entries across all instances from the batch. The first value is\n the index of the instance, the second is dimension of the feature. The second axis\n can only have 2 values, i.e., the input dense version of Tensor can only be matrix.\n feature_values: A `Tensor` of type `int32`.\n int32; Rank 1 values of feature sparse Tensors (shape=[number of sparse entries]).\n Number of sparse entries across all instances from the batch. The first value is\n the index of the instance, the second is dimension of the feature.\n feature_shape: A `Tensor` of type `int32`.\n int32; Rank 1 dense shape of feature sparse Tensors (shape=[2]).\n The first axis can only have 2 values, [batch_size, feature_dimension].\n max_splits: An `int` that is `>= 1`.\n int; the maximum number of splits possible in the whole tree.\n num_buckets: An `int` that is `>= 1`.\n int; equals to the maximum possible value of bucketized feature + 1.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (stats_summary_indices, stats_summary_values, stats_summary_shape).\n\n stats_summary_indices: A `Tensor` of type `int32`.\n stats_summary_values: A `Tensor` of type `float32`.\n stats_summary_shape: A `Tensor` of type `int32`.\n ", "desc": "Aggregates the summary of accumulated stats for the batch.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesSparseCalculateBestFeatureSplit", "docs": "Calculates gains for each feature and returns the best possible split information for the feature.\n\n The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.\n\n It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.\n\n In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).\n\n The output shapes are compatible in a way that the first dimension of all tensors are the same and equal to the number of possible split nodes for each feature.\n\n Args:\n node_id_range: A `Tensor` of type `int32`.\n A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive).\n stats_summary_indices: A `Tensor` of type `int32`.\n A Rank 2 int64 tensor of dense shape [N, 4] (N specifies the number of non-zero values) for accumulated stats summary (gradient/hessian) per node per bucket for each feature. The second dimension contains node id, feature dimension, bucket id, and stats dim.\n stats dim is the sum of logits dimension and hessian dimension, hessian dimension can either be logits dimension if diagonal hessian is used, or logits dimension^2 if full hessian is used.\n stats_summary_values: A `Tensor` of type `float32`.\n A Rank 1 float tensor of dense shape [N] (N specifies the number of non-zero values), which supplies the values for each element in summary_indices.\n stats_summary_shape: A `Tensor` of type `int32`.\n A Rank 1 float tensor of dense shape [4], which specifies the dense shape of the sparse tensor, which is [num tree nodes, feature dimensions, num buckets, stats dim].\n l1: A `Tensor` of type `float32`.\n l1 regularization factor on leaf weights, per instance based.\n l2: A `Tensor` of type `float32`.\n l2 regularization factor on leaf weights, per instance based.\n tree_complexity: A `Tensor` of type `float32`.\n adjustment to the gain, per leaf based.\n min_node_weight: A `Tensor` of type `float32`.\n minimum avg of hessians in a node before required for the node to be considered for splitting.\n logits_dimension: An `int` that is `>= 1`.\n The dimension of logit, i.e., number of classes.\n split_type: An optional `string` from: `\"inequality\"`. Defaults to `\"inequality\"`.\n A string indicating if this Op should perform inequality split or equality split.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (node_ids, gains, feature_dimensions, thresholds, left_node_contribs, right_node_contribs, split_with_default_directions).\n\n node_ids: A `Tensor` of type `int32`.\n gains: A `Tensor` of type `float32`.\n feature_dimensions: A `Tensor` of type `int32`.\n thresholds: A `Tensor` of type `int32`.\n left_node_contribs: A `Tensor` of type `float32`.\n right_node_contribs: A `Tensor` of type `float32`.\n split_with_default_directions: A `Tensor` of type `string`.\n ", "desc": "Calculates gains for each feature and returns the best possible split information for the feature.", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesTrainingPredict", "docs": "Runs multiple additive regression ensemble predictors on input instances and\n\n computes the update to cached logits. It is designed to be used during training.\n It traverses the trees starting from cached tree id and cached node id and\n calculates the updates to be pushed to the cache.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n cached_tree_ids: A `Tensor` of type `int32`.\n Rank 1 Tensor containing cached tree ids which is the starting\n tree of prediction.\n cached_node_ids: A `Tensor` of type `int32`.\n Rank 1 Tensor containing cached node id which is the starting\n node of prediction.\n bucketized_features: A list of at least 1 `Tensor` objects with type `int32`.\n A list of rank 1 Tensors containing bucket id for each\n feature.\n logits_dimension: An `int`.\n scalar, dimension of the logits, to be used for partial logits\n shape.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (partial_logits, tree_ids, node_ids).\n\n partial_logits: A `Tensor` of type `float32`.\n tree_ids: A `Tensor` of type `int32`.\n node_ids: A `Tensor` of type `int32`.\n ", "desc": "Runs multiple additive regression ensemble predictors on input instances and", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesUpdateEnsemble", "docs": "Updates the tree ensemble by either adding a layer to the last tree being grown\n\n or by starting a new tree.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the ensemble variable.\n feature_ids: A `Tensor` of type `int32`.\n Rank 1 tensor with ids for each feature. This is the real id of\n the feature that will be used in the split.\n node_ids: A list of `Tensor` objects with type `int32`.\n List of rank 1 tensors representing the nodes for which this feature\n has a split.\n gains: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.\n List of rank 1 tensors representing the gains for each of the feature's\n split.\n thresholds: A list with the same length as `node_ids` of `Tensor` objects with type `int32`.\n List of rank 1 tensors representing the thesholds for each of the\n feature's split.\n left_node_contribs: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.\n List of rank 2 tensors with left leaf contribs for each of\n the feature's splits. Will be added to the previous node values to constitute\n the values of the left nodes.\n right_node_contribs: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.\n List of rank 2 tensors with right leaf contribs for each\n of the feature's splits. Will be added to the previous node values to constitute\n the values of the right nodes.\n max_depth: A `Tensor` of type `int32`. Max depth of the tree to build.\n learning_rate: A `Tensor` of type `float32`.\n shrinkage const for each new tree.\n pruning_mode: An `int` that is `>= 0`.\n 0-No pruning, 1-Pre-pruning, 2-Post-pruning.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the tree ensemble by either adding a layer to the last tree being grown", "type": "API"}, {"name": "tf.raw_ops.BoostedTreesUpdateEnsembleV2", "docs": "Updates the tree ensemble by adding a layer to the last tree being grown\n\n or by starting a new tree.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the ensemble variable.\n feature_ids: A list of at least 1 `Tensor` objects with type `int32`.\n Rank 1 tensor with ids for each feature. This is the real id of\n the feature that will be used in the split.\n dimension_ids: A list of `Tensor` objects with type `int32`.\n List of rank 1 tensors representing the dimension in each feature.\n node_ids: A list with the same length as `dimension_ids` of `Tensor` objects with type `int32`.\n List of rank 1 tensors representing the nodes for which this feature\n has a split.\n gains: A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`.\n List of rank 1 tensors representing the gains for each of the feature's\n split.\n thresholds: A list with the same length as `dimension_ids` of `Tensor` objects with type `int32`.\n List of rank 1 tensors representing the thesholds for each of the\n feature's split.\n left_node_contribs: A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`.\n List of rank 2 tensors with left leaf contribs for each of\n the feature's splits. Will be added to the previous node values to constitute\n the values of the left nodes.\n right_node_contribs: A list with the same length as `dimension_ids` of `Tensor` objects with type `float32`.\n List of rank 2 tensors with right leaf contribs for each\n of the feature's splits. Will be added to the previous node values to constitute\n the values of the right nodes.\n split_types: A list with the same length as `dimension_ids` of `Tensor` objects with type `string`.\n List of rank 1 tensors representing the split type for each feature.\n max_depth: A `Tensor` of type `int32`. Max depth of the tree to build.\n learning_rate: A `Tensor` of type `float32`.\n shrinkage const for each new tree.\n pruning_mode: A `Tensor` of type `int32`.\n 0-No pruning, 1-Pre-pruning, 2-Post-pruning.\n logits_dimension: An optional `int`. Defaults to `1`.\n scalar, dimension of the logits\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the tree ensemble by adding a layer to the last tree being grown", "type": "API"}, {"name": "tf.raw_ops.BroadcastArgs", "docs": "Return the shape of s0 op s1 with broadcast.\n\n Given `s0` and `s1`, tensors that represent shapes, compute `r0`, the\n broadcasted shape. `s0`, `s1` and `r0` are all integer vectors.\n\n Args:\n s0: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n s1: A `Tensor`. Must have the same type as `s0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `s0`.\n ", "desc": "Return the shape of s0 op s1 with broadcast.", "type": "API"}, {"name": "tf.raw_ops.BroadcastGradientArgs", "docs": "Return the reduction indices for computing gradients of s0 op s1 with broadcast.\n\n This is typically used by gradient computations for a broadcasting operation.\n\n Args:\n s0: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n s1: A `Tensor`. Must have the same type as `s0`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (r0, r1).\n\n r0: A `Tensor`. Has the same type as `s0`.\n r1: A `Tensor`. Has the same type as `s0`.\n ", "desc": "Return the reduction indices for computing gradients of s0 op s1 with broadcast.", "type": "API"}, {"name": "tf.raw_ops.BroadcastTo", "docs": "Broadcast an array for a compatible shape.\n\n Broadcasting is the process of making arrays to have compatible shapes\n for arithmetic operations. Two shapes are compatible if for each\n dimension pair they are either equal or one of them is one. When trying\n to broadcast a Tensor to a shape, it starts with the trailing dimensions,\n and works its way forward.\n\n For example,\n\n >>> x = tf.constant([1, 2, 3])\n >>> y = tf.broadcast_to(x, [3, 3])\n >>> print(y)\n tf.Tensor(\n [[1 2 3]\n [1 2 3]\n [1 2 3]], shape=(3, 3), dtype=int32)\n\n In the above example, the input Tensor with the shape of `[1, 3]`\n is broadcasted to output Tensor with shape of `[3, 3]`.\n\n When doing broadcasted operations such as multiplying a tensor\n by a scalar, broadcasting (usually) confers some time or space\n benefit, as the broadcasted tensor is never materialized.\n\n However, `broadcast_to` does not carry with it any such benefits.\n The newly-created tensor takes the full memory of the broadcasted\n shape. (In a graph context, `broadcast_to` might be fused to\n subsequent operation and then be optimized away, however.)\n\n Args:\n input: A `Tensor`. A Tensor to broadcast.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 1-D `int` Tensor. The shape of the desired output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Broadcast an array for a compatible shape.", "type": "API"}, {"name": "tf.raw_ops.Bucketize", "docs": "Bucketizes 'input' based on 'boundaries'.\n\n For example, if the inputs are\n boundaries = [0, 10, 100]\n input = [[-5, 10000]\n [150, 10]\n [5, 100]]\n\n then the output will be\n output = [[0, 3]\n [3, 2]\n [1, 3]]\n\n Args:\n input: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n Any shape of Tensor contains with int or float type.\n boundaries: A list of `floats`.\n A sorted list of floats gives the boundary of the buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Bucketizes 'input' based on 'boundaries'.", "type": "API"}, {"name": "tf.raw_ops.BytesProducedStatsDataset", "docs": "Records the bytes size of each element of `input_dataset` in a StatsAggregator.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n tag: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Records the bytes size of each element of `input_dataset` in a StatsAggregator.", "type": "API"}, {"name": "tf.raw_ops.CacheDataset", "docs": "Creates a dataset that caches elements from `input_dataset`.\n\n A CacheDataset will iterate over the input_dataset, and store tensors. If the\n cache already exists, the cache will be used. If the cache is inappropriate\n (e.g. cannot be opened, contains tensors of the wrong shape / size), an error\n will the returned when used.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n filename: A `Tensor` of type `string`.\n A path on the filesystem where we should cache the dataset. Note: this\n will be a directory.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that caches elements from `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.CacheDatasetV2", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n filename: A `Tensor` of type `string`.\n cache: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Case", "docs": "An n-way switch statement which calls a single branch function.\n\n An n-way switch statement, implementing the following:\n ```\n switch (branch_index) {\n case 0:\n output = branches[0](input);\n break;\n case 1:\n output = branches[1](input);\n break;\n ...\n case [[nbranches-1]]:\n default:\n output = branches[nbranches-1](input);\n break;\n }\n ```\n\n Args:\n branch_index: A `Tensor` of type `int32`.\n The branch selector, an int32 Tensor.\n input: A list of `Tensor` objects.\n A list of input tensors passed to the branch function.\n Tout: A list of `tf.DTypes`. A list of output types.\n branches: A list of functions decorated with @Defun that has length `>= 1`.\n A list of functions each of which takes 'inputs' and returns a list of\n tensors, whose types are the same as what every other branch returns.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "An n-way switch statement which calls a single branch function.", "type": "API"}, {"name": "tf.raw_ops.Cast", "docs": "Cast x of type SrcT to y of DstT.\n\n Args:\n x: A `Tensor`.\n DstT: A `tf.DType`.\n Truncate: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `DstT`.\n ", "desc": "Cast x of type SrcT to y of DstT.", "type": "API"}, {"name": "tf.raw_ops.Ceil", "docs": "Returns element-wise smallest integer not less than x.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise smallest integer not less than x.", "type": "API"}, {"name": "tf.raw_ops.CheckNumerics", "docs": "Checks a tensor for NaN and Inf values.\n\n When run, reports an `InvalidArgument` error if `tensor` has any values\n that are not a number (NaN) or infinity (Inf). Otherwise, returns the input\n tensor.\n\n Example usage:\n\n ``` python\n a = tf.Variable(1.0)\n tf.debugging.check_numerics(a, message='')\n\n b = tf.Variable(np.nan)\n try:\n tf.debugging.check_numerics(b, message='Checking b')\n except Exception as e:\n assert \"Checking b : Tensor had NaN values\" in e.message\n\n c = tf.Variable(np.inf)\n try:\n tf.debugging.check_numerics(c, message='Checking c')\n except Exception as e:\n assert \"Checking c : Tensor had Inf values\" in e.message\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n message: A `string`. Prefix of the error message.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Checks a tensor for NaN and Inf values.", "type": "API"}, {"name": "tf.raw_ops.CheckNumericsV2", "docs": "Checks a tensor for NaN, -Inf and +Inf values.\n\n When run, reports an `InvalidArgument` error if `tensor` has any values\n that are not a number (NaN) or infinity (Inf). Otherwise, returns the input\n tensor. Unlike CheckNumerics (V1), CheckNumericsV2 distinguishes -Inf and +Inf\n in the errors it throws.\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n message: A `string`. Prefix of the error message.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Checks a tensor for NaN, -Inf and +Inf values.", "type": "API"}, {"name": "tf.raw_ops.Cholesky", "docs": "Computes the Cholesky decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be symmetric and positive definite. Only the lower-triangular\n part of the input will be used for this operation. The upper-triangular part\n will not be read.\n\n The output is a tensor of the same shape as the input\n containing the Cholesky decompositions for all input submatrices `[..., :, :]`.\n\n **Note**: The gradient computation on GPU is faster for large matrices but\n not for large batch dimensions when the submatrices are small. In this\n case it might be faster to use the CPU.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the Cholesky decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.raw_ops.CholeskyGrad", "docs": "Computes the reverse mode backpropagated gradient of the Cholesky algorithm.\n\n For an explanation see \"Differentiation of the Cholesky algorithm\" by\n Iain Murray http://arxiv.org/abs/1602.07527.\n\n Args:\n l: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n Output of batch Cholesky algorithm l = cholesky(A). Shape is `[..., M, M]`.\n Algorithm depends only on lower triangular part of the innermost matrices of\n this tensor.\n grad: A `Tensor`. Must have the same type as `l`.\n df/dl where f is some scalar function. Shape is `[..., M, M]`.\n Algorithm depends only on lower triangular part of the innermost matrices of\n this tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `l`.\n ", "desc": "Computes the reverse mode backpropagated gradient of the Cholesky algorithm.", "type": "API"}, {"name": "tf.raw_ops.ChooseFastestBranchDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n ratio_numerator: A `Tensor` of type `int64`.\n ratio_denominator: A `Tensor` of type `int64`.\n other_arguments: A list of `Tensor` objects.\n num_elements_per_branch: An `int` that is `>= 1`.\n branches: A list of functions decorated with @Defun that has length `>= 1`.\n other_arguments_lengths: A list of `ints` that has length `>= 1`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ChooseFastestDataset", "docs": "TODO: add doc.\n\n Args:\n input_datasets: A list of at least 2 `Tensor` objects with type `variant`.\n num_experiments: An `int`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ClipByValue", "docs": "Clips tensor values to a specified min and max.\n\n Given a tensor `t`, this operation returns a tensor of the same type and\n shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.\n Any values less than `clip_value_min` are set to `clip_value_min`. Any values\n greater than `clip_value_max` are set to `clip_value_max`.\n\n Args:\n t: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A `Tensor`.\n clip_value_min: A `Tensor`. Must have the same type as `t`.\n A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape\n as `t`. The minimum value to clip by.\n clip_value_max: A `Tensor`. Must have the same type as `t`.\n A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape\n as `t`. The maximum value to clip by.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "Clips tensor values to a specified min and max.", "type": "API"}, {"name": "tf.raw_ops.CloseSummaryWriter", "docs": "TODO: add doc.\n\n Args:\n writer: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.CollectiveBcastRecv", "docs": "Receives a tensor value broadcast from another device.\n\n Args:\n T: A `tf.DType` from: `tf.bool, tf.float32, tf.half, tf.float64, tf.int32, tf.int64`.\n group_size: An `int`.\n group_key: An `int`.\n instance_key: An `int`.\n shape: A `tf.TensorShape` or list of `ints`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `T`.\n ", "desc": "Receives a tensor value broadcast from another device.", "type": "API"}, {"name": "tf.raw_ops.CollectiveBcastRecvV2", "docs": "Receives a tensor value broadcast from another device.\n\n Args:\n group_size: A `Tensor` of type `int32`.\n group_key: A `Tensor` of type `int32`.\n instance_key: A `Tensor` of type `int32`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n T: A `tf.DType` from: `tf.bool, tf.float32, tf.half, tf.float64, tf.int32, tf.int64`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `T`.\n ", "desc": "Receives a tensor value broadcast from another device.", "type": "API"}, {"name": "tf.raw_ops.CollectiveBcastSend", "docs": "Broadcasts a tensor value to one or more other devices.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bool`, `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: An `int`.\n group_key: An `int`.\n instance_key: An `int`.\n shape: A `tf.TensorShape` or list of `ints`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Broadcasts a tensor value to one or more other devices.", "type": "API"}, {"name": "tf.raw_ops.CollectiveBcastSendV2", "docs": "Broadcasts a tensor value to one or more other devices.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bool`, `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: A `Tensor` of type `int32`.\n group_key: A `Tensor` of type `int32`.\n instance_key: A `Tensor` of type `int32`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Broadcasts a tensor value to one or more other devices.", "type": "API"}, {"name": "tf.raw_ops.CollectiveGather", "docs": "Mutually accumulates multiple tensors of identical type and shape.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: An `int`.\n group_key: An `int`.\n instance_key: An `int`.\n shape: A `tf.TensorShape` or list of `ints`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Mutually accumulates multiple tensors of identical type and shape.", "type": "API"}, {"name": "tf.raw_ops.CollectiveGatherV2", "docs": "Mutually accumulates multiple tensors of identical type and shape.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: A `Tensor` of type `int32`.\n group_key: A `Tensor` of type `int32`.\n instance_key: A `Tensor` of type `int32`.\n ordering_token: A list of `Tensor` objects with type `resource`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Mutually accumulates multiple tensors of identical type and shape.", "type": "API"}, {"name": "tf.raw_ops.CollectivePermute", "docs": "An Op to permute tensors across replicated TPU instances.\n\n Each instance supplies its own input.\n\n For example, suppose there are 4 TPU instances: `[A, B, C, D]`. Passing\n source_target_pairs=`[[0,1],[1,2],[2,3],[3,0]]` gets the outputs:\n `[D, A, B, C]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The local input to be permuted. Currently only supports float and\n bfloat16.\n source_target_pairs: A `Tensor` of type `int32`.\n A tensor with shape [num_pairs, 2].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "An Op to permute tensors across replicated TPU instances.", "type": "API"}, {"name": "tf.raw_ops.CollectiveReduce", "docs": "Mutually reduces multiple tensors of identical type and shape.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: An `int`.\n group_key: An `int`.\n instance_key: An `int`.\n merge_op: A `string` from: `\"Min\", \"Max\", \"Mul\", \"Add\"`.\n final_op: A `string` from: `\"Id\", \"Div\"`.\n subdiv_offsets: A list of `ints`.\n wait_for: An optional list of `ints`. Defaults to `[]`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Mutually reduces multiple tensors of identical type and shape.", "type": "API"}, {"name": "tf.raw_ops.CollectiveReduceV2", "docs": "Mutually reduces multiple tensors of identical type and shape.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `half`, `float64`, `int32`, `int64`.\n group_size: A `Tensor` of type `int32`.\n group_key: A `Tensor` of type `int32`.\n instance_key: A `Tensor` of type `int32`.\n ordering_token: A list of `Tensor` objects with type `resource`.\n merge_op: A `string` from: `\"Min\", \"Max\", \"Mul\", \"Add\"`.\n final_op: A `string` from: `\"Id\", \"Div\"`.\n communication_hint: An optional `string`. Defaults to `\"auto\"`.\n timeout_seconds: An optional `float`. Defaults to `0`.\n max_subdivs_per_device: An optional `int`. Defaults to `-1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Mutually reduces multiple tensors of identical type and shape.", "type": "API"}, {"name": "tf.raw_ops.CombinedNonMaxSuppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n This operation performs non_max_suppression on the inputs per batch, across\n all classes.\n Prunes away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Also note that\n this algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is the final boxes, scores and classes tensor\n returned after performing non_max_suppression.\n\n Args:\n boxes: A `Tensor` of type `float32`.\n A 4-D float tensor of shape `[batch_size, num_boxes, q, 4]`. If `q` is 1 then\n same boxes are used for all classes otherwise, if `q` is equal to number of\n classes, class-specific boxes are used.\n scores: A `Tensor` of type `float32`.\n A 3-D float tensor of shape `[batch_size, num_boxes, num_classes]`\n representing a single score corresponding to each box (each row of boxes).\n max_output_size_per_class: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression per class\n max_total_size: A `Tensor` of type `int32`.\n An int32 scalar representing the maximum number of boxes retained over all\n classes. Note that setting this value to a large number may result in OOM error\n depending on the system workload.\n iou_threshold: A `Tensor` of type `float32`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too much with respect to IOU.\n score_threshold: A `Tensor` of type `float32`.\n A 0-D float tensor representing the threshold for deciding when to remove\n boxes based on score.\n pad_per_class: An optional `bool`. Defaults to `False`.\n If false, the output nmsed boxes, scores and classes\n are padded/clipped to `max_total_size`. If true, the\n output nmsed boxes, scores and classes are padded to be of length\n `max_size_per_class`*`num_classes`, unless it exceeds `max_total_size` in\n which case it is clipped to `max_total_size`. Defaults to false.\n clip_boxes: An optional `bool`. Defaults to `True`.\n If true, assume the box coordinates are between [0, 1] and clip the output boxes\n if they fall beyond [0, 1]. If false, do not do clipping and output the box\n coordinates as it is.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (nmsed_boxes, nmsed_scores, nmsed_classes, valid_detections).\n\n nmsed_boxes: A `Tensor` of type `float32`.\n nmsed_scores: A `Tensor` of type `float32`.\n nmsed_classes: A `Tensor` of type `float32`.\n valid_detections: A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.Complex", "docs": "Converts two real numbers to a complex number.\n\n Given a tensor `real` representing the real part of a complex number, and a\n tensor `imag` representing the imaginary part of a complex number, this\n operation returns complex numbers elementwise of the form \\\\(a + bj\\\\), where\n *a* represents the `real` part and *b* represents the `imag` part.\n\n The input tensors `real` and `imag` must have the same shape.\n\n For example:\n\n ```\n # tensor 'real' is [2.25, 3.25]\n # tensor `imag` is [4.75, 5.75]\n tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]\n ```\n\n Args:\n real: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n imag: A `Tensor`. Must have the same type as `real`.\n Tout: An optional `tf.DType` from: `tf.complex64, tf.complex128`. Defaults to `tf.complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tout`.\n ", "desc": "Converts two real numbers to a complex number.", "type": "API"}, {"name": "tf.raw_ops.ComplexAbs", "docs": "Computes the complex absolute value of a tensor.\n\n Given a tensor `x` of complex numbers, this operation returns a tensor of type\n `float` or `double` that is the absolute value of each element in `x`. All\n elements in `x` must be complex numbers of the form \\\\(a + bj\\\\). The absolute\n value is computed as \\\\( \\sqrt{a^2 + b^2}\\\\).\n\n For example:\n\n >>> x = tf.complex(3.0, 4.0)\n >>> print((tf.raw_ops.ComplexAbs(x=x, Tout=tf.dtypes.float32, name=None)).numpy())\n 5.0\n\n Args:\n x: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Tout: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tout`.\n ", "desc": "Computes the complex absolute value of a tensor.", "type": "API"}, {"name": "tf.raw_ops.CompressElement", "docs": "Compresses a dataset element.\n\n Args:\n components: A list of `Tensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Compresses a dataset element.", "type": "API"}, {"name": "tf.raw_ops.ComputeAccidentalHits", "docs": "Computes the ids of the positions in sampled_candidates that match true_labels.\n\n When doing log-odds NCE, the result of this op should be passed through a\n SparseToDense op, then added to the logits of the sampled candidates. This has\n the effect of 'removing' the sampled labels that match the true labels by\n making the classifier sure that they are sampled labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n The true_classes output of UnpackSparseLabels.\n sampled_candidates: A `Tensor` of type `int64`.\n The sampled_candidates output of CandidateSampler.\n num_true: An `int`. Number of true labels per context.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, ids, weights).\n\n indices: A `Tensor` of type `int32`.\n ids: A `Tensor` of type `int64`.\n weights: A `Tensor` of type `float32`.\n ", "desc": "Computes the ids of the positions in sampled_candidates that match true_labels.", "type": "API"}, {"name": "tf.raw_ops.ComputeBatchSize", "docs": "Computes the static batch size of a dataset sans partial batches.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Computes the static batch size of a dataset sans partial batches.", "type": "API"}, {"name": "tf.raw_ops.Concat", "docs": "Concatenates tensors along one dimension.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [0, rank(values)).\n values: A list of at least 2 `Tensor` objects with the same type.\n The `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `values`.\n ", "desc": "Concatenates tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.ConcatenateDataset", "docs": "Creates a dataset that concatenates `input_dataset` with `another_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n another_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that concatenates `input_dataset` with `another_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ConcatOffset", "docs": "Computes offsets of concat inputs within its output.\n\n For example:\n\n ```\n # 'x' is [2, 2, 7]\n # 'y' is [2, 3, 7]\n # 'z' is [2, 5, 7]\n concat_offset(2, [x, y, z]) => [0, 0, 0], [0, 2, 0], [0, 5, 0]\n ```\n\n This is typically used by gradient computations for a concat operation.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n The dimension along which to concatenate.\n shape: A list of at least 2 `Tensor` objects with type `int32`.\n The `N` int32 vectors representing shape of tensors being concatenated.\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `shape` of `Tensor` objects with type `int32`.\n ", "desc": "Computes offsets of concat inputs within its output.", "type": "API"}, {"name": "tf.raw_ops.ConcatV2", "docs": "Concatenates tensors along one dimension.\n\n Args:\n values: A list of at least 2 `Tensor` objects with the same type.\n List of `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [-rank(values), rank(values)).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `values`.\n ", "desc": "Concatenates tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.ConditionalAccumulator", "docs": "A conditional accumulator for aggregating gradients.\n\n The accumulator accepts gradients marked with local_step greater or\n equal to the most recent global_step known to the accumulator. The\n average can be extracted from the accumulator, provided sufficient\n gradients have been accumulated. Extracting the average automatically\n resets the aggregate to 0, and increments the global_step recorded by\n the accumulator.\n\n Args:\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The type of the value being accumulated.\n shape: A `tf.TensorShape` or list of `ints`.\n The shape of the values, can be [], in which case shape is unknown.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator will be shared under the\n given name across multiple sessions.\n reduction_type: An optional `string` from: `\"MEAN\", \"SUM\"`. Defaults to `\"MEAN\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A conditional accumulator for aggregating gradients.", "type": "API"}, {"name": "tf.raw_ops.ConfigureDistributedTPU", "docs": "Sets up the centralized structures for a distributed TPU system.\n\n Args:\n embedding_config: An optional `string`. Defaults to `\"\"`.\n Reserved. Do not use.\n tpu_embedding_config: An optional `string`. Defaults to `\"\"`.\n Serialized tensorflow.tpu.TPUEmbeddingConfiguration that\n describes the embedding lookups of the program.\n is_global_init: An optional `bool`. Defaults to `False`.\n Reserved. Do not use.\n enable_whole_mesh_compilations: An optional `bool`. Defaults to `False`.\n compilation_failure_closes_chips: An optional `bool`. Defaults to `True`.\n tpu_cancellation_closes_chips: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Sets up the centralized structures for a distributed TPU system.", "type": "API"}, {"name": "tf.raw_ops.ConfigureTPUEmbedding", "docs": "Sets up TPUEmbedding in a distributed TPU system.\n\n Args:\n config: A `string`.\n Serialized tensorflow.tpu.TPUEmbeddingConfiguration that\n describes the embedding lookups of the program.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Sets up TPUEmbedding in a distributed TPU system.", "type": "API"}, {"name": "tf.raw_ops.Conj", "docs": "Returns the complex conjugate of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n complex numbers that are the complex conjugate of each element in `input`. The\n complex numbers in `input` must be of the form \\\\(a + bj\\\\), where *a* is the\n real part and *b* is the imaginary part.\n\n The complex conjugate returned by this operation is of the form \\\\(a - bj\\\\).\n\n For example:\n\n ```\n # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]\n tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`, `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the complex conjugate of a complex number.", "type": "API"}, {"name": "tf.raw_ops.ConjugateTranspose", "docs": "Shuffle dimensions of x according to a permutation and conjugate the result.\n\n The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy:\n `y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]`\n `y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])`\n\n Args:\n x: A `Tensor`.\n perm: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Shuffle dimensions of x according to a permutation and conjugate the result.", "type": "API"}, {"name": "tf.raw_ops.Const", "docs": "Returns a constant tensor.\n\n Args:\n value: A `tf.TensorProto`. Attr `value` is the tensor to return.\n dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Returns a constant tensor.", "type": "API"}, {"name": "tf.raw_ops.ConsumeMutexLock", "docs": "This op consumes a lock created by `MutexLock`.\n\n This op exists to consume a tensor created by `MutexLock` (other than\n direct control dependencies). It should be the only that consumes the tensor,\n and will raise an error if it is not. Its only purpose is to keep the\n mutex lock tensor alive until it is consumed by this op.\n\n **NOTE**: This operation must run on the same device as its input. This may\n be enforced via the `colocate_with` mechanism.\n\n Args:\n mutex_lock: A `Tensor` of type `variant`.\n A tensor returned by `MutexLock`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "This op consumes a lock created by `MutexLock`.", "type": "API"}, {"name": "tf.raw_ops.ControlTrigger", "docs": "Does nothing. Serves as a control trigger for scheduling.\n\n Only useful as a placeholder for control edges.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Does nothing. Serves as a control trigger for scheduling.", "type": "API"}, {"name": "tf.raw_ops.Conv2D", "docs": "Computes a 2-D convolution given 4-D `input` and `filter` tensors.\n\n Given an input tensor of shape `[batch, in_height, in_width, in_channels]`\n and a filter / kernel tensor of shape\n `[filter_height, filter_width, in_channels, out_channels]`, this op\n performs the following:\n\n 1. Flattens the filter to a 2-D matrix with shape\n `[filter_height * filter_width * in_channels, output_channels]`.\n 2. Extracts image patches from the input tensor to form a *virtual*\n tensor of shape `[batch, out_height, out_width,\n filter_height * filter_width * in_channels]`.\n 3. For each patch, right-multiplies the filter matrix and the image patch\n vector.\n\n In detail, with the default NHWC format,\n\n output[b, i, j, k] =\n sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *\n filter[di, dj, q, k]\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertices strides, `strides = [1, stride, stride, 1]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`.\n A 4-D tensor. The dimension order is interpreted according to the value\n of `data_format`, see below for details.\n filter: A `Tensor`. Must have the same type as `input`.\n A 4-D tensor of shape\n `[filter_height, filter_width, in_channels, out_channels]`\n strides: A list of `ints`.\n 1-D tensor of length 4. The stride of the sliding window for each\n dimension of `input`. The dimension order is determined by the value of\n `data_format`, see below for details.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n If `padding` is `\"EXPLICIT\"`, the list of explicit padding amounts. For the ith\n dimension, the amount of padding inserted before and after the dimension is\n `explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If\n `padding` is not `\"EXPLICIT\"`, `explicit_paddings` must be empty.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, height, width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 2-D convolution given 4-D `input` and `filter` tensors.", "type": "API"}, {"name": "tf.raw_ops.Conv2DBackpropFilter", "docs": "Computes the gradients of convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape `[batch, in_height, in_width, in_channels]`.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 4-D\n `[filter_height, filter_width, in_channels, out_channels]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution. Must be in the same order as the dimension specified with\n format.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n If `padding` is `\"EXPLICIT\"`, the list of explicit padding amounts. For the ith\n dimension, the amount of padding inserted before and after the dimension is\n `explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If\n `padding` is not `\"EXPLICIT\"`, `explicit_paddings` must be empty.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each filter\n element on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of convolution with respect to the filter.", "type": "API"}, {"name": "tf.raw_ops.Conv2DBackpropInput", "docs": "Computes the gradients of convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`.\n An integer vector representing the shape of `input`,\n where `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`.\n 4-D with shape\n `[filter_height, filter_width, in_channels, out_channels]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`.\n 4-D with shape `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution. Must be in the same order as the dimension specified with\n format.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n If `padding` is `\"EXPLICIT\"`, the list of explicit padding amounts. For the ith\n dimension, the amount of padding inserted before and after the dimension is\n `explicit_paddings[2 * i]` and `explicit_paddings[2 * i + 1]`, respectively. If\n `padding` is not `\"EXPLICIT\"`, `explicit_paddings` must be empty.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each filter\n element on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of convolution with respect to the input.", "type": "API"}, {"name": "tf.raw_ops.Conv3D", "docs": "Computes a 3-D convolution given 5-D `input` and `filter` tensors.\n\n In signal processing, cross-correlation is a measure of similarity of\n two waveforms as a function of a time-lag applied to one of them. This\n is also known as a sliding dot product or sliding inner-product.\n\n Our Conv3D implements a form of cross-correlation.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, in_depth, in_height, in_width, in_channels]`.\n filter: A `Tensor`. Must have the same type as `input`.\n Shape `[filter_depth, filter_height, filter_width, in_channels,\n out_channels]`. `in_channels` must match between `input` and `filter`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 3-D convolution given 5-D `input` and `filter` tensors.", "type": "API"}, {"name": "tf.raw_ops.Conv3DBackpropFilter", "docs": "Computes the gradients of 3-D convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, in_channels]`.\n filter: A `Tensor`. Must have the same type as `input`.\n Shape `[depth, rows, cols, in_channels, out_channels]`.\n `in_channels` must match between `input` and `filter`.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the filter.", "type": "API"}, {"name": "tf.raw_ops.Conv3DBackpropFilterV2", "docs": "Computes the gradients of 3-D convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, in_channels]`.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 5-D\n `[filter_depth, filter_height, filter_width, in_channels, out_channels]`\n tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the filter.", "type": "API"}, {"name": "tf.raw_ops.Conv3DBackpropInput", "docs": "Computes the gradients of 3-D convolution with respect to the input.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n Shape `[batch, depth, rows, cols, in_channels]`.\n filter: A `Tensor`. Must have the same type as `input`.\n Shape `[depth, rows, cols, in_channels, out_channels]`.\n `in_channels` must match between `input` and `filter`.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the input.", "type": "API"}, {"name": "tf.raw_ops.Conv3DBackpropInputV2", "docs": "Computes the gradients of 3-D convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An integer vector representing the tensor shape of `input`,\n where `input` is a 5-D\n `[batch, depth, rows, cols, in_channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Shape `[depth, rows, cols, in_channels, out_channels]`.\n `in_channels` must match between `input` and `filter`.\n out_backprop: A `Tensor`. Must have the same type as `filter`.\n Backprop signal of shape `[batch, out_depth, out_rows, out_cols,\n out_channels]`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1, 1]`.\n 1-D tensor of length 5. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of 3-D convolution with respect to the input.", "type": "API"}, {"name": "tf.raw_ops.Copy", "docs": "Copy a tensor from CPU-to-CPU or GPU-to-GPU.\n\n Performs CPU-to-CPU or GPU-to-GPU deep-copying of tensor, depending on the\n device on which the tensor is allocated.\n N.B.: If the all downstream attached debug ops are disabled given the current\n gRPC gating status, the output will simply forward the input tensor without\n deep-copying. See the documentation of Debug* ops for more details.\n\n Unlike the CopyHost Op, this op does not have HostMemory constraint on its\n input or output.\n\n Args:\n input: A `Tensor`. Input tensor.\n tensor_name: An optional `string`. Defaults to `\"\"`.\n The name of the input tensor.\n debug_ops_spec: An optional list of `strings`. Defaults to `[]`.\n A list of debug op spec (op, url, gated_grpc) for attached debug\n ops. Each element of the list has the format\n ;;, wherein gated_grpc is boolean represented\n as 0/1. E.g., \"DebugIdentity;grpc://foo:3333;1\",\n \"DebugIdentity;file:///tmp/tfdbg_1;0\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor from CPU-to-CPU or GPU-to-GPU.", "type": "API"}, {"name": "tf.raw_ops.CopyHost", "docs": "Copy a tensor to host.\n\n Performs CPU-to-CPU deep-copying of tensor.\n N.B.: If the all downstream attached debug ops are disabled given the current\n gRPC gating status, the output will simply forward the input tensor without\n deep-copying. See the documentation of Debug* ops for more details.\n\n Unlike the Copy Op, this op has HostMemory constraint on its input or output.\n\n Args:\n input: A `Tensor`. Input tensor.\n tensor_name: An optional `string`. Defaults to `\"\"`.\n The name of the input tensor.\n debug_ops_spec: An optional list of `strings`. Defaults to `[]`.\n A list of debug op spec (op, url, gated_grpc) for attached debug\n ops. Each element of the list has the format\n ;;, wherein gated_grpc is boolean represented\n as 0/1. E.g., \"DebugIdentity;grpc://foo:3333;1\",\n \"DebugIdentity;file:///tmp/tfdbg_1;0\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor to host.", "type": "API"}, {"name": "tf.raw_ops.Cos", "docs": "Computes cos of x element-wise.\n\n Given an input tensor, this function computes cosine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes cos of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Cosh", "docs": "Computes hyperbolic cosine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic cosine of every\n element in the tensor. Input range is `[-inf, inf]` and output range\n is `[1, inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic cosine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.CountUpTo", "docs": "Increments 'ref' until it reaches 'limit'.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`.\n Should be from a scalar `Variable` node.\n limit: An `int`.\n If incrementing ref would bring it above limit, instead generates an\n 'OutOfRange' error.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `ref`.\n ", "desc": "Increments 'ref' until it reaches 'limit'.", "type": "API"}, {"name": "tf.raw_ops.CreateSummaryDbWriter", "docs": "TODO: add doc.\n\n Args:\n writer: A `Tensor` of type `resource`.\n db_uri: A `Tensor` of type `string`.\n experiment_name: A `Tensor` of type `string`.\n run_name: A `Tensor` of type `string`.\n user_name: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.CreateSummaryFileWriter", "docs": "TODO: add doc.\n\n Args:\n writer: A `Tensor` of type `resource`.\n logdir: A `Tensor` of type `string`.\n max_queue: A `Tensor` of type `int32`.\n flush_millis: A `Tensor` of type `int32`.\n filename_suffix: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.CropAndResize", "docs": "Extracts crops from the input image tensor and resizes them.\n\n Extracts crops from the input image tensor and resizes them using bilinear\n sampling or nearest neighbor sampling (possibly with aspect ratio change) to a\n common output size specified by `crop_size`. This is more general than the\n `crop_to_bounding_box` op which extracts a fixed size slice from the input image\n and does not allow resizing or aspect ratio change.\n\n Returns a tensor with `crops` from the input `image` at positions defined at the\n bounding box locations in `boxes`. The cropped boxes are all resized (with\n bilinear or nearest neighbor interpolation) to a fixed\n `size = [crop_height, crop_width]`. The result is a 4-D tensor\n `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned.\n In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical\n results to using `tf.image.resize_bilinear()` or\n `tf.image.resize_nearest_neighbor()`(depends on the `method` argument) with\n `align_corners=True`.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.\n A 4-D tensor of shape `[batch, image_height, image_width, depth]`.\n Both `image_height` and `image_width` need to be positive.\n boxes: A `Tensor` of type `float32`.\n A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor\n specifies the coordinates of a box in the `box_ind[i]` image and is specified\n in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of\n `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the\n `[0, 1]` interval of normalized image height is mapped to\n `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in\n which case the sampled crop is an up-down flipped version of the original\n image. The width dimension is treated similarly. Normalized coordinates\n outside the `[0, 1]` range are allowed, in which case we use\n `extrapolation_value` to extrapolate the input image values.\n box_ind: A `Tensor` of type `int32`.\n A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.\n The value of `box_ind[i]` specifies the image that the `i`-th box refers to.\n crop_size: A `Tensor` of type `int32`.\n A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All\n cropped image patches are resized to this size. The aspect ratio of the image\n content is not preserved. Both `crop_height` and `crop_width` need to be\n positive.\n method: An optional `string` from: `\"bilinear\", \"nearest\"`. Defaults to `\"bilinear\"`.\n A string specifying the sampling method for resizing. It can be either\n `\"bilinear\"` or `\"nearest\"` and default to `\"bilinear\"`. Currently two sampling\n methods are supported: Bilinear and Nearest Neighbor.\n extrapolation_value: An optional `float`. Defaults to `0`.\n Value used for extrapolation, when applicable.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts crops from the input image tensor and resizes them.", "type": "API"}, {"name": "tf.raw_ops.CropAndResizeGradBoxes", "docs": "Computes the gradient of the crop_and_resize op wrt the input boxes tensor.\n\n Args:\n grads: A `Tensor` of type `float32`.\n A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.\n A 4-D tensor of shape `[batch, image_height, image_width, depth]`.\n Both `image_height` and `image_width` need to be positive.\n boxes: A `Tensor` of type `float32`.\n A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor\n specifies the coordinates of a box in the `box_ind[i]` image and is specified\n in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of\n `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the\n `[0, 1]` interval of normalized image height is mapped to\n `[0, image_height - 1] in image height coordinates. We do allow y1 > y2, in\n which case the sampled crop is an up-down flipped version of the original\n image. The width dimension is treated similarly. Normalized coordinates\n outside the `[0, 1]` range are allowed, in which case we use\n `extrapolation_value` to extrapolate the input image values.\n box_ind: A `Tensor` of type `int32`.\n A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.\n The value of `box_ind[i]` specifies the image that the `i`-th box refers to.\n method: An optional `string` from: `\"bilinear\"`. Defaults to `\"bilinear\"`.\n A string specifying the interpolation method. Only 'bilinear' is\n supported for now.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Computes the gradient of the crop_and_resize op wrt the input boxes tensor.", "type": "API"}, {"name": "tf.raw_ops.CropAndResizeGradImage", "docs": "Computes the gradient of the crop_and_resize op wrt the input image tensor.\n\n Args:\n grads: A `Tensor` of type `float32`.\n A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.\n boxes: A `Tensor` of type `float32`.\n A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor\n specifies the coordinates of a box in the `box_ind[i]` image and is specified\n in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of\n `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the\n `[0, 1]` interval of normalized image height is mapped to\n `[0, image_height - 1] in image height coordinates. We do allow y1 > y2, in\n which case the sampled crop is an up-down flipped version of the original\n image. The width dimension is treated similarly. Normalized coordinates\n outside the `[0, 1]` range are allowed, in which case we use\n `extrapolation_value` to extrapolate the input image values.\n box_ind: A `Tensor` of type `int32`.\n A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.\n The value of `box_ind[i]` specifies the image that the `i`-th box refers to.\n image_size: A `Tensor` of type `int32`.\n A 1-D tensor with value `[batch, image_height, image_width, depth]`\n containing the original image size. Both `image_height` and `image_width` need\n to be positive.\n T: A `tf.DType` from: `tf.float32, tf.half, tf.float64`.\n method: An optional `string` from: `\"bilinear\", \"nearest\"`. Defaults to `\"bilinear\"`.\n A string specifying the interpolation method. Only 'bilinear' is\n supported for now.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `T`.\n ", "desc": "Computes the gradient of the crop_and_resize op wrt the input image tensor.", "type": "API"}, {"name": "tf.raw_ops.Cross", "docs": "Compute the pairwise cross product.\n\n `a` and `b` must be the same shape; they can either be simple 3-element vectors,\n or any shape where the innermost dimension is 3. In the latter case, each pair\n of corresponding 3-element vectors is cross-multiplied independently.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n A tensor containing 3-element vectors.\n b: A `Tensor`. Must have the same type as `a`.\n Another tensor, of same type and shape as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the pairwise cross product.", "type": "API"}, {"name": "tf.raw_ops.CrossReplicaSum", "docs": "An Op to sum inputs across replicated TPU instances.\n\n Each instance supplies its own input.\n\n For example, suppose there are 8 TPU instances: `[A, B, C, D, E, F, G, H]`.\n Passing group_assignment=`[[0,2,4,6],[1,3,5,7]]` sets `A, C, E, G` as group 0,\n and `B, D, F, H` as group 1. Thus we get the outputs:\n `[A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `uint32`.\n The local input to the sum.\n group_assignment: A `Tensor` of type `int32`. An int32 tensor with shape\n [num_groups, num_replicas_per_group]. `group_assignment[i]` represents the\n replica ids in the ith subgroup.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "An Op to sum inputs across replicated TPU instances.", "type": "API"}, {"name": "tf.raw_ops.CSRSparseMatrixComponents", "docs": "Reads out the CSR components at batch `index`.\n\n This op is meant only for debugging / testing, and its interface is not expected\n to be stable.\n\n Args:\n csr_sparse_matrix: A `Tensor` of type `variant`.\n A batched CSRSparseMatrix.\n index: A `Tensor` of type `int32`.\n The index in `csr_sparse_matrix`'s batch.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (row_ptrs, col_inds, values).\n\n row_ptrs: A `Tensor` of type `int32`.\n col_inds: A `Tensor` of type `int32`.\n values: A `Tensor` of type `type`.\n ", "desc": "Reads out the CSR components at batch `index`.", "type": "API"}, {"name": "tf.raw_ops.CSRSparseMatrixToDense", "docs": "Convert a (possibly batched) CSRSparseMatrix to dense.\n\n Args:\n sparse_input: A `Tensor` of type `variant`. A batched CSRSparseMatrix.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `type`.\n ", "desc": "Convert a (possibly batched) CSRSparseMatrix to dense.", "type": "API"}, {"name": "tf.raw_ops.CSRSparseMatrixToSparseTensor", "docs": "Converts a (possibly batched) CSRSparesMatrix to a SparseTensor.\n\n Args:\n sparse_matrix: A `Tensor` of type `variant`.\n A (possibly batched) CSRSparseMatrix.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, values, dense_shape).\n\n indices: A `Tensor` of type `int64`.\n values: A `Tensor` of type `type`.\n dense_shape: A `Tensor` of type `int64`.\n ", "desc": "Converts a (possibly batched) CSRSparesMatrix to a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.CSVDataset", "docs": "TODO: add doc.\n\n Args:\n filenames: A `Tensor` of type `string`.\n compression_type: A `Tensor` of type `string`.\n buffer_size: A `Tensor` of type `int64`.\n header: A `Tensor` of type `bool`.\n field_delim: A `Tensor` of type `string`.\n use_quote_delim: A `Tensor` of type `bool`.\n na_value: A `Tensor` of type `string`.\n select_cols: A `Tensor` of type `int64`.\n record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.CSVDatasetV2", "docs": "TODO: add doc.\n\n Args:\n filenames: A `Tensor` of type `string`.\n compression_type: A `Tensor` of type `string`.\n buffer_size: A `Tensor` of type `int64`.\n header: A `Tensor` of type `bool`.\n field_delim: A `Tensor` of type `string`.\n use_quote_delim: A `Tensor` of type `bool`.\n na_value: A `Tensor` of type `string`.\n select_cols: A `Tensor` of type `int64`.\n record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.\n exclude_cols: A `Tensor` of type `int64`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.CTCBeamSearchDecoder", "docs": "Performs beam search decoding on the logits given in input.\n\n A note about the attribute merge_repeated: For the beam search decoder,\n this means that if consecutive entries in a beam are the same, only\n the first of these is emitted. That is, when the top path is \"A B B B B\",\n \"A B\" is returned if merge_repeated = True but \"A B B B B\" is\n returned if merge_repeated = False.\n\n Args:\n inputs: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.\n sequence_length: A `Tensor` of type `int32`.\n A vector containing sequence lengths, size `(batch)`.\n beam_width: An `int` that is `>= 1`.\n A scalar >= 0 (beam search beam width).\n top_paths: An `int` that is `>= 1`.\n A scalar >= 0, <= beam_width (controls output size).\n merge_repeated: An optional `bool`. Defaults to `True`.\n If true, merge repeated classes in output.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (decoded_indices, decoded_values, decoded_shape, log_probability).\n\n decoded_indices: A list of `top_paths` `Tensor` objects with type `int64`.\n decoded_values: A list of `top_paths` `Tensor` objects with type `int64`.\n decoded_shape: A list of `top_paths` `Tensor` objects with type `int64`.\n log_probability: A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Performs beam search decoding on the logits given in input.", "type": "API"}, {"name": "tf.raw_ops.CTCGreedyDecoder", "docs": "Performs greedy decoding on the logits given in inputs.\n\n A note about the attribute merge_repeated: if enabled, when\n consecutive logits' maximum indices are the same, only the first of\n these is emitted. Labeling the blank '*', the sequence \"A B B * B B\"\n becomes \"A B B\" if merge_repeated = True and \"A B B B B\" if\n merge_repeated = False.\n\n Regardless of the value of merge_repeated, if the maximum index of a given\n time and batch corresponds to the blank, index `(num_classes - 1)`, no new\n element is emitted.\n\n Args:\n inputs: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.\n sequence_length: A `Tensor` of type `int32`.\n A vector containing sequence lengths, size `(batch_size)`.\n merge_repeated: An optional `bool`. Defaults to `False`.\n If True, merge repeated classes in output.\n blank_index: An optional `int`. Defaults to `-1`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (decoded_indices, decoded_values, decoded_shape, log_probability).\n\n decoded_indices: A `Tensor` of type `int64`.\n decoded_values: A `Tensor` of type `int64`.\n decoded_shape: A `Tensor` of type `int64`.\n log_probability: A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Performs greedy decoding on the logits given in inputs.", "type": "API"}, {"name": "tf.raw_ops.CTCLoss", "docs": "Calculates the CTC Loss (log probability) for each batch entry. Also calculates\n\n the gradient. This class performs the softmax operation for you, so inputs\n should be e.g. linear projections of outputs by an LSTM.\n\n Args:\n inputs: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.\n labels_indices: A `Tensor` of type `int64`.\n The indices of a `SparseTensor`.\n `labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for\n `(batch b, time t)`.\n labels_values: A `Tensor` of type `int32`.\n The values (labels) associated with the given batch and time.\n sequence_length: A `Tensor` of type `int32`.\n A vector containing sequence lengths (batch).\n preprocess_collapse_repeated: An optional `bool`. Defaults to `False`.\n Scalar, if true then repeated labels are\n collapsed prior to the CTC calculation.\n ctc_merge_repeated: An optional `bool`. Defaults to `True`.\n Scalar. If set to false, *during* CTC calculation\n repeated non-blank labels will not be merged and are interpreted as\n individual labels. This is a simplified version of CTC.\n ignore_longer_outputs_than_inputs: An optional `bool`. Defaults to `False`.\n Scalar. If set to true, during CTC\n calculation, items that have longer output sequences than input sequences\n are skipped: they don't contribute to the loss term and have zero-gradient.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (loss, gradient).\n\n loss: A `Tensor`. Has the same type as `inputs`.\n gradient: A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Calculates the CTC Loss (log probability) for each batch entry. Also calculates", "type": "API"}, {"name": "tf.raw_ops.CTCLossV2", "docs": "Calculates the CTC Loss (log probability) for each batch entry. Also calculates\n\n the gradient. This class performs the softmax operation for you, so inputs\n should be e.g. linear projections of outputs by an LSTM.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n 3-D, shape: `(max_time x batch_size x num_classes)`, the logits. Default blank\n label is 0 rather num_classes - 1.\n labels_indices: A `Tensor` of type `int64`.\n The indices of a `SparseTensor`.\n `labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for\n `(batch b, time t)`.\n labels_values: A `Tensor` of type `int32`.\n The values (labels) associated with the given batch and time.\n sequence_length: A `Tensor` of type `int32`.\n A vector containing sequence lengths (batch).\n preprocess_collapse_repeated: An optional `bool`. Defaults to `False`.\n Scalar, if true then repeated labels are\n collapsed prior to the CTC calculation.\n ctc_merge_repeated: An optional `bool`. Defaults to `True`.\n Scalar. If set to false, *during* CTC calculation\n repeated non-blank labels will not be merged and are interpreted as\n individual labels. This is a simplified version of CTC.\n ignore_longer_outputs_than_inputs: An optional `bool`. Defaults to `False`.\n Scalar. If set to true, during CTC\n calculation, items that have longer output sequences than input sequences\n are skipped: they don't contribute to the loss term and have zero-gradient.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (loss, gradient).\n\n loss: A `Tensor` of type `float32`.\n gradient: A `Tensor` of type `float32`.\n ", "desc": "Calculates the CTC Loss (log probability) for each batch entry. Also calculates", "type": "API"}, {"name": "tf.raw_ops.CudnnRNN", "docs": "A RNN backed by cuDNN.\n\n Computes the RNN from the input and initial states, with respect to the params\n buffer.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].\n input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size,\n num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n output: A 3-D tensor with the shape of [seq_length, batch_size,\n dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n is_training: Indicates whether this operation is used for inference or\n training.\n reserve_space: An opaque tensor that can be used in backprop calculation. It\n is only produced if is_training is false.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n is_training: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_h, output_c, reserve_space).\n\n output: A `Tensor`. Has the same type as `input`.\n output_h: A `Tensor`. Has the same type as `input`.\n output_c: A `Tensor`. Has the same type as `input`.\n reserve_space: A `Tensor`. Has the same type as `input`.\n ", "desc": "A RNN backed by cuDNN.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNBackprop", "docs": "Backprop step of CudnnRNN.\n\n Compute the backprop of both data and weights in a RNN.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].\n input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size,\n num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n output: A 3-D tensor with the shape of [seq_length, batch_size,\n dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n output_backprop: A 3-D tensor with the same shape as output in the forward pass.\n output_h_backprop: A 3-D tensor with the same shape as output_h in the forward\n pass.\n output_c_backprop: A 3-D tensor with the same shape as output_c in the forward\n pass.\n reserve_space: The same reserve_space produced in for forward operation.\n input_backprop: The backprop to input in the forward pass. Has the same shape\n as input.\n input_h_backprop: The backprop to input_h in the forward pass. Has the same\n shape as input_h.\n input_c_backprop: The backprop to input_c in the forward pass. Has the same\n shape as input_c.\n params_backprop: The backprop to the params buffer in the forward pass. Has the\n same shape as params.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n output: A `Tensor`. Must have the same type as `input`.\n output_h: A `Tensor`. Must have the same type as `input`.\n output_c: A `Tensor`. Must have the same type as `input`.\n output_backprop: A `Tensor`. Must have the same type as `input`.\n output_h_backprop: A `Tensor`. Must have the same type as `input`.\n output_c_backprop: A `Tensor`. Must have the same type as `input`.\n reserve_space: A `Tensor`. Must have the same type as `input`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop).\n\n input_backprop: A `Tensor`. Has the same type as `input`.\n input_h_backprop: A `Tensor`. Has the same type as `input`.\n input_c_backprop: A `Tensor`. Has the same type as `input`.\n params_backprop: A `Tensor`. Has the same type as `input`.\n ", "desc": "Backprop step of CudnnRNN.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNBackpropV2", "docs": "Backprop step of CudnnRNN.\n\n Compute the backprop of both data and weights in a RNN. Takes an extra\n \"host_reserved\" inupt than CudnnRNNBackprop, which is used to determine RNN\n cudnnRNNAlgo_t and cudnnMathType_t.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicates whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].\n input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size,\n num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n output: A 3-D tensor with the shape of [seq_length, batch_size,\n dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n output_backprop: A 3-D tensor with the same shape as output in the forward pass.\n output_h_backprop: A 3-D tensor with the same shape as output_h in the forward\n pass.\n output_c_backprop: A 3-D tensor with the same shape as output_c in the forward\n pass.\n reserve_space: The same reserve_space produced in the forward operation.\n host_reserved: The same host_reserved produced in the forward operation.\n input_backprop: The backprop to input in the forward pass. Has the same shape\n as input.\n input_h_backprop: The backprop to input_h in the forward pass. Has the same\n shape as input_h.\n input_c_backprop: The backprop to input_c in the forward pass. Has the same\n shape as input_c.\n params_backprop: The backprop to the params buffer in the forward pass. Has the\n same shape as params.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n output: A `Tensor`. Must have the same type as `input`.\n output_h: A `Tensor`. Must have the same type as `input`.\n output_c: A `Tensor`. Must have the same type as `input`.\n output_backprop: A `Tensor`. Must have the same type as `input`.\n output_h_backprop: A `Tensor`. Must have the same type as `input`.\n output_c_backprop: A `Tensor`. Must have the same type as `input`.\n reserve_space: A `Tensor`. Must have the same type as `input`.\n host_reserved: A `Tensor` of type `int8`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop).\n\n input_backprop: A `Tensor`. Has the same type as `input`.\n input_h_backprop: A `Tensor`. Has the same type as `input`.\n input_c_backprop: A `Tensor`. Has the same type as `input`.\n params_backprop: A `Tensor`. Has the same type as `input`.\n ", "desc": "Backprop step of CudnnRNN.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNBackpropV3", "docs": "Backprop step of CudnnRNNV3.\n\n Compute the backprop of both data and weights in a RNN. Takes an extra\n \"sequence_lengths\" input than CudnnRNNBackprop.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicates whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: If time_major is true, this is a 3-D tensor with the shape of\n [seq_length, batch_size, input_size]. If time_major is false, the shape is\n [batch_size, seq_length, input_size].\n input_h: If time_major is true, this is a 3-D tensor with the shape of\n [num_layer * dir, batch_size, num_units]. If time_major is false, the shape\n is [batch_size, num_layer * dir, num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n sequence_lengths: a vector of lengths of each input sequence.\n output: If time_major is true, this is a 3-D tensor with the shape of\n [seq_length, batch_size, dir * num_units]. If time_major is false, the\n shape is [batch_size, seq_length, dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n output_backprop: A 3-D tensor with the same shape as output in the forward pass.\n output_h_backprop: A 3-D tensor with the same shape as output_h in the forward\n pass.\n output_c_backprop: A 3-D tensor with the same shape as output_c in the forward\n pass.\n time_major: Indicates whether the input/output format is time major or batch\n major.\n reserve_space: The same reserve_space produced in the forward operation.\n input_backprop: The backprop to input in the forward pass. Has the same shape\n as input.\n input_h_backprop: The backprop to input_h in the forward pass. Has the same\n shape as input_h.\n input_c_backprop: The backprop to input_c in the forward pass. Has the same\n shape as input_c.\n params_backprop: The backprop to the params buffer in the forward pass. Has the\n same shape as params.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n sequence_lengths: A `Tensor` of type `int32`.\n output: A `Tensor`. Must have the same type as `input`.\n output_h: A `Tensor`. Must have the same type as `input`.\n output_c: A `Tensor`. Must have the same type as `input`.\n output_backprop: A `Tensor`. Must have the same type as `input`.\n output_h_backprop: A `Tensor`. Must have the same type as `input`.\n output_c_backprop: A `Tensor`. Must have the same type as `input`.\n reserve_space: A `Tensor`. Must have the same type as `input`.\n host_reserved: A `Tensor` of type `int8`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n num_proj: An optional `int`. Defaults to `0`.\n time_major: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (input_backprop, input_h_backprop, input_c_backprop, params_backprop).\n\n input_backprop: A `Tensor`. Has the same type as `input`.\n input_h_backprop: A `Tensor`. Has the same type as `input`.\n input_c_backprop: A `Tensor`. Has the same type as `input`.\n params_backprop: A `Tensor`. Has the same type as `input`.\n ", "desc": "Backprop step of CudnnRNNV3.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNCanonicalToParams", "docs": "Converts CudnnRNN params from canonical form to usable form.\n\n Writes a set of weights into the opaque params buffer so they can be used in\n upcoming training or inferences.\n\n Note that the params buffer may not be compatible across different GPUs. So any\n save and restoration should be converted to and from the canonical weights and\n biases.\n\n num_layers: Specifies the number of layers in the RNN model.\n num_units: Specifies the size of the hidden state.\n input_size: Specifies the size of the input state.\n weights: the canonical form of weights that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n biases: the canonical form of biases that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n num_params: number of parameter sets for all layers.\n Each layer may contain multiple parameter sets, with each set consisting of\n a weight matrix and a bias vector.\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n The actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used.\n dir = (direction == bidirectional) ? 2 : 1\n dropout: dropout probability. When set to 0., dropout is disabled.\n seed: the 1st part of a seed to initialize dropout.\n seed2: the 2nd part of a seed to initialize dropout.\n\n Args:\n num_layers: A `Tensor` of type `int32`.\n num_units: A `Tensor` of type `int32`.\n input_size: A `Tensor` of type `int32`.\n weights: A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`.\n biases: A list with the same length as `weights` of `Tensor` objects with the same type as `weights`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Converts CudnnRNN params from canonical form to usable form.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNCanonicalToParamsV2", "docs": "Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM.\n\n Writes a set of weights into the opaque params buffer so they can be used in\n upcoming training or inferences.\n\n Note that the params buffer may not be compatible across different GPUs. So any\n save and restoration should be converted to and from the canonical weights and\n biases.\n\n num_layers: Specifies the number of layers in the RNN model.\n num_units: Specifies the size of the hidden state.\n input_size: Specifies the size of the input state.\n weights: the canonical form of weights that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n biases: the canonical form of biases that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n num_params_weights: number of weight parameter matrix for all layers.\n num_params_biases: number of bias parameter vector for all layers.\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n The actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used.\n dir = (direction == bidirectional) ? 2 : 1\n dropout: dropout probability. When set to 0., dropout is disabled.\n seed: the 1st part of a seed to initialize dropout.\n seed2: the 2nd part of a seed to initialize dropout.\n num_proj: The output dimensionality for the projection matrices. If None or 0,\n no projection is performed.\n\n Args:\n num_layers: A `Tensor` of type `int32`.\n num_units: A `Tensor` of type `int32`.\n input_size: A `Tensor` of type `int32`.\n weights: A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`.\n biases: A list of at least 1 `Tensor` objects with the same type as `weights`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n num_proj: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Converts CudnnRNN params from canonical form to usable form. It supports the projection in LSTM.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNParamsSize", "docs": "Computes size of weights that can be used by a Cudnn RNN model.\n\n Return the params size that can be used by the Cudnn RNN model. Subsequent\n weight allocation and initialization should use this size.\n\n num_layers: Specifies the number of layers in the RNN model.\n num_units: Specifies the size of the hidden state.\n input_size: Specifies the size of the input state.\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n The actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used.\n dir = (direction == bidirectional) ? 2 : 1\n dropout: dropout probability. When set to 0., dropout is disabled.\n seed: the 1st part of a seed to initialize dropout.\n seed2: the 2nd part of a seed to initialize dropout.\n params_size: The size of the params buffer that should be allocated and\n initialized for this RNN model. Note that this params buffer may not be\n compatible across GPUs. Please use CudnnRNNParamsWeights and\n CudnnRNNParamsBiases to save and restore them in a way that is compatible\n across different runs.\n\n Args:\n num_layers: A `Tensor` of type `int32`.\n num_units: A `Tensor` of type `int32`.\n input_size: A `Tensor` of type `int32`.\n T: A `tf.DType` from: `tf.half, tf.float32, tf.float64`.\n S: A `tf.DType` from: `tf.int32, tf.int64`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n num_proj: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `S`.\n ", "desc": "Computes size of weights that can be used by a Cudnn RNN model.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNParamsToCanonical", "docs": "Retrieves CudnnRNN params in canonical form.\n\n Retrieves a set of weights from the opaque params buffer that can be saved and\n restored in a way compatible with future runs.\n\n Note that the params buffer may not be compatible across different GPUs. So any\n save and restoration should be converted to and from the canonical weights and\n biases.\n\n num_layers: Specifies the number of layers in the RNN model.\n num_units: Specifies the size of the hidden state.\n input_size: Specifies the size of the input state.\n num_params: number of parameter sets for all layers.\n Each layer may contain multiple parameter sets, with each set consisting of\n a weight matrix and a bias vector.\n weights: the canonical form of weights that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n biases: the canonical form of biases that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n The actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used.\n dir = (direction == bidirectional) ? 2 : 1\n dropout: dropout probability. When set to 0., dropout is disabled.\n seed: the 1st part of a seed to initialize dropout.\n seed2: the 2nd part of a seed to initialize dropout.\n\n Args:\n num_layers: A `Tensor` of type `int32`.\n num_units: A `Tensor` of type `int32`.\n input_size: A `Tensor` of type `int32`.\n params: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n num_params: An `int` that is `>= 1`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (weights, biases).\n\n weights: A list of `num_params` `Tensor` objects with the same type as `params`.\n biases: A list of `num_params` `Tensor` objects with the same type as `params`.\n ", "desc": "Retrieves CudnnRNN params in canonical form.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNParamsToCanonicalV2", "docs": "Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM.\n\n Retrieves a set of weights from the opaque params buffer that can be saved and\n restored in a way compatible with future runs.\n\n Note that the params buffer may not be compatible across different GPUs. So any\n save and restoration should be converted to and from the canonical weights and\n biases.\n\n num_layers: Specifies the number of layers in the RNN model.\n num_units: Specifies the size of the hidden state.\n input_size: Specifies the size of the input state.\n num_params_weights: number of weight parameter matrix for all layers.\n num_params_biases: number of bias parameter vector for all layers.\n weights: the canonical form of weights that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n biases: the canonical form of biases that can be used for saving\n and restoration. They are more likely to be compatible across different\n generations.\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicate whether there is a linear projection between the input and\n The actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used.\n dir = (direction == bidirectional) ? 2 : 1\n dropout: dropout probability. When set to 0., dropout is disabled.\n seed: the 1st part of a seed to initialize dropout.\n seed2: the 2nd part of a seed to initialize dropout.\n num_proj: The output dimensionality for the projection matrices. If None or 0,\n no projection is performed.\n\n Args:\n num_layers: A `Tensor` of type `int32`.\n num_units: A `Tensor` of type `int32`.\n input_size: A `Tensor` of type `int32`.\n params: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n num_params_weights: An `int` that is `>= 1`.\n num_params_biases: An `int` that is `>= 1`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n num_proj: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (weights, biases).\n\n weights: A list of `num_params_weights` `Tensor` objects with the same type as `params`.\n biases: A list of `num_params_biases` `Tensor` objects with the same type as `params`.\n ", "desc": "Retrieves CudnnRNN params in canonical form. It supports the projection in LSTM.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNV2", "docs": "A RNN backed by cuDNN.\n\n Computes the RNN from the input and initial states, with respect to the params\n buffer. Produces one extra output \"host_reserved\" than CudnnRNN.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicates whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].\n input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size,\n num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n output: A 3-D tensor with the shape of [seq_length, batch_size,\n dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n is_training: Indicates whether this operation is used for inference or\n training.\n reserve_space: An opaque tensor that can be used in backprop calculation. It\n is only produced if is_training is true.\n host_reserved: An opaque tensor that can be used in backprop calculation. It is\n only produced if is_training is true. It is output on host memory rather than\n device memory.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n is_training: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_h, output_c, reserve_space, host_reserved).\n\n output: A `Tensor`. Has the same type as `input`.\n output_h: A `Tensor`. Has the same type as `input`.\n output_c: A `Tensor`. Has the same type as `input`.\n reserve_space: A `Tensor`. Has the same type as `input`.\n host_reserved: A `Tensor` of type `int8`.\n ", "desc": "A RNN backed by cuDNN.", "type": "API"}, {"name": "tf.raw_ops.CudnnRNNV3", "docs": "A RNN backed by cuDNN.\n\n Computes the RNN from the input and initial states, with respect to the params\n buffer. Accepts one extra input \"sequence_lengths\" than CudnnRNN.\n\n rnn_mode: Indicates the type of the RNN model.\n input_mode: Indicates whether there is a linear projection between the input and\n the actual computation before the first layer. 'skip_input' is only allowed\n when input_size == num_units; 'auto_select' implies 'skip_input' when\n input_size == num_units; otherwise, it implies 'linear_input'.\n direction: Indicates whether a bidirectional model will be used. Should be\n \"unidirectional\" or \"bidirectional\".\n dropout: Dropout probability. When set to 0., dropout is disabled.\n seed: The 1st part of a seed to initialize dropout.\n seed2: The 2nd part of a seed to initialize dropout.\n input: If time_major is true, this is a 3-D tensor with the shape of\n [seq_length, batch_size, input_size]. If time_major is false, the shape is\n [batch_size, seq_length, input_size].\n input_h: If time_major is true, this is a 3-D tensor with the shape of\n [num_layer * dir, batch_size, num_units]. If time_major is false, the shape\n is [batch_size, num_layer * dir, num_units].\n input_c: For LSTM, a 3-D tensor with the shape of\n [num_layer * dir, batch, num_units]. For other models, it is ignored.\n params: A 1-D tensor that contains the weights and biases in an opaque layout.\n The size must be created through CudnnRNNParamsSize, and initialized\n separately. Note that they might not be compatible across different\n generations. So it is a good idea to save and restore\n sequence_lengths: a vector of lengths of each input sequence.\n output: If time_major is true, this is a 3-D tensor with the shape of\n [seq_length, batch_size, dir * num_units]. If time_major is false, the\n shape is [batch_size, seq_length, dir * num_units].\n output_h: The same shape has input_h.\n output_c: The same shape as input_c for LSTM. An empty tensor for other models.\n is_training: Indicates whether this operation is used for inference or\n training.\n time_major: Indicates whether the input/output format is time major or batch\n major.\n reserve_space: An opaque tensor that can be used in backprop calculation. It\n is only produced if is_training is true.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n input_h: A `Tensor`. Must have the same type as `input`.\n input_c: A `Tensor`. Must have the same type as `input`.\n params: A `Tensor`. Must have the same type as `input`.\n sequence_lengths: A `Tensor` of type `int32`.\n rnn_mode: An optional `string` from: `\"rnn_relu\", \"rnn_tanh\", \"lstm\", \"gru\"`. Defaults to `\"lstm\"`.\n input_mode: An optional `string` from: `\"linear_input\", \"skip_input\", \"auto_select\"`. Defaults to `\"linear_input\"`.\n direction: An optional `string` from: `\"unidirectional\", \"bidirectional\"`. Defaults to `\"unidirectional\"`.\n dropout: An optional `float`. Defaults to `0`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n num_proj: An optional `int`. Defaults to `0`.\n is_training: An optional `bool`. Defaults to `True`.\n time_major: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_h, output_c, reserve_space, host_reserved).\n\n output: A `Tensor`. Has the same type as `input`.\n output_h: A `Tensor`. Has the same type as `input`.\n output_c: A `Tensor`. Has the same type as `input`.\n reserve_space: A `Tensor`. Has the same type as `input`.\n host_reserved: A `Tensor` of type `int8`.\n ", "desc": "A RNN backed by cuDNN.", "type": "API"}, {"name": "tf.raw_ops.Cumprod", "docs": "Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the first\n element of the input is identical to the first element of the output:\n\n ```python\n tf.cumprod([a, b, c]) # => [a, a * b, a * b * c]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumprod is\n performed instead:\n\n ```python\n tf.cumprod([a, b, c], exclusive=True) # => [1, a, a * b]\n ```\n\n By setting the `reverse` kwarg to `True`, the cumprod is performed in the\n opposite direction:\n\n ```python\n tf.cumprod([a, b, c], reverse=True) # => [a * b * c, b * c, c]\n ```\n\n This is more efficient than using separate `tf.reverse` ops.\n\n The `reverse` and `exclusive` kwargs can also be combined:\n\n ```python\n tf.cumprod([a, b, c], exclusive=True, reverse=True) # => [b * c, c, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: An optional `bool`. Defaults to `False`.\n If `True`, perform exclusive cumprod.\n reverse: An optional `bool`. Defaults to `False`.\n A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative product of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.raw_ops.Cumsum", "docs": "Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:\n\n ```python\n tf.cumsum([a, b, c]) # => [a, a + b, a + b + c]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumsum is\n performed instead:\n\n ```python\n tf.cumsum([a, b, c], exclusive=True) # => [0, a, a + b]\n ```\n\n By setting the `reverse` kwarg to `True`, the cumsum is performed in the\n opposite direction:\n\n ```python\n tf.cumsum([a, b, c], reverse=True) # => [a + b + c, b + c, c]\n ```\n\n This is more efficient than using separate `tf.reverse` ops.\n\n The `reverse` and `exclusive` kwargs can also be combined:\n\n ```python\n tf.cumsum([a, b, c], exclusive=True, reverse=True) # => [b + c, c, 0]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A `Tensor`. Must be one of the following types: `float32`, `float64`,\n `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,\n `complex128`, `qint8`, `quint8`, `qint32`, `half`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: An optional `bool`. Defaults to `False`.\n If `True`, perform exclusive cumsum.\n reverse: An optional `bool`. Defaults to `False`.\n A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative sum of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.raw_ops.CumulativeLogsumexp", "docs": "Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumulative log-sum-exp,\n which means that the first\n element of the input is identical to the first element of the output:\n ```python\n tf.math.cumulative_logsumexp([a, b, c]) # => [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))]\n ```\n\n By setting the `exclusive` kwarg to `True`, an exclusive cumulative log-sum-exp is\n performed instead:\n ```python\n tf.cumulative_logsumexp([a, b, c], exclusive=True) # => [-inf, a, log(exp(a) * exp(b))]\n ```\n Note that the neutral element of the log-sum-exp operation is `-inf`,\n however, for performance reasons, the minimal value representable by the\n floating point type is used instead.\n\n By setting the `reverse` kwarg to `True`, the cumulative log-sum-exp is performed in the\n opposite direction.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `Tensor` of type `int32` (default: 0). Must be in the range\n `[-rank(x), rank(x))`.\n exclusive: An optional `bool`. Defaults to `False`.\n If `True`, perform exclusive cumulative log-sum-exp.\n reverse: An optional `bool`. Defaults to `False`.\n A `bool` (default: False).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the cumulative product of the tensor `x` along `axis`.", "type": "API"}, {"name": "tf.raw_ops.DataFormatDimMap", "docs": "Returns the dimension index in the destination data format given the one in\n\n the source data format.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor with each element as a dimension index in source data format.\n Must be in the range [-4, 4).\n src_format: An optional `string`. Defaults to `\"NHWC\"`.\n source data format.\n dst_format: An optional `string`. Defaults to `\"NCHW\"`.\n destination data format.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the dimension index in the destination data format given the one in", "type": "API"}, {"name": "tf.raw_ops.DataFormatVecPermute", "docs": "Permute input tensor from `src_format` to `dst_format`.\n\n Input tensor must be a vector of size 4, or a 4x2 tensor.\n\n For example, with `src_format` of `NHWC`, `dst_format` of `NCHW`, and inputs:\n ```\n [1, 2, 3, 4]\n ```\n and\n ```\n [[1, 2, 3, 4],\n [5, 6, 7, 8]]\n ```\n , the outputs will be (respectively):\n ```\n [1, 4, 2, 3]\n ```\n and\n ```\n [[1, 4, 2, 3],\n [5, 8, 6, 7]]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Vector of size 4 or Tensor of shape (4, 2) in source data format.\n src_format: An optional `string`. Defaults to `\"NHWC\"`.\n source data format.\n dst_format: An optional `string`. Defaults to `\"NCHW\"`.\n destination data format.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Permute input tensor from `src_format` to `dst_format`.", "type": "API"}, {"name": "tf.raw_ops.DataServiceDataset", "docs": "Creates a dataset that reads data from the tf.data service.\n\n Args:\n dataset_id: A `Tensor` of type `int64`.\n processing_mode: A `Tensor` of type `string`.\n address: A `Tensor` of type `string`.\n protocol: A `Tensor` of type `string`.\n job_name: A `Tensor` of type `string`.\n max_outstanding_requests: A `Tensor` of type `int64`.\n iteration_counter: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n task_refresh_interval_hint_ms: An optional `int`. Defaults to `-1`.\n data_transfer_protocol: An optional `string`. Defaults to `\"\"`.\n target_workers: An optional `string`. Defaults to `\"AUTO\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that reads data from the tf.data service.", "type": "API"}, {"name": "tf.raw_ops.DataServiceDatasetV2", "docs": "Creates a dataset that reads data from the tf.data service.\n\n Args:\n dataset_id: A `Tensor` of type `int64`.\n processing_mode: A `Tensor` of type `string`.\n address: A `Tensor` of type `string`.\n protocol: A `Tensor` of type `string`.\n job_name: A `Tensor` of type `string`.\n consumer_index: A `Tensor` of type `int64`.\n num_consumers: A `Tensor` of type `int64`.\n max_outstanding_requests: A `Tensor` of type `int64`.\n iteration_counter: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n task_refresh_interval_hint_ms: An optional `int`. Defaults to `-1`.\n data_transfer_protocol: An optional `string`. Defaults to `\"\"`.\n target_workers: An optional `string`. Defaults to `\"AUTO\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that reads data from the tf.data service.", "type": "API"}, {"name": "tf.raw_ops.DatasetCardinality", "docs": "Returns the cardinality of `input_dataset`.\n\n Returns the cardinality of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to return cardinality for.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the cardinality of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.DatasetFromGraph", "docs": "Creates a dataset from the given `graph_def`.\n\n Creates a dataset from the provided `graph_def`.\n\n Args:\n graph_def: A `Tensor` of type `string`.\n The graph representation of the dataset (as serialized GraphDef).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset from the given `graph_def`.", "type": "API"}, {"name": "tf.raw_ops.DatasetToGraph", "docs": "Returns a serialized GraphDef representing `input_dataset`.\n\n Returns a graph representation for `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to return the graph representation for.\n stateful_whitelist: An optional list of `strings`. Defaults to `[]`.\n allow_stateful: An optional `bool`. Defaults to `False`.\n strip_device_assignment: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns a serialized GraphDef representing `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.DatasetToGraphV2", "docs": "Returns a serialized GraphDef representing `input_dataset`.\n\n Returns a graph representation for `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to return the graph representation for.\n external_state_policy: An optional `int`. Defaults to `0`.\n strip_device_assignment: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns a serialized GraphDef representing `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.DatasetToSingleElement", "docs": "Outputs the single element from the given dataset.\n\n Args:\n dataset: A `Tensor` of type `variant`.\n A handle to a dataset that contains a single element.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Outputs the single element from the given dataset.", "type": "API"}, {"name": "tf.raw_ops.DatasetToTFRecord", "docs": "Writes the given dataset to the given file using the TFRecord format.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to write.\n filename: A `Tensor` of type `string`.\n A scalar string tensor representing the filename to use.\n compression_type: A `Tensor` of type `string`.\n A scalar string tensor containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes the given dataset to the given file using the TFRecord format.", "type": "API"}, {"name": "tf.raw_ops.Dawsn", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DebugGradientIdentity", "docs": "Identity op for gradient debugging.\n\n This op is hidden from public in Python. It is used by TensorFlow Debugger to\n register gradient tensors for gradient debugging.\n This op operates on non-reference-type tensors.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Identity op for gradient debugging.", "type": "API"}, {"name": "tf.raw_ops.DebugGradientRefIdentity", "docs": "Identity op for gradient debugging.\n\n This op is hidden from public in Python. It is used by TensorFlow Debugger to\n register gradient tensors for gradient debugging.\n This op operates on reference-type tensors.\n\n Args:\n input: A mutable `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `input`.\n ", "desc": "Identity op for gradient debugging.", "type": "API"}, {"name": "tf.raw_ops.DebugIdentity", "docs": "Provides an identity mapping of the non-Ref type input tensor for debugging.\n\n Provides an identity mapping of the non-Ref type input tensor for debugging.\n\n Args:\n input: A `Tensor`. Input tensor, non-Reference type\n device_name: An optional `string`. Defaults to `\"\"`.\n Name of the device on which the tensor resides.\n tensor_name: An optional `string`. Defaults to `\"\"`.\n Name of the input tensor.\n debug_urls: An optional list of `strings`. Defaults to `[]`.\n List of URLs to debug targets, e.g.,\n file:///foo/tfdbg_dump, grpc:://localhost:11011\n gated_grpc: An optional `bool`. Defaults to `False`.\n Whether this op will be gated. If any of the debug_urls of this\n debug node is of the grpc:// scheme, when the value of this attribute is set\n to True, the data will not actually be sent via the grpc stream unless this\n debug op has been enabled at the debug_url. If all of the debug_urls of this\n debug node are of the grpc:// scheme and the debug op is enabled at none of\n them, the output will be an empty Tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Provides an identity mapping of the non-Ref type input tensor for debugging.", "type": "API"}, {"name": "tf.raw_ops.DebugIdentityV2", "docs": "Debug Identity V2 Op.\n\n Provides an identity mapping from input to output, while writing the content of\n the input tensor by calling DebugEventsWriter.\n\n The semantics of the input tensor depends on tensor_debug_mode. In typical\n usage, the input tensor comes directly from the user computation only when\n graph_debug_mode is FULL_TENSOR (see protobuf/debug_event.proto for a\n list of all the possible values of graph_debug_mode). For the other debug modes,\n the input tensor should be produced by an additional op or subgraph that\n computes summary information about one or more tensors.\n\n Args:\n input: A `Tensor`. Input tensor, non-Reference type\n tfdbg_context_id: An optional `string`. Defaults to `\"\"`.\n A tfdbg-generated ID for the context that the op belongs to,\n e.g., a concrete compiled tf.function.\n op_name: An optional `string`. Defaults to `\"\"`.\n Optional. Name of the op that the debug op is concerned with.\n Used only for single-tensor trace.\n output_slot: An optional `int`. Defaults to `-1`.\n Optional. Output slot index of the tensor that the debug op\n is concerned with. Used only for single-tensor trace.\n tensor_debug_mode: An optional `int`. Defaults to `-1`.\n TensorDebugMode enum value. See debug_event.proto for details.\n debug_urls: An optional list of `strings`. Defaults to `[]`.\n List of URLs to debug targets, e.g., file:///foo/tfdbg_dump.\n circular_buffer_size: An optional `int`. Defaults to `1000`.\n tfdbg_run_id: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Debug Identity V2 Op.", "type": "API"}, {"name": "tf.raw_ops.DebugNanCount", "docs": "Debug NaN Value Counter Op.\n\n Counts number of NaNs in the input tensor, for debugging.\n\n Args:\n input: A `Tensor`. Input tensor, non-Reference type.\n device_name: An optional `string`. Defaults to `\"\"`.\n tensor_name: An optional `string`. Defaults to `\"\"`.\n Name of the input tensor.\n debug_urls: An optional list of `strings`. Defaults to `[]`.\n List of URLs to debug targets, e.g.,\n file:///foo/tfdbg_dump, grpc:://localhost:11011.\n gated_grpc: An optional `bool`. Defaults to `False`.\n Whether this op will be gated. If any of the debug_urls of this\n debug node is of the grpc:// scheme, when the value of this attribute is set\n to True, the data will not actually be sent via the grpc stream unless this\n debug op has been enabled at the debug_url. If all of the debug_urls of this\n debug node are of the grpc:// scheme and the debug op is enabled at none of\n them, the output will be an empty Tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Debug NaN Value Counter Op.", "type": "API"}, {"name": "tf.raw_ops.DebugNumericSummary", "docs": "Debug Numeric Summary Op.\n\n Provide a basic summary of numeric value types, range and distribution.\n\n output: A double tensor of shape [14 + nDimensions], where nDimensions is the\n number of dimensions of the tensor's shape. The elements of output are:\n [0]: is initialized (1.0) or not (0.0).\n [1]: total number of elements\n [2]: NaN element count\n [3]: generalized -inf count: elements <= lower_bound. lower_bound is -inf by\n default.\n [4]: negative element count (excluding -inf), if lower_bound is the default\n -inf. Otherwise, this is the count of elements > lower_bound and < 0.\n [5]: zero element count\n [6]: positive element count (excluding +inf), if upper_bound is the default\n +inf. Otherwise, this is the count of elements < upper_bound and > 0.\n [7]: generalized +inf count, elements >= upper_bound. upper_bound is +inf by\n default.\n Output elements [1:8] are all zero, if the tensor is uninitialized.\n [8]: minimum of all non-inf and non-NaN elements.\n If uninitialized or no such element exists: +inf.\n [9]: maximum of all non-inf and non-NaN elements.\n If uninitialized or no such element exists: -inf.\n [10]: mean of all non-inf and non-NaN elements.\n If uninitialized or no such element exists: NaN.\n [11]: variance of all non-inf and non-NaN elements.\n If uninitialized or no such element exists: NaN.\n [12]: Data type of the tensor encoded as an enum integer. See the DataType\n proto for more details.\n [13]: Number of dimensions of the tensor (ndims).\n [14+]: Sizes of the dimensions.\n\n Args:\n input: A `Tensor`. Input tensor, non-Reference type.\n device_name: An optional `string`. Defaults to `\"\"`.\n tensor_name: An optional `string`. Defaults to `\"\"`.\n Name of the input tensor.\n debug_urls: An optional list of `strings`. Defaults to `[]`.\n List of URLs to debug targets, e.g.,\n file:///foo/tfdbg_dump, grpc:://localhost:11011.\n lower_bound: An optional `float`. Defaults to `float('-inf')`.\n (float) The lower bound <= which values will be included in the\n generalized -inf count. Default: -inf.\n upper_bound: An optional `float`. Defaults to `float('inf')`.\n (float) The upper bound >= which values will be included in the\n generalized +inf count. Default: +inf.\n mute_if_healthy: An optional `bool`. Defaults to `False`.\n (bool) Do not send data to the debug URLs unless at least one\n of elements [2], [3] and [7] (i.e., the nan count and the generalized -inf and\n inf counts) is non-zero.\n gated_grpc: An optional `bool`. Defaults to `False`.\n Whether this op will be gated. If any of the debug_urls of this\n debug node is of the grpc:// scheme, when the value of this attribute is set\n to True, the data will not actually be sent via the grpc stream unless this\n debug op has been enabled at the debug_url. If all of the debug_urls of this\n debug node are of the grpc:// scheme and the debug op is enabled at none of\n them, the output will be an empty Tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float64`.\n ", "desc": "Debug Numeric Summary Op.", "type": "API"}, {"name": "tf.raw_ops.DebugNumericSummaryV2", "docs": "Debug Numeric Summary V2 Op.\n\n Computes a numeric summary of the input tensor. The shape of the output\n depends on the tensor_debug_mode attribute.\n This op is used internally by TensorFlow Debugger (tfdbg) v2.\n\n Args:\n input: A `Tensor`. Input tensor, to be summarized by the op.\n output_dtype: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n Optional. The type of the output. Can be float32 or float64 (default: float32).\n tensor_debug_mode: An optional `int`. Defaults to `-1`.\n Tensor debug mode: the mode in which the input tensor is summarized\n by the op. See the TensorDebugMode enum in\n tensorflow/core/protobuf/debug_event.proto for details.\n\n Supported values:\n 2 (CURT_HEALTH): Output a float32/64 tensor of shape [2]. The 1st\n element is the tensor_id, if provided, and -1 otherwise. The 2nd\n element is a bit which is set to 1 if the input tensor has an\n infinity or nan value, or zero otherwise.\n\n 3 (CONCISE_HEALTH): Output a float32/64 tensor of shape [5]. The 1st\n element is the tensor_id, if provided, and -1 otherwise. The\n remaining four slots are the total number of elements, -infs,\n +infs, and nans in the input tensor respectively.\n\n 4 (FULL_HEALTH): Output a float32/64 tensor of shape [11]. The 1st\n element is the tensor_id, if provided, and -1 otherwise. The 2nd\n element is the device_id, if provided, and -1 otherwise. The 3rd\n element holds the datatype value of the input tensor as according\n to the enumerated type in tensorflow/core/framework/types.proto.\n The remaining elements hold the total number of elements, -infs,\n +infs, nans, negative finite numbers, zeros, and positive finite\n numbers in the input tensor respectively.\n\n 5 (SHAPE): Output a float32/64 tensor of shape [10]. The 1st\n element is the tensor_id, if provided, and -1 otherwise. The 2nd\n element holds the datatype value of the input tensor as according\n to the enumerated type in tensorflow/core/framework/types.proto.\n The 3rd element holds the rank of the tensor. The 4th element holds\n the number of elements within the tensor. Finally the remaining 6\n elements hold the shape of the tensor. If the rank of the tensor\n is lower than 6, the shape is right padded with zeros. If the rank\n is greater than 6, the head of the shape is truncated.\n\n 6 (FULL_NUMERICS): Output a float32/64 tensor of shape [22]. The 1st\n element is the tensor_id, if provided, and -1 otherwise. The 2nd\n element is the device_id, if provided, and -1 otherwise. The 3rd\n element holds the datatype value of the input tensor as according\n to the enumerated type in tensorflow/core/framework/types.proto.\n The 4th element holds the rank of the tensor. The 5th to 11th\n elements hold the shape of the tensor. If the rank of the tensor\n is lower than 6, the shape is right padded with zeros. If the rank\n is greater than 6, the head of the shape is truncated. The 12th to\n 18th elements hold the number of elements, -infs, +infs, nans,\n denormal floats, negative finite numbers, zeros, and positive\n finite numbers in the input tensor respectively. The final four\n elements hold the min value, max value, mean, and variance of the\n input tensor.\n\n 8 (REDUCE_INF_NAN_THREE_SLOTS): Output a float32/64 tensor of shape\n [3]. The 1st element is -inf if any elements of the input tensor\n is -inf, or zero otherwise. The 2nd element is +inf if any elements\n of the input tensor is +inf, or zero otherwise. The 3rd element is\n nan if any element of the input tensor is nan, or zero otherwise.\n tensor_id: An optional `int`. Defaults to `-1`.\n Optional. An integer identifier for the tensor being summarized by this op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_dtype`.\n ", "desc": "Debug Numeric Summary V2 Op.", "type": "API"}, {"name": "tf.raw_ops.DecodeAndCropJpeg", "docs": "Decode and Crop a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n It is equivalent to a combination of decode and crop, but much faster by only\n decoding partial jpeg image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n crop_window: A `Tensor` of type `int32`.\n 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode and Crop a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeBase64", "docs": "Decode web-safe base64-encoded strings.\n\n Input may or may not have padding at the end. See\n [EncodeBase64](https://www.tensorflow.org/api_docs/python/tf/io/encode_base64)\n for padding. Web-safe means that input must use - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Base64 strings to decode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decode web-safe base64-encoded strings.", "type": "API"}, {"name": "tf.raw_ops.DecodeBmp", "docs": "Decode the first frame of a BMP-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the BMP-encoded image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The BMP-encoded image.\n channels: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the first frame of a BMP-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeCompressed", "docs": "Decompress strings.\n\n This op decompresses each element of the `bytes` input `Tensor`, which\n is assumed to be compressed using the given `compression_type`.\n\n The `output` is a string `Tensor` of the same shape as `bytes`,\n each element containing the decompressed data from the corresponding\n element in `bytes`.\n\n Args:\n bytes: A `Tensor` of type `string`.\n A Tensor of string which is compressed.\n compression_type: An optional `string`. Defaults to `\"\"`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Decompress strings.", "type": "API"}, {"name": "tf.raw_ops.DecodeCSV", "docs": "Convert CSV records to tensors. Each column maps to one tensor.\n\n RFC 4180 format is expected for the CSV records.\n (https://tools.ietf.org/html/rfc4180)\n Note that we allow leading and trailing spaces with int or float field.\n\n Args:\n records: A `Tensor` of type `string`.\n Each string is a record/row in the csv and all records should have\n the same format.\n record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.\n One tensor per column of the input record, with either a\n scalar default value for that column or an empty vector if the column is\n required.\n field_delim: An optional `string`. Defaults to `\",\"`.\n char delimiter to separate fields in a record.\n use_quote_delim: An optional `bool`. Defaults to `True`.\n If false, treats double quotation marks as regular\n characters inside of the string fields (ignoring RFC 4180, Section 2,\n Bullet 5).\n na_value: An optional `string`. Defaults to `\"\"`.\n Additional string to recognize as NA/NaN.\n select_cols: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `record_defaults`.\n ", "desc": "Convert CSV records to tensors. Each column maps to one tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeGif", "docs": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.\n\n GIF images with frame or transparency compression are not supported.\n On Linux and MacOS systems, convert animated GIFs from compressed to\n uncompressed by running:\n\n convert $src.gif -coalesce $dst.gif\n\n This op also supports decoding JPEGs and PNGs, though it is cleaner to use\n `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The GIF-encoded image.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode the frame(s) of a GIF-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeImage", "docs": "Function for decode_bmp, decode_gif, decode_jpeg, and decode_png.\n\n Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the\n appropriate operation to convert the input bytes string into a Tensor of type\n dtype.\n\n *NOTE*: decode_gif returns a 4-D array [num_frames, height, width, 3], as\n opposed to decode_bmp, decode_jpeg and decode_png, which return 3-D arrays\n [height, width, num_channels]. Make sure to take this into account when\n constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or\n PNG files. Alternately, set the expand_animations argument of this function to\n False, in which case the op will return 3-dimensional tensors and will truncate\n animated GIF files to the first frame.\n\n *NOTE*: If the first frame of an animated GIF does not occupy the entire\n canvas (maximum frame width x maximum frame height), then it fills the\n unoccupied areas (in the first frame) with zeros (black). For frames after the\n first frame that does not occupy the entire canvas, it uses the previous\n frame to fill the unoccupied areas.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The encoded image bytes.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16, tf.float32`. Defaults to `tf.uint8`.\n The desired DType of the returned Tensor.\n expand_animations: An optional `bool`. Defaults to `True`.\n Controls the output shape of the returned op. If True, the returned op will\n produce a 3-D tensor for PNG, JPEG, and BMP files; and a 4-D tensor for all\n GIFs, whether animated or not. If, False, the returned op will produce a 3-D\n tensor for all file types and will truncate animated GIFs to the first frame.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Function for decode_bmp, decode_gif, decode_jpeg, and decode_png.", "type": "API"}, {"name": "tf.raw_ops.DecodeJpeg", "docs": "Decode a JPEG-encoded image to a uint8 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the JPEG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n\n If needed, the JPEG-encoded image is transformed to match the requested number\n of color channels.\n\n The attr `ratio` allows downscaling the image by an integer factor during\n decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than\n downscaling the image later.\n\n\n This op also supports decoding PNGs and non-animated GIFs since the interface is\n the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n ratio: An optional `int`. Defaults to `1`. Downscaling ratio.\n fancy_upscaling: An optional `bool`. Defaults to `True`.\n If true use a slower but nicer upscaling of the\n chroma planes (yuv420/422 only).\n try_recover_truncated: An optional `bool`. Defaults to `False`.\n If true try to recover an image from truncated input.\n acceptable_fraction: An optional `float`. Defaults to `1`.\n The minimum required fraction of lines before a truncated\n input is accepted.\n dct_method: An optional `string`. Defaults to `\"\"`.\n string specifying a hint about the algorithm used for\n decompression. Defaults to \"\" which maps to a system-specific\n default. Currently valid values are [\"INTEGER_FAST\",\n \"INTEGER_ACCURATE\"]. The hint may be ignored (e.g., the internal\n jpeg library changes to a version that does not have that specific\n option.)\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Decode a JPEG-encoded image to a uint8 tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeJSONExample", "docs": "Convert JSON-encoded Example records to binary protocol buffer strings.\n\n \n Note: This is **not** a general purpose JSON parsing op.\n\n This op converts JSON-serialized\n `tf.train.Example` (created with `json_format.MessageToJson`, following the\n [standard JSON mapping](https://developers.google.com/protocol-buffers/docs/proto3#json))\n to a binary-serialized `tf.train.Example` (equivalent to\n `Example.SerializeToString()`) suitable for conversion to tensors with\n `tf.io.parse_example`.\n\n Args:\n json_examples: A `Tensor` of type `string`.\n Each string is a JSON object serialized according to the JSON\n mapping of the Example proto.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Convert JSON-encoded Example records to binary protocol buffer strings.", "type": "API"}, {"name": "tf.raw_ops.DecodePaddedRaw", "docs": "Reinterpret the bytes of a string as a vector of numbers.\n\n Args:\n input_bytes: A `Tensor` of type `string`. Tensor of string to be decoded.\n fixed_length: A `Tensor` of type `int32`.\n Length in bytes for each element of the decoded output. Must be a multiple\n of the size of the output type.\n out_type: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64, tf.bfloat16`.\n little_endian: An optional `bool`. Defaults to `True`.\n Whether the input `input_bytes` is in little-endian order. Ignored for\n `out_type` values that are stored in a single byte, like `uint8`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Reinterpret the bytes of a string as a vector of numbers.", "type": "API"}, {"name": "tf.raw_ops.DecodePng", "docs": "Decode a PNG-encoded image to a uint8 or uint16 tensor.\n\n The attr `channels` indicates the desired number of color channels for the\n decoded image.\n\n Accepted values are:\n\n * 0: Use the number of channels in the PNG-encoded image.\n * 1: output a grayscale image.\n * 3: output an RGB image.\n * 4: output an RGBA image.\n\n If needed, the PNG-encoded image is transformed to match the requested number\n of color channels.\n\n This op also supports decoding JPEGs and non-animated GIFs since the interface\n is the same, though it is cleaner to use `tf.io.decode_image`.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The PNG-encoded image.\n channels: An optional `int`. Defaults to `0`.\n Number of color channels for the decoded image.\n dtype: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Decode a PNG-encoded image to a uint8 or uint16 tensor.", "type": "API"}, {"name": "tf.raw_ops.DecodeProtoV2", "docs": "The op extracts fields from a serialized protocol buffers message into tensors.\n\n Note: This API is designed for orthogonality rather than human-friendliness. It\n can be used to parse input protos by hand, but it is intended for use in\n generated code.\n\n The `decode_proto` op extracts fields from a serialized protocol buffers\n message into tensors. The fields in `field_names` are decoded and converted\n to the corresponding `output_types` if possible.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n Each output tensor is a dense tensor. This means that it is padded to hold\n the largest number of repeated elements seen in the input minibatch. (The\n shape is also padded by one to prevent zero-sized dimensions). The actual\n repeat counts for each example in the minibatch can be found in the `sizes`\n output. In many cases the output of `decode_proto` is fed immediately into\n tf.squeeze if missing values are not a concern. When using tf.squeeze, always\n pass the squeeze dimension explicitly to avoid surprises.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n - `map` fields are not directly decoded. They are treated as `repeated` fields,\n of the appropriate entry type. The proto-compiler defines entry types for each\n map field. The type-name is the field name, converted to \"CamelCase\" with\n \"Entry\" appended. The `tf.train.Features.FeatureEntry` message is an example of\n one of these implicit `Entry` types.\n\n - `enum` fields should be read as int32.\n\n Both binary and text proto serializations are supported, and can be\n chosen using the `format` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Here is an example:\n\n The, internal, `Summary.Value` proto contains a\n `oneof {float simple_value; Image image; ...}`\n\n >>> from google.protobuf import text_format\n >>>\n >>> # A Summary.Value contains: oneof {float simple_value; Image image}\n >>> values = [\n ... \"simple_value: 2.2\",\n ... \"simple_value: 1.2\",\n ... \"image { height: 128 width: 512 }\",\n ... \"image { height: 256 width: 256 }\",]\n >>> values = [\n ... text_format.Parse(v, tf.compat.v1.Summary.Value()).SerializeToString()\n ... for v in values]\n\n The following can decode both fields from the serialized strings:\n\n >>> sizes, [simple_value, image] = tf.io.decode_proto(\n ... values,\n ... tf.compat.v1.Summary.Value.DESCRIPTOR.full_name,\n ... field_names=['simple_value', 'image'],\n ... output_types=[tf.float32, tf.string])\n\n The `sizes` has the same shape as the input, with an additional axis across the\n fields that were decoded. Here the first column of `sizes` is the size of the\n decoded `simple_value` field:\n\n >>> print(sizes)\n tf.Tensor(\n [[1 0]\n [1 0]\n [0 1]\n [0 1]], shape=(4, 2), dtype=int32)\n\n The result tensors each have one more index than the input byte-strings.\n The valid elements of each result tensor are indicated by\n the appropriate column of `sizes`. The invalid elements are padded with a\n default value:\n\n >>> print(simple_value)\n tf.Tensor(\n [[2.2]\n [1.2]\n [0. ]\n [0. ]], shape=(4, 1), dtype=float32)\n\n Nested protos are extracted as string tensors:\n\n >>> print(image.dtype)\n \n >>> print(image.shape.as_list())\n [4, 1]\n\n To convert to a `tf.RaggedTensor` representation use:\n\n >>> tf.RaggedTensor.from_tensor(simple_value, lengths=sizes[:, 0]).to_list()\n [[2.2], [1.2], [], []]\n\n Args:\n bytes: A `Tensor` of type `string`.\n Tensor of serialized protos with shape `batch_shape`.\n message_type: A `string`. Name of the proto message type to decode.\n field_names: A list of `strings`.\n List of strings containing proto field names. An extension field can be decoded\n by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME.\n output_types: A list of `tf.DTypes`.\n List of TF types to use for the respective field in field_names.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n Either the special value `local://` or a path to a file containing\n a serialized `FileDescriptorSet`.\n message_format: An optional `string`. Defaults to `\"binary\"`.\n Either `binary` or `text`.\n sanitize: An optional `bool`. Defaults to `False`.\n Whether to sanitize the result or not.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sizes, values).\n\n sizes: A `Tensor` of type `int32`.\n values: A list of `Tensor` objects of type `output_types`.\n ", "desc": "The op extracts fields from a serialized protocol buffers message into tensors.", "type": "API"}, {"name": "tf.raw_ops.DecodeRaw", "docs": "Reinterpret the bytes of a string as a vector of numbers.\n\n Args:\n bytes: A `Tensor` of type `string`.\n All the elements must have the same length.\n out_type: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint16, tf.uint8, tf.int16, tf.int8, tf.int64, tf.complex64, tf.complex128, tf.bool, tf.bfloat16`.\n little_endian: An optional `bool`. Defaults to `True`.\n Whether the input `bytes` are in little-endian order.\n Ignored for `out_type` values that are stored in a single byte like\n `uint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Reinterpret the bytes of a string as a vector of numbers.", "type": "API"}, {"name": "tf.raw_ops.DecodeWav", "docs": "Decode a 16-bit PCM WAV file to a float tensor.\n\n The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.\n\n When desired_channels is set, if the input contains fewer channels than this\n then the last channel will be duplicated to give the requested number, else if\n the input has more channels than requested then the additional channels will be\n ignored.\n\n If desired_samples is set, then the audio will be cropped or padded with zeroes\n to the requested length.\n\n The first output contains a Tensor with the content of the audio samples. The\n lowest dimension will be the number of channels, and the second will be the\n number of samples. For example, a ten-sample-long stereo WAV file should give an\n output shape of [10, 2].\n\n Args:\n contents: A `Tensor` of type `string`.\n The WAV-encoded audio, usually from a file.\n desired_channels: An optional `int`. Defaults to `-1`.\n Number of sample channels wanted.\n desired_samples: An optional `int`. Defaults to `-1`.\n Length of audio requested.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (audio, sample_rate).\n\n audio: A `Tensor` of type `float32`.\n sample_rate: A `Tensor` of type `int32`.\n ", "desc": "Decode a 16-bit PCM WAV file to a float tensor.", "type": "API"}, {"name": "tf.raw_ops.DeepCopy", "docs": "Makes a copy of `x`.\n\n Args:\n x: A `Tensor`. The source tensor of type `T`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Makes a copy of `x`.", "type": "API"}, {"name": "tf.raw_ops.DeleteIterator", "docs": "A container for an iterator resource.\n\n Args:\n handle: A `Tensor` of type `resource`. A handle to the iterator to delete.\n deleter: A `Tensor` of type `variant`. A variant deleter.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "A container for an iterator resource.", "type": "API"}, {"name": "tf.raw_ops.DeleteMemoryCache", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DeleteMultiDeviceIterator", "docs": "A container for an iterator resource.\n\n Args:\n multi_device_iterator: A `Tensor` of type `resource`.\n A handle to the multi device iterator to delete.\n iterators: A list of `Tensor` objects with type `resource`.\n A list of iterator handles (unused). This is added so that automatic control dependencies get added during function tracing that ensure this op runs after all the dependent iterators are deleted.\n deleter: A `Tensor` of type `variant`. A variant deleter.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "A container for an iterator resource.", "type": "API"}, {"name": "tf.raw_ops.DeleteRandomSeedGenerator", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DeleteSeedGenerator", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type `resource`.\n deleter: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DeleteSessionTensor", "docs": "Delete the tensor specified by its handle in the session.\n\n Args:\n handle: A `Tensor` of type `string`.\n The handle for a tensor stored in the session state.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Delete the tensor specified by its handle in the session.", "type": "API"}, {"name": "tf.raw_ops.DenseBincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n Outputs a vector with length `size` and the same dtype as `weights`. If\n `weights` are empty, then index `i` stores the number of times the value `i` is\n counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of\n the value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Values in `arr` outside of the range [0, size) are ignored.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1D or 2D int `Tensor`.\n size: A `Tensor`. Must have the same type as `input`.\n non-negative int scalar `Tensor`.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n is an int32, int64, float32, or float64 `Tensor` with the same\n shape as `arr`, or a length-0 `Tensor`, in which case it acts as all weights\n equal to 1.\n binary_output: An optional `bool`. Defaults to `False`.\n bool; Whether the kernel should count the appearance or number of occurrences.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.raw_ops.DenseCountSparseOutput", "docs": "Performs sparse-output bin counting for a tf.tensor input.\n\n Counts the number of times each value occurs in the input.\n\n Args:\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Tensor containing data to count.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n A Tensor of the same shape as indices containing per-index weight values. May\n also be the empty tensor if no weights are used.\n binary_output: A `bool`.\n Whether to output the number of occurrences of each value or 1.\n minlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Minimum value to count. Can be set to -1 for no minimum.\n maxlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Maximum value to count. Can be set to -1 for no maximum.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_dense_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `weights`.\n output_dense_shape: A `Tensor` of type `int64`.\n ", "desc": "Performs sparse-output bin counting for a tf.tensor input.", "type": "API"}, {"name": "tf.raw_ops.DenseToCSRSparseMatrix", "docs": "Converts a dense tensor to a (possibly batched) CSRSparseMatrix.\n\n Args:\n dense_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`.\n A Dense tensor.\n indices: A `Tensor` of type `int64`. Indices of nonzero elements.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Converts a dense tensor to a (possibly batched) CSRSparseMatrix.", "type": "API"}, {"name": "tf.raw_ops.DenseToDenseSetOperation", "docs": "Applies set operation along last dimension of 2 `Tensor` inputs.\n\n See SetOperationOp::SetOperationFromContext for values of `set_operation`.\n\n Output `result` is a `SparseTensor` represented by `result_indices`,\n `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this\n has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth`\n dimension contains the result of `set_operation` applied to the corresponding\n `[0...n-1]` dimension of `set`.\n\n Args:\n set1: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`.\n `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.\n Dimension `n` contains values in a set, duplicates are allowed but ignored.\n set2: A `Tensor`. Must have the same type as `set1`.\n `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set1`.\n Dimension `n` contains values in a set, duplicates are allowed but ignored.\n set_operation: A `string`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (result_indices, result_values, result_shape).\n\n result_indices: A `Tensor` of type `int64`.\n result_values: A `Tensor`. Has the same type as `set1`.\n result_shape: A `Tensor` of type `int64`.\n ", "desc": "Applies set operation along last dimension of 2 `Tensor` inputs.", "type": "API"}, {"name": "tf.raw_ops.DenseToSparseBatchDataset", "docs": "Creates a dataset that batches input elements into a SparseTensor.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A handle to an input dataset. Must have a single component.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch.\n row_shape: A `Tensor` of type `int64`.\n A vector representing the dense shape of each row in the produced\n SparseTensor. The shape may be partially specified, using `-1` to indicate\n that a particular dimension should use the maximum size of all batch elements.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches input elements into a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.DenseToSparseSetOperation", "docs": "Applies set operation along last dimension of `Tensor` and `SparseTensor`.\n\n See SetOperationOp::SetOperationFromContext for values of `set_operation`.\n\n Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`,\n and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same\n as `set1`. Dimension `n` contains values in a set, duplicates are allowed but\n ignored.\n\n If `validate_indices` is `True`, this op validates the order and range of `set2`\n indices.\n\n Output `result` is a `SparseTensor` represented by `result_indices`,\n `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this\n has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth`\n dimension contains the result of `set_operation` applied to the corresponding\n `[0...n-1]` dimension of `set`.\n\n Args:\n set1: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`.\n `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.\n Dimension `n` contains values in a set, duplicates are allowed but ignored.\n set2_indices: A `Tensor` of type `int64`.\n 2D `Tensor`, indices of a `SparseTensor`. Must be in row-major\n order.\n set2_values: A `Tensor`. Must have the same type as `set1`.\n 1D `Tensor`, values of a `SparseTensor`. Must be in row-major\n order.\n set2_shape: A `Tensor` of type `int64`.\n 1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must\n be the same as the 1st `n-1` dimensions of `set1`, `result_shape[n]` is the\n max set size across `n-1` dimensions.\n set_operation: A `string`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (result_indices, result_values, result_shape).\n\n result_indices: A `Tensor` of type `int64`.\n result_values: A `Tensor`. Has the same type as `set1`.\n result_shape: A `Tensor` of type `int64`.\n ", "desc": "Applies set operation along last dimension of `Tensor` and `SparseTensor`.", "type": "API"}, {"name": "tf.raw_ops.DepthToSpace", "docs": "DepthToSpace for tensors of type T.\n\n Rearranges data from depth into blocks of spatial data.\n This is the reverse transformation of SpaceToDepth. More specifically,\n this op outputs a copy of the input tensor where values from the `depth`\n dimension are moved in spatial blocks to the `height` and `width` dimensions.\n The attr `block_size` indicates the input block size and how the data is moved.\n\n * Chunks of data of size `block_size * block_size` from depth are rearranged\n into non-overlapping blocks of size `block_size x block_size`\n * The width the output tensor is `input_depth * block_size`, whereas the\n height is `input_height * block_size`.\n * The Y, X coordinates within each block of the output image are determined\n by the high order component of the input channel index.\n * The depth of the input tensor must be divisible by\n `block_size * block_size`.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates\n within the input image, bX, bY means coordinates\n within the output block, oC means output channels).\n The output would be the input transposed to the following layout:\n n,iY,bY,iX,bX,oC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 1, 1, 4]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1, 2, 3, 4]]]]\n\n ```\n\n This operation will output a tensor of shape `[1, 2, 2, 1]`:\n\n ```\n [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`,\n the corresponding output will have 2x2 elements and will have a depth of\n 1 channel (1 = `4 / (block_size * block_size)`).\n The output element shape is `[2, 2, 1]`.\n\n For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.\n\n ```\n x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n This operation, for block size of 2, will return the following tensor of shape\n `[1, 2, 2, 3]`\n\n ```\n [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n\n ```\n\n Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 4 4 1]`:\n\n ```\n x = [[[ [1], [2], [5], [6]],\n [ [3], [4], [7], [8]],\n [ [9], [10], [13], [14]],\n [ [11], [12], [15], [16]]]]\n\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`.\n The size of the spatial block, same as in Space2Depth.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "DepthToSpace for tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.DepthwiseConv2dNative", "docs": "Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.\n\n Given an input tensor of shape `[batch, in_height, in_width, in_channels]`\n and a filter / kernel tensor of shape\n `[filter_height, filter_width, in_channels, channel_multiplier]`, containing\n `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies\n a different filter to each input channel (expanding from 1 channel to\n `channel_multiplier` channels for each), then concatenates the results\n together. Thus, the output has `in_channels * channel_multiplier` channels.\n\n ```\n for k in 0..in_channels-1\n for q in 0..channel_multiplier-1\n output[b, i, j, k * channel_multiplier + q] =\n sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *\n filter[di, dj, k, q]\n ```\n\n Must have `strides[0] = strides[3] = 1`. For the most common case of the same\n horizontal and vertices strides, `strides = [1, stride, stride, 1]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n filter: A `Tensor`. Must have the same type as `input`.\n strides: A list of `ints`.\n 1-D of length 4. The stride of the sliding window for each dimension\n of `input`.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, height, width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each filter\n element on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.", "type": "API"}, {"name": "tf.raw_ops.DepthwiseConv2dNativeBackpropFilter", "docs": "Computes the gradients of depthwise convolution with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape based on `data_format`. For example, if\n `data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height,\n in_width, in_channels]` tensor.\n filter_sizes: A `Tensor` of type `int32`.\n An integer vector representing the tensor shape of `filter`,\n where `filter` is a 4-D\n `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n 4-D with shape based on `data_format`.\n For example, if `data_format` is 'NHWC' then\n out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, height, width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each filter\n element on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the filter.", "type": "API"}, {"name": "tf.raw_ops.DepthwiseConv2dNativeBackpropInput", "docs": "Computes the gradients of depthwise convolution with respect to the input.\n\n Args:\n input_sizes: A `Tensor` of type `int32`.\n An integer vector representing the shape of `input`, based\n on `data_format`. For example, if `data_format` is 'NHWC' then\n `input` is a 4-D `[batch, height, width, channels]` tensor.\n filter: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 4-D with shape\n `[filter_height, filter_width, in_channels, depthwise_multiplier]`.\n out_backprop: A `Tensor`. Must have the same type as `filter`.\n 4-D with shape based on `data_format`.\n For example, if `data_format` is 'NHWC' then\n out_backprop shape is `[batch, out_height, out_width, out_channels]`.\n Gradients w.r.t. the output of the convolution.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n of the convolution.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, height, width, channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, channels, height, width].\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each filter\n element on that dimension. The dimension order is determined by the value of\n `data_format`, see above for details. Dilations in the batch and depth\n dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `filter`.\n ", "desc": "Computes the gradients of depthwise convolution with respect to the input.", "type": "API"}, {"name": "tf.raw_ops.Dequantize", "docs": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.\n\n [min_range, max_range] are scalar floats that specify the range for\n the output. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n if T == qint8: in[i] += (range(T) + 1)/ 2.0\n out[i] = min_range + (in[i]* (max_range - min_range) / range(T))\n ```\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n If the input comes from a QuantizedRelu6, the output type is\n quint8 (range of 0-255) but the possible range of QuantizedRelu6 is\n 0-6. The min_range and max_range values are therefore 0.0 and 6.0.\n Dequantize on quint8 will take each value, cast to float, and multiply\n by 6 / 255.\n Note that if quantizedtype is qint8, the operation will additionally add\n each value by 128 prior to casting.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```c++\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = range / num_discrete_values\n const double offset_input = static_cast(input) - lowest_quantized;\n result = range_min + ((input - numeric_limits::min()) * range_scale)\n ```\n\n If the mode is `SCALED`, dequantization is performed by multiplying each\n input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).\n\n The scaling_factor is determined from `min_range`, `max_range`, and\n `narrow_range` in a way that is compatible with `QuantizeAndDequantize{V2|V3}`\n and `QuantizeV2`, using the following algorithm:\n\n ```c++\n\n const int min_expected_T = std::numeric_limits::min() +\n (narrow_range ? 1 : 0);\n const int max_expected_T = std::numeric_limits::max();\n const float max_expected_T = std::numeric_limits::max();\n\n const float scale_factor =\n (std::numeric_limits::min() == 0) ? (max_range / max_expected_T)\n : std::max(min_range / min_expected_T,\n max_range / max_expected_T);\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_range: A `Tensor` of type `float32`.\n The minimum scalar value possibly produced for the input.\n max_range: A `Tensor` of type `float32`.\n The maximum scalar value possibly produced for the input.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n dtype: An optional `tf.DType` from: `tf.bfloat16, tf.float32`. Defaults to `tf.float32`.\n Type of the output tensor. Currently Dequantize supports float and bfloat16.\n If 'dtype' is 'bfloat16', it only supports 'MIN_COMBINED' mode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Dequantize the 'input' tensor into a float or bfloat16 Tensor.", "type": "API"}, {"name": "tf.raw_ops.DeserializeIterator", "docs": "Converts the given variant tensor to an iterator and stores it in the given resource.\n\n Args:\n resource_handle: A `Tensor` of type `resource`.\n A handle to an iterator resource.\n serialized: A `Tensor` of type `variant`.\n A variant tensor storing the state of the iterator contained in the\n resource.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Converts the given variant tensor to an iterator and stores it in the given resource.", "type": "API"}, {"name": "tf.raw_ops.DeserializeManySparse", "docs": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.\n\n The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where\n `N` is the minibatch size and the rows correspond to packed outputs of\n `SerializeSparse`. The ranks of the original `SparseTensor` objects\n must all match. When the final `SparseTensor` is created, it has rank one\n higher than the ranks of the incoming `SparseTensor` objects\n (they have been concatenated along a new row dimension).\n\n The output `SparseTensor` object's shape values for all dimensions but the\n first are the max across the input `SparseTensor` objects' shape values\n for the corresponding dimensions. Its first shape value is `N`, the minibatch\n size.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `SparseReorder` to restore index ordering.\n\n For example, if the serialized input is a `[2 x 3]` matrix representing two\n original `SparseTensor` objects:\n\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n\n and\n\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n\n then the final deserialized `SparseTensor` will be:\n\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n\n Args:\n serialized_sparse: A `Tensor` of type `string`.\n 2-D, The `N` serialized `SparseTensor` objects.\n Must have 3 columns.\n dtype: A `tf.DType`. The `dtype` of the serialized `SparseTensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape).\n\n sparse_indices: A `Tensor` of type `int64`.\n sparse_values: A `Tensor` of type `dtype`.\n sparse_shape: A `Tensor` of type `int64`.\n ", "desc": "Deserialize and concatenate `SparseTensors` from a serialized minibatch.", "type": "API"}, {"name": "tf.raw_ops.DeserializeSparse", "docs": "Deserialize `SparseTensor` objects.\n\n The input `serialized_sparse` must have the shape `[?, ?, ..., ?, 3]` where\n the last dimension stores serialized `SparseTensor` objects and the other N\n dimensions (N >= 0) correspond to a batch. The ranks of the original\n `SparseTensor` objects must all match. When the final `SparseTensor` is\n created, its rank is the rank of the incoming `SparseTensor` objects plus N;\n the sparse tensors have been concatenated along new dimensions, one for each\n batch.\n\n The output `SparseTensor` object's shape values for the original dimensions\n are the max across the input `SparseTensor` objects' shape values for the\n corresponding dimensions. The new dimensions match the size of the batch.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `SparseReorder` to restore index ordering.\n\n For example, if the serialized input is a `[2 x 3]` matrix representing two\n original `SparseTensor` objects:\n\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n\n and\n\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n\n then the final deserialized `SparseTensor` will be:\n\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n\n Args:\n serialized_sparse: A `Tensor`. Must be one of the following types: `string`, `variant`.\n The serialized `SparseTensor` objects. The last dimension\n must have 3 columns.\n dtype: A `tf.DType`. The `dtype` of the serialized `SparseTensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape).\n\n sparse_indices: A `Tensor` of type `int64`.\n sparse_values: A `Tensor` of type `dtype`.\n sparse_shape: A `Tensor` of type `int64`.\n ", "desc": "Deserialize `SparseTensor` objects.", "type": "API"}, {"name": "tf.raw_ops.DestroyResourceOp", "docs": "Deletes the resource specified by the handle.\n\n All subsequent operations using the resource will result in a NotFound\n error status.\n\n Args:\n resource: A `Tensor` of type `resource`. handle to the resource to delete.\n ignore_lookup_error: An optional `bool`. Defaults to `True`.\n whether to ignore the error when the resource\n doesn't exist.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Deletes the resource specified by the handle.", "type": "API"}, {"name": "tf.raw_ops.DestroyTemporaryVariable", "docs": "Destroys the temporary variable and returns its final value.\n\n Sets output to the value of the Tensor pointed to by 'ref', then destroys\n the temporary variable called 'var_name'.\n All other uses of 'ref' *must* have executed before this op.\n This is typically achieved by chaining the ref through each assign op, or by\n using control dependencies.\n\n Outputs the final value of the tensor pointed to by 'ref'.\n\n Args:\n ref: A mutable `Tensor`. A reference to the temporary variable tensor.\n var_name: A `string`.\n Name of the temporary variable, usually the name of the matching\n 'TemporaryVariable' op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `ref`.\n ", "desc": "Destroys the temporary variable and returns its final value.", "type": "API"}, {"name": "tf.raw_ops.DeviceIndex", "docs": "Return the index of device the op runs.\n\n Given a list of device names, this operation returns the index of the device\n this op runs. The length of the list is returned in two cases:\n (1) Device does not exist in the given device list.\n (2) It is in XLA compilation.\n\n Args:\n device_names: A list of `strings`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Return the index of device the op runs.", "type": "API"}, {"name": "tf.raw_ops.Diag", "docs": "Returns a diagonal tensor with a given diagonal values.\n\n Given a `diagonal`, this operation returns a tensor with the `diagonal` and\n everything else padded with zeros. The diagonal is computed as follows:\n\n Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of\n rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:\n\n `output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.\n\n For example:\n\n ```\n # 'diagonal' is [1, 2, 3, 4]\n tf.diag(diagonal) ==> [[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]]\n ```\n\n Args:\n diagonal: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n Rank k tensor where k is at most 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a diagonal tensor with a given diagonal values.", "type": "API"}, {"name": "tf.raw_ops.DiagPart", "docs": "Returns the diagonal part of the tensor.\n\n This operation returns a tensor with the `diagonal` part\n of the `input`. The `diagonal` part is computed as follows:\n\n Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a\n tensor of rank `k` with dimensions `[D1,..., Dk]` where:\n\n `diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.\n\n For example:\n\n ```\n # 'input' is [[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]]\n\n tf.diag_part(input) ==> [1, 2, 3, 4]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n Rank k tensor where k is even and not zero.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the diagonal part of the tensor.", "type": "API"}, {"name": "tf.raw_ops.Digamma", "docs": "Computes Psi, the derivative of Lgamma (the log of the absolute value of\n\n `Gamma(x)`), element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes Psi, the derivative of Lgamma (the log of the absolute value of", "type": "API"}, {"name": "tf.raw_ops.Dilation2D", "docs": "Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.\n\n The `input` tensor has shape `[batch, in_height, in_width, depth]` and the\n `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each\n input channel is processed independently of the others with its own structuring\n function. The `output` tensor has shape\n `[batch, out_height, out_width, depth]`. The spatial dimensions of the output\n tensor depend on the `padding` algorithm. We currently only support the default\n \"NHWC\" `data_format`.\n\n In detail, the grayscale morphological 2-D dilation is the max-sum correlation\n (for consistency with `conv2d`, we use unmirrored filters):\n\n output[b, y, x, c] =\n max_{dy, dx} input[b,\n strides[1] * y + rates[1] * dy,\n strides[2] * x + rates[2] * dx,\n c] +\n filter[dy, dx, c]\n\n Max-pooling is a special case when the filter has size equal to the pooling\n kernel size and contains all zeros.\n\n Note on duality: The dilation of `input` by the `filter` is equal to the\n negation of the erosion of `-input` by the reflected `filter`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, in_height, in_width, depth]`.\n filter: A `Tensor`. Must have the same type as `input`.\n 3-D with shape `[filter_height, filter_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the input\n tensor. Must be: `[1, stride_height, stride_width, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n The input stride for atrous morphological dilation. Must be:\n `[1, rate_height, rate_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.", "type": "API"}, {"name": "tf.raw_ops.Dilation2DBackpropFilter", "docs": "Computes the gradient of morphological 2-D dilation with respect to the filter.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, in_height, in_width, depth]`.\n filter: A `Tensor`. Must have the same type as `input`.\n 3-D with shape `[filter_height, filter_width, depth]`.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, out_height, out_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The stride of the sliding window for each dimension of\n the input tensor. Must be: `[1, stride_height, stride_width, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The input stride for atrous morphological dilation.\n Must be: `[1, rate_height, rate_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradient of morphological 2-D dilation with respect to the filter.", "type": "API"}, {"name": "tf.raw_ops.Dilation2DBackpropInput", "docs": "Computes the gradient of morphological 2-D dilation with respect to the input.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, in_height, in_width, depth]`.\n filter: A `Tensor`. Must have the same type as `input`.\n 3-D with shape `[filter_height, filter_width, depth]`.\n out_backprop: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, out_height, out_width, depth]`.\n strides: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The stride of the sliding window for each dimension of\n the input tensor. Must be: `[1, stride_height, stride_width, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n 1-D of length 4. The input stride for atrous morphological dilation.\n Must be: `[1, rate_height, rate_width, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the gradient of morphological 2-D dilation with respect to the input.", "type": "API"}, {"name": "tf.raw_ops.DirectedInterleaveDataset", "docs": "A substitute for `InterleaveDataset` on a fixed list of `N` datasets.\n\n Args:\n selector_input_dataset: A `Tensor` of type `variant`.\n A dataset of scalar `DT_INT64` elements that determines which of the\n `N` data inputs should produce the next output element.\n data_input_datasets: A list of at least 1 `Tensor` objects with type `variant`.\n `N` datasets with the same type that will be interleaved according to\n the values of `selector_input_dataset`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n stop_on_empty_dataset: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "A substitute for `InterleaveDataset` on a fixed list of `N` datasets.", "type": "API"}, {"name": "tf.raw_ops.Div", "docs": "Returns x / y element-wise.\n\n *NOTE*: `Div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise.", "type": "API"}, {"name": "tf.raw_ops.DivNoNan", "docs": "Returns 0 if the denominator is zero.\n\n \n *NOTE*: `DivNoNan` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `bfloat16`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if the denominator is zero.", "type": "API"}, {"name": "tf.raw_ops.DrawBoundingBoxes", "docs": "Draw bounding boxes on a batch of images.\n\n Outputs a copy of `images` but draws on top of the pixels zero or more bounding\n boxes specified by the locations in `boxes`. The coordinates of the each\n bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The\n bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\n height of the underlying image.\n\n For example, if an image is 100 x 200 pixels (height x width) and the bounding\n box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of\n the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).\n\n Parts of the bounding box may fall outside the image.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `float32`, `half`.\n 4-D with shape `[batch, height, width, depth]`. A batch of images.\n boxes: A `Tensor` of type `float32`.\n 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding\n boxes.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Draw bounding boxes on a batch of images.", "type": "API"}, {"name": "tf.raw_ops.DrawBoundingBoxesV2", "docs": "Draw bounding boxes on a batch of images.\n\n Outputs a copy of `images` but draws on top of the pixels zero or more bounding\n boxes specified by the locations in `boxes`. The coordinates of the each\n bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The\n bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\n height of the underlying image.\n\n For example, if an image is 100 x 200 pixels (height x width) and the bounding\n box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of\n the bounding box will be `(40, 10)` to `(100, 50)` (in (x,y) coordinates).\n\n Parts of the bounding box may fall outside the image.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `float32`, `half`.\n 4-D with shape `[batch, height, width, depth]`. A batch of images.\n boxes: A `Tensor` of type `float32`.\n 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding\n boxes.\n colors: A `Tensor` of type `float32`.\n 2-D. A list of RGBA colors to cycle through for the boxes.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Draw bounding boxes on a batch of images.", "type": "API"}, {"name": "tf.raw_ops.DummyIterationCounter", "docs": "TODO: add doc.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DummyMemoryCache", "docs": "TODO: add doc.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DummySeedGenerator", "docs": "TODO: add doc.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.DynamicPartition", "docs": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.\n\n For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`\n becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`\n are placed in `outputs[i]` in lexicographic order of `js`, and the first\n dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.\n In detail,\n\n ```python\n outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]\n\n outputs[i] = pack([data[js, ...] for js if partitions[js] == i])\n ```\n\n `data.shape` must start with `partitions.shape`.\n\n For example:\n\n ```python\n # Scalar partitions.\n partitions = 1\n num_partitions = 2\n data = [10, 20]\n outputs[0] = [] # Empty with shape [0, 2]\n outputs[1] = [[10, 20]]\n\n # Vector partitions.\n partitions = [0, 0, 1, 1, 0]\n num_partitions = 2\n data = [10, 20, 30, 40, 50]\n outputs[0] = [10, 20, 50]\n outputs[1] = [30, 40]\n ```\n\n See `dynamic_stitch` for an example on how to merge partitions back.\n\n
\n \n
\n\n Args:\n data: A `Tensor`.\n partitions: A `Tensor` of type `int32`.\n Any shape. Indices in the range `[0, num_partitions)`.\n num_partitions: An `int` that is `>= 1`.\n The number of partitions to output.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_partitions` `Tensor` objects with the same type as `data`.\n ", "desc": "Partitions `data` into `num_partitions` tensors using indices from `partitions`.", "type": "API"}, {"name": "tf.raw_ops.DynamicStitch", "docs": "Interleave the values from the `data` tensors into a single tensor.\n\n Builds a merged tensor such that\n\n ```python\n merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]\n ```\n\n For example, if each `indices[m]` is scalar or vector, we have\n\n ```python\n # Scalar indices:\n merged[indices[m], ...] = data[m][...]\n\n # Vector indices:\n merged[indices[m][i], ...] = data[m][i, ...]\n ```\n\n Each `data[i].shape` must start with the corresponding `indices[i].shape`,\n and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we\n must have `data[i].shape = indices[i].shape + constant`. In terms of this\n `constant`, the output shape is\n\n merged.shape = [max(indices)] + constant\n\n Values are merged in order, so if an index appears in both `indices[m][i]` and\n `indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the\n merged result. If you do not need this guarantee, ParallelDynamicStitch might\n perform better on some devices.\n\n For example:\n\n ```python\n indices[0] = 6\n indices[1] = [4, 1]\n indices[2] = [[5, 2], [0, 3]]\n data[0] = [61, 62]\n data[1] = [[41, 42], [11, 12]]\n data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]\n merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],\n [51, 52], [61, 62]]\n ```\n\n This method can be used to merge partitions created by `dynamic_partition`\n as illustrated on the following example:\n\n ```python\n # Apply function (increments x_i) on elements for which a certain condition\n # apply (x_i != -1 in this example).\n x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])\n condition_mask=tf.not_equal(x,tf.constant(-1.))\n partitioned_data = tf.dynamic_partition(\n x, tf.cast(condition_mask, tf.int32) , 2)\n partitioned_data[1] = partitioned_data[1] + 1.0\n condition_indices = tf.dynamic_partition(\n tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)\n x = tf.dynamic_stitch(condition_indices, partitioned_data)\n # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain\n # unchanged.\n ```\n\n
\n \n
\n\n Args:\n indices: A list of at least 1 `Tensor` objects with type `int32`.\n data: A list with the same length as `indices` of `Tensor` objects with the same type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Interleave the values from the `data` tensors into a single tensor.", "type": "API"}, {"name": "tf.raw_ops.EagerPyFunc", "docs": "Eagerly executes a python function to compute func(input)->output. The\n\n semantics of the input, output, and attributes are the same as those for\n PyFunc.\n\n Args:\n input: A list of `Tensor` objects.\n token: A `string`.\n Tout: A list of `tf.DTypes`.\n is_async: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Eagerly executes a python function to compute func(input)->output. The", "type": "API"}, {"name": "tf.raw_ops.EditDistance", "docs": "Computes the (possibly normalized) Levenshtein Edit Distance.\n\n The inputs are variable-length sequences provided by SparseTensors\n (hypothesis_indices, hypothesis_values, hypothesis_shape)\n and\n (truth_indices, truth_values, truth_shape).\n\n The inputs are:\n\n Args:\n hypothesis_indices: A `Tensor` of type `int64`.\n The indices of the hypothesis list SparseTensor.\n This is an N x R int64 matrix.\n hypothesis_values: A `Tensor`.\n The values of the hypothesis list SparseTensor.\n This is an N-length vector.\n hypothesis_shape: A `Tensor` of type `int64`.\n The shape of the hypothesis list SparseTensor.\n This is an R-length vector.\n truth_indices: A `Tensor` of type `int64`.\n The indices of the truth list SparseTensor.\n This is an M x R int64 matrix.\n truth_values: A `Tensor`. Must have the same type as `hypothesis_values`.\n The values of the truth list SparseTensor.\n This is an M-length vector.\n truth_shape: A `Tensor` of type `int64`. truth indices, vector.\n normalize: An optional `bool`. Defaults to `True`.\n boolean (if true, edit distances are normalized by length of truth).\n\n The output is:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Computes the (possibly normalized) Levenshtein Edit Distance.", "type": "API"}, {"name": "tf.raw_ops.Eig", "docs": "Computes the eigen decomposition of one or more square matrices.\n\n Computes the eigenvalues and (optionally) right eigenvectors of each inner matrix in\n `input` such that `input[..., :, :] = v[..., :, :] * diag(e[..., :])`. The eigenvalues\n are sorted in non-decreasing order.\n\n ```python\n # a is a tensor.\n # e is a tensor of eigenvalues.\n # v is a tensor of eigenvectors.\n e, v = eig(a)\n e = eig(a, compute_v=False)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`.\n `Tensor` input of shape `[N, N]`.\n Tout: A `tf.DType` from: `tf.complex64, tf.complex128`.\n compute_v: An optional `bool`. Defaults to `True`.\n If `True` then eigenvectors will be computed and returned in `v`.\n Otherwise, only the eigenvalues will be computed.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (e, v).\n\n e: A `Tensor` of type `Tout`.\n v: A `Tensor` of type `Tout`.\n ", "desc": "Computes the eigen decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.raw_ops.Einsum", "docs": "Tensor contraction according to Einstein summation convention.\n\n Implements generalized Tensor contraction and reduction. Each input Tensor must\n have a corresponding input subscript appearing in the comma-separated left-hand\n side of the equation. The right-hand side of the equation consists of the\n output subscript. The input subscripts and the output subscript should consist\n of zero or more named axis labels and at most one ellipsis (`...`).\n\n The named axis labels may be any single character other than those having\n special meaning, namely `,.->`. The behavior of this Op is undefined if it\n receives an ill-formatted equation; since the validation is done at\n graph-building time, we omit format validation checks at runtime.\n\n Note: This Op is *not* intended to be called by the user; instead users should\n call `tf.einsum` directly. It is a hidden Op used by `tf.einsum`.\n\n Operations are applied to the input(s) according to the following rules:\n\n (a) Generalized Diagonals: For input dimensions corresponding to axis labels\n appearing more than once in the same input subscript, we take the\n generalized (`k`-dimensional) diagonal.\n For example, in the equation `iii->i` with input shape `[3, 3, 3]`, the\n generalized diagonal would consist of `3` elements at indices `(0, 0, 0)`,\n `(1, 1, 1)` and `(2, 2, 2)` to create a Tensor of shape `[3]`.\n\n (b) Reduction: Axes corresponding to labels appearing only in one input\n subscript but not in the output subscript are summed over prior to Tensor\n contraction.\n For example, in the equation `ab,bc->b`, the axis labels `a` and `c` are\n the reduction axis labels.\n\n (c) Batch Dimensions: Axes corresponding to labels appearing in each of the\n input subscripts and also in the output subscript make up the batch\n dimensions in Tensor contraction. Unnamed axis labels corresponding to\n ellipsis (`...`) also correspond to batch dimensions.\n For example, for the equation denoting batch matrix multiplication,\n `bij,bjk->bik`, the axis label `b` corresponds to a batch dimension.\n\n (d) Contraction: In case of binary einsum, axes corresponding to labels\n appearing in two different inputs (and not in the output) are contracted\n against each other.\n Considering the batch matrix multiplication equation again\n (`bij,bjk->bik`), the contracted axis label is `j`.\n\n (e) Expand Diagonal: If the output subscripts contain repeated (explicit) axis\n labels, the opposite operation of (a) is applied. For example, in the\n equation `i->iii`, and input shape `[3]`, the output of shape `[3, 3, 3]`\n are all zeros, except for the (generalized) diagonal which is populated\n with values from the input.\n Note: This operation is not supported by `np.einsum` or `tf.einsum`; it is\n provided to enable computing the symbolic gradient of `tf.einsum`.\n\n The output subscripts must contain only labels appearing in at least one of the\n input subscripts. Furthermore, all dimensions mapping to the same axis label\n must be equal.\n\n Any of the input and output subscripts may contain at most a single ellipsis\n (`...`). These ellipsis are mapped against dimensions not corresponding to any\n named axis label. If two inputs contain ellipsis, then they are broadcasted\n according to standard NumPy broadcasting\n [rules](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).\n\n The broadcasted dimensions are placed in the corresponding location of the\n ellipsis in the output subscript. If the broadcasted dimensions are non-empty\n and the output subscripts do not contain ellipsis, then an InvalidArgument error\n is raised.\n\n @compatibility(numpy)\n Similar to [`numpy.einsum`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html).\n\n Comparison with `numpy.einsum`:\n\n * This Op only supports unary and binary forms of `numpy.einsum`.\n * This Op does not support implicit form. (i.e. equations without `->`).\n * This Op also supports repeated indices in the output subscript, which is not\n supported by `numpy.einsum`.\n @end_compatibility\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with the same type.\n List of 1 or 2 Tensors.\n equation: A `string`.\n String describing the Einstein Summation operation; in the format of np.einsum.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Tensor contraction according to Einstein summation convention.", "type": "API"}, {"name": "tf.raw_ops.Elu", "docs": "Computes the exponential linear function.\n\n The ELU function is defined as:\n\n * $ e ^ x - 1 $ if $ x < 0 $\n * $ x $ if $ x >= 0 $\n\n Examples:\n\n >>> tf.nn.elu(1.0)\n \n >>> tf.nn.elu(0.0)\n \n >>> tf.nn.elu(-1000.0)\n \n\n See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)\n ](http://arxiv.org/abs/1511.07289)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes the exponential linear function.", "type": "API"}, {"name": "tf.raw_ops.EluGrad", "docs": "Computes gradients for the exponential linear (Elu) operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The backpropagated gradients to the corresponding Elu operation.\n outputs: A `Tensor`. Must have the same type as `gradients`.\n The outputs of the corresponding Elu operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes gradients for the exponential linear (Elu) operation.", "type": "API"}, {"name": "tf.raw_ops.Empty", "docs": "Creates a tensor with the given shape.\n\nThis operation creates a tensor of `shape` and `dtype`.\n\n Args:\n shape: A `Tensor` of type `int32`.\n 1-D. Represents the shape of the output tensor.\n dtype: A `tf.DType`.\n init: An optional `bool`. Defaults to `False`.\n If True, initialize the returned tensor with the default value of dtype. Otherwise, the implementation is free not to initializethe tensor's content.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Creates a tensor with the given shape.", "type": "API"}, {"name": "tf.raw_ops.EmptyTensorList", "docs": "Creates and returns an empty tensor list.\n\n All list elements must be tensors of dtype element_dtype and shape compatible\n with element_shape.\n\n handle: an empty tensor list.\n element_dtype: the type of elements in the list.\n element_shape: a shape compatible with that of elements in the list.\n\n Args:\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n max_num_elements: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates and returns an empty tensor list.", "type": "API"}, {"name": "tf.raw_ops.EncodeBase64", "docs": "Encode strings into web-safe base64 format.\n\n Refer to [this article](https://en.wikipedia.org/wiki/Base64) for more information on\n base64 format. Base64 strings may have padding with '=' at the\n end so that the encoded has length multiple of 4. See Padding section of the\n link above.\n\n Web-safe means that the encoder uses - and _ instead of + and /.\n\n Args:\n input: A `Tensor` of type `string`. Strings to be encoded.\n pad: An optional `bool`. Defaults to `False`.\n Bool whether padding is applied at the ends.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode strings into web-safe base64 format.", "type": "API"}, {"name": "tf.raw_ops.EncodeJpeg", "docs": "JPEG-encode an image.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n\n The attr `format` can be used to override the color format of the encoded\n output. Values can be:\n\n * `''`: Use a default format based on the number of channels in the image.\n * `grayscale`: Output a grayscale JPEG image. The `channels` dimension\n of `image` must be 1.\n * `rgb`: Output an RGB JPEG image. The `channels` dimension\n of `image` must be 3.\n\n If `format` is not specified or is the empty string, a default format is picked\n in function of the number of channels in `image`:\n\n * 1: Output a grayscale image.\n * 3: Output an RGB image.\n\n Args:\n image: A `Tensor` of type `uint8`.\n 3-D with shape `[height, width, channels]`.\n format: An optional `string` from: `\"\", \"grayscale\", \"rgb\"`. Defaults to `\"\"`.\n Per pixel image format.\n quality: An optional `int`. Defaults to `95`.\n Quality of the compression from 0 to 100 (higher is better and slower).\n progressive: An optional `bool`. Defaults to `False`.\n If True, create a JPEG that loads progressively (coarse to fine).\n optimize_size: An optional `bool`. Defaults to `False`.\n If True, spend CPU/RAM to reduce size with no quality change.\n chroma_downsampling: An optional `bool`. Defaults to `True`.\n See http://en.wikipedia.org/wiki/Chroma_subsampling.\n density_unit: An optional `string` from: `\"in\", \"cm\"`. Defaults to `\"in\"`.\n Unit used to specify `x_density` and `y_density`:\n pixels per inch (`'in'`) or centimeter (`'cm'`).\n x_density: An optional `int`. Defaults to `300`.\n Horizontal pixels per density unit.\n y_density: An optional `int`. Defaults to `300`.\n Vertical pixels per density unit.\n xmp_metadata: An optional `string`. Defaults to `\"\"`.\n If not empty, embed this XMP metadata in the image header.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG-encode an image.", "type": "API"}, {"name": "tf.raw_ops.EncodeJpegVariableQuality", "docs": "JPEG encode input image with provided compression quality.\n\n `image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.\n `quality` is an int32 jpeg compression quality value between 0 and 100.\n\n Args:\n images: A `Tensor` of type `uint8`. Images to adjust. At least 3-D.\n quality: A `Tensor` of type `int32`. An int quality to encode to.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "JPEG encode input image with provided compression quality.", "type": "API"}, {"name": "tf.raw_ops.EncodePng", "docs": "PNG-encode an image.\n\n `image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`\n where `channels` is:\n\n * 1: for grayscale.\n * 2: for grayscale + alpha.\n * 3: for RGB.\n * 4: for RGBA.\n\n The ZLIB compression level, `compression`, can be -1 for the PNG-encoder\n default or a value from 0 to 9. 9 is the highest compression level, generating\n the smallest output, but is slower.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.\n 3-D with shape `[height, width, channels]`.\n compression: An optional `int`. Defaults to `-1`. Compression level.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "PNG-encode an image.", "type": "API"}, {"name": "tf.raw_ops.EncodeProto", "docs": "The op serializes protobuf messages provided in the input tensors.\n\n The types of the tensors in `values` must match the schema for the fields\n specified in `field_names`. All the tensors in `values` must have a common\n shape prefix, *batch_shape*.\n\n The `sizes` tensor specifies repeat counts for each field. The repeat count\n (last dimension) of a each tensor in `values` must be greater than or equal\n to corresponding repeat count in `sizes`.\n\n A `message_type` name must be provided to give context for the field names.\n The actual message descriptor can be looked up either in the linked-in\n descriptor pool or a filename provided by the caller using the\n `descriptor_source` attribute.\n\n For the most part, the mapping between Proto field types and TensorFlow dtypes\n is straightforward. However, there are a few special cases:\n\n - A proto field that contains a submessage or group can only be converted\n to `DT_STRING` (the serialized submessage). This is to reduce the complexity\n of the API. The resulting string can be used as input to another instance of\n the decode_proto op.\n\n - TensorFlow lacks support for unsigned integers. The ops represent uint64\n types as a `DT_INT64` with the same twos-complement bit pattern (the obvious\n way). Unsigned int32 values can be represented exactly by specifying type\n `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in\n the `output_types` attribute.\n\n The `descriptor_source` attribute selects the source of protocol\n descriptors to consult when looking up `message_type`. This may be:\n\n - An empty string or \"local://\", in which case protocol descriptors are\n created for C++ (not Python) proto definitions linked to the binary.\n\n - A file, in which case protocol descriptors are created from the file,\n which is expected to contain a `FileDescriptorSet` serialized as a string.\n NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out`\n and `--include_imports` options to the protocol compiler `protoc`.\n\n - A \"bytes://\", in which protocol descriptors are created from ``,\n which is expected to be a `FileDescriptorSet` serialized as a string.\n\n Args:\n sizes: A `Tensor` of type `int32`.\n Tensor of int32 with shape `[batch_shape, len(field_names)]`.\n values: A list of `Tensor` objects.\n List of tensors containing values for the corresponding field.\n field_names: A list of `strings`.\n List of strings containing proto field names.\n message_type: A `string`. Name of the proto message type to decode.\n descriptor_source: An optional `string`. Defaults to `\"local://\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "The op serializes protobuf messages provided in the input tensors.", "type": "API"}, {"name": "tf.raw_ops.EncodeWav", "docs": "Encode audio data using the WAV file format.\n\n This operation will generate a string suitable to be saved out to create a .wav\n audio file. It will be encoded in the 16-bit PCM format. It takes in float\n values in the range -1.0f to 1.0f, and any outside that value will be clamped to\n that range.\n\n `audio` is a 2-D float Tensor of shape `[length, channels]`.\n `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100).\n\n Args:\n audio: A `Tensor` of type `float32`. 2-D with shape `[length, channels]`.\n sample_rate: A `Tensor` of type `int32`.\n Scalar containing the sample frequency.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode audio data using the WAV file format.", "type": "API"}, {"name": "tf.raw_ops.EnqueueTPUEmbeddingIntegerBatch", "docs": "An op that enqueues a list of input batch tensors to TPUEmbedding.\n\n Args:\n batch: A list of at least 1 `Tensor` objects with type `int32`.\n A list of 1D tensors, one for each embedding table, containing the\n indices into the tables.\n mode_override: A `Tensor` of type `string`.\n A string input that overrides the mode specified in the\n TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference',\n 'training', 'backward_pass_only'}. When set to 'unspecified', the mode set\n in TPUEmbeddingConfiguration is used, otherwise mode_override is used.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. Should be >= 0 and less than the number\n of TPU cores in the task on which the node is placed.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "An op that enqueues a list of input batch tensors to TPUEmbedding.", "type": "API"}, {"name": "tf.raw_ops.EnqueueTPUEmbeddingRaggedTensorBatch", "docs": "Eases the porting of code that uses tf.nn.embedding_lookup().\n\n sample_splits[i], embedding_indices[i] and aggregation_weights[i] correspond\n to the ith feature. table_ids[i] indicates which embedding table to look up ith\n feature.\n\n The tensors at corresponding positions in two of the input lists,\n embedding_indices and aggregation_weights, must have the same shape, i.e. rank 1\n with dim_size() equal to the total number of lookups into the table described by\n the corresponding feature.\n\n Args:\n sample_splits: A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors specifying the break points for splitting\n embedding_indices and aggregation_weights into rows.\n It corresponds to ids.row_splits in embedding_lookup(), when ids is a\n RaggedTensor.\n embedding_indices: A list with the same length as `sample_splits` of `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors, indices into the embedding tables.\n It corresponds to ids.values in embedding_lookup(), when ids is a RaggedTensor.\n aggregation_weights: A list with the same length as `sample_splits` of `Tensor` objects with the same type in: `float32`, `float64`.\n A list of rank 1 Tensors containing per training example\n aggregation weights. It corresponds to the values field of a RaggedTensor\n with the same row_splits as ids in embedding_lookup(), when ids is a\n RaggedTensor.\n mode_override: A `Tensor` of type `string`.\n A string input that overrides the mode specified in the\n TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference',\n 'training', 'backward_pass_only'}. When set to 'unspecified', the mode set\n in TPUEmbeddingConfiguration is used, otherwise mode_override is used.\n table_ids: A list of `ints`.\n A list of integers specifying the identifier of the embedding table\n (offset of TableDescriptor in the TPUEmbeddingConfiguration) to lookup the\n corresponding input. The ith input is looked up using table_ids[i]. The size\n of the table_ids list must be equal to that of sample_indices,\n embedding_indices and aggregation_weights.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. Should be >= 0 and less than the number\n of TPU cores in the task on which the node is placed.\n combiners: An optional list of `strings`. Defaults to `[]`.\n A list of string scalars, one for each embedding table that specify\n how to normalize the embedding activations after weighted summation.\n Supported combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have\n the sum of the weights be 0 for 'mean' or the sum of the squared weights be\n 0 for 'sqrtn'. If combiners isn't passed, the default is to use 'sum' for\n all tables.\n max_sequence_lengths: An optional list of `ints`. Defaults to `[]`.\n num_features: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Eases the porting of code that uses tf.nn.embedding_lookup().", "type": "API"}, {"name": "tf.raw_ops.EnqueueTPUEmbeddingSparseBatch", "docs": "An op that enqueues TPUEmbedding input indices from a SparseTensor.\n\n This Op eases the porting of code that uses embedding_lookup_sparse(),\n although some Python preprocessing of the SparseTensor arguments to\n embedding_lookup_sparse() is required to produce the arguments to this Op,\n since only a single EnqueueTPUEmbeddingSparseBatch Op is allowed per training\n step.\n\n The tensors at corresponding positions in the three input lists\n must have the same shape, i.e. rank 1 with dim_size() equal to the total\n number of lookups into the table described by the corresponding table_id.\n\n Args:\n sample_indices: A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors specifying the training example and\n feature to which the corresponding embedding_indices and aggregation_weights\n values belong. sample_indices[i] must equal b * nf + f, where nf is the\n number of features from the corresponding table, f is in [0, nf), and\n b is in [0, batch size).\n embedding_indices: A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors, indices into the embedding tables.\n aggregation_weights: A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `float32`, `float64`.\n A list of rank 1 Tensors containing per sample -- i.e. per\n (training example, feature) -- aggregation weights.\n mode_override: A `Tensor` of type `string`.\n A string input that overrides the mode specified in the\n TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference',\n 'training', 'backward_pass_only'}. When set to 'unspecified', the mode set\n in TPUEmbeddingConfiguration is used, otherwise mode_override is used.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. Should be >= 0 and less than the number\n of TPU cores in the task on which the node is placed.\n combiners: An optional list of `strings`. Defaults to `[]`.\n A list of string scalars, one for each embedding table that specify\n how to normalize the embedding activations after weighted summation.\n Supported combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have\n the sum of the weights be 0 for 'mean' or the sum of the squared weights be\n 0 for 'sqrtn'. If combiners isn't passed, the default is to use 'sum' for\n all tables.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "An op that enqueues TPUEmbedding input indices from a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.EnqueueTPUEmbeddingSparseTensorBatch", "docs": "Eases the porting of code that uses tf.nn.embedding_lookup_sparse().\n\n sample_indices[i], embedding_indices[i] and aggregation_weights[i] correspond\n to the ith feature. table_ids[i] indicates which embedding table to look up ith\n feature.\n\n The tensors at corresponding positions in the three input lists (sample_indices,\n embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1\n with dim_size() equal to the total number of lookups into the table described by\n the corresponding feature.\n\n Args:\n sample_indices: A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors specifying the training example to\n which the corresponding embedding_indices and aggregation_weights values\n belong. It corresponds to sp_ids.indices[:,0] in embedding_lookup_sparse().\n embedding_indices: A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `int32`, `int64`.\n A list of rank 1 Tensors, indices into the embedding tables.\n It corresponds to sp_ids.values in embedding_lookup_sparse().\n aggregation_weights: A list with the same length as `sample_indices` of `Tensor` objects with the same type in: `float32`, `float64`.\n A list of rank 1 Tensors containing per training example\n aggregation weights. It corresponds to sp_weights.values in\n embedding_lookup_sparse().\n mode_override: A `Tensor` of type `string`.\n A string input that overrides the mode specified in the\n TPUEmbeddingConfiguration. Supported values are {'unspecified', 'inference',\n 'training', 'backward_pass_only'}. When set to 'unspecified', the mode set\n in TPUEmbeddingConfiguration is used, otherwise mode_override is used.\n table_ids: A list of `ints`.\n A list of integers specifying the identifier of the embedding table\n (offset of TableDescriptor in the TPUEmbeddingConfiguration) to lookup the\n corresponding input. The ith input is looked up using table_ids[i]. The size\n of the table_ids list must be equal to that of sample_indices,\n embedding_indices and aggregation_weights.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. Should be >= 0 and less than the number\n of TPU cores in the task on which the node is placed.\n combiners: An optional list of `strings`. Defaults to `[]`.\n A list of string scalars, one for each embedding table that specify\n how to normalize the embedding activations after weighted summation.\n Supported combiners are 'mean', 'sum', or 'sqrtn'. It is invalid to have\n the sum of the weights be 0 for 'mean' or the sum of the squared weights be\n 0 for 'sqrtn'. If combiners isn't passed, the default is to use 'sum' for\n all tables.\n max_sequence_lengths: An optional list of `ints`. Defaults to `[]`.\n num_features: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Eases the porting of code that uses tf.nn.embedding_lookup_sparse().", "type": "API"}, {"name": "tf.raw_ops.EnsureShape", "docs": "Ensures that the tensor's shape matches the expected shape.\n\n Raises an error if the input tensor's shape does not match the specified shape.\n Returns the input tensor otherwise.\n\n Args:\n input: A `Tensor`. A tensor, whose shape is to be validated.\n shape: A `tf.TensorShape` or list of `ints`.\n The expected (possibly partially specified) shape of the input tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Ensures that the tensor's shape matches the expected shape.", "type": "API"}, {"name": "tf.raw_ops.Enter", "docs": "Creates or finds a child frame, and makes `data` available to the child frame.\n\n This op is used together with `Exit` to create loops in the graph.\n The unique `frame_name` is used by the `Executor` to identify frames. If\n `is_constant` is true, `output` is a constant in the child frame; otherwise\n it may be changed in the child frame. At most `parallel_iterations` iterations\n are run in parallel in the child frame.\n\n Args:\n data: A `Tensor`. The tensor to be made available to the child frame.\n frame_name: A `string`. The name of the child frame.\n is_constant: An optional `bool`. Defaults to `False`.\n If true, the output is constant within the child frame.\n parallel_iterations: An optional `int`. Defaults to `10`.\n The number of iterations allowed to run in parallel.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Creates or finds a child frame, and makes `data` available to the child frame.", "type": "API"}, {"name": "tf.raw_ops.Equal", "docs": "Returns the truth value of (x == y) element-wise.\n\n *NOTE*: `Equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n ```python\n x = tf.constant([2, 4])\n y = tf.constant(2)\n tf.math.equal(x, y) ==> array([True, False])\n\n x = tf.constant([2, 4])\n y = tf.constant([2, 4])\n tf.math.equal(x, y) ==> array([True, True])\n ```\n\n Args:\n x: A `Tensor`.\n y: A `Tensor`. Must have the same type as `x`.\n incompatible_shape_error: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x == y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.Erf", "docs": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.\n\n For example:\n\n >>> tf.math.erf([[1.0, 2.0, 3.0], [0.0, -1.0, -2.0]])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the [Gauss error function](https://en.wikipedia.org/wiki/Error_function) of `x` element-wise. In statistics, for non-negative values of $x$, the error function has the following interpretation: for a random variable $Y$ that is normally distributed with mean 0 and variance $1/\\sqrt{2}$, $erf(x)$ is the probability that $Y$ falls in the range $[\u2212x, x]$.", "type": "API"}, {"name": "tf.raw_ops.Erfc", "docs": "Computes the complementary error function of `x` element-wise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the complementary error function of `x` element-wise.", "type": "API"}, {"name": "tf.raw_ops.Erfinv", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.EuclideanNorm", "docs": "Computes the euclidean norm of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the euclidean norm of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Exit", "docs": "Exits the current frame to its parent frame.\n\n Exit makes its input `data` available to the parent frame.\n\n Args:\n data: A `Tensor`. The tensor to be made available to the parent frame.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Exits the current frame to its parent frame.", "type": "API"}, {"name": "tf.raw_ops.Exp", "docs": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).\n\n This function computes the exponential of every element in the input tensor.\n i.e. `exp(x)` or `e^(x)`, where `x` is the input tensor.\n `e` denotes Euler's number and is approximately equal to 2.718281.\n Output is positive for any real input.\n\n ```python\n x = tf.constant(2.0)\n tf.math.exp(x) ==> 7.389056\n\n x = tf.constant([2.0, 8.0])\n tf.math.exp(x) ==> array([7.389056, 2980.958], dtype=float32)\n ```\n\n For complex numbers, the exponential value is calculated as follows:\n\n ```\n e^(x+iy) = e^x * e^iy = e^x * (cos y + i sin y)\n ```\n\n Let's consider complex number 1+1j as an example.\n e^1 * (cos 1 + i sin 1) = 2.7182818284590 * (0.54030230586+0.8414709848j)\n\n ```python\n x = tf.constant(1 + 1j)\n tf.math.exp(x) ==> 1.4686939399158851+2.2873552871788423j\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes exponential of x element-wise. \\\\(y = e^x\\\\).", "type": "API"}, {"name": "tf.raw_ops.ExpandDims", "docs": "Inserts a dimension of 1 into a tensor's shape.\n\n Given a tensor `input`, this operation inserts a dimension of 1 at the\n dimension index `axis` of `input`'s shape. The dimension index `axis` starts at\n zero; if you specify a negative number for `axis` it is counted backward from\n the end.\n\n This operation is useful if you want to add a batch dimension to a single\n element. For example, if you have a single image of shape `[height, width,\n channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,\n which will make the shape `[1, height, width, channels]`.\n\n Other examples:\n\n ```\n # 't' is a tensor of shape [2]\n shape(expand_dims(t, 0)) ==> [1, 2]\n shape(expand_dims(t, 1)) ==> [2, 1]\n shape(expand_dims(t, -1)) ==> [2, 1]\n\n # 't2' is a tensor of shape [2, 3, 5]\n shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]\n shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]\n shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]\n ```\n\n This operation requires that:\n\n `-1-input.dims() <= dim <= input.dims()`\n\n This operation is related to `squeeze()`, which removes dimensions of\n size 1.\n\n Args:\n input: A `Tensor`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D (scalar). Specifies the dimension index at which to\n expand the shape of `input`. Must be in the range\n `[-rank(input) - 1, rank(input)]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inserts a dimension of 1 into a tensor's shape.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalAssertNextDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n transformations: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalAutoShardDataset", "docs": "Creates a dataset that shards the input dataset.\n\n Creates a dataset that shards the input dataset by num_workers, returning a\n sharded dataset for the index-th worker. This attempts to automatically shard\n a dataset by examining the Dataset graph and inserting a shard op before the\n inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset).\n\n This dataset will throw a NotFound error if we cannot shard the dataset\n automatically.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n num_workers: A `Tensor` of type `int64`.\n A scalar representing the number of workers to distribute this dataset across.\n index: A `Tensor` of type `int64`.\n A scalar representing the index of the current worker out of num_workers.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n auto_shard_policy: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that shards the input dataset.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalBytesProducedStatsDataset", "docs": "Records the bytes size of each element of `input_dataset` in a StatsAggregator.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n tag: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Records the bytes size of each element of `input_dataset` in a StatsAggregator.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalChooseFastestDataset", "docs": "TODO: add doc.\n\n Args:\n input_datasets: A list of at least 2 `Tensor` objects with type `variant`.\n num_experiments: An `int`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalCSVDataset", "docs": "TODO: add doc.\n\n Args:\n filenames: A `Tensor` of type `string`.\n compression_type: A `Tensor` of type `string`.\n buffer_size: A `Tensor` of type `int64`.\n header: A `Tensor` of type `bool`.\n field_delim: A `Tensor` of type `string`.\n use_quote_delim: A `Tensor` of type `bool`.\n na_value: A `Tensor` of type `string`.\n select_cols: A `Tensor` of type `int64`.\n record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalDatasetCardinality", "docs": "Returns the cardinality of `input_dataset`.\n\n Returns the cardinality of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to return cardinality for.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the cardinality of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalDatasetToTFRecord", "docs": "Writes the given dataset to the given file using the TFRecord format.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the dataset to write.\n filename: A `Tensor` of type `string`.\n A scalar string tensor representing the filename to use.\n compression_type: A `Tensor` of type `string`.\n A scalar string tensor containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes the given dataset to the given file using the TFRecord format.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalDenseToSparseBatchDataset", "docs": "Creates a dataset that batches input elements into a SparseTensor.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A handle to an input dataset. Must have a single component.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch.\n row_shape: A `Tensor` of type `int64`.\n A vector representing the dense shape of each row in the produced\n SparseTensor. The shape may be partially specified, using `-1` to indicate\n that a particular dimension should use the maximum size of all batch elements.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches input elements into a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalDirectedInterleaveDataset", "docs": "A substitute for `InterleaveDataset` on a fixed list of `N` datasets.\n\n Args:\n selector_input_dataset: A `Tensor` of type `variant`.\n A dataset of scalar `DT_INT64` elements that determines which of the\n `N` data inputs should produce the next output element.\n data_input_datasets: A list of at least 1 `Tensor` objects with type `variant`.\n `N` datasets with the same type that will be interleaved according to\n the values of `selector_input_dataset`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "A substitute for `InterleaveDataset` on a fixed list of `N` datasets.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalGroupByReducerDataset", "docs": "Creates a dataset that computes a group-by on `input_dataset`.\n\n Creates a dataset that computes a group-by on `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n key_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `key_func`.\n init_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `init_func`.\n reduce_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `reduce_func`.\n finalize_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `finalize_func`.\n key_func: A function decorated with @Defun.\n A function mapping an element of `input_dataset`, concatenated\n with `key_func_other_arguments` to a scalar value of type DT_INT64.\n init_func: A function decorated with @Defun.\n A function mapping a key of type DT_INT64, concatenated with\n `init_func_other_arguments` to the initial reducer state.\n reduce_func: A function decorated with @Defun.\n A function mapping the current reducer state and an element of `input_dataset`,\n concatenated with `reduce_func_other_arguments` to a new reducer state.\n finalize_func: A function decorated with @Defun.\n A function mapping the final reducer state to an output element.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that computes a group-by on `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalGroupByWindowDataset", "docs": "Creates a dataset that computes a windowed group-by on `input_dataset`.\n\n // TODO(mrry): Support non-int64 keys.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n key_func_other_arguments: A list of `Tensor` objects.\n reduce_func_other_arguments: A list of `Tensor` objects.\n window_size_func_other_arguments: A list of `Tensor` objects.\n key_func: A function decorated with @Defun.\n A function mapping an element of `input_dataset`, concatenated\n with `key_func_other_arguments` to a scalar value of type DT_INT64.\n reduce_func: A function decorated with @Defun.\n window_size_func: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that computes a windowed group-by on `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalIgnoreErrorsDataset", "docs": "Creates a dataset that contains the elements of `input_dataset` ignoring errors.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n log_warning: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that contains the elements of `input_dataset` ignoring errors.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalIteratorGetDevice", "docs": "Returns the name of the device on which `resource` has been placed.\n\n Args:\n resource: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the name of the device on which `resource` has been placed.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalLatencyStatsDataset", "docs": "Records the latency of producing `input_dataset` elements in a StatsAggregator.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n tag: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Records the latency of producing `input_dataset` elements in a StatsAggregator.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalLMDBDataset", "docs": "TODO: add doc.\n\n Args:\n filenames: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalMapAndBatchDataset", "docs": "Creates a dataset that fuses mapping with batching.\n\n Creates a dataset that applies `f` to the outputs of `input_dataset` and then\n batches `batch_size` of them.\n\n Unlike a \"MapDataset\", which applies `f` sequentially, this dataset invokes up\n to `batch_size * num_parallel_batches` copies of `f` in parallel.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when building a closure\n for `f`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch. It determines the number of concurrent invocations of `f` that process\n elements from `input_dataset` in parallel.\n num_parallel_calls: A `Tensor` of type `int64`.\n A scalar representing the maximum number of parallel invocations of the `map_fn`\n function. Applying the `map_fn` on consecutive input elements in parallel has\n the potential to improve input pipeline throughput.\n drop_remainder: A `Tensor` of type `bool`.\n A scalar representing whether the last batch should be dropped in case its size\n is smaller than desired.\n f: A function decorated with @Defun.\n A function to apply to the outputs of `input_dataset`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that fuses mapping with batching.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalMapDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_inter_op_parallelism: An optional `bool`. Defaults to `True`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalMatchingFilesDataset", "docs": "TODO: add doc.\n\n Args:\n patterns: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalMaxIntraOpParallelismDataset", "docs": "Creates a dataset that overrides the maximum intra-op parallelism.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n max_intra_op_parallelism: A `Tensor` of type `int64`.\n Identifies the maximum intra-op parallelism to use.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that overrides the maximum intra-op parallelism.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalNonSerializableDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalParallelInterleaveDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, with the exception\n that if retrieving the next value from a dataset would cause the requester to\n block, it will skip that input dataset. This dataset is especially useful\n when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it\n allows the training step to proceed so long as some data is available.\n\n !! WARNING !! This dataset is not deterministic!\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n cycle_length: A `Tensor` of type `int64`.\n block_length: A `Tensor` of type `int64`.\n sloppy: A `Tensor` of type `bool`.\n buffer_output_elements: A `Tensor` of type `int64`.\n prefetch_input_elements: A `Tensor` of type `int64`.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalParseExampleDataset", "docs": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_parallel_calls: A `Tensor` of type `int64`.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A dict mapping string keys to `Tensor`s.\n The keys of the dict must match the dense_keys of the feature.\n sparse_keys: A list of `strings`.\n A list of string keys in the examples features.\n The results for these keys will be returned as `SparseTensor` objects.\n dense_keys: A list of `strings`.\n A list of Ndense string Tensors (scalars).\n The keys expected in the Examples features associated with dense values.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `DTypes` of the same length as `sparse_keys`.\n Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),\n and `tf.string` (`BytesList`) are supported.\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n List of tuples with the same length as `dense_keys`.\n The shape of the data for each dense feature referenced by `dense_keys`.\n Required for any input tensors identified by `dense_keys`. Must be\n either fully defined, or may contain an unknown first dimension.\n An unknown first dimension means the feature is treated as having\n a variable number of blocks, and the output shape along this dimension\n is considered unknown at graph build time. Padding is applied for\n minibatch elements smaller than the maximum number of blocks for the\n given feature along this dimension.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n The list of shapes being produced.\n sloppy: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalPrivateThreadPoolDataset", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_threads: A `Tensor` of type `int64`.\n Identifies the number of threads to use for the private threadpool.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalRandomDataset", "docs": "Creates a Dataset that returns pseudorandom numbers.\n\n Args:\n seed: A `Tensor` of type `int64`.\n A scalar seed for the random number generator. If either seed or\n seed2 is set to be non-zero, the random number generator is seeded\n by the given seed. Otherwise, a random seed is used.\n seed2: A `Tensor` of type `int64`.\n A second scalar seed to avoid seed collision.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a Dataset that returns pseudorandom numbers.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalRebatchDataset", "docs": "Creates a dataset that changes the batch size.\n\n Creates a dataset that changes the batch size of the dataset to current batch\n size // num_replicas.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n num_replicas: A `Tensor` of type `int64`.\n A scalar representing the number of replicas to distribute this batch across. As\n a result of this transformation the current batch size would end up being\n divided by this parameter.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_fallback: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that changes the batch size.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalScanDataset", "docs": "Creates a dataset successively reduces `f` over the elements of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n initial_state: A list of `Tensor` objects.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset successively reduces `f` over the elements of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalSetStatsAggregatorDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n stats_aggregator: A `Tensor` of type `resource`.\n tag: A `Tensor` of type `string`.\n counter_prefix: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalSleepDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n sleep_microseconds: A `Tensor` of type `int64`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalSlidingWindowDataset", "docs": "Creates a dataset that passes a sliding window over `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n window_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements in the\n sliding window.\n window_shift: A `Tensor` of type `int64`.\n A scalar representing the steps moving the sliding window\n forward in one iteration. It must be positive.\n window_stride: A `Tensor` of type `int64`.\n A scalar representing the stride of the input elements of the sliding window.\n It must be positive.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that passes a sliding window over `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalSqlDataset", "docs": "Creates a dataset that executes a SQL query and emits rows of the result set.\n\n Args:\n driver_name: A `Tensor` of type `string`.\n The database type. Currently, the only supported type is 'sqlite'.\n data_source_name: A `Tensor` of type `string`.\n A connection string to connect to the database.\n query: A `Tensor` of type `string`. A SQL query to execute.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that executes a SQL query and emits rows of the result set.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalStatsAggregatorHandle", "docs": "Creates a statistics manager resource.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a statistics manager resource.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalStatsAggregatorSummary", "docs": "Produces a summary of any statistics recorded by the given statistics manager.\n\n Args:\n iterator: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Produces a summary of any statistics recorded by the given statistics manager.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalTakeWhileDataset", "docs": "Creates a dataset that stops iteration when predicate` is false.\n\n The `predicate` function must return a scalar boolean and accept the\n following arguments:\n\n * One tensor for each component of an element of `input_dataset`.\n * One tensor for each value in `other_arguments`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `predicate`.\n predicate: A function decorated with @Defun.\n A function returning a scalar boolean.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that stops iteration when predicate` is false.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalThreadPoolDataset", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n thread_pool: A `Tensor` of type `resource`.\n A resource produced by the ThreadPoolHandle op.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalThreadPoolHandle", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n num_threads: An `int`. The number of threads in the thread pool.\n display_name: A `string`.\n A human-readable name for the threads that may be visible in some\n visualizations.\n threadpool.\n max_intra_op_parallelism: An optional `int`. Defaults to `1`.\n The maximum degree of parallelism to use within operations that execute on this\n threadpool.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalUnbatchDataset", "docs": "A dataset that splits the elements of its input into multiple elements.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "A dataset that splits the elements of its input into multiple elements.", "type": "API"}, {"name": "tf.raw_ops.ExperimentalUniqueDataset", "docs": "Creates a dataset that contains the unique elements of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that contains the unique elements of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Expint", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Expm1", "docs": "Computes `exp(x) - 1` element-wise.\n\n i.e. `exp(x) - 1` or `e^(x) - 1`, where `x` is the input tensor.\n `e` denotes Euler's number and is approximately equal to 2.718281.\n\n ```python\n x = tf.constant(2.0)\n tf.math.expm1(x) ==> 6.389056\n\n x = tf.constant([2.0, 8.0])\n tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)\n\n x = tf.constant(1 + 1j)\n tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes `exp(x) - 1` element-wise.", "type": "API"}, {"name": "tf.raw_ops.ExtractGlimpse", "docs": "Extracts a glimpse from the input tensor.\n\n Returns a set of windows called glimpses extracted at location\n `offsets` from the input tensor. If the windows only partially\n overlaps the inputs, the non overlapping areas will be filled with\n random noise.\n\n The result is a 4-D tensor of shape `[batch_size, glimpse_height,\n glimpse_width, channels]`. The channels and batch dimensions are the\n same as that of the input tensor. The height and width of the output\n windows are specified in the `size` parameter.\n\n The argument `normalized` and `centered` controls how the windows are built:\n\n * If the coordinates are normalized but not centered, 0.0 and 1.0\n correspond to the minimum and maximum of each height and width\n dimension.\n * If the coordinates are both normalized and centered, they range from\n -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper\n left corner, the lower right corner is located at (1.0, 1.0) and the\n center is at (0, 0).\n * If the coordinates are not normalized they are interpreted as\n numbers of pixels.\n\n Args:\n input: A `Tensor` of type `float32`.\n A 4-D float tensor of shape `[batch_size, height, width, channels]`.\n size: A `Tensor` of type `int32`.\n A 1-D tensor of 2 elements containing the size of the glimpses\n to extract. The glimpse height must be specified first, following\n by the glimpse width.\n offsets: A `Tensor` of type `float32`.\n A 2-D integer tensor of shape `[batch_size, 2]` containing\n the y, x locations of the center of each window.\n centered: An optional `bool`. Defaults to `True`.\n indicates if the offset coordinates are centered relative to\n the image, in which case the (0, 0) offset is relative to the center\n of the input images. If false, the (0,0) offset corresponds to the\n upper left corner of the input images.\n normalized: An optional `bool`. Defaults to `True`.\n indicates if the offset coordinates are normalized.\n uniform_noise: An optional `bool`. Defaults to `True`.\n indicates if the noise should be generated using a\n uniform distribution or a Gaussian distribution.\n noise: An optional `string`. Defaults to `\"uniform\"`.\n indicates if the noise should `uniform`, `gaussian`, or\n `zero`. The default is `uniform` which means the noise type\n will be decided by `uniform_noise`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts a glimpse from the input tensor.", "type": "API"}, {"name": "tf.raw_ops.ExtractGlimpseV2", "docs": "Extracts a glimpse from the input tensor.\n\n Returns a set of windows called glimpses extracted at location\n `offsets` from the input tensor. If the windows only partially\n overlaps the inputs, the non overlapping areas will be filled with\n random noise.\n\n The result is a 4-D tensor of shape `[batch_size, glimpse_height,\n glimpse_width, channels]`. The channels and batch dimensions are the\n same as that of the input tensor. The height and width of the output\n windows are specified in the `size` parameter.\n\n The argument `normalized` and `centered` controls how the windows are built:\n\n * If the coordinates are normalized but not centered, 0.0 and 1.0\n correspond to the minimum and maximum of each height and width\n dimension.\n * If the coordinates are both normalized and centered, they range from\n -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper\n left corner, the lower right corner is located at (1.0, 1.0) and the\n center is at (0, 0).\n * If the coordinates are not normalized they are interpreted as\n numbers of pixels.\n\n Args:\n input: A `Tensor` of type `float32`.\n A 4-D float tensor of shape `[batch_size, height, width, channels]`.\n size: A `Tensor` of type `int32`.\n A 1-D tensor of 2 elements containing the size of the glimpses\n to extract. The glimpse height must be specified first, following\n by the glimpse width.\n offsets: A `Tensor` of type `float32`.\n A 2-D integer tensor of shape `[batch_size, 2]` containing\n the y, x locations of the center of each window.\n centered: An optional `bool`. Defaults to `True`.\n indicates if the offset coordinates are centered relative to\n the image, in which case the (0, 0) offset is relative to the center\n of the input images. If false, the (0,0) offset corresponds to the\n upper left corner of the input images.\n normalized: An optional `bool`. Defaults to `True`.\n indicates if the offset coordinates are normalized.\n uniform_noise: An optional `bool`. Defaults to `True`.\n indicates if the noise should be generated using a\n uniform distribution or a Gaussian distribution.\n noise: An optional `string`. Defaults to `\"uniform\"`.\n indicates if the noise should `uniform`, `gaussian`, or\n `zero`. The default is `uniform` which means the noise type\n will be decided by `uniform_noise`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Extracts a glimpse from the input tensor.", "type": "API"}, {"name": "tf.raw_ops.ExtractImagePatches", "docs": "Extract `patches` from `images` and put them in the \"depth\" output dimension.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`.\n 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 4`.\n The size of the sliding window for each dimension of `images`.\n strides: A list of `ints` that has length `>= 4`.\n How far the centers of two consecutive patches are in\n the images. Must be: `[1, stride_rows, stride_cols, 1]`.\n rates: A list of `ints` that has length `>= 4`.\n Must be: `[1, rate_rows, rate_cols, 1]`. This is the\n input stride, specifying how far two consecutive patch samples are in the\n input. Equivalent to extracting patches with\n `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by\n subsampling them spatially by a factor of `rates`. This is equivalent to\n `rate` in dilated (a.k.a. Atrous) convolutions.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Extract `patches` from `images` and put them in the \"depth\" output dimension.", "type": "API"}, {"name": "tf.raw_ops.ExtractJpegShape", "docs": "Extract the shape information of a JPEG-encoded image.\n\n This op only parses the image header, so it is much faster than DecodeJpeg.\n\n Args:\n contents: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.\n output_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n (Optional) The output type of the operation (int32 or int64).\n Defaults to int32.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_type`.\n ", "desc": "Extract the shape information of a JPEG-encoded image.", "type": "API"}, {"name": "tf.raw_ops.ExtractVolumePatches", "docs": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`.\n ksizes: A list of `ints` that has length `>= 5`.\n The size of the sliding window for each dimension of `input`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D of length 5. How far the centers of two consecutive patches are in\n `input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n\n The size-related attributes are specified as follows:\n\n ```python\n ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]\n strides = [1, stride_planes, strides_rows, strides_cols, 1]\n ```\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Extract `patches` from `input` and put them in the `\"depth\"` output dimension. 3D extension of `extract_image_patches`.", "type": "API"}, {"name": "tf.raw_ops.Fact", "docs": "Output a fact about factorials.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Output a fact about factorials.", "type": "API"}, {"name": "tf.raw_ops.FakeParam", "docs": " This op is used as a placeholder in If branch functions. It doesn't provide a\n valid output when run, so must either be removed (e.g. replaced with a\n function input) or guaranteed not to be used (e.g. if mirroring an\n intermediate output needed for the gradient computation of the other branch).\n\n Args:\n dtype: A `tf.DType`. The type of the output.\n shape: A `tf.TensorShape` or list of `ints`.\n The purported shape of the output. This is only used for shape inference;\n the output will not necessarily have this shape. Can be a partial shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": " This op is used as a placeholder in If branch functions. It doesn't provide a", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxArgs", "docs": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n Quantization is called fake since the output is still in floating point.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxArgsGradient", "docs": "Compute gradients for a FakeQuantWithMinMaxArgs operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxArgs operation.\n min: An optional `float`. Defaults to `-6`.\n max: An optional `float`. Defaults to `6`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxArgs operation.", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxVars", "docs": "Fake-quantize the 'inputs' tensor of type float via global float scalars\n\n Fake-quantize the `inputs` tensor of type float via global float scalars\n `min` and `max` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via global float scalars", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxVarsGradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVars operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation.\n min, max: Quantization interval, scalar floats.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 8, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVars operation.", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel", "docs": "Fake-quantize the 'inputs' tensor of type float via per-channel floats\n\n Fake-quantize the `inputs` tensor of type float per-channel and one of the\n shapes: `[d]`, `[b, d]` `[b, h, w, d]` via per-channel floats `min` and `max`\n of shape `[d]` to `outputs` tensor of same shape as `inputs`.\n\n Attributes\n\n * `[min; max]` define the clamping range for the `inputs` data.\n * `inputs` values are quantized into the quantization range (\n `[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]`\n when it is true) and then de-quantized and output as floats in `[min; max]`\n interval.\n * `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.\n\n Before quantization, `min` and `max` values are adjusted with the following\n logic.\n It is suggested to have `min <= 0 <= max`. If `0` is not in the range of values,\n the behavior can be unexpected:\n\n * If `0 < min < max`: `min_adj = 0` and `max_adj = max - min`.\n * If `min < max < 0`: `min_adj = min - max` and `max_adj = 0`.\n * If `min <= 0 <= max`: `scale = (max - min) / (2^num_bits - 1) `,\n `min_adj = scale * round(min / scale)` and `max_adj = max + min_adj - min`.\n\n This operation has a gradient and thus allows for training `min` and `max`\n values.\n\n Args:\n inputs: A `Tensor` of type `float32`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n narrow_range: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Fake-quantize the 'inputs' tensor of type float via per-channel floats", "type": "API"}, {"name": "tf.raw_ops.FakeQuantWithMinMaxVarsPerChannelGradient", "docs": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.\n\n Args:\n gradients: A `Tensor` of type `float32`.\n Backpropagated gradients above the FakeQuantWithMinMaxVars operation,\n shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.\n inputs: A `Tensor` of type `float32`.\n Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape\n same as `gradients`.\n min, max: Quantization interval, floats of shape `[d]`.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization; between 2 and 16, inclusive.\n narrow_range: An optional `bool`. Defaults to `False`.\n Whether to quantize into 2^num_bits - 1 distinct values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).\n\n backprops_wrt_input: A `Tensor` of type `float32`.\n backprop_wrt_min: A `Tensor` of type `float32`.\n backprop_wrt_max: A `Tensor` of type `float32`.\n ", "desc": "Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.", "type": "API"}, {"name": "tf.raw_ops.FakeQueue", "docs": "Deprecated. Do not use.\n\n Args:\n resource: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Deprecated. Do not use.", "type": "API"}, {"name": "tf.raw_ops.FFT", "docs": "Fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform over the inner-most\n dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.FFT2D", "docs": "2D fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform over the inner-most\n 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "2D fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.FFT3D", "docs": "3D fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform over the inner-most 3\n dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "3D fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.FIFOQueue", "docs": "A queue that produces elements in first-in first-out order.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A queue that produces elements in first-in first-out order.", "type": "API"}, {"name": "tf.raw_ops.FIFOQueueV2", "docs": "A queue that produces elements in first-in first-out order.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A queue that produces elements in first-in first-out order.", "type": "API"}, {"name": "tf.raw_ops.Fill", "docs": "Creates a tensor filled with a scalar value.\n\n This operation creates a tensor of shape `dims` and fills it with `value`.\n\n For example:\n\n ```\n # Output tensor has shape [2, 3].\n fill([2, 3], 9) ==> [[9, 9, 9]\n [9, 9, 9]]\n ```\n\n `tf.fill` differs from `tf.constant` in a few ways:\n\n * `tf.fill` only supports scalar contents, whereas `tf.constant` supports\n Tensor values.\n * `tf.fill` creates an Op in the computation graph that constructs the actual\n Tensor value at runtime. This is in contrast to `tf.constant` which embeds\n the entire Tensor into the graph with a `Const` node.\n * Because `tf.fill` evaluates at graph runtime, it supports dynamic shapes\n based on other runtime Tensors, unlike `tf.constant`.\n\n Args:\n dims: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. Represents the shape of the output tensor.\n value: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.\n\n @compatibility(numpy)\n Equivalent to np.full\n @end_compatibility\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n ", "desc": "Creates a tensor filled with a scalar value.", "type": "API"}, {"name": "tf.raw_ops.FilterByLastComponentDataset", "docs": "Creates a dataset containing elements of first component of `input_dataset` having true in the last component.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset containing elements of first component of `input_dataset` having true in the last component.", "type": "API"}, {"name": "tf.raw_ops.FilterDataset", "docs": "Creates a dataset containing elements of `input_dataset` matching `predicate`.\n\n The `predicate` function must return a scalar boolean and accept the\n following arguments:\n\n * One tensor for each component of an element of `input_dataset`.\n * One tensor for each value in `other_arguments`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `predicate`.\n predicate: A function decorated with @Defun.\n A function returning a scalar boolean.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset containing elements of `input_dataset` matching `predicate`.", "type": "API"}, {"name": "tf.raw_ops.FinalizeDataset", "docs": "Creates a dataset by applying `tf.data.Options` to `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n has_captured_ref: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset by applying `tf.data.Options` to `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Fingerprint", "docs": "Generates fingerprint values.\n\n Generates fingerprint values of `data`.\n\n Fingerprint op considers the first dimension of `data` as the batch dimension,\n and `output[i]` contains the fingerprint value generated from contents in\n `data[i, ...]` for all `i`.\n\n Fingerprint op writes fingerprint values as byte arrays. For example, the\n default method `farmhash64` generates a 64-bit fingerprint value at a time.\n This 8-byte value is written out as an `uint8` array of size 8, in little-endian\n order.\n\n For example, suppose that `data` has data type `DT_INT32` and shape (2, 3, 4),\n and that the fingerprint method is `farmhash64`. In this case, the output shape\n is (2, 8), where 2 is the batch dimension size of `data`, and 8 is the size of\n each fingerprint value in bytes. `output[0, :]` is generated from 12 integers in\n `data[0, :, :]` and similarly `output[1, :]` is generated from other 12 integers\n in `data[1, :, :]`.\n\n Note that this op fingerprints the raw underlying buffer, and it does not\n fingerprint Tensor's metadata such as data type and/or shape. For example, the\n fingerprint values are invariant under reshapes and bitcasts as long as the\n batch dimension remain the same:\n\n ```\n Fingerprint(data) == Fingerprint(Reshape(data, ...))\n Fingerprint(data) == Fingerprint(Bitcast(data, ...))\n ```\n\n For string data, one should expect `Fingerprint(data) !=\n Fingerprint(ReduceJoin(data))` in general.\n\n Args:\n data: A `Tensor`. Must have rank 1 or higher.\n method: A `Tensor` of type `string`.\n Fingerprint method used by this op. Currently available method is\n `farmhash::fingerprint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Generates fingerprint values.", "type": "API"}, {"name": "tf.raw_ops.FixedLengthRecordDataset", "docs": "Creates a dataset that emits the records from one or more binary files.\n\n Args:\n filenames: A `Tensor` of type `string`.\n A scalar or a vector containing the name(s) of the file(s) to be\n read.\n header_bytes: A `Tensor` of type `int64`.\n A scalar representing the number of bytes to skip at the\n beginning of a file.\n record_bytes: A `Tensor` of type `int64`.\n A scalar representing the number of bytes in each record.\n footer_bytes: A `Tensor` of type `int64`.\n A scalar representing the number of bytes to skip at the end\n of a file.\n buffer_size: A `Tensor` of type `int64`.\n A scalar representing the number of bytes to buffer. Must be > 0.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits the records from one or more binary files.", "type": "API"}, {"name": "tf.raw_ops.FixedLengthRecordDatasetV2", "docs": "TODO: add doc.\n\n Args:\n filenames: A `Tensor` of type `string`.\n header_bytes: A `Tensor` of type `int64`.\n record_bytes: A `Tensor` of type `int64`.\n footer_bytes: A `Tensor` of type `int64`.\n buffer_size: A `Tensor` of type `int64`.\n compression_type: A `Tensor` of type `string`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.FixedLengthRecordReader", "docs": "A Reader that outputs fixed-length records from a file.\n\n Args:\n record_bytes: An `int`. Number of bytes in the record.\n header_bytes: An optional `int`. Defaults to `0`.\n Number of bytes in the header, defaults to 0.\n footer_bytes: An optional `int`. Defaults to `0`.\n Number of bytes in the footer, defaults to 0.\n hop_bytes: An optional `int`. Defaults to `0`.\n Number of bytes to hop before each read. Default of 0 means using\n record_bytes.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs fixed-length records from a file.", "type": "API"}, {"name": "tf.raw_ops.FixedLengthRecordReaderV2", "docs": "A Reader that outputs fixed-length records from a file.\n\n Args:\n record_bytes: An `int`. Number of bytes in the record.\n header_bytes: An optional `int`. Defaults to `0`.\n Number of bytes in the header, defaults to 0.\n footer_bytes: An optional `int`. Defaults to `0`.\n Number of bytes in the footer, defaults to 0.\n hop_bytes: An optional `int`. Defaults to `0`.\n Number of bytes to hop before each read. Default of 0 means using\n record_bytes.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n encoding: An optional `string`. Defaults to `\"\"`.\n The type of encoding for the file. Currently ZLIB and GZIP\n are supported. Defaults to none.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A Reader that outputs fixed-length records from a file.", "type": "API"}, {"name": "tf.raw_ops.FixedUnigramCandidateSampler", "docs": "Generates labels for candidate sampling with a learned unigram distribution.\n\n A unigram sampler could use a fixed unigram distribution read from a\n file or passed in as an in-memory array instead of building up the distribution\n from data on the fly. There is also an option to skew the distribution by\n applying a distortion power to the weights.\n\n The vocabulary file should be in CSV-like format, with the last field\n being the weight associated with the word.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`.\n Number of candidates to randomly sample.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n range_max: An `int` that is `>= 1`.\n The sampler will sample integers from the interval [0, range_max).\n vocab_file: An optional `string`. Defaults to `\"\"`.\n Each valid line in this file (which should have a CSV-like format)\n corresponds to a valid word ID. IDs are in sequential order, starting from\n num_reserved_ids. The last entry in each line is expected to be a value\n corresponding to the count or relative probability. Exactly one of vocab_file\n and unigrams needs to be passed to this op.\n distortion: An optional `float`. Defaults to `1`.\n The distortion is used to skew the unigram probability distribution.\n Each weight is first raised to the distortion's power before adding to the\n internal unigram distribution. As a result, distortion = 1.0 gives regular\n unigram sampling (as defined by the vocab file), and distortion = 0.0 gives\n a uniform distribution.\n num_reserved_ids: An optional `int`. Defaults to `0`.\n Optionally some reserved IDs can be added in the range [0,\n ..., num_reserved_ids) by the users. One use case is that a special unknown\n word token is used as ID 0. These IDs will have a sampling probability of 0.\n num_shards: An optional `int` that is `>= 1`. Defaults to `1`.\n A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This parameter\n (together with 'shard') indicates the number of partitions that are being\n used in the overall computation.\n shard: An optional `int` that is `>= 0`. Defaults to `0`.\n A sampler can be used to sample from a subset of the original range\n in order to speed up the whole computation through parallelism. This parameter\n (together with 'num_shards') indicates the particular partition number of a\n sampler op, when partitioning is being used.\n unigrams: An optional list of `floats`. Defaults to `[]`.\n A list of unigram counts or probabilities, one per ID in sequential\n order. Exactly one of vocab_file and unigrams should be passed to this op.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a learned unigram distribution.", "type": "API"}, {"name": "tf.raw_ops.FlatMapDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Unlike MapDataset, the `f` in FlatMapDataset is expected to return a\n Dataset variant, and FlatMapDataset will flatten successive results\n into a single Dataset.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Floor", "docs": "Returns element-wise largest integer not greater than x.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise largest integer not greater than x.", "type": "API"}, {"name": "tf.raw_ops.FloorDiv", "docs": "Returns x // y element-wise.\n\n *NOTE*: `floor_div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x // y element-wise.", "type": "API"}, {"name": "tf.raw_ops.FloorMod", "docs": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is\n\n true, this follows Python semantics in that the result here is consistent\n with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.\n\n *NOTE*: `math.floormod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. When `x < 0` xor `y < 0` is", "type": "API"}, {"name": "tf.raw_ops.FlushSummaryWriter", "docs": "TODO: add doc.\n\n Args:\n writer: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.For", "docs": "Applies a for loop.\n\n ```python\n output = input;\n for i in range(start, limit, delta)\n output = body(i, output);\n ```\n\n Args:\n start: A `Tensor` of type `int32`. The lower bound. An int32\n limit: A `Tensor` of type `int32`. The upper bound. An int32\n delta: A `Tensor` of type `int32`. The increment. An int32\n input: A list of `Tensor` objects.\n A list of input tensors whose types are T.\n body: A function decorated with @Defun.\n A function that takes a list of tensors (int32, T) and returns another\n list of tensors (T).\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "Applies a for loop.", "type": "API"}, {"name": "tf.raw_ops.FractionalAvgPool", "docs": "Performs fractional average pooling on the input.\n\n Fractional average pooling is similar to Fractional max pooling in the pooling\n region generation step. The only difference is that after pooling regions are\n generated, a mean operation is performed instead of a max operation in each\n pooling region.\n\n Args:\n value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.\n 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: A list of `floats` that has length `>= 4`.\n Pooling ratio for each dimension of `value`, currently only\n supports row and col dimension and should be >= 1.0. For example, a valid\n pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements\n must be 1.0 because we don't allow pooling on batch and channels\n dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions\n respectively.\n pseudo_random: An optional `bool`. Defaults to `False`.\n When set to True, generates the pooling sequence in a\n pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin\n Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for\n difference between pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`.\n When set to True, it means when pooling, the values at the boundary\n of adjacent pooling cells are used by both cells. For example:\n\n `index 0 1 2 3 4`\n\n `value 20 5 16 3 7`\n\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice.\n The result would be [41/3, 26/3] for fractional avg pooling.\n deterministic: An optional `bool`. Defaults to `False`.\n When set to True, a fixed pooling region will be used when\n iterating over a FractionalAvgPool node in the computation graph. Mainly used\n in unit test to make FractionalAvgPool deterministic.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).\n\n output: A `Tensor`. Has the same type as `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n ", "desc": "Performs fractional average pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.FractionalAvgPoolGrad", "docs": "Computes gradient of the FractionalAvgPool function.\n\n Unlike FractionalMaxPoolGrad, we don't need to find arg_max for\n FractionalAvgPoolGrad, we just need to evenly back-propagate each element of\n out_backprop to those indices that form the same pooling cell. Therefore, we\n just need to know the shape of original input tensor, instead of the whole\n tensor.\n\n Args:\n orig_input_tensor_shape: A `Tensor` of type `int64`.\n Original input tensor shape for `fractional_avg_pool`\n out_backprop: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.\n 4-D with shape `[batch, height, width, channels]`. Gradients\n w.r.t. the output of `fractional_avg_pool`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n row pooling sequence, form pooling region with\n col_pooling_sequence.\n col_pooling_sequence: A `Tensor` of type `int64`.\n column pooling sequence, form pooling region with\n row_pooling sequence.\n overlapping: An optional `bool`. Defaults to `False`.\n When set to True, it means when pooling, the values at the boundary\n of adjacent pooling cells are used by both cells. For example:\n\n `index 0 1 2 3 4`\n\n `value 20 5 16 3 7`\n\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice.\n The result would be [41/3, 26/3] for fractional avg pooling.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `out_backprop`.\n ", "desc": "Computes gradient of the FractionalAvgPool function.", "type": "API"}, {"name": "tf.raw_ops.FractionalMaxPool", "docs": "Performs fractional max pooling on the input.\n\n Fractional max pooling is slightly different than regular max pooling. In\n regular max pooling, you downsize an input set by taking the maximum value of\n smaller N x N subsections of the set (often 2x2), and try to reduce the set by\n a factor of N, where N is an integer. Fractional max pooling, as you might\n expect from the word \"fractional\", means that the overall reduction ratio N\n does not have to be an integer.\n\n The sizes of the pooling regions are generated randomly but are fairly uniform.\n For example, let's look at the height dimension, and the constraints on the\n list of rows that will be pool boundaries.\n\n First we define the following:\n\n 1. input_row_length : the number of rows from the input set\n 2. output_row_length : which will be smaller than the input\n 3. alpha = input_row_length / output_row_length : our reduction ratio\n 4. K = floor(alpha)\n 5. row_pooling_sequence : this is the result list of pool boundary rows\n\n Then, row_pooling_sequence should satisfy:\n\n 1. a[0] = 0 : the first value of the sequence is 0\n 2. a[end] = input_row_length : the last value of the sequence is the size\n 3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size\n 4. length(row_pooling_sequence) = output_row_length+1\n\n For more details on fractional max pooling, see this paper:\n [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)\n\n Args:\n value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.\n 4-D with shape `[batch, height, width, channels]`.\n pooling_ratio: A list of `floats` that has length `>= 4`.\n Pooling ratio for each dimension of `value`, currently only\n supports row and col dimension and should be >= 1.0. For example, a valid\n pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements\n must be 1.0 because we don't allow pooling on batch and channels\n dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions\n respectively.\n pseudo_random: An optional `bool`. Defaults to `False`.\n When set to True, generates the pooling sequence in a\n pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin\n Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for\n difference between pseudorandom and random.\n overlapping: An optional `bool`. Defaults to `False`.\n When set to True, it means when pooling, the values at the boundary\n of adjacent pooling cells are used by both cells. For example:\n\n `index 0 1 2 3 4`\n\n `value 20 5 16 3 7`\n\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice.\n The result would be [20, 16] for fractional max pooling.\n deterministic: An optional `bool`. Defaults to `False`.\n When set to True, a fixed pooling region will be used when\n iterating over a FractionalMaxPool node in the computation graph. Mainly used\n in unit test to make FractionalMaxPool deterministic.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).\n\n output: A `Tensor`. Has the same type as `value`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n col_pooling_sequence: A `Tensor` of type `int64`.\n ", "desc": "Performs fractional max pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.FractionalMaxPoolGrad", "docs": "Computes gradient of the FractionalMaxPool function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.\n Original input for `fractional_max_pool`\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n Original output for `fractional_max_pool`\n out_backprop: A `Tensor`. Must have the same type as `orig_input`.\n 4-D with shape `[batch, height, width, channels]`. Gradients\n w.r.t. the output of `fractional_max_pool`.\n row_pooling_sequence: A `Tensor` of type `int64`.\n row pooling sequence, form pooling region with\n col_pooling_sequence.\n col_pooling_sequence: A `Tensor` of type `int64`.\n column pooling sequence, form pooling region with\n row_pooling sequence.\n overlapping: An optional `bool`. Defaults to `False`.\n When set to True, it means when pooling, the values at the boundary\n of adjacent pooling cells are used by both cells. For example:\n\n `index 0 1 2 3 4`\n\n `value 20 5 16 3 7`\n\n If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice.\n The result would be [20, 16] for fractional max pooling.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes gradient of the FractionalMaxPool function.", "type": "API"}, {"name": "tf.raw_ops.FresnelCos", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.FresnelSin", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNorm", "docs": "Batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`.\n A 4D Tensor for input data.\n scale: A `Tensor`. Must have the same type as `x`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n offset: A `Tensor`. Must have the same type as `x`.\n A 1D Tensor for offset, to shift to the normalized x.\n mean: A `Tensor`. Must have the same type as `x`.\n A 1D Tensor for population mean. Used for inference only;\n must be empty for training.\n variance: A `Tensor`. Must have the same type as `x`.\n A 1D Tensor for population variance. Used for inference only;\n must be empty for training.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n exponential_avg_factor: An optional `float`. Defaults to `1`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n The data format for x and y. Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2).\n\n y: A `Tensor`. Has the same type as `x`.\n batch_mean: A `Tensor`. Has the same type as `x`.\n batch_variance: A `Tensor`. Has the same type as `x`.\n reserve_space_1: A `Tensor`. Has the same type as `x`.\n reserve_space_2: A `Tensor`. Has the same type as `x`.\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNormGrad", "docs": "Gradient for batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n y_backprop: A `Tensor`. Must be one of the following types: `float32`.\n A 4D Tensor for the gradient with respect to y.\n x: A `Tensor`. Must have the same type as `y_backprop`.\n A 4D Tensor for input data.\n scale: A `Tensor`. Must have the same type as `y_backprop`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n reserve_space_1: A `Tensor`. Must have the same type as `y_backprop`.\n When is_training is True, a 1D Tensor for the computed batch\n mean to be reused in gradient computation. When is_training is\n False, a 1D Tensor for the population mean to be reused in both\n 1st and 2nd order gradient computation.\n reserve_space_2: A `Tensor`. Must have the same type as `y_backprop`.\n When is_training is True, a 1D Tensor for the computed batch\n variance (inverted variance in the cuDNN case) to be reused in\n gradient computation. When is_training is False, a 1D Tensor\n for the population variance to be reused in both 1st and 2nd\n order gradient computation.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n The data format for y_backprop, x, x_backprop.\n Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_3, reserve_space_4).\n\n x_backprop: A `Tensor`. Has the same type as `y_backprop`.\n scale_backprop: A `Tensor`. Has the same type as `y_backprop`.\n offset_backprop: A `Tensor`. Has the same type as `y_backprop`.\n reserve_space_3: A `Tensor`. Has the same type as `y_backprop`.\n reserve_space_4: A `Tensor`. Has the same type as `y_backprop`.\n ", "desc": "Gradient for batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNormGradV2", "docs": "Gradient for batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n y_backprop: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n A 4D Tensor for the gradient with respect to y.\n x: A `Tensor`. Must have the same type as `y_backprop`.\n A 4D Tensor for input data.\n scale: A `Tensor` of type `float32`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n reserve_space_1: A `Tensor`. Must be one of the following types: `float32`.\n When is_training is True, a 1D Tensor for the computed batch\n mean to be reused in gradient computation. When is_training is\n False, a 1D Tensor for the population mean to be reused in both\n 1st and 2nd order gradient computation.\n reserve_space_2: A `Tensor`. Must have the same type as `reserve_space_1`.\n When is_training is True, a 1D Tensor for the computed batch\n variance (inverted variance in the cuDNN case) to be reused in\n gradient computation. When is_training is False, a 1D Tensor\n for the population variance to be reused in both 1st and 2nd\n order gradient computation.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n The data format for y_backprop, x, x_backprop.\n Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_3, reserve_space_4).\n\n x_backprop: A `Tensor`. Has the same type as `y_backprop`.\n scale_backprop: A `Tensor`. Has the same type as `reserve_space_1`.\n offset_backprop: A `Tensor`. Has the same type as `reserve_space_1`.\n reserve_space_3: A `Tensor`. Has the same type as `reserve_space_1`.\n reserve_space_4: A `Tensor`. Has the same type as `reserve_space_1`.\n ", "desc": "Gradient for batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNormGradV3", "docs": "Gradient for batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n y_backprop: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n A 4D Tensor for the gradient with respect to y.\n x: A `Tensor`. Must have the same type as `y_backprop`.\n A 4D Tensor for input data.\n scale: A `Tensor` of type `float32`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n reserve_space_1: A `Tensor`. Must be one of the following types: `float32`.\n When is_training is True, a 1D Tensor for the computed batch\n mean to be reused in gradient computation. When is_training is\n False, a 1D Tensor for the population mean to be reused in both\n 1st and 2nd order gradient computation.\n reserve_space_2: A `Tensor`. Must have the same type as `reserve_space_1`.\n When is_training is True, a 1D Tensor for the computed batch\n variance (inverted variance in the cuDNN case) to be reused in\n gradient computation. When is_training is False, a 1D Tensor\n for the population variance to be reused in both 1st and 2nd\n order gradient computation.\n reserve_space_3: A `Tensor`. Must have the same type as `reserve_space_1`.\n When is_training is True, a 1D Tensor for some intermediate results to be reused\n in gradient computation. When is_training is False, a dummy empty Tensor will be\n created.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NDHWC\", \"NCDHW\"`. Defaults to `\"NHWC\"`.\n The data format for y_backprop, x, x_backprop.\n Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (x_backprop, scale_backprop, offset_backprop, reserve_space_4, reserve_space_5).\n\n x_backprop: A `Tensor`. Has the same type as `y_backprop`.\n scale_backprop: A `Tensor`. Has the same type as `reserve_space_1`.\n offset_backprop: A `Tensor`. Has the same type as `reserve_space_1`.\n reserve_space_4: A `Tensor`. Has the same type as `reserve_space_1`.\n reserve_space_5: A `Tensor`. Has the same type as `reserve_space_1`.\n ", "desc": "Gradient for batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNormV2", "docs": "Batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n A 4D Tensor for input data.\n scale: A `Tensor`. Must be one of the following types: `float32`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n offset: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for offset, to shift to the normalized x.\n mean: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for population mean. Used for inference only;\n must be empty for training.\n variance: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for population variance. Used for inference only;\n must be empty for training.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n exponential_avg_factor: An optional `float`. Defaults to `1`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n The data format for x and y. Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2).\n\n y: A `Tensor`. Has the same type as `x`.\n batch_mean: A `Tensor`. Has the same type as `scale`.\n batch_variance: A `Tensor`. Has the same type as `scale`.\n reserve_space_1: A `Tensor`. Has the same type as `scale`.\n reserve_space_2: A `Tensor`. Has the same type as `scale`.\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedBatchNormV3", "docs": "Batch normalization.\n\n Note that the size of 4D Tensors are defined by either \"NHWC\" or \"NCHW\".\n The size of 1D Tensors matches the dimension C of the 4D Tensors.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n A 4D Tensor for input data.\n scale: A `Tensor`. Must be one of the following types: `bfloat16`, `float32`.\n A 1D Tensor for scaling factor, to scale the normalized x.\n offset: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for offset, to shift to the normalized x.\n mean: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for population mean. Used for inference only;\n must be empty for training.\n variance: A `Tensor`. Must have the same type as `scale`.\n A 1D Tensor for population variance. Used for inference only;\n must be empty for training.\n epsilon: An optional `float`. Defaults to `0.0001`.\n A small float number added to the variance of x.\n exponential_avg_factor: An optional `float`. Defaults to `1`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NDHWC\", \"NCDHW\"`. Defaults to `\"NHWC\"`.\n The data format for x and y. Either \"NHWC\" (default) or \"NCHW\".\n is_training: An optional `bool`. Defaults to `True`.\n A bool value to indicate the operation is for training (default)\n or inference.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, batch_mean, batch_variance, reserve_space_1, reserve_space_2, reserve_space_3).\n\n y: A `Tensor`. Has the same type as `x`.\n batch_mean: A `Tensor`. Has the same type as `scale`.\n batch_variance: A `Tensor`. Has the same type as `scale`.\n reserve_space_1: A `Tensor`. Has the same type as `scale`.\n reserve_space_2: A `Tensor`. Has the same type as `scale`.\n reserve_space_3: A `Tensor`. Has the same type as `scale`.\n ", "desc": "Batch normalization.", "type": "API"}, {"name": "tf.raw_ops.FusedPadConv2D", "docs": "Performs a padding as a preprocess during a convolution.\n\n Similar to FusedResizeAndPadConv2d, this op allows for an optimized\n implementation where the spatial padding transformation stage is fused with the\n im2col lookup, but in this case without the bilinear filtering required for\n resizing. Fusing the padding prevents the need to write out the intermediate\n results as whole tensors, reducing memory pressure, and we can get some latency\n gains by merging the transformation calculations.\n The data_format attribute for Conv2D isn't supported by this op, and 'NHWC'\n order is used instead.\n Internally this op uses a single per-graph scratch buffer, which means that it\n will block if multiple versions are being run in parallel. This is because this\n operator is primarily an optimization to minimize memory usage.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n 4-D with shape `[batch, in_height, in_width, in_channels]`.\n paddings: A `Tensor` of type `int32`.\n A two-column matrix specifying the padding sizes. The number of\n rows must be the same as the rank of `input`.\n filter: A `Tensor`. Must have the same type as `input`. 4-D with shape\n `[filter_height, filter_width, in_channels, out_channels]`.\n mode: A `string` from: `\"REFLECT\", \"SYMMETRIC\"`.\n strides: A list of `ints`.\n 1-D of length 4. The stride of the sliding window for each dimension\n of `input`. Must be in the same order as the dimension specified with format.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs a padding as a preprocess during a convolution.", "type": "API"}, {"name": "tf.raw_ops.FusedResizeAndPadConv2D", "docs": "Performs a resize and padding as a preprocess during a convolution.\n\n It's often possible to do spatial transformations more efficiently as part of\n the packing stage of a convolution, so this op allows for an optimized\n implementation where these stages are fused together. This prevents the need to\n write out the intermediate results as whole tensors, reducing memory pressure,\n and we can get some latency gains by merging the transformation calculations.\n The data_format attribute for Conv2D isn't supported by this op, and defaults to\n 'NHWC' order.\n Internally this op uses a single per-graph scratch buffer, which means that it\n will block if multiple versions are being run in parallel. This is because this\n operator is primarily an optimization to minimize memory usage.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n 4-D with shape `[batch, in_height, in_width, in_channels]`.\n size: A `Tensor` of type `int32`.\n A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n paddings: A `Tensor` of type `int32`.\n A two-column matrix specifying the padding sizes. The number of\n rows must be the same as the rank of `input`.\n filter: A `Tensor`. Must have the same type as `input`. 4-D with shape\n `[filter_height, filter_width, in_channels, out_channels]`.\n mode: A `string` from: `\"REFLECT\", \"SYMMETRIC\"`.\n strides: A list of `ints`.\n 1-D of length 4. The stride of the sliding window for each dimension\n of `input`. Must be in the same order as the dimension specified with format.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n resize_align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs a resize and padding as a preprocess during a convolution.", "type": "API"}, {"name": "tf.raw_ops.Gather", "docs": "Gather slices from `params` according to `indices`.\n\n `indices` must be an integer tensor of any dimension (usually 0-D or 1-D).\n Produces an output tensor with shape `indices.shape + params.shape[1:]` where:\n\n ```python\n # Scalar indices\n output[:, ..., :] = params[indices, :, ... :]\n\n # Vector indices\n output[i, :, ..., :] = params[indices[i], :, ... :]\n\n # Higher rank indices\n output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]\n ```\n\n If `indices` is a permutation and `len(indices) == params.shape[0]` then\n this operation will permute `params` accordingly.\n\n `validate_indices`: DEPRECATED. If this operation is assigned to CPU, values in\n `indices` are always validated to be within range. If assigned to GPU,\n out-of-bound indices result in safe but unspecified behavior, which may include\n raising an error.\n\n
\n \n
\n\n Args:\n params: A `Tensor`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.GatherNd", "docs": "Gather slices from `params` into a Tensor with shape specified by `indices`.\n\n `indices` is a K-dimensional integer tensor, best thought of as a\n (K-1)-dimensional tensor of indices into `params`, where each element defines a\n slice of `params`:\n\n output[\\\\(i_0, ..., i_{K-2}\\\\)] = params[indices[\\\\(i_0, ..., i_{K-2}\\\\)]]\n\n Whereas in `tf.gather` `indices` defines slices into the `axis`\n dimension of `params`, in `tf.gather_nd`, `indices` defines slices into the\n first `N` dimensions of `params`, where `N = indices.shape[-1]`.\n\n The last dimension of `indices` can be at most the rank of\n `params`:\n\n indices.shape[-1] <= params.rank\n\n The last dimension of `indices` corresponds to elements\n (if `indices.shape[-1] == params.rank`) or slices\n (if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]`\n of `params`. The output tensor has shape\n\n indices.shape[:-1] + params.shape[indices.shape[-1]:]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n\n Some examples below.\n\n Simple indexing into a matrix:\n\n ```python\n indices = [[0, 0], [1, 1]]\n params = [['a', 'b'], ['c', 'd']]\n output = ['a', 'd']\n ```\n\n Slice indexing into a matrix:\n\n ```python\n indices = [[1], [0]]\n params = [['a', 'b'], ['c', 'd']]\n output = [['c', 'd'], ['a', 'b']]\n ```\n\n Indexing into a 3-tensor:\n\n ```python\n indices = [[1]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = [[['a1', 'b1'], ['c1', 'd1']]]\n\n\n indices = [[0, 1], [1, 0]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = [['c0', 'd0'], ['a1', 'b1']]\n\n\n indices = [[0, 0, 1], [1, 0, 1]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = ['b0', 'b1']\n ```\n\n Batched indexing into a matrix:\n\n ```python\n indices = [[[0, 0]], [[0, 1]]]\n params = [['a', 'b'], ['c', 'd']]\n output = [['a'], ['b']]\n ```\n\n Batched slice indexing into a matrix:\n\n ```python\n indices = [[[1]], [[0]]]\n params = [['a', 'b'], ['c', 'd']]\n output = [[['c', 'd']], [['a', 'b']]]\n ```\n\n Batched indexing into a 3-tensor:\n\n ```python\n indices = [[[1]], [[0]]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = [[[['a1', 'b1'], ['c1', 'd1']]],\n [[['a0', 'b0'], ['c0', 'd0']]]]\n\n indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = [[['c0', 'd0'], ['a1', 'b1']],\n [['a0', 'b0'], ['c1', 'd1']]]\n\n\n indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]\n params = [[['a0', 'b0'], ['c0', 'd0']],\n [['a1', 'b1'], ['c1', 'd1']]]\n output = [['b0', 'b1'], ['d0', 'c1']]\n ```\n\n See also `tf.gather` and `tf.batch_gather`.\n\n Args:\n params: A `Tensor`. The tensor from which to gather values.\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Index tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` into a Tensor with shape specified by `indices`.", "type": "API"}, {"name": "tf.raw_ops.GatherV2", "docs": "Gather slices from `params` axis `axis` according to `indices`.\n\n `indices` must be an integer tensor of any dimension (usually 0-D or 1-D).\n Produces an output tensor with shape `params.shape[:axis] +\n indices.shape[batch_dims:] + params.shape[axis + 1:]` where:\n\n ```python\n # Scalar indices (output is rank(params) - 1).\n output[a_0, ..., a_n, b_0, ..., b_n] =\n params[a_0, ..., a_n, indices, b_0, ..., b_n]\n\n # Vector indices (output is rank(params)).\n output[a_0, ..., a_n, i, b_0, ..., b_n] =\n params[a_0, ..., a_n, indices[i], b_0, ..., b_n]\n\n # Higher rank indices (output is rank(params) + rank(indices) - 1).\n output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] =\n params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]\n ```\n\n
\n \n
\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, a 0 is stored in the\n corresponding output value.\n\n See also `tf.batch_gather` and `tf.gather_nd`.\n\n Args:\n params: A `Tensor`.\n The tensor from which to gather values. Must be at least rank\n `axis + 1`.\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Index tensor. Must be in range `[0, params.shape[axis])`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The axis in `params` to gather `indices` from. Defaults to the first\n dimension. Supports negative indexes.\n batch_dims: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `params`.\n ", "desc": "Gather slices from `params` axis `axis` according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.GenerateBoundingBoxProposals", "docs": "This op produces Region of Interests from given bounding boxes(bbox_deltas) encoded wrt anchors according to eq.2 in arXiv:1506.01497\n\n The op selects top `pre_nms_topn` scoring boxes, decodes them with respect to anchors,\n applies non-maximal suppression on overlapping boxes with higher than\n `nms_threshold` intersection-over-union (iou) value, discarding boxes where shorter\n side is less than `min_size`.\n Inputs:\n `scores`: A 4D tensor of shape [Batch, Height, Width, Num Anchors] containing the scores per anchor at given position\n `bbox_deltas`: is a tensor of shape [Batch, Height, Width, 4 x Num Anchors] boxes encoded to each anchor\n `anchors`: A 1D tensor of shape [4 x Num Anchors], representing the anchors.\n Outputs:\n `rois`: output RoIs, a 3D tensor of shape [Batch, post_nms_topn, 4], padded by 0 if less than post_nms_topn candidates found.\n `roi_probabilities`: probability scores of each roi in 'rois', a 2D tensor of shape [Batch,post_nms_topn], padded with 0 if needed, sorted by scores.\n\n Args:\n scores: A `Tensor` of type `float32`.\n A 4-D float tensor of shape `[num_images, height, width, num_achors]` containing scores of the boxes for given anchors, can be unsorted.\n bbox_deltas: A `Tensor` of type `float32`.\n A 4-D float tensor of shape `[num_images, height, width, 4 x num_anchors]`. encoding boxes with respec to each anchor.\n Coordinates are given in the form [dy, dx, dh, dw].\n image_info: A `Tensor` of type `float32`.\n A 2-D float tensor of shape `[num_images, 5]` containing image information Height, Width, Scale.\n anchors: A `Tensor` of type `float32`.\n A 2-D float tensor of shape `[num_anchors, 4]` describing the anchor boxes. Boxes are formatted in the form [y1, x1, y2, x2].\n nms_threshold: A `Tensor` of type `float32`.\n A scalar float tensor for non-maximal-suppression threshold.\n pre_nms_topn: A `Tensor` of type `int32`.\n A scalar int tensor for the number of top scoring boxes to be used as input.\n min_size: A `Tensor` of type `float32`.\n A scalar float tensor. Any box that has a smaller size than min_size will be discarded.\n post_nms_topn: An optional `int`. Defaults to `300`.\n An integer. Maximum number of rois in the output.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (rois, roi_probabilities).\n\n rois: A `Tensor` of type `float32`.\n roi_probabilities: A `Tensor` of type `float32`.\n ", "desc": "This op produces Region of Interests from given bounding boxes(bbox_deltas) encoded wrt anchors according to eq.2 in arXiv:1506.01497", "type": "API"}, {"name": "tf.raw_ops.GenerateVocabRemapping", "docs": "Given a path to new and old vocabulary files, returns a remapping Tensor of\n\n length `num_new_vocab`, where `remapping[i]` contains the row number in the old\n vocabulary that corresponds to row `i` in the new vocabulary (starting at line\n `new_vocab_offset` and up to `num_new_vocab` entities), or `-1` if entry `i`\n in the new vocabulary is not in the old vocabulary. The old vocabulary is\n constrained to the first `old_vocab_size` entries if `old_vocab_size` is not the\n default value of -1.\n\n `num_vocab_offset` enables\n use in the partitioned variable case, and should generally be set through\n examining partitioning info. The format of the files should be a text file,\n with each line containing a single entity within the vocabulary.\n\n For example, with `new_vocab_file` a text file containing each of the following\n elements on a single line: `[f0, f1, f2, f3]`, old_vocab_file = [f1, f0, f3],\n `num_new_vocab = 3, new_vocab_offset = 1`, the returned remapping would be\n `[0, -1, 2]`.\n\n The op also returns a count of how many entries in the new vocabulary\n were present in the old vocabulary, which is used to calculate the number of\n values to initialize in a weight matrix remapping\n\n This functionality can be used to remap both row vocabularies (typically,\n features) and column vocabularies (typically, classes) from TensorFlow\n checkpoints. Note that the partitioning logic relies on contiguous vocabularies\n corresponding to div-partitioned variables. Moreover, the underlying remapping\n uses an IndexTable (as opposed to an inexact CuckooTable), so client code should\n use the corresponding index_table_from_file() as the FeatureColumn framework\n does (as opposed to tf.feature_to_id(), which uses a CuckooTable).\n\n Args:\n new_vocab_file: A `Tensor` of type `string`. Path to the new vocab file.\n old_vocab_file: A `Tensor` of type `string`. Path to the old vocab file.\n new_vocab_offset: An `int` that is `>= 0`.\n How many entries into the new vocab file to start reading.\n num_new_vocab: An `int` that is `>= 0`.\n Number of entries in the new vocab file to remap.\n old_vocab_size: An optional `int` that is `>= -1`. Defaults to `-1`.\n Number of entries in the old vocab file to consider. If -1,\n use the entire old vocabulary.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (remapping, num_present).\n\n remapping: A `Tensor` of type `int64`.\n num_present: A `Tensor` of type `int32`.\n ", "desc": "Given a path to new and old vocabulary files, returns a remapping Tensor of", "type": "API"}, {"name": "tf.raw_ops.GeneratorDataset", "docs": "Creates a dataset that invokes a function to generate elements.\n\n Args:\n init_func_other_args: A list of `Tensor` objects.\n next_func_other_args: A list of `Tensor` objects.\n finalize_func_other_args: A list of `Tensor` objects.\n init_func: A function decorated with @Defun.\n next_func: A function decorated with @Defun.\n finalize_func: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that invokes a function to generate elements.", "type": "API"}, {"name": "tf.raw_ops.GetOptions", "docs": "Returns the `tf.data.Options` attached to `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the `tf.data.Options` attached to `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.GetSessionHandle", "docs": "Store the input tensor in the state of the current session.\n\n Args:\n value: A `Tensor`. The tensor to be stored.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Store the input tensor in the state of the current session.", "type": "API"}, {"name": "tf.raw_ops.GetSessionHandleV2", "docs": "Store the input tensor in the state of the current session.\n\n Args:\n value: A `Tensor`. The tensor to be stored.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Store the input tensor in the state of the current session.", "type": "API"}, {"name": "tf.raw_ops.GetSessionTensor", "docs": "Get the value of the tensor specified by its handle.\n\n Args:\n handle: A `Tensor` of type `string`.\n The handle for a tensor stored in the session state.\n dtype: A `tf.DType`. The type of the output value.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Get the value of the tensor specified by its handle.", "type": "API"}, {"name": "tf.raw_ops.Greater", "docs": "Returns the truth value of (x > y) element-wise.\n\n *NOTE*: `math.greater` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 2, 5])\n tf.math.greater(x, y) ==> [False, True, True]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.greater(x, y) ==> [False, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x > y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.GreaterEqual", "docs": "Returns the truth value of (x >= y) element-wise.\n\n *NOTE*: `math.greater_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5, 2, 5, 10])\n tf.math.greater_equal(x, y) ==> [True, True, True, False]\n\n x = tf.constant([5, 4, 6, 7])\n y = tf.constant([5])\n tf.math.greater_equal(x, y) ==> [True, False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x >= y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.GroupByReducerDataset", "docs": "Creates a dataset that computes a group-by on `input_dataset`.\n\n Creates a dataset that computes a group-by on `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n key_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `key_func`.\n init_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `init_func`.\n reduce_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `reduce_func`.\n finalize_func_other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `finalize_func`.\n key_func: A function decorated with @Defun.\n A function mapping an element of `input_dataset`, concatenated\n with `key_func_other_arguments` to a scalar value of type DT_INT64.\n init_func: A function decorated with @Defun.\n A function mapping a key of type DT_INT64, concatenated with\n `init_func_other_arguments` to the initial reducer state.\n reduce_func: A function decorated with @Defun.\n A function mapping the current reducer state and an element of `input_dataset`,\n concatenated with `reduce_func_other_arguments` to a new reducer state.\n finalize_func: A function decorated with @Defun.\n A function mapping the final reducer state to an output element.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that computes a group-by on `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.GroupByWindowDataset", "docs": "Creates a dataset that computes a windowed group-by on `input_dataset`.\n\n // TODO(mrry): Support non-int64 keys.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n key_func_other_arguments: A list of `Tensor` objects.\n reduce_func_other_arguments: A list of `Tensor` objects.\n window_size_func_other_arguments: A list of `Tensor` objects.\n key_func: A function decorated with @Defun.\n A function mapping an element of `input_dataset`, concatenated\n with `key_func_other_arguments` to a scalar value of type DT_INT64.\n reduce_func: A function decorated with @Defun.\n window_size_func: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that computes a windowed group-by on `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.GRUBlockCell", "docs": "Computes the GRU cell forward propagation for 1 time step.\n\n Args\n x: Input to the GRU cell.\n h_prev: State input from the previous GRU cell.\n w_ru: Weight matrix for the reset and update gate.\n w_c: Weight matrix for the cell connection gate.\n b_ru: Bias vector for the reset and update gate.\n b_c: Bias vector for the cell connection gate.\n\n Returns\n r: Output of the reset gate.\n u: Output of the update gate.\n c: Output of the cell connection gate.\n h: Current state of the GRU cell.\n\n Note on notation of the variables:\n\n Concatenation of a and b is represented by a_b\n Element-wise dot product of a and b is represented by ab\n Element-wise dot product is represented by \\circ\n Matrix multiplication is represented by *\n\n Biases are initialized with :\n `b_ru` - constant_initializer(1.0)\n `b_c` - constant_initializer(0.0)\n\n This kernel op implements the following mathematical equations:\n\n ```\n x_h_prev = [x, h_prev]\n\n [r_bar u_bar] = x_h_prev * w_ru + b_ru\n\n r = sigmoid(r_bar)\n u = sigmoid(u_bar)\n\n h_prevr = h_prev \\circ r\n\n x_h_prevr = [x h_prevr]\n\n c_bar = x_h_prevr * w_c + b_c\n c = tanh(c_bar)\n\n h = (1-u) \\circ c + u \\circ h_prev\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`.\n h_prev: A `Tensor`. Must have the same type as `x`.\n w_ru: A `Tensor`. Must have the same type as `x`.\n w_c: A `Tensor`. Must have the same type as `x`.\n b_ru: A `Tensor`. Must have the same type as `x`.\n b_c: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (r, u, c, h).\n\n r: A `Tensor`. Has the same type as `x`.\n u: A `Tensor`. Has the same type as `x`.\n c: A `Tensor`. Has the same type as `x`.\n h: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the GRU cell forward propagation for 1 time step.", "type": "API"}, {"name": "tf.raw_ops.GRUBlockCellGrad", "docs": "Computes the GRU cell back-propagation for 1 time step.\n\n Args\n x: Input to the GRU cell.\n h_prev: State input from the previous GRU cell.\n w_ru: Weight matrix for the reset and update gate.\n w_c: Weight matrix for the cell connection gate.\n b_ru: Bias vector for the reset and update gate.\n b_c: Bias vector for the cell connection gate.\n r: Output of the reset gate.\n u: Output of the update gate.\n c: Output of the cell connection gate.\n d_h: Gradients of the h_new wrt to objective function.\n\n Returns\n d_x: Gradients of the x wrt to objective function.\n d_h_prev: Gradients of the h wrt to objective function.\n d_c_bar Gradients of the c_bar wrt to objective function.\n d_r_bar_u_bar Gradients of the r_bar & u_bar wrt to objective function.\n\n This kernel op implements the following mathematical equations:\n\n Note on notation of the variables:\n\n Concatenation of a and b is represented by a_b\n Element-wise dot product of a and b is represented by ab\n Element-wise dot product is represented by \\circ\n Matrix multiplication is represented by *\n\n Additional notes for clarity:\n\n `w_ru` can be segmented into 4 different matrices.\n ```\n w_ru = [w_r_x w_u_x\n w_r_h_prev w_u_h_prev]\n ```\n Similarly, `w_c` can be segmented into 2 different matrices.\n ```\n w_c = [w_c_x w_c_h_prevr]\n ```\n Same goes for biases.\n ```\n b_ru = [b_ru_x b_ru_h]\n b_c = [b_c_x b_c_h]\n ```\n Another note on notation:\n ```\n d_x = d_x_component_1 + d_x_component_2\n\n where d_x_component_1 = d_r_bar * w_r_x^T + d_u_bar * w_r_x^T\n and d_x_component_2 = d_c_bar * w_c_x^T\n\n d_h_prev = d_h_prev_component_1 + d_h_prevr \\circ r + d_h \\circ u\n where d_h_prev_componenet_1 = d_r_bar * w_r_h_prev^T + d_u_bar * w_r_h_prev^T\n ```\n\n Mathematics behind the Gradients below:\n ```\n d_c_bar = d_h \\circ (1-u) \\circ (1-c \\circ c)\n d_u_bar = d_h \\circ (h-c) \\circ u \\circ (1-u)\n\n d_r_bar_u_bar = [d_r_bar d_u_bar]\n\n [d_x_component_1 d_h_prev_component_1] = d_r_bar_u_bar * w_ru^T\n\n [d_x_component_2 d_h_prevr] = d_c_bar * w_c^T\n\n d_x = d_x_component_1 + d_x_component_2\n\n d_h_prev = d_h_prev_component_1 + d_h_prevr \\circ r + u\n ```\n Below calculation is performed in the python wrapper for the Gradients\n (not in the gradient kernel.)\n ```\n d_w_ru = x_h_prevr^T * d_c_bar\n\n d_w_c = x_h_prev^T * d_r_bar_u_bar\n\n d_b_ru = sum of d_r_bar_u_bar along axis = 0\n\n d_b_c = sum of d_c_bar along axis = 0\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`.\n h_prev: A `Tensor`. Must have the same type as `x`.\n w_ru: A `Tensor`. Must have the same type as `x`.\n w_c: A `Tensor`. Must have the same type as `x`.\n b_ru: A `Tensor`. Must have the same type as `x`.\n b_c: A `Tensor`. Must have the same type as `x`.\n r: A `Tensor`. Must have the same type as `x`.\n u: A `Tensor`. Must have the same type as `x`.\n c: A `Tensor`. Must have the same type as `x`.\n d_h: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (d_x, d_h_prev, d_c_bar, d_r_bar_u_bar).\n\n d_x: A `Tensor`. Has the same type as `x`.\n d_h_prev: A `Tensor`. Has the same type as `x`.\n d_c_bar: A `Tensor`. Has the same type as `x`.\n d_r_bar_u_bar: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the GRU cell back-propagation for 1 time step.", "type": "API"}, {"name": "tf.raw_ops.GuaranteeConst", "docs": "Gives a guarantee to the TF runtime that the input tensor is a constant.\n\n The runtime is then free to make optimizations based on this.\n\n Only accepts value typed tensors as inputs and rejects resource variable handles\n as input.\n\n Returns the input tensor without modification.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Gives a guarantee to the TF runtime that the input tensor is a constant.", "type": "API"}, {"name": "tf.raw_ops.HashTable", "docs": "Creates a non-initialized hash table.\n\n This op creates a hash table, specifying the type of its keys and values.\n Before using the table you will have to initialize it. After initialization the\n table will be immutable.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n If true and shared_name is empty, the table is shared\n using the node name.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Creates a non-initialized hash table.", "type": "API"}, {"name": "tf.raw_ops.HashTableV2", "docs": "Creates a non-initialized hash table.\n\n This op creates a hash table, specifying the type of its keys and values.\n Before using the table you will have to initialize it. After initialization the\n table will be immutable.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n If true and shared_name is empty, the table is shared\n using the node name.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a non-initialized hash table.", "type": "API"}, {"name": "tf.raw_ops.HistogramFixedWidth", "docs": "Return histogram of values.\n\n Given the tensor `values`, this operation returns a rank 1 histogram counting\n the number of entries in `values` that fall into every bin. The bins are\n equal width and determined by the arguments `value_range` and `nbins`.\n\n ```python\n # Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)\n nbins = 5\n value_range = [0.0, 5.0]\n new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]\n\n with tf.get_default_session() as sess:\n hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)\n variables.global_variables_initializer().run()\n sess.run(hist) => [2, 1, 1, 0, 2]\n ```\n\n Args:\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n Numeric `Tensor`.\n value_range: A `Tensor`. Must have the same type as `values`.\n Shape [2] `Tensor` of same `dtype` as `values`.\n values <= value_range[0] will be mapped to hist[0],\n values >= value_range[1] will be mapped to hist[-1].\n nbins: A `Tensor` of type `int32`.\n Scalar `int32 Tensor`. Number of histogram bins.\n dtype: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Return histogram of values.", "type": "API"}, {"name": "tf.raw_ops.HistogramSummary", "docs": "Outputs a `Summary` protocol buffer with a histogram.\n\n The generated\n [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)\n has one summary value containing a histogram for `values`.\n\n This op reports an `InvalidArgument` error if any value is not finite.\n\n Args:\n tag: A `Tensor` of type `string`.\n Scalar. Tag to use for the `Summary.Value`.\n values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n Any shape. Values to use to build the histogram.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with a histogram.", "type": "API"}, {"name": "tf.raw_ops.HSVToRGB", "docs": "Convert one or more images from HSV to RGB.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the RGB\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n See `rgb_to_hsv` for a description of the HSV encoding.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. HSV data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Convert one or more images from HSV to RGB.", "type": "API"}, {"name": "tf.raw_ops.Identity", "docs": "Return a tensor with the same shape and contents as the input tensor or value.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Return a tensor with the same shape and contents as the input tensor or value.", "type": "API"}, {"name": "tf.raw_ops.IdentityN", "docs": "Returns a list of tensors with the same shapes and contents as the input\n\n tensors.\n\n This op can be used to override the gradient for complicated functions. For\n example, suppose y = f(x) and we wish to apply a custom function g for backprop\n such that dx = g(dy). In Python,\n\n ```python\n with tf.get_default_graph().gradient_override_map(\n {'IdentityN': 'OverrideGradientWithG'}):\n y, _ = identity_n([f(x), x])\n\n @tf.RegisterGradient('OverrideGradientWithG')\n def ApplyG(op, dy, _):\n return [None, g(dy)] # Do not backprop to f(x).\n ```\n\n Args:\n input: A list of `Tensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "Returns a list of tensors with the same shapes and contents as the input", "type": "API"}, {"name": "tf.raw_ops.IdentityReader", "docs": "A Reader that outputs the queued work as both the key and value.\n\n To use, enqueue strings in a Queue. ReaderRead will take the front\n work string and output (work, work).\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs the queued work as both the key and value.", "type": "API"}, {"name": "tf.raw_ops.IdentityReaderV2", "docs": "A Reader that outputs the queued work as both the key and value.\n\n To use, enqueue strings in a Queue. ReaderRead will take the front\n work string and output (work, work).\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A Reader that outputs the queued work as both the key and value.", "type": "API"}, {"name": "tf.raw_ops.If", "docs": "output = cond ? then_branch(input) : else_branch(input)\n\n Args:\n cond: A `Tensor`.\n A Tensor. If the tensor is a scalar of non-boolean type, the\n scalar is converted to a boolean according to the\n following rule: if the scalar is a numerical value, non-zero means\n `True` and zero means False; if the scalar is a string, non-empty\n means `True` and empty means `False`. If the tensor is not a scalar,\n being empty means False and being non-empty means True.\n input: A list of `Tensor` objects. A list of input tensors.\n Tout: A list of `tf.DTypes`. A list of output types.\n then_branch: A function decorated with @Defun.\n A function that takes 'inputs' and returns a list of tensors, whose\n types are the same as what else_branch returns.\n else_branch: A function decorated with @Defun.\n A function that takes 'inputs' and returns a list of tensors, whose\n types are the same as what then_branch returns.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "output = cond ? then_branch(input) : else_branch(input)", "type": "API"}, {"name": "tf.raw_ops.IFFT", "docs": "Inverse fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform over the\n inner-most dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.IFFT2D", "docs": "Inverse 2D fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform over the\n inner-most 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 2D fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.IFFT3D", "docs": "Inverse 3D fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform over the\n inner-most 3 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 3D fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.Igamma", "docs": "Compute the lower regularized incomplete Gamma function `P(a, x)`.\n\n The lower regularized incomplete Gamma function is defined as:\n\n\n \\\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\\\)\n\n where\n\n \\\\(gamma(a, x) = \\\\int_{0}^{x} t^{a-1} exp(-t) dt\\\\)\n\n is the lower incomplete Gamma function.\n\n Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the lower regularized incomplete Gamma function `P(a, x)`.", "type": "API"}, {"name": "tf.raw_ops.Igammac", "docs": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.\n\n The upper regularized incomplete Gamma function is defined as:\n\n \\\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\\\)\n\n where\n\n \\\\(Gamma(a, x) = \\int_{x}^{\\infty} t^{a-1} exp(-t) dt\\\\)\n\n is the upper incomplete Gamma function.\n\n Note, above `P(a, x)` (`Igamma`) is the lower regularized complete\n Gamma function.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the upper regularized incomplete Gamma function `Q(a, x)`.", "type": "API"}, {"name": "tf.raw_ops.IgammaGradA", "docs": "Computes the gradient of `igamma(a, x)` wrt `a`.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Computes the gradient of `igamma(a, x)` wrt `a`.", "type": "API"}, {"name": "tf.raw_ops.IgnoreErrorsDataset", "docs": "Creates a dataset that contains the elements of `input_dataset` ignoring errors.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n log_warning: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that contains the elements of `input_dataset` ignoring errors.", "type": "API"}, {"name": "tf.raw_ops.Imag", "docs": "Returns the imaginary part of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n type `float` that is the imaginary part of each element in `input`. All\n elements in `input` must be complex numbers of the form \\\\(a + bj\\\\), where *a*\n is the real part and *b* is the imaginary part returned by this operation.\n\n For example:\n\n ```\n # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]\n tf.imag(input) ==> [4.75, 5.75]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Tout: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tout`.\n ", "desc": "Returns the imaginary part of a complex number.", "type": "API"}, {"name": "tf.raw_ops.ImageProjectiveTransformV2", "docs": "Applies the given transform to each of the images.\n\n If one row of `transforms` is `[a0, a1, a2, b0, b1, b2, c0, c1]`, then it maps\n the *output* point `(x, y)` to a transformed *input* point\n `(x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k)`, where\n `k = c0 x + c1 y + 1`. If the transformed point lays outside of the input\n image, the output pixel is set to 0.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`, `half`, `float32`, `float64`.\n 4-D with shape `[batch, height, width, channels]`.\n transforms: A `Tensor` of type `float32`.\n 2-D Tensor, `[batch, 8]` or `[1, 8]` matrix, where each row corresponds to a 3 x 3\n projective transformation matrix, with the last entry assumed to be 1. If there\n is one row, the same transformation will be applied to all images.\n output_shape: A `Tensor` of type `int32`.\n 1-D Tensor [new_height, new_width].\n interpolation: A `string`. Interpolation method, \"NEAREST\" or \"BILINEAR\".\n fill_mode: An optional `string`. Defaults to `\"CONSTANT\"`.\n Fill mode, \"REFLECT\", \"WRAP\", or \"CONSTANT\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Applies the given transform to each of the images.", "type": "API"}, {"name": "tf.raw_ops.ImageProjectiveTransformV3", "docs": "Applies the given transform to each of the images.\n\n If one row of `transforms` is `[a0, a1, a2, b0, b1, b2, c0, c1]`, then it maps\n the *output* point `(x, y)` to a transformed *input* point\n `(x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k)`, where\n `k = c0 x + c1 y + 1`. If the transformed point lays outside of the input\n image, the output pixel is set to fill_value.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`, `half`, `float32`, `float64`.\n 4-D with shape `[batch, height, width, channels]`.\n transforms: A `Tensor` of type `float32`.\n 2-D Tensor, `[batch, 8]` or `[1, 8]` matrix, where each row corresponds to a 3 x 3\n projective transformation matrix, with the last entry assumed to be 1. If there\n is one row, the same transformation will be applied to all images.\n output_shape: A `Tensor` of type `int32`.\n 1-D Tensor [new_height, new_width].\n fill_value: A `Tensor` of type `float32`.\n float, the value to be filled when fill_mode is constant\".\n interpolation: A `string`. Interpolation method, \"NEAREST\" or \"BILINEAR\".\n fill_mode: An optional `string`. Defaults to `\"CONSTANT\"`.\n Fill mode, \"REFLECT\", \"WRAP\", \"CONSTANT\", or \"NEAREST\".\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Applies the given transform to each of the images.", "type": "API"}, {"name": "tf.raw_ops.ImageSummary", "docs": "Outputs a `Summary` protocol buffer with images.\n\n The summary has up to `max_images` summary values containing images. The\n images are built from `tensor` which must be 4-D with shape `[batch_size,\n height, width, channels]` and where `channels` can be:\n\n * 1: `tensor` is interpreted as Grayscale.\n * 3: `tensor` is interpreted as RGB.\n * 4: `tensor` is interpreted as RGBA.\n\n The images have the same number of channels as the input tensor. For float\n input, the values are normalized one image at a time to fit in the range\n `[0, 255]`. `uint8` values are unchanged. The op uses two different\n normalization algorithms:\n\n * If the input values are all positive, they are rescaled so the largest one\n is 255.\n\n * If any input value is negative, the values are shifted so input value 0.0\n is at 127. They are then rescaled so that either the smallest value is 0,\n or the largest one is 255.\n\n The `tag` argument is a scalar `Tensor` of type `string`. It is used to\n build the `tag` of the summary values:\n\n * If `max_images` is 1, the summary value tag is '*tag*/image'.\n * If `max_images` is greater than 1, the summary value tags are\n generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.\n\n The `bad_color` argument is the color to use in the generated images for\n non-finite input values. It is a `uint8` 1-D tensor of length `channels`.\n Each element must be in the range `[0, 255]` (It represents the value of a\n pixel in the output image). Non-finite values in the input tensor are\n replaced by this tensor in the output image. The default value is the color\n red.\n\n Args:\n tag: A `Tensor` of type `string`.\n Scalar. Used to build the `tag` attribute of the summary values.\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `float32`, `half`, `float64`.\n 4-D of shape `[batch_size, height, width, channels]` where\n `channels` is 1, 3, or 4.\n max_images: An optional `int` that is `>= 1`. Defaults to `3`.\n Max number of batch elements to generate images for.\n bad_color: An optional `tf.TensorProto`. Defaults to `dtype: DT_UINT8 tensor_shape { dim { size: 4 } } int_val: 255 int_val: 0 int_val: 0 int_val: 255`.\n Color to use for pixels with non-finite values.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with images.", "type": "API"}, {"name": "tf.raw_ops.ImmutableConst", "docs": "Returns immutable tensor from memory region.\n\n The current implementation memmaps the tensor from a file.\n\n Args:\n dtype: A `tf.DType`. Type of the returned tensor.\n shape: A `tf.TensorShape` or list of `ints`. Shape of the returned tensor.\n memory_region_name: A `string`.\n Name of readonly memory region used by the tensor, see\n NewReadOnlyMemoryRegionFromFile in tensorflow::Env.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Returns immutable tensor from memory region.", "type": "API"}, {"name": "tf.raw_ops.ImportEvent", "docs": "TODO: add doc.\n\n Args:\n writer: A `Tensor` of type `resource`.\n event: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.InfeedDequeue", "docs": "A placeholder op for a value that will be fed into the computation.\n\n Args:\n dtype: A `tf.DType`. The type of elements in the tensor.\n shape: A `tf.TensorShape` or list of `ints`. The shape of the tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "A placeholder op for a value that will be fed into the computation.", "type": "API"}, {"name": "tf.raw_ops.InfeedDequeueTuple", "docs": "Fetches multiple values from infeed as an XLA tuple.\n\n Args:\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n The element types of each element in `outputs`.\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of each tensor in `outputs`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Fetches multiple values from infeed as an XLA tuple.", "type": "API"}, {"name": "tf.raw_ops.InfeedEnqueue", "docs": "An op which feeds a single Tensor value into the computation.\n\n Args:\n input: A `Tensor`.\n A tensor that will be provided using the infeed mechanism.\n shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n The shape of the tensor.\n layout: An optional list of `ints`. Defaults to `[]`.\n A vector holding the requested layout in minor-to-major sequence.\n If a layout attribute is passed, but its values are all -1, the layout will\n be computed by the infeed operation.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. This should be -1 when the Op\n is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "An op which feeds a single Tensor value into the computation.", "type": "API"}, {"name": "tf.raw_ops.InfeedEnqueuePrelinearizedBuffer", "docs": "An op which enqueues prelinearized buffer into TPU infeed.\n\n Args:\n input: A `Tensor` of type `variant`.\n A variant tensor representing linearized output.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. This should be -1 when the Op is running on a TPU device\n and = 0 when the Op is running on the CPU device.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "An op which enqueues prelinearized buffer into TPU infeed.", "type": "API"}, {"name": "tf.raw_ops.InfeedEnqueueTuple", "docs": "Feeds multiple Tensor values into the computation as an XLA tuple.\n\n Args:\n inputs: A list of `Tensor` objects.\n A list of tensors that will be provided using the infeed mechanism.\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of each tensor in `inputs`.\n layouts: An optional list of `ints`. Defaults to `[]`.\n A vector holding the requested layout in minor-to-major sequence for\n all the tuple shapes, in the order the shapes appear in the \"shapes\" input.\n The layout elements for a sub-shape can be set to -1, in which case the\n corresponding layout will be computed by the infeed operation.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. This should be -1 when the Op\n is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Feeds multiple Tensor values into the computation as an XLA tuple.", "type": "API"}, {"name": "tf.raw_ops.InitializeTable", "docs": "Table initializer that takes two tensors for keys and values respectively.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`.\n Handle to a table which will be initialized.\n keys: A `Tensor`. Keys of type Tkey.\n values: A `Tensor`. Values of type Tval.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Table initializer that takes two tensors for keys and values respectively.", "type": "API"}, {"name": "tf.raw_ops.InitializeTableFromDataset", "docs": "TODO: add doc.\n\n Args:\n table_handle: A `Tensor` of type `resource`.\n dataset: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.InitializeTableFromTextFile", "docs": "Initializes a table from a text file.\n\n It inserts one key-value pair into the table for each line of the file.\n The key and value is extracted from the whole line content, elements from the\n split line based on `delimiter` or the line number (starting from zero).\n Where to extract the key and value from a line is specified by `key_index` and\n `value_index`.\n\n - A value of -1 means use the line number(starting from zero), expects `int64`.\n - A value of -2 means use the whole line content, expects `string`.\n - A value >= 0 means use the index (starting at zero) of the split line based\n on `delimiter`.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`.\n Handle to a table which will be initialized.\n filename: A `Tensor` of type `string`. Filename of a vocabulary text file.\n key_index: An `int` that is `>= -2`.\n Column index in a line to get the table `key` values from.\n value_index: An `int` that is `>= -2`.\n Column index that represents information of a line to get the table\n `value` values from.\n vocab_size: An optional `int` that is `>= -1`. Defaults to `-1`.\n Number of elements of the file, use -1 if unknown.\n delimiter: An optional `string`. Defaults to `\"\\t\"`.\n Delimiter to separate fields in a line.\n offset: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Initializes a table from a text file.", "type": "API"}, {"name": "tf.raw_ops.InitializeTableFromTextFileV2", "docs": "Initializes a table from a text file.\n\n It inserts one key-value pair into the table for each line of the file.\n The key and value is extracted from the whole line content, elements from the\n split line based on `delimiter` or the line number (starting from zero).\n Where to extract the key and value from a line is specified by `key_index` and\n `value_index`.\n\n - A value of -1 means use the line number(starting from zero), expects `int64`.\n - A value of -2 means use the whole line content, expects `string`.\n - A value >= 0 means use the index (starting at zero) of the split line based\n on `delimiter`.\n\n Args:\n table_handle: A `Tensor` of type `resource`.\n Handle to a table which will be initialized.\n filename: A `Tensor` of type `string`. Filename of a vocabulary text file.\n key_index: An `int` that is `>= -2`.\n Column index in a line to get the table `key` values from.\n value_index: An `int` that is `>= -2`.\n Column index that represents information of a line to get the table\n `value` values from.\n vocab_size: An optional `int` that is `>= -1`. Defaults to `-1`.\n Number of elements of the file, use -1 if unknown.\n delimiter: An optional `string`. Defaults to `\"\\t\"`.\n Delimiter to separate fields in a line.\n offset: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Initializes a table from a text file.", "type": "API"}, {"name": "tf.raw_ops.InitializeTableV2", "docs": "Table initializer that takes two tensors for keys and values respectively.\n\n Args:\n table_handle: A `Tensor` of type `resource`.\n Handle to a table which will be initialized.\n keys: A `Tensor`. Keys of type Tkey.\n values: A `Tensor`. Values of type Tval.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Table initializer that takes two tensors for keys and values respectively.", "type": "API"}, {"name": "tf.raw_ops.InplaceAdd", "docs": "Adds v into specified rows of x.\n\n Computes y = x; y[i, :] += v; return y.\n\n Args:\n x: A `Tensor`. A `Tensor` of type T.\n i: A `Tensor` of type `int32`.\n A vector. Indices into the left-most dimension of `x`.\n v: A `Tensor`. Must have the same type as `x`.\n A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Adds v into specified rows of x.", "type": "API"}, {"name": "tf.raw_ops.InplaceSub", "docs": " Subtracts `v` into specified rows of `x`.\n\n Computes y = x; y[i, :] -= v; return y.\n\n Args:\n x: A `Tensor`. A `Tensor` of type T.\n i: A `Tensor` of type `int32`.\n A vector. Indices into the left-most dimension of `x`.\n v: A `Tensor`. Must have the same type as `x`.\n A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": " Subtracts `v` into specified rows of `x`.", "type": "API"}, {"name": "tf.raw_ops.InplaceUpdate", "docs": "Updates specified rows 'i' with values 'v'.\n\n Computes `x[i, :] = v; return x`.\n\n Originally this function is mutative however for compilation we make this\n operation create / operate on a copy of `x`.\n\n Args:\n x: A `Tensor`. A tensor of type `T`.\n i: A `Tensor` of type `int32`.\n A vector. Indices into the left-most dimension of `x`.\n v: A `Tensor`. Must have the same type as `x`.\n A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Updates specified rows 'i' with values 'v'.", "type": "API"}, {"name": "tf.raw_ops.InterleaveDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Unlike MapDataset, the `f` in InterleaveDataset is expected to return\n a Dataset variant, and InterleaveDataset will flatten successive\n results into a single Dataset. Unlike FlatMapDataset,\n InterleaveDataset will interleave sequences of up to `block_length`\n consecutive elements from `cycle_length` input elements.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n cycle_length: A `Tensor` of type `int64`.\n block_length: A `Tensor` of type `int64`.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.InTopK", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is among the top `k` predictions among\n all predictions for example `i`. Note that the behavior of `InTopK` differs\n from the `TopK` op in its handling of ties; if multiple classes have the\n same prediction value and straddle the top-`k` boundary, all of those\n classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: An `int`. Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.raw_ops.InTopKV2", "docs": "Says whether the targets are in the top `K` predictions.\n\n This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the\n prediction for the target class is among the top `k` predictions among\n all predictions for example `i`. Note that the behavior of `InTopK` differs\n from the `TopK` op in its handling of ties; if multiple classes have the\n same prediction value and straddle the top-`k` boundary, all of those\n classes are considered to be in the top `k`.\n\n More formally, let\n\n \\\\(predictions_i\\\\) be the predictions for all classes for example `i`,\n \\\\(targets_i\\\\) be the target class for example `i`,\n \\\\(out_i\\\\) be the output for example `i`,\n\n $$out_i = predictions_{i, targets_i} \\in TopKIncludingTies(predictions_i)$$\n\n Args:\n predictions: A `Tensor` of type `float32`.\n A `batch_size` x `classes` tensor.\n targets: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `batch_size` vector of class ids.\n k: A `Tensor`. Must have the same type as `targets`.\n Number of top elements to look at for computing precision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Says whether the targets are in the top `K` predictions.", "type": "API"}, {"name": "tf.raw_ops.Inv", "docs": "Computes the reciprocal of x element-wise.\n\n I.e., \\\\(y = 1 / x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the reciprocal of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Invert", "docs": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.\n\n Flip each bit of supported types. For example, type `int8` (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101.\n This operation is performed on each element of the tensor argument `x`.\n\n Example:\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n\n # flip 2 (00000010) to -3 (11111101)\n tf.assert_equal(-3, bitwise_ops.invert(2))\n\n dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,\n dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]\n\n inputs = [0, 5, 3, 14]\n for dtype in dtype_list:\n # Because of issues with negative numbers, let's test this indirectly.\n # 1. invert(a) and a = 0\n # 2. invert(a) or a = invert(0)\n input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)\n not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.bitwise_or(\n input_tensor, bitwise_ops.invert(input_tensor)),\n bitwise_ops.invert(\n tf.constant(0, dtype=dtype))]\n\n expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)\n tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)\n\n expected = tf.cast([not_0] * 4, tf.float32)\n tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)\n\n # For unsigned dtypes let's also check the result directly.\n if dtype.is_unsigned:\n inverted = bitwise_ops.invert(input_tensor)\n expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)\n tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Invert (flip) each bit of supported types; for example, type `uint8` value 01010101 becomes 10101010.", "type": "API"}, {"name": "tf.raw_ops.InvertPermutation", "docs": "Computes the inverse permutation of a tensor.\n\n This operation computes the inverse of an index permutation. It takes a 1-D\n integer tensor `x`, which represents the indices of a zero-based array, and\n swaps each value with its index position. In other words, for an output tensor\n `y` and an input tensor `x`, this operation computes the following:\n\n `y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`\n\n The values must include 0. There can be no duplicate values or negative values.\n\n For example:\n\n ```\n # tensor `x` is [3, 4, 0, 2, 1]\n invert_permutation(x) ==> [2, 4, 3, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the inverse permutation of a tensor.", "type": "API"}, {"name": "tf.raw_ops.InvGrad", "docs": "Computes the gradient for the inverse of `x` wrt its input.\n\n Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy`\n is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient for the inverse of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.IRFFT", "docs": "Inverse real-valued fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most dimension of `input`.\n\n The inner-most dimension of `input` is assumed to be the result of `RFFT`: the\n `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If\n `fft_length` is not provided, it is computed from the size of the inner-most\n dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to\n compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller\n than the corresponding dimension of `input`, the dimension is cropped. If it is\n larger, the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n Treal: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.IRFFT2D", "docs": "Inverse 2D real-valued fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 2 dimensions of `input`.\n\n The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 2 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT2D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n Treal: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.IRFFT3D", "docs": "Inverse 3D real-valued fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 3 dimensions of `input`.\n\n The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 3 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT3D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n Treal: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.IsBoostedTreesEnsembleInitialized", "docs": "Checks whether a tree ensemble has been initialized.\n\n Args:\n tree_ensemble_handle: A `Tensor` of type `resource`.\n Handle to the tree ensemble resource.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Checks whether a tree ensemble has been initialized.", "type": "API"}, {"name": "tf.raw_ops.IsBoostedTreesQuantileStreamResourceInitialized", "docs": "Checks whether a quantile stream has been initialized.\n\n An Op that checks if quantile stream resource is initialized.\n\n Args:\n quantile_stream_resource_handle: A `Tensor` of type `resource`.\n resource; The reference to quantile stream resource handle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Checks whether a quantile stream has been initialized.", "type": "API"}, {"name": "tf.raw_ops.IsFinite", "docs": "Returns which elements of x are finite.\n\n @compatibility(numpy)\n Equivalent to np.isfinite\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])\n tf.math.is_finite(x) ==> [True, True, True, False, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are finite.", "type": "API"}, {"name": "tf.raw_ops.IsInf", "docs": "Returns which elements of x are Inf.\n\n @compatibility(numpy)\n Equivalent to np.isinf\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.inf, 6.8, np.inf])\n tf.math.is_inf(x) ==> [False, True, False, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are Inf.", "type": "API"}, {"name": "tf.raw_ops.IsNan", "docs": "Returns which elements of x are NaN.\n\n @compatibility(numpy)\n Equivalent to np.isnan\n @end_compatibility\n\n Example:\n\n ```python\n x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])\n tf.math.is_nan(x) ==> [False, True, False, True, False]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns which elements of x are NaN.", "type": "API"}, {"name": "tf.raw_ops.IsotonicRegression", "docs": "Solves a batch of isotonic regression problems.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n A (batch_size, dim)-tensor holding a batch of inputs.\n output_dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n Dtype of output.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, segments).\n\n output: A `Tensor` of type `output_dtype`.\n segments: A `Tensor` of type `int32`.\n ", "desc": "Solves a batch of isotonic regression problems.", "type": "API"}, {"name": "tf.raw_ops.IsVariableInitialized", "docs": "Checks whether a tensor has been initialized.\n\n Outputs boolean scalar indicating whether the tensor has been initialized.\n\n Args:\n ref: A mutable `Tensor`.\n Should be from a `Variable` node. May be uninitialized.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Checks whether a tensor has been initialized.", "type": "API"}, {"name": "tf.raw_ops.Iterator", "docs": "A container for an iterator resource.\n\n Args:\n shared_name: A `string`.\n container: A `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A container for an iterator resource.", "type": "API"}, {"name": "tf.raw_ops.IteratorFromStringHandle", "docs": "Converts the given string representing a handle to an iterator to a resource.\n\n Args:\n string_handle: A `Tensor` of type `string`.\n A string representation of the given handle.\n output_types: An optional list of `tf.DTypes`. Defaults to `[]`.\n If specified, defines the type of each tuple component in an\n element produced by the resulting iterator.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n If specified, defines the shape of each tuple component in an\n element produced by the resulting iterator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Converts the given string representing a handle to an iterator to a resource.", "type": "API"}, {"name": "tf.raw_ops.IteratorFromStringHandleV2", "docs": "TODO: add doc.\n\n Args:\n string_handle: A `Tensor` of type `string`.\n output_types: An optional list of `tf.DTypes`. Defaults to `[]`.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.IteratorGetDevice", "docs": "Returns the name of the device on which `resource` has been placed.\n\n Args:\n resource: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the name of the device on which `resource` has been placed.", "type": "API"}, {"name": "tf.raw_ops.IteratorGetNext", "docs": "Gets the next output from the given iterator .\n\n Args:\n iterator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Gets the next output from the given iterator .", "type": "API"}, {"name": "tf.raw_ops.IteratorGetNextAsOptional", "docs": "Gets the next output from the given iterator as an Optional variant.\n\n Args:\n iterator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Gets the next output from the given iterator as an Optional variant.", "type": "API"}, {"name": "tf.raw_ops.IteratorGetNextSync", "docs": "Gets the next output from the given iterator.\n\n This operation is a synchronous version IteratorGetNext. It should only be used\n in situations where the iterator does not block the calling thread, or where\n the calling thread is not a member of the thread pool used to execute parallel\n operations (e.g. in eager mode).\n\n Args:\n iterator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Gets the next output from the given iterator.", "type": "API"}, {"name": "tf.raw_ops.IteratorToStringHandle", "docs": "Converts the given `resource_handle` representing an iterator to a string.\n\n Args:\n resource_handle: A `Tensor` of type `resource`.\n A handle to an iterator resource.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts the given `resource_handle` representing an iterator to a string.", "type": "API"}, {"name": "tf.raw_ops.IteratorV2", "docs": "TODO: add doc.\n\n Args:\n shared_name: A `string`.\n container: A `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.L2Loss", "docs": "L2 Loss.\n\n Computes half the L2 norm of a tensor without the `sqrt`:\n\n output = sum(t ** 2) / 2\n\n Args:\n t: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n Typically 2-D, but may have any dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "L2 Loss.", "type": "API"}, {"name": "tf.raw_ops.LatencyStatsDataset", "docs": "Records the latency of producing `input_dataset` elements in a StatsAggregator.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n tag: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Records the latency of producing `input_dataset` elements in a StatsAggregator.", "type": "API"}, {"name": "tf.raw_ops.LeakyRelu", "docs": "Computes rectified linear: `max(features, features * alpha)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n alpha: An optional `float`. Defaults to `0.2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes rectified linear: `max(features, features * alpha)`.", "type": "API"}, {"name": "tf.raw_ops.LeakyReluGrad", "docs": "Computes rectified linear gradients for a LeakyRelu operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The backpropagated gradients to the corresponding LeakyRelu operation.\n features: A `Tensor`. Must have the same type as `gradients`.\n The features passed as input to the corresponding LeakyRelu operation,\n OR the outputs of that operation (both work equivalently).\n alpha: An optional `float`. Defaults to `0.2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes rectified linear gradients for a LeakyRelu operation.", "type": "API"}, {"name": "tf.raw_ops.LearnedUnigramCandidateSampler", "docs": "Generates labels for candidate sampling with a learned unigram distribution.\n\n See explanations of candidate sampling and the data formats at\n go/candidate-sampling.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`.\n Number of candidates to randomly sample.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n range_max: An `int` that is `>= 1`.\n The sampler will sample integers from the interval [0, range_max).\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a learned unigram distribution.", "type": "API"}, {"name": "tf.raw_ops.LeftShift", "docs": "Elementwise computes the bitwise left-shift of `x` and `y`.\n\n If `y` is negative, or greater than or equal to the width of `x` in bits the\n result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n left_shift_result = bitwise_ops.left_shift(lhs, rhs)\n\n print(left_shift_result)\n\n # This will print:\n # tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32)\n # tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.left_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise left-shift of `x` and `y`.", "type": "API"}, {"name": "tf.raw_ops.LegacyParallelInterleaveDatasetV2", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, with the exception\n that if retrieving the next value from a dataset would cause the requester to\n block, it will skip that input dataset. This dataset is especially useful\n when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it\n allows the training step to proceed so long as some data is available.\n\n !! WARNING !! This dataset is not deterministic!\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n cycle_length: A `Tensor` of type `int64`.\n block_length: A `Tensor` of type `int64`.\n buffer_output_elements: A `Tensor` of type `int64`.\n prefetch_input_elements: A `Tensor` of type `int64`.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Less", "docs": "Returns the truth value of (x < y) element-wise.\n\n *NOTE*: `math.less` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less(x, y) ==> [False, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 7])\n tf.math.less(x, y) ==> [False, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x < y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.LessEqual", "docs": "Returns the truth value of (x <= y) element-wise.\n\n *NOTE*: `math.less_equal` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Example:\n\n ```python\n x = tf.constant([5, 4, 6])\n y = tf.constant([5])\n tf.math.less_equal(x, y) ==> [True, True, False]\n\n x = tf.constant([5, 4, 6])\n y = tf.constant([5, 6, 6])\n tf.math.less_equal(x, y) ==> [True, True, True]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x <= y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.Lgamma", "docs": "Computes the log of the absolute value of `Gamma(x)` element-wise.\n\n For positive numbers, this function computes log((input - 1)!) for every element in the tensor.\n `lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539`\n\n Example:\n\n ```python\n x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])\n tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the log of the absolute value of `Gamma(x)` element-wise.", "type": "API"}, {"name": "tf.raw_ops.LinSpace", "docs": "Generates values in an interval.\n\n A sequence of `num` evenly-spaced values are generated beginning at `start`.\n If `num > 1`, the values in the sequence increase by `stop - start / num - 1`,\n so that the last one is exactly `stop`.\n\n For example:\n\n ```\n tf.linspace(10.0, 12.0, 3, name=\"linspace\") => [ 10.0 11.0 12.0]\n ```\n\n Args:\n start: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n 0-D tensor. First entry in the range.\n stop: A `Tensor`. Must have the same type as `start`.\n 0-D tensor. Last entry in the range.\n num: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D tensor. Number of values to generate.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `start`.\n ", "desc": "Generates values in an interval.", "type": "API"}, {"name": "tf.raw_ops.ListDiff", "docs": "Computes the difference between two lists of numbers or strings.\n\n Given a list `x` and a list `y`, this operation returns a list `out` that\n represents all values that are in `x` but not in `y`. The returned list `out`\n is sorted in the same order that the numbers appear in `x` (duplicates are\n preserved). This operation also returns a list `idx` that represents the\n position of each `out` element in `x`. In other words:\n\n `out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`\n\n For example, given this input:\n\n ```\n x = [1, 2, 3, 4, 5, 6]\n y = [1, 3, 5]\n ```\n\n This operation would return:\n\n ```\n out ==> [2, 4, 6]\n idx ==> [1, 3, 5]\n ```\n\n Args:\n x: A `Tensor`. 1-D. Values to keep.\n y: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, idx).\n\n out: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Computes the difference between two lists of numbers or strings.", "type": "API"}, {"name": "tf.raw_ops.LMDBDataset", "docs": "Creates a dataset that emits the key-value pairs in one or more LMDB files.\n\n The Lightning Memory-Mapped Database Manager, or LMDB, is an embedded binary\n key-value database. This dataset can read the contents of LMDB database files,\n the names of which generally have the `.mdb` suffix.\n\n Each output element consists of a key-value pair represented as a pair of\n scalar string `Tensor`s, where the first `Tensor` contains the key and the\n second `Tensor` contains the value.\n\n LMDB uses different file formats on big- and little-endian machines.\n `LMDBDataset` can only read files in the format of the host machine.\n\n Args:\n filenames: A `Tensor` of type `string`.\n A scalar or a vector containing the name(s) of the binary file(s) to be\n read.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits the key-value pairs in one or more LMDB files.", "type": "API"}, {"name": "tf.raw_ops.LMDBReader", "docs": "A Reader that outputs the records from a LMDB file.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs the records from a LMDB file.", "type": "API"}, {"name": "tf.raw_ops.LoadAndRemapMatrix", "docs": "Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint\n\n at `ckpt_path` and potentially reorders its rows and columns using the\n specified remappings.\n\n Most users should use one of the wrapper initializers (such as\n `tf.contrib.framework.load_and_remap_matrix_initializer`) instead of this\n function directly.\n\n The remappings are 1-D tensors with the following properties:\n\n * `row_remapping` must have exactly `num_rows` entries. Row `i` of the output\n matrix will be initialized from the row corresponding to index\n `row_remapping[i]` in the old `Tensor` from the checkpoint.\n * `col_remapping` must have either 0 entries (indicating that no column\n reordering is needed) or `num_cols` entries. If specified, column `j` of the\n output matrix will be initialized from the column corresponding to index\n `col_remapping[j]` in the old `Tensor` from the checkpoint.\n * A value of -1 in either of the remappings signifies a \"missing\" entry. In that\n case, values from the `initializing_values` tensor will be used to fill that\n missing row or column. If `row_remapping` has `r` missing entries and\n `col_remapping` has `c` missing entries, then the following condition must be\n true:\n\n `(r * num_cols) + (c * num_rows) - (r * c) == len(initializing_values)`\n\n The remapping tensors can be generated using the GenerateVocabRemapping op.\n\n As an example, with row_remapping = [1, 0, -1], col_remapping = [0, 2, -1],\n initializing_values = [0.5, -0.5, 0.25, -0.25, 42], and w(i, j) representing\n the value from row i, column j of the old tensor in the checkpoint, the output\n matrix will look like the following:\n\n [[w(1, 0), w(1, 2), 0.5],\n [w(0, 0), w(0, 2), -0.5],\n [0.25, -0.25, 42]]\n\n Args:\n ckpt_path: A `Tensor` of type `string`.\n Path to the TensorFlow checkpoint (version 2, `TensorBundle`) from\n which the old matrix `Tensor` will be loaded.\n old_tensor_name: A `Tensor` of type `string`.\n Name of the 2-D `Tensor` to load from checkpoint.\n row_remapping: A `Tensor` of type `int64`.\n An int `Tensor` of row remappings (generally created by\n `generate_vocab_remapping`). Even if no row remapping is needed, this must\n still be an index-valued Tensor (e.g. [0, 1, 2, ...]), or a shifted\n index-valued `Tensor` (e.g. [8, 9, 10, ...], for partitioned `Variables`).\n col_remapping: A `Tensor` of type `int64`.\n An int `Tensor` of column remappings (generally created by\n `generate_vocab_remapping`). May be a size-0 `Tensor` if only row remapping\n is to be done (e.g. column ordering is the same).\n initializing_values: A `Tensor` of type `float32`.\n A float `Tensor` containing values to fill in for cells\n in the output matrix that are not loaded from the checkpoint. Length must be\n exactly the same as the number of missing / new cells.\n num_rows: An `int` that is `>= 0`.\n Number of rows (length of the 1st dimension) in the output matrix.\n num_cols: An `int` that is `>= 1`.\n Number of columns (length of the 2nd dimension) in the output matrix.\n max_rows_in_memory: An optional `int`. Defaults to `-1`.\n The maximum number of rows to load from the checkpoint at\n once. If less than or equal to 0, the entire matrix will be loaded into\n memory. Setting this arg trades increased disk reads for lower memory usage.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint", "type": "API"}, {"name": "tf.raw_ops.LoadDataset", "docs": "TODO: add doc.\n\n Args:\n path: A `Tensor` of type `string`.\n reader_func_other_args: A list of `Tensor` objects.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reader_func: A function decorated with @Defun.\n compression: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingAdadeltaParameters", "docs": "Load Adadelta embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the Adadelta optimization algorithm.\n accumulators: A `Tensor` of type `float32`.\n Value of accumulators used in the Adadelta optimization algorithm.\n updates: A `Tensor` of type `float32`.\n Value of updates used in the Adadelta optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load Adadelta embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingAdagradParameters", "docs": "Load Adagrad embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the Adagrad optimization algorithm.\n accumulators: A `Tensor` of type `float32`.\n Value of accumulators used in the Adagrad optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load Adagrad embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingADAMParameters", "docs": "Load ADAM embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the ADAM optimization algorithm.\n momenta: A `Tensor` of type `float32`.\n Value of momenta used in the ADAM optimization algorithm.\n velocities: A `Tensor` of type `float32`.\n Value of velocities used in the ADAM optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load ADAM embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingCenteredRMSPropParameters", "docs": "Load centered RMSProp embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the centered RMSProp optimization algorithm.\n ms: A `Tensor` of type `float32`.\n Value of ms used in the centered RMSProp optimization algorithm.\n mom: A `Tensor` of type `float32`.\n Value of mom used in the centered RMSProp optimization algorithm.\n mg: A `Tensor` of type `float32`.\n Value of mg used in the centered RMSProp optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load centered RMSProp embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingFrequencyEstimatorParameters", "docs": "Load frequency estimator embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the frequency estimator optimization algorithm.\n last_hit_step: A `Tensor` of type `float32`.\n Value of last_hit_step used in the frequency estimator optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load frequency estimator embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingFTRLParameters", "docs": "Load FTRL embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the FTRL optimization algorithm.\n accumulators: A `Tensor` of type `float32`.\n Value of accumulators used in the FTRL optimization algorithm.\n linears: A `Tensor` of type `float32`.\n Value of linears used in the FTRL optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load FTRL embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingMDLAdagradLightParameters", "docs": "Load MDL Adagrad Light embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the MDL Adagrad Light optimization algorithm.\n accumulators: A `Tensor` of type `float32`.\n Value of accumulators used in the MDL Adagrad Light optimization algorithm.\n weights: A `Tensor` of type `float32`.\n Value of weights used in the MDL Adagrad Light optimization algorithm.\n benefits: A `Tensor` of type `float32`.\n Value of benefits used in the MDL Adagrad Light optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load MDL Adagrad Light embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingMomentumParameters", "docs": "Load Momentum embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the Momentum optimization algorithm.\n momenta: A `Tensor` of type `float32`.\n Value of momenta used in the Momentum optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load Momentum embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingProximalAdagradParameters", "docs": "Load proximal Adagrad embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the proximal Adagrad optimization algorithm.\n accumulators: A `Tensor` of type `float32`.\n Value of accumulators used in the proximal Adagrad optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load proximal Adagrad embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingProximalYogiParameters", "docs": "TODO: add doc.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n v: A `Tensor` of type `float32`.\n m: A `Tensor` of type `float32`.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingRMSPropParameters", "docs": "Load RMSProp embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the RMSProp optimization algorithm.\n ms: A `Tensor` of type `float32`.\n Value of ms used in the RMSProp optimization algorithm.\n mom: A `Tensor` of type `float32`.\n Value of mom used in the RMSProp optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load RMSProp embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParameters", "docs": "Load SGD embedding parameters.\n\n An op that loads optimization parameters into HBM for embedding. Must be\n preceded by a ConfigureTPUEmbeddingHost op that sets up the correct\n embedding table configuration. For example, this op is used to install\n parameters that are loaded from a checkpoint before a training loop is\n executed.\n\n Args:\n parameters: A `Tensor` of type `float32`.\n Value of parameters used in the stochastic gradient descent optimization algorithm.\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Load SGD embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.Log", "docs": "Computes natural logarithm of x element-wise.\n\n I.e., \\\\(y = \\log_e x\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log(x)\n \n\n See: https://en.wikipedia.org/wiki/Logarithm\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Log1p", "docs": "Computes natural logarithm of (1 + x) element-wise.\n\n I.e., \\\\(y = \\log_e (1 + x)\\\\).\n\n Example:\n >>> x = tf.constant([0, 0.5, 1, 5])\n >>> tf.math.log1p(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes natural logarithm of (1 + x) element-wise.", "type": "API"}, {"name": "tf.raw_ops.LogicalAnd", "docs": "Returns the truth value of x AND y element-wise.\n\n Logical AND function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical AND with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical AND of the two input tensors.\n\n You can also use the `&` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_and(a, b)\n \n >>> a & b\n \n\n >>> c = tf.constant([True])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_and(c, x)\n \n >>> c & x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_and(y, z)\n \n >>> y & z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_and([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_all`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x AND y element-wise.", "type": "API"}, {"name": "tf.raw_ops.LogicalNot", "docs": "Returns the truth value of `NOT x` element-wise.\n\n Example:\n\n >>> tf.math.logical_not(tf.constant([True, False]))\n \n\n Args:\n x: A `Tensor` of type `bool`. A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of `NOT x` element-wise.", "type": "API"}, {"name": "tf.raw_ops.LogicalOr", "docs": "Returns the truth value of x OR y element-wise.\n\n Logical OR function.\n\n Requires that `x` and `y` have the same shape or have\n [broadcast-compatible](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n shapes. For example, `x` and `y` can be:\n\n - Two single elements of type `bool`.\n - One `tf.Tensor` of type `bool` and one single `bool`, where the result will\n be calculated by applying logical OR with the single element to each\n element in the larger Tensor.\n - Two `tf.Tensor` objects of type `bool` of the same shape. In this case,\n the result will be the element-wise logical OR of the two input tensors.\n\n You can also use the `|` operator instead.\n\n Usage:\n\n >>> a = tf.constant([True])\n >>> b = tf.constant([False])\n >>> tf.math.logical_or(a, b)\n \n >>> a | b\n \n\n >>> c = tf.constant([False])\n >>> x = tf.constant([False, True, True, False])\n >>> tf.math.logical_or(c, x)\n \n >>> c | x\n \n\n >>> y = tf.constant([False, False, True, True])\n >>> z = tf.constant([False, True, False, True])\n >>> tf.math.logical_or(y, z)\n \n >>> y | z\n \n\n This op also supports broadcasting\n\n >>> tf.logical_or([[True, False]], [[True], [False]])\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_any`.\n\n Args:\n x: A `tf.Tensor` of type bool.\n y: A `tf.Tensor` of type bool.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of type bool with the shape that `x` and `y` broadcast to.\n\n Args:\n x: A `Tensor` of type `bool`.\n y: A `Tensor` of type `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of x OR y element-wise.", "type": "API"}, {"name": "tf.raw_ops.LogMatrixDeterminant", "docs": "Computes the sign and the log of the absolute value of the determinant of\n\n one or more square matrices.\n\n The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions\n form square matrices. The outputs are two tensors containing the signs and\n absolute values of the log determinants for all N input submatrices\n `[..., :, :]` such that `determinant = sign*exp(log_abs_determinant)`.\n The `log_abs_determinant` is computed as `det(P)*sum(log(diag(LU)))` where `LU`\n is the `LU` decomposition of the input and `P` is the corresponding\n permutation matrix.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[N, M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sign, log_abs_determinant).\n\n sign: A `Tensor`. Has the same type as `input`.\n log_abs_determinant: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the sign and the log of the absolute value of the determinant of", "type": "API"}, {"name": "tf.raw_ops.LogSoftmax", "docs": "Computes log softmax activations.\n\n For each batch `i` and class `j` we have\n\n logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))\n\n Args:\n logits: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 2-D with shape `[batch_size, num_classes]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `logits`.\n ", "desc": "Computes log softmax activations.", "type": "API"}, {"name": "tf.raw_ops.LogUniformCandidateSampler", "docs": "Generates labels for candidate sampling with a log-uniform distribution.\n\n See explanations of candidate sampling and the data formats at\n go/candidate-sampling.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`.\n Number of candidates to randomly sample.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n range_max: An `int` that is `>= 1`.\n The sampler will sample integers from the interval [0, range_max).\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a log-uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.LookupTableExport", "docs": "Outputs all keys and values in the table.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`. Handle to the table.\n Tkeys: A `tf.DType`.\n Tvalues: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (keys, values).\n\n keys: A `Tensor` of type `Tkeys`.\n values: A `Tensor` of type `Tvalues`.\n ", "desc": "Outputs all keys and values in the table.", "type": "API"}, {"name": "tf.raw_ops.LookupTableExportV2", "docs": "Outputs all keys and values in the table.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n Tkeys: A `tf.DType`.\n Tvalues: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (keys, values).\n\n keys: A `Tensor` of type `Tkeys`.\n values: A `Tensor` of type `Tvalues`.\n ", "desc": "Outputs all keys and values in the table.", "type": "API"}, {"name": "tf.raw_ops.LookupTableFind", "docs": "Looks up keys in a table, outputs the corresponding values.\n\n The tensor `keys` must of the same type as the keys of the table.\n The output `values` is of the type of the table values.\n\n The scalar `default_value` is the value output for keys not present in the\n table. It must also be of the same type as the table values.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n default_value: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `default_value`.\n ", "desc": "Looks up keys in a table, outputs the corresponding values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableFindV2", "docs": "Looks up keys in a table, outputs the corresponding values.\n\n The tensor `keys` must of the same type as the keys of the table.\n The output `values` is of the type of the table values.\n\n The scalar `default_value` is the value output for keys not present in the\n table. It must also be of the same type as the table values.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n default_value: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `default_value`.\n ", "desc": "Looks up keys in a table, outputs the corresponding values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableImport", "docs": "Replaces the contents of the table with the specified keys and values.\n\n The tensor `keys` must be of the same type as the keys of the table.\n The tensor `values` must be of the type of the table values.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n values: A `Tensor`. Values to associate with keys.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Replaces the contents of the table with the specified keys and values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableImportV2", "docs": "Replaces the contents of the table with the specified keys and values.\n\n The tensor `keys` must be of the same type as the keys of the table.\n The tensor `values` must be of the type of the table values.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n values: A `Tensor`. Values to associate with keys.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Replaces the contents of the table with the specified keys and values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableInsert", "docs": "Updates the table to associates keys with values.\n\n The tensor `keys` must be of the same type as the keys of the table.\n The tensor `values` must be of the type of the table values.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n values: A `Tensor`. Values to associate with keys.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the table to associates keys with values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableInsertV2", "docs": "Updates the table to associates keys with values.\n\n The tensor `keys` must be of the same type as the keys of the table.\n The tensor `values` must be of the type of the table values.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys to look up.\n values: A `Tensor`. Values to associate with keys.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the table to associates keys with values.", "type": "API"}, {"name": "tf.raw_ops.LookupTableRemoveV2", "docs": "Removes keys and its associated values from a table.\n\n The tensor `keys` must of the same type as the keys of the table. Keys not\n already in the table are silently ignored.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n keys: A `Tensor`. Any shape. Keys of the elements to remove.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Removes keys and its associated values from a table.", "type": "API"}, {"name": "tf.raw_ops.LookupTableSize", "docs": "Computes the number of elements in the given table.\n\n Args:\n table_handle: A `Tensor` of type mutable `string`. Handle to the table.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Computes the number of elements in the given table.", "type": "API"}, {"name": "tf.raw_ops.LookupTableSizeV2", "docs": "Computes the number of elements in the given table.\n\n Args:\n table_handle: A `Tensor` of type `resource`. Handle to the table.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Computes the number of elements in the given table.", "type": "API"}, {"name": "tf.raw_ops.LoopCond", "docs": "Forwards the input to the output.\n\n This operator represents the loop termination condition used by the\n \"pivot\" switches of a loop.\n\n Args:\n input: A `Tensor` of type `bool`.\n A boolean scalar, representing the branch predicate of the Switch op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Forwards the input to the output.", "type": "API"}, {"name": "tf.raw_ops.LowerBound", "docs": "Applies lower_bound(sorted_search_values, values) along each row.\n\n Each set of rows with the same index in (sorted_inputs, values) is treated\n independently. The resulting row is the equivalent of calling\n `np.searchsorted(sorted_inputs, values, side='left')`.\n\n The result is not a global index to the entire\n `Tensor`, but rather just the index in the last dimension.\n\n A 2-D example:\n sorted_sequence = [[0, 3, 9, 9, 10],\n [1, 2, 3, 4, 5]]\n values = [[2, 4, 9],\n [0, 2, 6]]\n\n result = LowerBound(sorted_sequence, values)\n\n result == [[1, 2, 2],\n [0, 1, 5]]\n\n Args:\n sorted_inputs: A `Tensor`. 2-D Tensor where each row is ordered.\n values: A `Tensor`. Must have the same type as `sorted_inputs`.\n 2-D Tensor with the same numbers of rows as `sorted_search_values`. Contains\n the values that will be searched for in `sorted_search_values`.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Applies lower_bound(sorted_search_values, values) along each row.", "type": "API"}, {"name": "tf.raw_ops.LRN", "docs": "Local Response Normalization.\n\n The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last\n dimension), and each vector is normalized independently. Within a given vector,\n each component is divided by the weighted, squared sum of inputs within\n `depth_radius`. In detail,\n\n sqr_sum[a, b, c, d] =\n sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)\n output = input / (bias + alpha * sqr_sum) ** beta\n\n For details, see [Krizhevsky et al., ImageNet classification with deep\n convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D.\n depth_radius: An optional `int`. Defaults to `5`.\n 0-D. Half-width of the 1-D normalization window.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually positive to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Local Response Normalization.", "type": "API"}, {"name": "tf.raw_ops.LRNGrad", "docs": "Gradients for Local Response Normalization.\n\n Args:\n input_grads: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n 4-D with shape `[batch, height, width, channels]`.\n input_image: A `Tensor`. Must have the same type as `input_grads`.\n 4-D with shape `[batch, height, width, channels]`.\n output_image: A `Tensor`. Must have the same type as `input_grads`.\n 4-D with shape `[batch, height, width, channels]`.\n depth_radius: An optional `int`. Defaults to `5`. A depth radius.\n bias: An optional `float`. Defaults to `1`.\n An offset (usually > 0 to avoid dividing by 0).\n alpha: An optional `float`. Defaults to `1`.\n A scale factor, usually positive.\n beta: An optional `float`. Defaults to `0.5`. An exponent.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input_grads`.\n ", "desc": "Gradients for Local Response Normalization.", "type": "API"}, {"name": "tf.raw_ops.LSTMBlockCell", "docs": "Computes the LSTM cell forward propagation for 1 time step.\n\n This implementation uses 1 weight matrix and 1 bias vector, and there's an\n optional peephole connection.\n\n This kernel op implements the following mathematical equations:\n\n ```python\n xh = [x, h_prev]\n [i, f, ci, o] = xh * w + b\n f = f + forget_bias\n\n if not use_peephole:\n wci = wcf = wco = 0\n\n i = sigmoid(cs_prev * wci + i)\n f = sigmoid(cs_prev * wcf + f)\n ci = tanh(ci)\n\n cs = ci .* i + cs_prev .* f\n cs = clip(cs, cell_clip)\n\n o = sigmoid(cs * wco + o)\n co = tanh(cs)\n h = co .* o\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The input to the LSTM cell, shape (batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n Value of the cell state at previous time step.\n h_prev: A `Tensor`. Must have the same type as `x`.\n Output of the previous cell at previous time step.\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n forget_bias: An optional `float`. Defaults to `1`. The forget gate bias.\n cell_clip: An optional `float`. Defaults to `3`.\n Value to clip the 'cs' value to.\n use_peephole: An optional `bool`. Defaults to `False`.\n Whether to use peephole weights.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (i, cs, f, o, ci, co, h).\n\n i: A `Tensor`. Has the same type as `x`.\n cs: A `Tensor`. Has the same type as `x`.\n f: A `Tensor`. Has the same type as `x`.\n o: A `Tensor`. Has the same type as `x`.\n ci: A `Tensor`. Has the same type as `x`.\n co: A `Tensor`. Has the same type as `x`.\n h: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell forward propagation for 1 time step.", "type": "API"}, {"name": "tf.raw_ops.LSTMBlockCellGrad", "docs": "Computes the LSTM cell backward propagation for 1 timestep.\n\n This implementation is to be used in conjunction of LSTMBlockCell.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`.\n The input to the LSTM cell, shape (batch_size, num_inputs).\n cs_prev: A `Tensor`. Must have the same type as `x`.\n The previous cell state.\n h_prev: A `Tensor`. Must have the same type as `x`. The previous h state.\n w: A `Tensor`. Must have the same type as `x`. The weight matrix.\n wci: A `Tensor`. Must have the same type as `x`.\n The weight matrix for input gate peephole connection.\n wcf: A `Tensor`. Must have the same type as `x`.\n The weight matrix for forget gate peephole connection.\n wco: A `Tensor`. Must have the same type as `x`.\n The weight matrix for output gate peephole connection.\n b: A `Tensor`. Must have the same type as `x`. The bias vector.\n i: A `Tensor`. Must have the same type as `x`. The input gate.\n cs: A `Tensor`. Must have the same type as `x`.\n The cell state before the tanh.\n f: A `Tensor`. Must have the same type as `x`. The forget gate.\n o: A `Tensor`. Must have the same type as `x`. The output gate.\n ci: A `Tensor`. Must have the same type as `x`. The cell input.\n co: A `Tensor`. Must have the same type as `x`. The cell after the tanh.\n cs_grad: A `Tensor`. Must have the same type as `x`.\n The current gradient of cs.\n h_grad: A `Tensor`. Must have the same type as `x`.\n The gradient of h vector.\n use_peephole: A `bool`. Whether the cell uses peephole connections.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (cs_prev_grad, dicfo, wci_grad, wcf_grad, wco_grad).\n\n cs_prev_grad: A `Tensor`. Has the same type as `x`.\n dicfo: A `Tensor`. Has the same type as `x`.\n wci_grad: A `Tensor`. Has the same type as `x`.\n wcf_grad: A `Tensor`. Has the same type as `x`.\n wco_grad: A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the LSTM cell backward propagation for 1 timestep.", "type": "API"}, {"name": "tf.raw_ops.Lu", "docs": "Computes the LU decomposition of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices.\n\n The input has to be invertible.\n\n The output consists of two tensors LU and P containing the LU decomposition\n of all input submatrices `[..., :, :]`. LU encodes the lower triangular and\n upper triangular factors.\n\n For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of\n shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower\n triangular part of LU. U is a upper triangular matrix of shape `[M, M]` whose\n entries correspond to the upper triangular part, including the diagonal, of LU.\n\n P represents a permutation matrix encoded as a list of indices each between `0`\n and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to\n P, then the L, U and P satisfies P_mat * input = L * U.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of\n size `[M, M]`.\n output_idx_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (lu, p).\n\n lu: A `Tensor`. Has the same type as `input`.\n p: A `Tensor` of type `output_idx_type`.\n ", "desc": "Computes the LU decomposition of one or more square matrices.", "type": "API"}, {"name": "tf.raw_ops.MakeIterator", "docs": "Makes a new iterator from the given `dataset` and stores it in `iterator`.\n\n This operation may be executed multiple times. Each execution will reset the\n iterator in `iterator` to the first element of `dataset`.\n\n Args:\n dataset: A `Tensor` of type `variant`.\n iterator: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Makes a new iterator from the given `dataset` and stores it in `iterator`.", "type": "API"}, {"name": "tf.raw_ops.MapAndBatchDataset", "docs": "Creates a dataset that fuses mapping with batching.\n\n Creates a dataset that applies `f` to the outputs of `input_dataset` and then\n batches `batch_size` of them.\n\n Unlike a \"MapDataset\", which applies `f` sequentially, this dataset invokes up\n to `batch_size * num_parallel_batches` copies of `f` in parallel.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when building a closure\n for `f`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch. It determines the number of concurrent invocations of `f` that process\n elements from `input_dataset` in parallel.\n num_parallel_calls: A `Tensor` of type `int64`.\n A scalar representing the maximum number of parallel invocations of the `map_fn`\n function. Applying the `map_fn` on consecutive input elements in parallel has\n the potential to improve input pipeline throughput.\n drop_remainder: A `Tensor` of type `bool`.\n A scalar representing whether the last batch should be dropped in case its size\n is smaller than desired.\n f: A function decorated with @Defun.\n A function to apply to the outputs of `input_dataset`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that fuses mapping with batching.", "type": "API"}, {"name": "tf.raw_ops.MapClear", "docs": "Op removes all elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Op removes all elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.MapDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_inter_op_parallelism: An optional `bool`. Defaults to `True`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.MapDefun", "docs": " Maps a function on the list of tensors unpacked from arguments on dimension 0.\n The function given by `f` is assumed to be stateless, and is executed\n concurrently on all the slices; up to batch_size (i.e. the size of the 0th\n dimension of each argument) functions will be scheduled at once.\n\n The `max_intra_op_parallelism` attr, which defaults to 1, can be used to\n limit the intra op parallelism. To limit inter-op parallelism, a user can\n set a private threadpool on the dataset using `tf.data.Options`'s\n `ThreadingOptions`.\n\n Note that this op is not exposed to users directly, but is invoked in tf.data\n rewrites.\n\n Args:\n arguments: A list of `Tensor` objects.\n A list of tensors whose types are `Targuments`, corresponding to the inputs\n the function should be mapped over.\n captured_inputs: A list of `Tensor` objects.\n A list of tensors whose types are `Tcaptured`, corresponding to the captured\n inputs of the defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n A list of types.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n A list of shapes.\n f: A function decorated with @Defun.\n max_intra_op_parallelism: An optional `int`. Defaults to `1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": " Maps a function on the list of tensors unpacked from arguments on dimension 0.", "type": "API"}, {"name": "tf.raw_ops.MapIncompleteSize", "docs": "Op returns the number of incomplete elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Op returns the number of incomplete elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.MapPeek", "docs": "Op peeks at the values at the specified key. If the\n\n underlying container does not contain this key\n this op will block until it does.\n\n Args:\n key: A `Tensor` of type `int64`.\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op peeks at the values at the specified key. If the", "type": "API"}, {"name": "tf.raw_ops.MapSize", "docs": "Op returns the number of elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Op returns the number of elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.MapStage", "docs": "Stage (key, values) in the underlying container which behaves like a hashtable.\n\n Args:\n key: A `Tensor` of type `int64`. int64\n indices: A `Tensor` of type `int32`.\n values: A list of `Tensor` objects. a list of tensors\n dtypes A list of data types that inserted values should adhere to.\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n Maximum number of elements in the Staging Area. If > 0, inserts\n on the container will block when the capacity is reached.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container. Otherwise,\n a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n It is necessary to match this name to the matching Unstage Op.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Stage (key, values) in the underlying container which behaves like a hashtable.", "type": "API"}, {"name": "tf.raw_ops.MapUnstage", "docs": "Op removes and returns the values associated with the key\n\n from the underlying container. If the underlying container\n does not contain this key, the op will block until it does.\n\n Args:\n key: A `Tensor` of type `int64`.\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op removes and returns the values associated with the key", "type": "API"}, {"name": "tf.raw_ops.MapUnstageNoKey", "docs": "Op removes and returns a random (key, value)\n\n from the underlying container. If the underlying container\n does not contain elements, the op will block until it does.\n\n Args:\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, values).\n\n key: A `Tensor` of type `int64`.\n values: A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op removes and returns a random (key, value)", "type": "API"}, {"name": "tf.raw_ops.MatchingFiles", "docs": "Returns the set of files matching one or more glob patterns.\n\n Note that this routine only supports wildcard characters in the\n basename portion of the pattern, not in the directory portion.\n Note also that the order of filenames returned is deterministic.\n\n Args:\n pattern: A `Tensor` of type `string`.\n Shell wildcard pattern(s). Scalar or vector of type string.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the set of files matching one or more glob patterns.", "type": "API"}, {"name": "tf.raw_ops.MatchingFilesDataset", "docs": "TODO: add doc.\n\n Args:\n patterns: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.MatMul", "docs": "Multiply the matrix \"a\" by the matrix \"b\".\n\n The inputs must be two-dimensional matrices and the inner dimension of\n \"a\" (after being transposed if transpose_a is true) must match the\n outer dimension of \"b\" (after being transposed if transposed_b is\n true).\n\n *Note*: The default kernel implementation for MatMul on GPUs uses\n cublas.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n b: A `Tensor`. Must have the same type as `a`.\n transpose_a: An optional `bool`. Defaults to `False`.\n If true, \"a\" is transposed before multiplication.\n transpose_b: An optional `bool`. Defaults to `False`.\n If true, \"b\" is transposed before multiplication.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Multiply the matrix \"a\" by the matrix \"b\".", "type": "API"}, {"name": "tf.raw_ops.MatrixBandPart", "docs": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.\n\n The `band` part is computed as follows:\n Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a\n tensor with the same shape where\n\n `band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.\n\n The indicator function\n\n `in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower)) &&\n (num_upper < 0 || (n-m) <= num_upper)`.\n\n For example:\n\n ```\n # if 'input' is [[ 0, 1, 2, 3]\n # [-1, 0, 1, 2]\n # [-2, -1, 0, 1]\n # [-3, -2, -1, 0]],\n\n tf.linalg.band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]\n [-1, 0, 1, 2]\n [ 0, -1, 0, 1]\n [ 0, 0, -1, 0]],\n\n tf.linalg.band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]\n [-1, 0, 1, 0]\n [-2, -1, 0, 1]\n [ 0, -2, -1, 0]]\n ```\n\n Useful special cases:\n\n ```\n tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.\n tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.\n tf.linalg.band_part(input, 0, 0) ==> Diagonal.\n ```\n\n Args:\n input: A `Tensor`. Rank `k` tensor.\n num_lower: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D tensor. Number of subdiagonals to keep. If negative, keep entire\n lower triangle.\n num_upper: A `Tensor`. Must have the same type as `num_lower`.\n 0-D tensor. Number of superdiagonals to keep. If negative, keep\n entire upper triangle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Copy a tensor setting everything outside a central band in each innermost matrix to zero.", "type": "API"}, {"name": "tf.raw_ops.MatrixDeterminant", "docs": "Computes the determinant of one or more square matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor containing the determinants\n for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the determinant of one or more square matrices.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiag", "docs": "Returns a batched diagonal tensor with a given batched diagonal values.\n\n Given a `diagonal`, this operation returns a tensor with the `diagonal` and\n everything else padded with zeros. The diagonal is computed as follows:\n\n Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a\n tensor of rank `k+1` with dimensions [I, J, K, ..., N, N]` where:\n\n `output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.\n\n For example:\n\n ```\n # 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]\n\n and diagonal.shape = (2, 4)\n\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]],\n [[5, 0, 0, 0]\n [0, 6, 0, 0]\n [0, 0, 7, 0]\n [0, 0, 0, 8]]]\n\n which has shape (2, 4, 4)\n ```\n\n Args:\n diagonal: A `Tensor`. Rank `k`, where `k >= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with a given batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiagPart", "docs": "Returns the batched diagonal part of a batched tensor.\n\n This operation returns a tensor with the `diagonal` part\n of the batched `input`. The `diagonal` part is computed as follows:\n\n Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a\n tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where:\n\n `diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n # 'input' is [[[1, 0, 0, 0]\n [0, 2, 0, 0]\n [0, 0, 3, 0]\n [0, 0, 0, 4]],\n [[5, 0, 0, 0]\n [0, 6, 0, 0]\n [0, 0, 7, 0]\n [0, 0, 0, 8]]]\n\n and input.shape = (2, 4, 4)\n\n tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]\n\n which has shape (2, 4)\n ```\n\n Args:\n input: A `Tensor`. Rank `k` tensor where `k >= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiagPartV2", "docs": "Returns the batched diagonal part of a batched tensor.\n\n Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched\n `input`.\n\n Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`.\n Let `max_diag_len` be the maximum length among all diagonals to be extracted,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n Let `num_diags` be the number of diagonals to extract,\n `num_diags = k[1] - k[0] + 1`.\n\n If `num_diags == 1`, the output tensor is of rank `r - 1` with shape\n `[I, J, ..., L, max_diag_len]` and values:\n\n ```\n diagonal[i, j, ..., l, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.\n\n Otherwise, the output tensor has rank `r` with dimensions\n `[I, J, ..., L, num_diags, max_diag_len]` with values:\n\n ```\n diagonal[i, j, ..., l, m, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `d = k[1] - m`, `y = max(-d, 0)`, and `x = max(d, 0)`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)\n [5, 6, 7, 8],\n [9, 8, 7, 6]],\n [[5, 4, 3, 2],\n [1, 2, 3, 4],\n [5, 6, 7, 8]]])\n\n # A main diagonal from each batch.\n tf.matrix_diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)\n [5, 2, 7]]\n\n # A superdiagonal from each batch.\n tf.matrix_diag_part(input, k = 1)\n ==> [[2, 7, 6], # Output shape: (2, 3)\n [4, 3, 8]]\n\n # A tridiagonal band from each batch.\n tf.matrix_diag_part(input, k = (-1, 1))\n ==> [[[2, 7, 6], # Output shape: (2, 3, 3)\n [1, 6, 7],\n [5, 8, 0]],\n [[4, 3, 8],\n [5, 2, 7],\n [1, 6, 0]]]\n\n # Padding value = 9\n tf.matrix_diag_part(input, k = (1, 3), padding_value = 9)\n ==> [[[4, 9, 9], # Output shape: (2, 3, 3)\n [3, 8, 9],\n [2, 7, 6]],\n [[2, 9, 9],\n [3, 4, 9],\n [4, 3, 8]]]\n ```\n\n Args:\n input: A `Tensor`. Rank `r` tensor where `r >= 2`.\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n padding_value: A `Tensor`. Must have the same type as `input`.\n The value to fill the area outside the specified diagonal band with.\n Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiagPartV3", "docs": "Returns the batched diagonal part of a batched tensor.\n\n Returns a tensor with the `k[0]`-th to `k[1]`-th diagonals of the batched\n `input`.\n\n Assume `input` has `r` dimensions `[I, J, ..., L, M, N]`.\n Let `max_diag_len` be the maximum length among all diagonals to be extracted,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n Let `num_diags` be the number of diagonals to extract,\n `num_diags = k[1] - k[0] + 1`.\n\n If `num_diags == 1`, the output tensor is of rank `r - 1` with shape\n `[I, J, ..., L, max_diag_len]` and values:\n\n ```\n diagonal[i, j, ..., l, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `y = max(-k[1], 0)`, `x = max(k[1], 0)`.\n\n Otherwise, the output tensor has rank `r` with dimensions\n `[I, J, ..., L, num_diags, max_diag_len]` with values:\n\n ```\n diagonal[i, j, ..., l, m, n]\n = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,\n padding_value ; otherwise.\n ```\n where `d = k[1] - m`, `y = max(-d, 0) - offset`, and `x = max(d, 0) - offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n The input must be at least a matrix.\n\n For example:\n\n ```\n input = np.array([[[1, 2, 3, 4], # Input shape: (2, 3, 4)\n [5, 6, 7, 8],\n [9, 8, 7, 6]],\n [[5, 4, 3, 2],\n [1, 2, 3, 4],\n [5, 6, 7, 8]]])\n\n # A main diagonal from each batch.\n tf.matrix_diag_part(input) ==> [[1, 6, 7], # Output shape: (2, 3)\n [5, 2, 7]]\n\n # A superdiagonal from each batch.\n tf.matrix_diag_part(input, k = 1)\n ==> [[2, 7, 6], # Output shape: (2, 3)\n [4, 3, 8]]\n\n # A band from each batch.\n tf.matrix_diag_part(input, k = (-1, 2))\n ==> [[[0, 3, 8], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [5, 8, 0]],\n [[0, 3, 4],\n [4, 3, 8],\n [5, 2, 7],\n [1, 6, 0]]]\n\n # LEFT_RIGHT alignment.\n tf.matrix_diag_part(input, k = (-1, 2), align=\"LEFT_RIGHT\")\n ==> [[[3, 8, 0], # Output shape: (2, 4, 3)\n [2, 7, 6],\n [1, 6, 7],\n [0, 5, 8]],\n [[3, 4, 0],\n [4, 3, 8],\n [5, 2, 7],\n [0, 1, 6]]]\n\n # max_diag_len can be shorter than the main diagonal.\n tf.matrix_diag_part(input, k = (-2, -1))\n ==> [[[5, 8],\n [9, 0]],\n [[1, 6],\n [5, 0]]]\n\n # padding_value = 9\n tf.matrix_diag_part(input, k = (1, 3), padding_value = 9)\n ==> [[[9, 9, 4], # Output shape: (2, 3, 3)\n [9, 3, 8],\n [2, 7, 6]],\n [[9, 9, 2],\n [9, 3, 4],\n [4, 3, 8]]]\n\n ```\n\n Args:\n input: A `Tensor`. Rank `r` tensor where `r >= 2`.\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n padding_value: A `Tensor`. Must have the same type as `input`.\n The value to fill the area outside the specified diagonal band with.\n Default is 0.\n align: An optional `string` from: `\"LEFT_RIGHT\", \"RIGHT_LEFT\", \"LEFT_LEFT\", \"RIGHT_RIGHT\"`. Defaults to `\"RIGHT_LEFT\"`.\n Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is\n a string specifying how superdiagonals and subdiagonals should be aligned,\n respectively. There are four possible alignments: \"RIGHT_LEFT\" (default),\n \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\" aligns superdiagonals\n to the right (left-pads the row) and subdiagonals to the left (right-pads the\n row). It is the packing format LAPACK uses. cuSPARSE uses \"LEFT_RIGHT\", which is\n the opposite alignment.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the batched diagonal part of a batched tensor.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiagV2", "docs": "Returns a batched diagonal tensor with given batched diagonal values.\n\n Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th\n diagonals of a matrix, with everything else padded with `padding`. `num_rows`\n and `num_cols` specify the dimension of the innermost matrix of the output. If\n both are not specified, the op assumes the innermost matrix is square and infers\n its size from `k` and the innermost dimension of `diagonal`. If only one of them\n is specified, the op assumes the unspecified value is the smallest possible\n based on other criteria.\n\n Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor has\n rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only one\n diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has rank\n `r` with shape `[I, J, ..., L, num_rows, num_cols]`.\n\n The second innermost dimension of `diagonal` has double meaning.\n When `k` is scalar or `k[0] == k[1]`, `M` is part of the batch size\n [I, J, ..., M], and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper\n padding_value ; otherwise\n ```\n\n Otherwise, `M` is treated as the number of diagonals for the matrix in the\n same batch (`M = k[1]-k[0]+1`), and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n padding_value ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0)`.\n\n For example:\n\n ```\n # The main diagonal.\n diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4)\n [5, 6, 7, 8]])\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4)\n [0, 2, 0, 0],\n [0, 0, 3, 0],\n [0, 0, 0, 4]],\n [[5, 0, 0, 0],\n [0, 6, 0, 0],\n [0, 0, 7, 0],\n [0, 0, 0, 8]]]\n\n # A superdiagonal (per batch).\n diagonal = np.array([[1, 2, 3], # Input shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_diag(diagonal, k = 1)\n ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4)\n [0, 0, 2, 0],\n [0, 0, 0, 3],\n [0, 0, 0, 0]],\n [[0, 4, 0, 0],\n [0, 0, 5, 0],\n [0, 0, 0, 6],\n [0, 0, 0, 0]]]\n\n # A band of diagonals.\n diagonals = np.array([[[1, 2, 3], # Input shape: (2, 2, 3)\n [4, 5, 0]],\n [[6, 7, 9],\n [9, 1, 0]]])\n tf.matrix_diag(diagonals, k = (-1, 0))\n ==> [[[1, 0, 0], # Output shape: (2, 3, 3)\n [4, 2, 0],\n [0, 5, 3]],\n [[6, 0, 0],\n [9, 7, 0],\n [0, 1, 9]]]\n\n # Rectangular matrix.\n diagonal = np.array([1, 2]) # Input shape: (2)\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)\n ==> [[0, 0, 0, 0], # Output shape: (3, 4)\n [1, 0, 0, 0],\n [0, 2, 0, 0]]\n\n # Rectangular matrix with inferred num_cols and padding_value = 9.\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)\n ==> [[9, 9], # Output shape: (3, 2)\n [1, 9],\n [9, 2]]\n ```\n\n Args:\n diagonal: A `Tensor`. Rank `r`, where `r >= 1`\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n num_rows: A `Tensor` of type `int32`.\n The number of rows of the output matrix. If it is not provided, the op assumes\n the output matrix is a square matrix and infers the matrix size from k and the\n innermost dimension of `diagonal`.\n num_cols: A `Tensor` of type `int32`.\n The number of columns of the output matrix. If it is not provided, the op\n assumes the output matrix is a square matrix and infers the matrix size from\n k and the innermost dimension of `diagonal`.\n padding_value: A `Tensor`. Must have the same type as `diagonal`.\n The number to fill the area outside the specified diagonal band with.\n Default is 0.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with given batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixDiagV3", "docs": "Returns a batched diagonal tensor with given batched diagonal values.\n\n Returns a tensor with the contents in `diagonal` as `k[0]`-th to `k[1]`-th\n diagonals of a matrix, with everything else padded with `padding`. `num_rows`\n and `num_cols` specify the dimension of the innermost matrix of the output. If\n both are not specified, the op assumes the innermost matrix is square and infers\n its size from `k` and the innermost dimension of `diagonal`. If only one of them\n is specified, the op assumes the unspecified value is the smallest possible\n based on other criteria.\n\n Let `diagonal` have `r` dimensions `[I, J, ..., L, M, N]`. The output tensor has\n rank `r+1` with shape `[I, J, ..., L, M, num_rows, num_cols]` when only one\n diagonal is given (`k` is an integer or `k[0] == k[1]`). Otherwise, it has rank\n `r` with shape `[I, J, ..., L, num_rows, num_cols]`.\n\n The second innermost dimension of `diagonal` has double meaning.\n When `k` is scalar or `k[0] == k[1]`, `M` is part of the batch size\n [I, J, ..., M], and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper\n padding_value ; otherwise\n ```\n\n Otherwise, `M` is treated as the number of diagonals for the matrix in the\n same batch (`M = k[1]-k[0]+1`), and the output tensor is:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n padding_value ; otherwise\n ```\n where `d = n - m`, `diag_index = [k] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n diagonal = np.array([[1, 2, 3, 4], # Input shape: (2, 4)\n [5, 6, 7, 8]])\n tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0], # Output shape: (2, 4, 4)\n [0, 2, 0, 0],\n [0, 0, 3, 0],\n [0, 0, 0, 4]],\n [[5, 0, 0, 0],\n [0, 6, 0, 0],\n [0, 0, 7, 0],\n [0, 0, 0, 8]]]\n\n # A superdiagonal (per batch).\n diagonal = np.array([[1, 2, 3], # Input shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_diag(diagonal, k = 1)\n ==> [[[0, 1, 0, 0], # Output shape: (2, 4, 4)\n [0, 0, 2, 0],\n [0, 0, 0, 3],\n [0, 0, 0, 0]],\n [[0, 4, 0, 0],\n [0, 0, 5, 0],\n [0, 0, 0, 6],\n [0, 0, 0, 0]]]\n\n # A tridiagonal band (per batch).\n diagonals = np.array([[[0, 8, 9], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 2, 3],\n [6, 7, 9],\n [9, 1, 0]]])\n tf.matrix_diag(diagonals, k = (-1, 1))\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # LEFT_RIGHT alignment.\n diagonals = np.array([[[8, 9, 0], # Input shape: (2, 2, 3)\n [1, 2, 3],\n [0, 4, 5]],\n [[2, 3, 0],\n [6, 7, 9],\n [0, 9, 1]]])\n tf.matrix_diag(diagonals, k = (-1, 1), align=\"LEFT_RIGHT\")\n ==> [[[1, 8, 0], # Output shape: (2, 3, 3)\n [4, 2, 9],\n [0, 5, 3]],\n [[6, 2, 0],\n [9, 7, 3],\n [0, 1, 9]]]\n\n # Rectangular matrix.\n diagonal = np.array([1, 2]) # Input shape: (2)\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)\n ==> [[0, 0, 0, 0], # Output shape: (3, 4)\n [1, 0, 0, 0],\n [0, 2, 0, 0]]\n\n # Rectangular matrix with inferred num_cols and padding_value = 9.\n tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)\n ==> [[9, 9], # Output shape: (3, 2)\n [1, 9],\n [9, 2]]\n\n ```\n\n Args:\n diagonal: A `Tensor`. Rank `r`, where `r >= 1`\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n num_rows: A `Tensor` of type `int32`.\n The number of rows of the output matrix. If it is not provided, the op assumes\n the output matrix is a square matrix and infers the matrix size from k and the\n innermost dimension of `diagonal`.\n num_cols: A `Tensor` of type `int32`.\n The number of columns of the output matrix. If it is not provided, the op\n assumes the output matrix is a square matrix and infers the matrix size from\n k and the innermost dimension of `diagonal`.\n padding_value: A `Tensor`. Must have the same type as `diagonal`.\n The number to fill the area outside the specified diagonal band with.\n Default is 0.\n align: An optional `string` from: `\"LEFT_RIGHT\", \"RIGHT_LEFT\", \"LEFT_LEFT\", \"RIGHT_RIGHT\"`. Defaults to `\"RIGHT_LEFT\"`.\n Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is\n a string specifying how superdiagonals and subdiagonals should be aligned,\n respectively. There are four possible alignments: \"RIGHT_LEFT\" (default),\n \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\" aligns superdiagonals\n to the right (left-pads the row) and subdiagonals to the left (right-pads the\n row). It is the packing format LAPACK uses. cuSPARSE uses \"LEFT_RIGHT\", which is\n the opposite alignment.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonal`.\n ", "desc": "Returns a batched diagonal tensor with given batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixExponential", "docs": "Deprecated, use python implementation tf.linalg.matrix_exponential.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Deprecated, use python implementation tf.linalg.matrix_exponential.", "type": "API"}, {"name": "tf.raw_ops.MatrixInverse", "docs": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).\n\n \n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the inverse for all input submatrices `[..., :, :]`.\n\n The op uses LU decomposition with partial pivoting to compute the inverses.\n\n If a matrix is not invertible there is no guarantee what the op does. It\n may detect the condition and raise an exception or it may simply return a\n garbage result.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n adjoint: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).", "type": "API"}, {"name": "tf.raw_ops.MatrixLogarithm", "docs": "Computes the matrix logarithm of one or more square matrices:\n\n \n \\\\(log(exp(A)) = A\\\\)\n\n This op is only defined for complex matrices. If A is positive-definite and\n real, then casting to a complex matrix, taking the logarithm and casting back\n to a real matrix will give the correct result.\n\n This function computes the matrix logarithm using the Schur-Parlett algorithm.\n Details of the algorithm can be found in Section 11.6.2 of:\n Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008.\n ISBN 978-0-898716-46-7.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the exponential for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix logarithm of one or more square matrices:", "type": "API"}, {"name": "tf.raw_ops.MatrixSetDiag", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the main diagonal of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n The output is computed as follows:\n\n Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has\n `k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a\n tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where:\n\n * `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.\n * `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.\n\n Args:\n input: A `Tensor`. Rank `k+1`, where `k >= 1`.\n diagonal: A `Tensor`. Must have the same type as `input`.\n Rank `k`, where `k >= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixSetDiagV2", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the specified diagonals of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n `input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or\n `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`.\n Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`.\n `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`.\n `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n\n The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`.\n If `k` is scalar or `k[0] == k[1]`:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n\n Otherwise,\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and `index_in_diag = n - max(d, 0)`.\n\n For example:\n\n ```\n # The main diagonal.\n input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)\n [7, 7, 7, 7],\n [7, 7, 7, 7]],\n [[7, 7, 7, 7],\n [7, 7, 7, 7],\n [7, 7, 7, 7]]])\n diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_set_diag(diagonal) ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [7, 2, 7, 7],\n [7, 7, 3, 7]],\n [[4, 7, 7, 7],\n [7, 5, 7, 7],\n [7, 7, 6, 7]]]\n\n # A superdiagonal (per batch).\n tf.matrix_set_diag(diagonal, k = 1)\n ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)\n [7, 7, 2, 7],\n [7, 7, 7, 3]],\n [[7, 4, 7, 7],\n [7, 7, 5, 7],\n [7, 7, 7, 6]]]\n\n # A band of diagonals.\n diagonals = np.array([[[1, 2, 3], # Diagonal shape: (2, 2, 3)\n [4, 5, 0]],\n [[6, 1, 2],\n [3, 4, 0]]])\n tf.matrix_set_diag(diagonals, k = (-1, 0))\n ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [4, 2, 7, 7],\n [0, 5, 3, 7]],\n [[6, 7, 7, 7],\n [3, 1, 7, 7],\n [7, 4, 2, 7]]]\n\n ```\n\n Args:\n input: A `Tensor`. Rank `r+1`, where `r >= 1`.\n diagonal: A `Tensor`. Must have the same type as `input`.\n Rank `r` when `k` is an integer or `k[0] == k[1]`. Otherwise, it has rank `r+1`.\n `k >= 1`.\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixSetDiagV3", "docs": "Returns a batched matrix tensor with new batched diagonal values.\n\n Given `input` and `diagonal`, this operation returns a tensor with the\n same shape and values as `input`, except for the specified diagonals of the\n innermost matrices. These will be overwritten by the values in `diagonal`.\n\n `input` has `r+1` dimensions `[I, J, ..., L, M, N]`. When `k` is scalar or\n `k[0] == k[1]`, `diagonal` has `r` dimensions `[I, J, ..., L, max_diag_len]`.\n Otherwise, it has `r+1` dimensions `[I, J, ..., L, num_diags, max_diag_len]`.\n `num_diags` is the number of diagonals, `num_diags = k[1] - k[0] + 1`.\n `max_diag_len` is the longest diagonal in the range `[k[0], k[1]]`,\n `max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))`\n\n The output is a tensor of rank `k+1` with dimensions `[I, J, ..., L, M, N]`.\n If `k` is scalar or `k[0] == k[1]`:\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n\n Otherwise,\n\n ```\n output[i, j, ..., l, m, n]\n = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]\n input[i, j, ..., l, m, n] ; otherwise\n ```\n where `d = n - m`, `diag_index = k[1] - d`, and\n `index_in_diag = n - max(d, 0) + offset`.\n\n `offset` is zero except when the alignment of the diagonal is to the right.\n ```\n offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}\n and `d >= 0`) or\n (`align` in {LEFT_RIGHT, RIGHT_RIGHT}\n and `d <= 0`)\n 0 ; otherwise\n ```\n where `diag_len(d) = min(cols - max(d, 0), rows + min(d, 0))`.\n\n For example:\n\n ```\n # The main diagonal.\n input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4)\n [7, 7, 7, 7],\n [7, 7, 7, 7]],\n [[7, 7, 7, 7],\n [7, 7, 7, 7],\n [7, 7, 7, 7]]])\n diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3)\n [4, 5, 6]])\n tf.matrix_set_diag(input, diagonal)\n ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4)\n [7, 2, 7, 7],\n [7, 7, 3, 7]],\n [[4, 7, 7, 7],\n [7, 5, 7, 7],\n [7, 7, 6, 7]]]\n\n # A superdiagonal (per batch).\n tf.matrix_set_diag(input, diagonal, k = 1)\n ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4)\n [7, 7, 2, 7],\n [7, 7, 7, 3]],\n [[7, 4, 7, 7],\n [7, 7, 5, 7],\n [7, 7, 7, 6]]]\n\n # A band of diagonals.\n diagonals = np.array([[[0, 9, 1], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [4, 5, 0]],\n [[0, 1, 2],\n [5, 6, 4],\n [6, 1, 2],\n [3, 4, 0]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2))\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n # LEFT_RIGHT alignment.\n diagonals = np.array([[[9, 1, 0], # Diagonal shape: (2, 4, 3)\n [6, 5, 8],\n [1, 2, 3],\n [0, 4, 5]],\n [[1, 2, 0],\n [5, 6, 4],\n [6, 1, 2],\n [0, 3, 4]]])\n tf.matrix_set_diag(input, diagonals, k = (-1, 2), align=\"LEFT_RIGHT\")\n ==> [[[1, 6, 9, 7], # Output shape: (2, 3, 4)\n [4, 2, 5, 1],\n [7, 5, 3, 8]],\n [[6, 5, 1, 7],\n [3, 1, 6, 2],\n [7, 4, 2, 4]]]\n\n ```\n\n Args:\n input: A `Tensor`. Rank `r+1`, where `r >= 1`.\n diagonal: A `Tensor`. Must have the same type as `input`.\n Rank `r` when `k` is an integer or `k[0] == k[1]`. Otherwise, it has rank `r+1`.\n `k >= 1`.\n k: A `Tensor` of type `int32`.\n Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main\n diagonal, and negative value means subdiagonals. `k` can be a single integer\n (for a single diagonal) or a pair of integers specifying the low and high ends\n of a matrix band. `k[0]` must not be larger than `k[1]`.\n align: An optional `string` from: `\"LEFT_RIGHT\", \"RIGHT_LEFT\", \"LEFT_LEFT\", \"RIGHT_RIGHT\"`. Defaults to `\"RIGHT_LEFT\"`.\n Some diagonals are shorter than `max_diag_len` and need to be padded. `align` is\n a string specifying how superdiagonals and subdiagonals should be aligned,\n respectively. There are four possible alignments: \"RIGHT_LEFT\" (default),\n \"LEFT_RIGHT\", \"LEFT_LEFT\", and \"RIGHT_RIGHT\". \"RIGHT_LEFT\" aligns superdiagonals\n to the right (left-pads the row) and subdiagonals to the left (right-pads the\n row). It is the packing format LAPACK uses. cuSPARSE uses \"LEFT_RIGHT\", which is\n the opposite alignment.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns a batched matrix tensor with new batched diagonal values.", "type": "API"}, {"name": "tf.raw_ops.MatrixSolve", "docs": "Solves systems of linear equations.\n\n `Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is\n a tensor shape `[..., M, K]`. If `adjoint` is `False` then each output matrix\n satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.\n If `adjoint` is `True` then each output matrix satisfies\n `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n adjoint: An optional `bool`. Defaults to `False`.\n Boolean indicating whether to solve with `matrix` or its (block-wise)\n adjoint.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves systems of linear equations.", "type": "API"}, {"name": "tf.raw_ops.MatrixSolveLs", "docs": "Solves one or more linear least-squares problems.\n\n `matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form real or complex matrices of size `[M, N]`. `Rhs` is a tensor of the same\n type as `matrix` and shape `[..., M, K]`.\n The output is a tensor shape `[..., N, K]` where each output matrix solves\n each of the equations\n `matrix[..., :, :]` * `output[..., :, :]` = `rhs[..., :, :]`\n in the least squares sense.\n\n We use the following notation for (complex) matrix and right-hand sides\n in the batch:\n\n `matrix`=\\\\(A \\in \\mathbb{C}^{m \\times n}\\\\),\n `rhs`=\\\\(B \\in \\mathbb{C}^{m \\times k}\\\\),\n `output`=\\\\(X \\in \\mathbb{C}^{n \\times k}\\\\),\n `l2_regularizer`=\\\\(\\lambda \\in \\mathbb{R}\\\\).\n\n If `fast` is `True`, then the solution is computed by solving the normal\n equations using Cholesky decomposition. Specifically, if \\\\(m \\ge n\\\\) then\n \\\\(X = (A^H A + \\lambda I)^{-1} A^H B\\\\), which solves the least-squares\n problem \\\\(X = \\mathrm{argmin}_{Z \\in \\Re^{n \\times k} } ||A Z - B||_F^2 + \\lambda ||Z||_F^2\\\\).\n If \\\\(m \\lt n\\\\) then `output` is computed as\n \\\\(X = A^H (A A^H + \\lambda I)^{-1} B\\\\), which (for \\\\(\\lambda = 0\\\\)) is the\n minimum-norm solution to the under-determined linear system, i.e.\n \\\\(X = \\mathrm{argmin}_{Z \\in \\mathbb{C}^{n \\times k} } ||Z||_F^2 \\\\),\n subject to \\\\(A Z = B\\\\). Notice that the fast path is only numerically stable\n when \\\\(A\\\\) is numerically full rank and has a condition number\n \\\\(\\mathrm{cond}(A) \\lt \\frac{1}{\\sqrt{\\epsilon_{mach} } }\\\\) or \\\\(\\lambda\\\\) is\n sufficiently large.\n\n If `fast` is `False` an algorithm based on the numerically robust complete\n orthogonal decomposition is used. This computes the minimum-norm\n least-squares solution, even when \\\\(A\\\\) is rank deficient. This path is\n typically 6-7 times slower than the fast path. If `fast` is `False` then\n `l2_regularizer` is ignored.\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, N]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n l2_regularizer: A `Tensor` of type `float64`. Scalar tensor.\n\n @compatibility(numpy)\n Equivalent to np.linalg.lstsq\n @end_compatibility\n fast: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves one or more linear least-squares problems.", "type": "API"}, {"name": "tf.raw_ops.MatrixSquareRoot", "docs": "Computes the matrix square root of one or more square matrices:\n\n matmul(sqrtm(A), sqrtm(A)) = A\n\n The input matrix should be invertible. If the input matrix is real, it should\n have no eigenvalues which are real and negative (pairs of complex conjugate\n eigenvalues are allowed).\n\n The matrix square root is computed by first reducing the matrix to\n quasi-triangular form with the real Schur decomposition. The square root\n of the quasi-triangular matrix is then computed directly. Details of\n the algorithm can be found in: Nicholas J. Higham, \"Computing real\n square roots of a real matrix\", Linear Algebra Appl., 1987.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices. The output is a tensor of the same shape as the input\n containing the matrix square root for all input submatrices `[..., :, :]`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the matrix square root of one or more square matrices:", "type": "API"}, {"name": "tf.raw_ops.MatrixTriangularSolve", "docs": "Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.\n\n \n `matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form\n square matrices. If `lower` is `True` then the strictly upper triangular part\n of each inner-most matrix is assumed to be zero and not accessed.\n If `lower` is False then the strictly lower triangular part of each inner-most\n matrix is assumed to be zero and not accessed.\n `rhs` is a tensor of shape `[..., M, N]`.\n\n The output is a tensor of shape `[..., M, N]`. If `adjoint` is\n `True` then the innermost matrices in `output` satisfy matrix equations\n `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.\n If `adjoint` is `False` then the strictly then the innermost matrices in\n `output` satisfy matrix equations\n `adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.\n\n Note, the batch shapes for the inputs only need to broadcast.\n\n Example:\n ```python\n\n a = tf.constant([[3, 0, 0, 0],\n [2, 1, 0, 0],\n [1, 0, 1, 0],\n [1, 1, 1, 1]], dtype=tf.float32)\n\n b = tf.constant([[4],\n [2],\n [4],\n [2]], dtype=tf.float32)\n\n x = tf.linalg.triangular_solve(a, b, lower=True)\n x\n # \n\n # in python3 one can use `a@x`\n tf.matmul(a, x)\n # \n ```\n\n Args:\n matrix: A `Tensor`. Must be one of the following types: `bfloat16`, `float64`, `float32`, `half`, `complex64`, `complex128`.\n Shape is `[..., M, M]`.\n rhs: A `Tensor`. Must have the same type as `matrix`.\n Shape is `[..., M, K]`.\n lower: An optional `bool`. Defaults to `True`.\n Boolean indicating whether the innermost matrices in `matrix` are\n lower or upper triangular.\n adjoint: An optional `bool`. Defaults to `False`.\n Boolean indicating whether to solve with `matrix` or its (block-wise)\n adjoint.\n\n @compatibility(numpy)\n Equivalent to scipy.linalg.solve_triangular\n @end_compatibility\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `matrix`.\n ", "desc": "Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.", "type": "API"}, {"name": "tf.raw_ops.Max", "docs": "Computes the maximum of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the maximum of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Maximum", "docs": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.\n\n Example:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-2., 0., 2., 5.])\n >>> tf.math.maximum(x, y)\n \n\n Note that `maximum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.maximum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_max`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the max of x and y (i.e. x > y ? x : y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.MaxIntraOpParallelismDataset", "docs": "Creates a dataset that overrides the maximum intra-op parallelism.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n max_intra_op_parallelism: A `Tensor` of type `int64`.\n Identifies the maximum intra-op parallelism to use.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that overrides the maximum intra-op parallelism.", "type": "API"}, {"name": "tf.raw_ops.MaxPool", "docs": "Performs max pooling on the input.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `qint8`.\n 4-D input to pool over.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs max pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.MaxPool3D", "docs": "Performs 3D max pooling on the input.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n Shape `[batch, depth, rows, cols, channels]` tensor to pool over.\n ksize: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The size of the window for each dimension of\n the input tensor. Must have `ksize[0] = ksize[4] = 1`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs 3D max pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.MaxPool3DGrad", "docs": "Computes gradients of 3D max pooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`.\n Output backprop of shape `[batch, depth, rows, cols, channels]`.\n ksize: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The size of the window for each dimension of\n the input tensor. Must have `ksize[0] = ksize[4] = 1`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grad`.\n ", "desc": "Computes gradients of 3D max pooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPool3DGradGrad", "docs": "Computes second-order gradients of the maxpooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must have the same type as `orig_input`.\n Output backprop of shape `[batch, depth, rows, cols, channels]`.\n ksize: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The size of the window for each dimension of\n the input tensor. Must have `ksize[0] = ksize[4] = 1`.\n strides: A list of `ints` that has length `>= 5`.\n 1-D tensor of length 5. The stride of the sliding window for each\n dimension of `input`. Must have `strides[0] = strides[4] = 1`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NDHWC\", \"NCDHW\"`. Defaults to `\"NDHWC\"`.\n The data format of the input and output data. With the\n default format \"NDHWC\", the data is stored in the order of:\n [batch, in_depth, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCDHW\", the data storage order is:\n [batch, in_channels, in_depth, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes second-order gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGrad", "docs": "Computes gradients of the maxpooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must have the same type as `orig_input`.\n 4-D. Gradients w.r.t. the output of `max_pool`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\", \"EXPLICIT\"`.\n The type of padding algorithm to use.\n explicit_paddings: An optional list of `ints`. Defaults to `[]`.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGradGrad", "docs": "Computes second-order gradients of the maxpooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must have the same type as `orig_input`.\n 4-D. Gradients of gradients w.r.t. the input of `max_pool`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes second-order gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGradGradV2", "docs": "Computes second-order gradients of the maxpooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must have the same type as `orig_input`.\n 4-D. Gradients of gradients w.r.t. the input of `max_pool`.\n ksize: A `Tensor` of type `int32`.\n The size of the window for each dimension of the input tensor.\n strides: A `Tensor` of type `int32`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes second-order gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGradGradWithArgmax", "docs": "Computes second-order gradients of the maxpooling function.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input.\n grad: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the\n input of `max_pool`.\n argmax: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The indices of the maximum values chosen for each output of `max_pool`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n include_batch_in_index: An optional `bool`. Defaults to `False`.\n Whether to include batch dimension in flattened index of `argmax`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes second-order gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGradV2", "docs": "Computes gradients of the maxpooling function.\n\n Args:\n orig_input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input tensor.\n orig_output: A `Tensor`. Must have the same type as `orig_input`.\n The original output tensor.\n grad: A `Tensor`. Must have the same type as `orig_input`.\n 4-D. Gradients w.r.t. the output of `max_pool`.\n ksize: A `Tensor` of type `int32`.\n The size of the window for each dimension of the input tensor.\n strides: A `Tensor` of type `int32`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `orig_input`.\n ", "desc": "Computes gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolGradWithArgmax", "docs": "Computes gradients of the maxpooling function.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The original input.\n grad: A `Tensor`. Must have the same type as `input`.\n 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the\n output of `max_pool`.\n argmax: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The indices of the maximum values chosen for each output of `max_pool`.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n include_batch_in_index: An optional `bool`. Defaults to `False`.\n Whether to include batch dimension in flattened index of `argmax`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes gradients of the maxpooling function.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolV2", "docs": "Performs max pooling on the input.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `qint8`.\n 4-D input to pool over.\n ksize: A `Tensor` of type `int32`.\n The size of the window for each dimension of the input tensor.\n strides: A `Tensor` of type `int32`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n Specify the data format of the input and output data. With the\n default format \"NHWC\", the data is stored in the order of:\n [batch, in_height, in_width, in_channels].\n Alternatively, the format could be \"NCHW\", the data storage order of:\n [batch, in_channels, in_height, in_width].\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Performs max pooling on the input.", "type": "API"}, {"name": "tf.raw_ops.MaxPoolWithArgmax", "docs": "Performs max pooling on the input and outputs both max values and indices.\n\n The indices in `argmax` are flattened, so that a maximum value at position\n `[b, y, x, c]` becomes flattened index:\n `(y * width + x) * channels + c` if `include_batch_in_index` is False;\n `((b * height + y) * width + x) * channels + c` if `include_batch_in_index` is True.\n\n The indices returned are always in `[0, height) x [0, width)` before flattening,\n even if padding is involved and the mathematically correct answer is outside\n (either negative or too large). This is a bug, but fixing it is difficult to do\n in a safe backwards compatible way, especially due to flattening.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 4-D with shape `[batch, height, width, channels]`. Input to pool over.\n ksize: A list of `ints` that has length `>= 4`.\n The size of the window for each dimension of the input tensor.\n strides: A list of `ints` that has length `>= 4`.\n The stride of the sliding window for each dimension of the\n input tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n Targmax: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n include_batch_in_index: An optional `bool`. Defaults to `False`.\n Whether to include batch dimension in flattened index of `argmax`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, argmax).\n\n output: A `Tensor`. Has the same type as `input`.\n argmax: A `Tensor` of type `Targmax`.\n ", "desc": "Performs max pooling on the input and outputs both max values and indices.", "type": "API"}, {"name": "tf.raw_ops.Mean", "docs": "Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the mean of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Merge", "docs": "Forwards the value of an available tensor from `inputs` to `output`.\n\n `Merge` waits for at least one of the tensors in `inputs` to become available.\n It is usually combined with `Switch` to implement branching.\n\n `Merge` forwards the first tensor to become available to `output`, and sets\n `value_index` to its index in `inputs`.\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with the same type.\n The input tensors, exactly one of which will become available.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, value_index).\n\n output: A `Tensor`. Has the same type as `inputs`.\n value_index: A `Tensor` of type `int32`.\n ", "desc": "Forwards the value of an available tensor from `inputs` to `output`.", "type": "API"}, {"name": "tf.raw_ops.MergeSummary", "docs": "Merges summaries.\n\n This op creates a\n [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)\n protocol buffer that contains the union of all the values in the input\n summaries.\n\n When the Op is run, it reports an `InvalidArgument` error if multiple values\n in the summaries to merge use the same tag.\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with type `string`.\n Can be of any shape. Each must contain serialized `Summary` protocol\n buffers.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Merges summaries.", "type": "API"}, {"name": "tf.raw_ops.MergeV2Checkpoints", "docs": "V2 format specific: merges the metadata files of sharded checkpoints. The\n\n result is one logical checkpoint, with one physical metadata file and renamed\n data files.\n\n Intended for \"grouping\" multiple checkpoints in a sharded checkpoint setup.\n\n If delete_old_dirs is true, attempts to delete recursively the dirname of each\n path in the input checkpoint_prefixes. This is useful when those paths are non\n user-facing temporary locations.\n\n Args:\n checkpoint_prefixes: A `Tensor` of type `string`.\n prefixes of V2 checkpoints to merge.\n destination_prefix: A `Tensor` of type `string`.\n scalar. The desired final prefix. Allowed to be the same\n as one of the checkpoint_prefixes.\n delete_old_dirs: An optional `bool`. Defaults to `True`. see above.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "V2 format specific: merges the metadata files of sharded checkpoints. The", "type": "API"}, {"name": "tf.raw_ops.Mfcc", "docs": "Transforms a spectrogram into a form that's useful for speech recognition.\n\n Mel Frequency Cepstral Coefficients are a way of representing audio data that's\n been effective as an input feature for machine learning. They are created by\n taking the spectrum of a spectrogram (a 'cepstrum'), and discarding some of the\n higher frequencies that are less significant to the human ear. They have a long\n history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum\n is a good resource to learn more.\n\n Args:\n spectrogram: A `Tensor` of type `float32`.\n Typically produced by the Spectrogram op, with magnitude_squared\n set to true.\n sample_rate: A `Tensor` of type `int32`.\n How many samples per second the source audio used.\n upper_frequency_limit: An optional `float`. Defaults to `4000`.\n The highest frequency to use when calculating the\n ceptstrum.\n lower_frequency_limit: An optional `float`. Defaults to `20`.\n The lowest frequency to use when calculating the\n ceptstrum.\n filterbank_channel_count: An optional `int`. Defaults to `40`.\n Resolution of the Mel bank used internally.\n dct_coefficient_count: An optional `int`. Defaults to `13`.\n How many output channels to produce per time slice.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Transforms a spectrogram into a form that's useful for speech recognition.", "type": "API"}, {"name": "tf.raw_ops.Min", "docs": "Computes the minimum of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the minimum of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Minimum", "docs": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.\n\n Both inputs are number-type tensors (except complex). `minimum` expects that\n both tensors have the same `dtype`.\n\n Examples:\n\n >>> x = tf.constant([0., 0., 0., 0.])\n >>> y = tf.constant([-5., -2., 0., 3.])\n >>> tf.math.minimum(x, y)\n \n\n Note that `minimum` supports [broadcast semantics](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for `x` and `y`.\n\n >>> x = tf.constant([-5., 0., 0., 0.])\n >>> y = tf.constant([-3.])\n >>> tf.math.minimum(x, y)\n \n\n The reduction version of this elementwise operation is `tf.math.reduce_min`\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns the min of x and y (i.e. x < y ? x : y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.MirrorPad", "docs": "Pads a tensor with mirrored values.\n\n This operation pads a `input` with mirrored values according to the `paddings`\n you specify. `paddings` is an integer tensor with shape `[n, 2]`, where n is\n the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates\n how many values to add before the contents of `input` in that dimension, and\n `paddings[D, 1]` indicates how many values to add after the contents of `input`\n in that dimension. Both `paddings[D, 0]` and `paddings[D, 1]` must be no greater\n than `input.dim_size(D)` (or `input.dim_size(D) - 1`) if `copy_border` is true\n (if false, respectively).\n\n The padded size of each dimension D of the output is:\n\n `paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`\n\n For example:\n\n ```\n # 't' is [[1, 2, 3], [4, 5, 6]].\n # 'paddings' is [[1, 1]], [2, 2]].\n # 'mode' is SYMMETRIC.\n # rank of 't' is 2.\n pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2]\n [2, 1, 1, 2, 3, 3, 2]\n [5, 4, 4, 5, 6, 6, 5]\n [5, 4, 4, 5, 6, 6, 5]]\n ```\n\n Args:\n input: A `Tensor`. The input tensor to be padded.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A two-column matrix specifying the padding sizes. The number of\n rows must be the same as the rank of `input`.\n mode: A `string` from: `\"REFLECT\", \"SYMMETRIC\"`.\n Either `REFLECT` or `SYMMETRIC`. In reflect mode the padded regions\n do not include the borders, while in symmetric mode the padded regions\n do include the borders. For example, if `input` is `[1, 2, 3]` and `paddings`\n is `[0, 2]`, then the output is `[1, 2, 3, 2, 1]` in reflect mode, and\n it is `[1, 2, 3, 3, 2]` in symmetric mode.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Pads a tensor with mirrored values.", "type": "API"}, {"name": "tf.raw_ops.MirrorPadGrad", "docs": "Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor.\n\n This operation folds the padded areas of `input` by `MirrorPad` according to the\n `paddings` you specify. `paddings` must be the same as `paddings` argument\n given to the corresponding `MirrorPad` op.\n\n The folded size of each dimension D of the output is:\n\n `input.dim_size(D) - paddings(D, 0) - paddings(D, 1)`\n\n For example:\n\n ```\n # 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].\n # 'paddings' is [[0, 1]], [0, 1]].\n # 'mode' is SYMMETRIC.\n # rank of 't' is 2.\n pad(t, paddings) ==> [[ 1, 5]\n [11, 28]]\n ```\n\n Args:\n input: A `Tensor`. The input tensor to be folded.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A two-column matrix specifying the padding sizes. The number of\n rows must be the same as the rank of `input`.\n mode: A `string` from: `\"REFLECT\", \"SYMMETRIC\"`.\n The mode used in the `MirrorPad` op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor.", "type": "API"}, {"name": "tf.raw_ops.Mod", "docs": "Returns element-wise remainder of division. This emulates C semantics in that\n\n the result here is consistent with a truncating divide. E.g.\n `tf.truncatediv(x, y) * y + truncate_mod(x, y) = x`.\n\n *NOTE*: `Mod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`, `half`, `half`, `bfloat16`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. This emulates C semantics in that", "type": "API"}, {"name": "tf.raw_ops.ModelDataset", "docs": "Identity transformation that models performance.\n\n Identity transformation that models performance.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n algorithm: An optional `int`. Defaults to `0`.\n cpu_budget: An optional `int`. Defaults to `0`.\n ram_budget: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Identity transformation that models performance.", "type": "API"}, {"name": "tf.raw_ops.Mul", "docs": "Returns x * y element-wise.\n\n *NOTE*: `Multiply` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x * y element-wise.", "type": "API"}, {"name": "tf.raw_ops.MulNoNan", "docs": "Returns x * y element-wise. Returns zero if y is zero, even if x if infinite or NaN.\n\n *NOTE*: `MulNoNan` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x * y element-wise. Returns zero if y is zero, even if x if infinite or NaN.", "type": "API"}, {"name": "tf.raw_ops.MultiDeviceIterator", "docs": "Creates a MultiDeviceIterator resource.\n\n Args:\n devices: A list of `strings` that has length `>= 1`.\n A list of devices the iterator works across.\n shared_name: A `string`.\n If non-empty, this resource will be shared under the given name\n across multiple sessions.\n container: A `string`.\n If non-empty, this resource is placed in the given container.\n Otherwise, a default container is used.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n The list of shapes being produced.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a MultiDeviceIterator resource.", "type": "API"}, {"name": "tf.raw_ops.MultiDeviceIteratorFromStringHandle", "docs": "Generates a MultiDeviceIterator resource from its provided string handle.\n\n Args:\n string_handle: A `Tensor` of type `string`.\n String representing the resource.\n output_types: An optional list of `tf.DTypes`. Defaults to `[]`.\n The type list for the return values.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The list of shapes being produced.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Generates a MultiDeviceIterator resource from its provided string handle.", "type": "API"}, {"name": "tf.raw_ops.MultiDeviceIteratorGetNextFromShard", "docs": "Gets next element for the provided shard number.\n\n Args:\n multi_device_iterator: A `Tensor` of type `resource`.\n A MultiDeviceIterator resource.\n shard_num: A `Tensor` of type `int32`.\n Integer representing which shard to fetch data for.\n incarnation_id: A `Tensor` of type `int64`.\n Which incarnation of the MultiDeviceIterator is running.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n The list of shapes being produced.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Gets next element for the provided shard number.", "type": "API"}, {"name": "tf.raw_ops.MultiDeviceIteratorInit", "docs": "Initializes the multi device iterator with the given dataset.\n\n Args:\n dataset: A `Tensor` of type `variant`. Dataset to be iterated upon.\n multi_device_iterator: A `Tensor` of type `resource`.\n A MultiDeviceIteratorResource.\n max_buffer_size: A `Tensor` of type `int64`.\n The maximum size of the host side per device buffer to keep.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Initializes the multi device iterator with the given dataset.", "type": "API"}, {"name": "tf.raw_ops.MultiDeviceIteratorToStringHandle", "docs": "Produces a string handle for the given MultiDeviceIterator.\n\n Args:\n multi_device_iterator: A `Tensor` of type `resource`.\n A MultiDeviceIterator resource.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Produces a string handle for the given MultiDeviceIterator.", "type": "API"}, {"name": "tf.raw_ops.Multinomial", "docs": "Draws samples from a multinomial distribution.\n\n Args:\n logits: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]`\n represents the unnormalized log probabilities for all classes.\n num_samples: A `Tensor` of type `int32`.\n 0-D. Number of independent samples to draw for each row slice.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 is set to be non-zero, the internal random number\n generator is seeded by the given seed. Otherwise, a random seed is used.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n output_dtype: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_dtype`.\n ", "desc": "Draws samples from a multinomial distribution.", "type": "API"}, {"name": "tf.raw_ops.MutableDenseHashTable", "docs": "Creates an empty hash table that uses tensors as the backing store.\n\n It uses \"open addressing\" with quadratic reprobing to resolve\n collisions.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a scalar. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n empty_key: A `Tensor`.\n The key used to represent empty key buckets internally. Must not\n be used in insert or lookup operations.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n value_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n The shape of each value.\n initial_num_buckets: An optional `int`. Defaults to `131072`.\n The initial number of hash table buckets. Must be a power\n to 2.\n max_load_factor: An optional `float`. Defaults to `0.8`.\n The maximum ratio between number of entries and number of\n buckets before growing the table. Must be between 0 and 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Creates an empty hash table that uses tensors as the backing store.", "type": "API"}, {"name": "tf.raw_ops.MutableDenseHashTableV2", "docs": "Creates an empty hash table that uses tensors as the backing store.\n\n It uses \"open addressing\" with quadratic reprobing to resolve\n collisions.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a scalar. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n empty_key: A `Tensor`.\n The key used to represent empty key buckets internally. Must not\n be used in insert or lookup operations.\n deleted_key: A `Tensor`. Must have the same type as `empty_key`.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n value_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n The shape of each value.\n initial_num_buckets: An optional `int`. Defaults to `131072`.\n The initial number of hash table buckets. Must be a power\n to 2.\n max_load_factor: An optional `float`. Defaults to `0.8`.\n The maximum ratio between number of entries and number of\n buckets before growing the table. Must be between 0 and 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates an empty hash table that uses tensors as the backing store.", "type": "API"}, {"name": "tf.raw_ops.MutableHashTable", "docs": "Creates an empty hash table.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a scalar. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n If true and shared_name is empty, the table is shared\n using the node name.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Creates an empty hash table.", "type": "API"}, {"name": "tf.raw_ops.MutableHashTableOfTensors", "docs": "Creates an empty hash table.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a vector. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n value_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Creates an empty hash table.", "type": "API"}, {"name": "tf.raw_ops.MutableHashTableOfTensorsV2", "docs": "Creates an empty hash table.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a vector. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n value_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates an empty hash table.", "type": "API"}, {"name": "tf.raw_ops.MutableHashTableV2", "docs": "Creates an empty hash table.\n\n This op creates a mutable hash table, specifying the type of its keys and\n values. Each value must be a scalar. Data can be inserted into the table using\n the insert operations. It does not support the initialization operation.\n\n Args:\n key_dtype: A `tf.DType`. Type of the table keys.\n value_dtype: A `tf.DType`. Type of the table values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this table is shared under the given name across\n multiple sessions.\n use_node_name_sharing: An optional `bool`. Defaults to `False`.\n If true and shared_name is empty, the table is shared\n using the node name.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates an empty hash table.", "type": "API"}, {"name": "tf.raw_ops.MutexLock", "docs": "Locks a mutex resource. The output is the lock. So long as the lock tensor\n\n is alive, any other request to use `MutexLock` with this mutex will wait.\n\n This is particularly useful for creating a critical section when used in\n conjunction with `MutexLockIdentity`:\n\n ```python\n\n mutex = mutex_v2(\n shared_name=handle_name, container=container, name=name)\n\n def execute_in_critical_section(fn, *args, **kwargs):\n lock = gen_resource_variable_ops.mutex_lock(mutex)\n\n with ops.control_dependencies([lock]):\n r = fn(*args, **kwargs)\n\n with ops.control_dependencies(nest.flatten(r)):\n with ops.colocate_with(mutex):\n ensure_lock_exists = mutex_lock_identity(lock)\n\n # Make sure that if any element of r is accessed, all of\n # them are executed together.\n r = nest.map_structure(tf.identity, r)\n\n with ops.control_dependencies([ensure_lock_exists]):\n return nest.map_structure(tf.identity, r)\n ```\n\n While `fn` is running in the critical section, no other functions which wish to\n use this critical section may run.\n\n Often the use case is that two executions of the same graph, in parallel,\n wish to run `fn`; and we wish to ensure that only one of them executes\n at a time. This is especially important if `fn` modifies one or more\n variables at a time.\n\n It is also useful if two separate functions must share a resource, but we\n wish to ensure the usage is exclusive.\n\n Args:\n mutex: A `Tensor` of type `resource`. The mutex resource to lock.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Locks a mutex resource. The output is the lock. So long as the lock tensor", "type": "API"}, {"name": "tf.raw_ops.MutexV2", "docs": "Creates a Mutex resource that can be locked by `MutexLock`.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this variable is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this variable is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a Mutex resource that can be locked by `MutexLock`.", "type": "API"}, {"name": "tf.raw_ops.NcclAllReduce", "docs": "Outputs a tensor containing the reduction across all input tensors.\n\n Outputs a tensor containing the reduction across all input tensors passed to ops\n within the same `shared_name.\n\n The graph should be constructed so if one op runs with shared_name value `c`,\n then `num_devices` ops will run with shared_name value `c`. Failure to do so\n will cause the graph execution to fail to complete.\n\n input: the input to the reduction\n data: the value of the reduction across all `num_devices` devices.\n reduction: the reduction operation to perform.\n num_devices: The number of devices participating in this reduction.\n shared_name: Identifier that shared between ops of the same reduction.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n reduction: A `string` from: `\"min\", \"max\", \"prod\", \"sum\"`.\n num_devices: An `int`.\n shared_name: A `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Outputs a tensor containing the reduction across all input tensors.", "type": "API"}, {"name": "tf.raw_ops.NcclBroadcast", "docs": "Sends `input` to all devices that are connected to the output.\n\n Sends `input` to all devices that are connected to the output.\n\n The graph should be constructed so that all ops connected to the output have a\n valid device assignment, and the op itself is assigned one of these devices.\n\n input: The input to the broadcast.\n output: The same as input.\n shape: The shape of the input tensor.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n shape: A `tf.TensorShape` or list of `ints`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Sends `input` to all devices that are connected to the output.", "type": "API"}, {"name": "tf.raw_ops.NcclReduce", "docs": "Reduces `input` from `num_devices` using `reduction` to a single device.\n\n Reduces `input` from `num_devices` using `reduction` to a single device.\n\n The graph should be constructed so that all inputs have a valid device\n assignment, and the op itself is assigned one of these devices.\n\n input: The input to the reduction.\n data: the value of the reduction across all `num_devices` devices.\n reduction: the reduction operation to perform.\n\n Args:\n input: A list of at least 1 `Tensor` objects with the same type in: `half`, `float32`, `float64`, `int32`, `int64`.\n reduction: A `string` from: `\"min\", \"max\", \"prod\", \"sum\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Reduces `input` from `num_devices` using `reduction` to a single device.", "type": "API"}, {"name": "tf.raw_ops.Ndtri", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Neg", "docs": "Computes numerical negative value element-wise.\n\n I.e., \\\\(y = -x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes numerical negative value element-wise.", "type": "API"}, {"name": "tf.raw_ops.NextAfter", "docs": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.\n\n This operation returns the same result as the C++ std::nextafter function.\n\n It can also return a subnormal number.\n\n @compatibility(cpp)\n Equivalent to C++ std::nextafter function.\n @end_compatibility\n\n Args:\n x1: A `Tensor`. Must be one of the following types: `float64`, `float32`.\n x2: A `Tensor`. Must have the same type as `x1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x1`.\n ", "desc": "Returns the next representable value of `x1` in the direction of `x2`, element-wise.", "type": "API"}, {"name": "tf.raw_ops.NextIteration", "docs": "Makes its input available to the next iteration.\n\n Args:\n data: A `Tensor`. The tensor to be made available to the next iteration.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Makes its input available to the next iteration.", "type": "API"}, {"name": "tf.raw_ops.NonDeterministicInts", "docs": "Non-deterministically generates some integers.\n\n This op may use some OS-provided source of non-determinism (e.g. an RNG), so each execution will give different results.\n\n Args:\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.int64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Non-deterministically generates some integers.", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppression", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n selected_indices = tf.image.non_max_suppression(\n boxes, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n\n Args:\n boxes: A `Tensor` of type `float32`.\n A 2-D float tensor of shape `[num_boxes, 4]`.\n scores: A `Tensor` of type `float32`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n iou_threshold: An optional `float`. Defaults to `0.5`.\n A float representing the threshold for deciding whether boxes\n overlap too much with respect to IOU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppressionV2", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system. Note that this\n algorithm is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n\n selected_indices = tf.image.non_max_suppression_v2(\n boxes, scores, max_output_size, iou_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n\n Args:\n boxes: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 2-D float tensor of shape `[num_boxes, 4]`.\n scores: A `Tensor`. Must have the same type as `boxes`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n iou_threshold: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too much with respect to IOU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppressionV3", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes with score less than\n `score_threshold` are removed. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system and more\n generally is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n selected_indices = tf.image.non_max_suppression_v2(\n boxes, scores, max_output_size, iou_threshold, score_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n\n Args:\n boxes: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 2-D float tensor of shape `[num_boxes, 4]`.\n scores: A `Tensor`. Must have the same type as `boxes`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n iou_threshold: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too much with respect to IOU.\n score_threshold: A `Tensor`. Must have the same type as `iou_threshold`.\n A 0-D float tensor representing the threshold for deciding when to remove\n boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppressionV4", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes with score less than\n `score_threshold` are removed. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system and more\n generally is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n selected_indices = tf.image.non_max_suppression_v2(\n boxes, scores, max_output_size, iou_threshold, score_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n\n Args:\n boxes: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 2-D float tensor of shape `[num_boxes, 4]`.\n scores: A `Tensor`. Must have the same type as `boxes`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n iou_threshold: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too much with respect to IOU.\n score_threshold: A `Tensor`. Must have the same type as `iou_threshold`.\n A 0-D float tensor representing the threshold for deciding when to remove\n boxes based on score.\n pad_to_max_output_size: An optional `bool`. Defaults to `False`.\n If true, the output `selected_indices` is padded to be of length\n `max_output_size`. Defaults to false.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (selected_indices, valid_outputs).\n\n selected_indices: A `Tensor` of type `int32`.\n valid_outputs: A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppressionV5", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high intersection-over-union (IOU) overlap\n with previously selected boxes. Bounding boxes with score less than\n `score_threshold` are removed. Bounding boxes are supplied as\n [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any\n diagonal pair of box corners and the coordinates can be provided as normalized\n (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm\n is agnostic to where the origin is in the coordinate system and more\n generally is invariant to orthogonal transformations and translations\n of the coordinate system; thus translating or reflections of the coordinate\n system result in the same boxes being selected by the algorithm.\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n selected_indices = tf.image.non_max_suppression_v2(\n boxes, scores, max_output_size, iou_threshold, score_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n This op also supports a Soft-NMS (with Gaussian weighting) mode (c.f.\n Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score\n of other overlapping boxes instead of directly causing them to be pruned.\n To enable this Soft-NMS mode, set the `soft_nms_sigma` parameter to be\n larger than 0.\n\n Args:\n boxes: A `Tensor`. Must be one of the following types: `half`, `float32`.\n A 2-D float tensor of shape `[num_boxes, 4]`.\n scores: A `Tensor`. Must have the same type as `boxes`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n iou_threshold: A `Tensor`. Must have the same type as `boxes`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too much with respect to IOU.\n score_threshold: A `Tensor`. Must have the same type as `boxes`.\n A 0-D float tensor representing the threshold for deciding when to remove\n boxes based on score.\n soft_nms_sigma: A `Tensor`. Must have the same type as `boxes`.\n A 0-D float tensor representing the sigma parameter for Soft NMS; see Bodla et\n al (c.f. https://arxiv.org/abs/1704.04503). When `soft_nms_sigma=0.0` (which\n is default), we fall back to standard (hard) NMS.\n pad_to_max_output_size: An optional `bool`. Defaults to `False`.\n If true, the output `selected_indices` is padded to be of length\n `max_output_size`. Defaults to false.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (selected_indices, selected_scores, valid_outputs).\n\n selected_indices: A `Tensor` of type `int32`.\n selected_scores: A `Tensor`. Has the same type as `boxes`.\n valid_outputs: A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonMaxSuppressionWithOverlaps", "docs": "Greedily selects a subset of bounding boxes in descending order of score,\n\n pruning away boxes that have high overlaps\n with previously selected boxes. Bounding boxes with score less than\n `score_threshold` are removed. N-by-n overlap values are supplied as square matrix,\n which allows for defining a custom overlap criterium (eg. intersection over union,\n intersection over area, etc.).\n\n The output of this operation is a set of integers indexing into the input\n collection of bounding boxes representing the selected boxes. The bounding\n box coordinates corresponding to the selected indices can then be obtained\n using the `tf.gather operation`. For example:\n\n selected_indices = tf.image.non_max_suppression_with_overlaps(\n overlaps, scores, max_output_size, overlap_threshold, score_threshold)\n selected_boxes = tf.gather(boxes, selected_indices)\n\n Args:\n overlaps: A `Tensor` of type `float32`.\n A 2-D float tensor of shape `[num_boxes, num_boxes]` representing\n the n-by-n box overlap values.\n scores: A `Tensor` of type `float32`.\n A 1-D float tensor of shape `[num_boxes]` representing a single\n score corresponding to each box (each row of boxes).\n max_output_size: A `Tensor` of type `int32`.\n A scalar integer tensor representing the maximum number of\n boxes to be selected by non max suppression.\n overlap_threshold: A `Tensor` of type `float32`.\n A 0-D float tensor representing the threshold for deciding whether\n boxes overlap too.\n score_threshold: A `Tensor` of type `float32`.\n A 0-D float tensor representing the threshold for deciding when to remove\n boxes based on score.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Greedily selects a subset of bounding boxes in descending order of score,", "type": "API"}, {"name": "tf.raw_ops.NonSerializableDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.NoOp", "docs": "Does nothing. Only useful as a placeholder for control edges.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Does nothing. Only useful as a placeholder for control edges.", "type": "API"}, {"name": "tf.raw_ops.NotEqual", "docs": "Returns the truth value of (x != y) element-wise.\n\n *NOTE*: `NotEqual` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`.\n y: A `Tensor`. Must have the same type as `x`.\n incompatible_shape_error: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns the truth value of (x != y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.NthElement", "docs": "Finds values of the `n`-th order statistic for the last dimension.\n\n If the input is a vector (rank-1), finds the entries which is the nth-smallest\n value in the vector and outputs their values as scalar tensor.\n\n For matrices (resp. higher rank input), computes the entries which is the\n nth-smallest value in each row (resp. vector along the last dimension). Thus,\n\n values.shape = input.shape[:-1]\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D or higher with last dimension at least `n+1`.\n n: A `Tensor` of type `int32`.\n 0-D. Position of sorted vector to select along the last dimension (along\n each row for matrices). Valid range of n is `[0, input.shape[:-1])`\n reverse: An optional `bool`. Defaults to `False`.\n When set to True, find the nth-largest value in the vector and vice\n versa.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Finds values of the `n`-th order statistic for the last dimension.", "type": "API"}, {"name": "tf.raw_ops.OneHot", "docs": "Returns a one-hot tensor.\n\n The locations represented by indices in `indices` take value `on_value`,\n while all other locations take value `off_value`.\n\n If the input `indices` is rank `N`, the output will have rank `N+1`,\n The new axis is created at dimension `axis` (default: the new axis is\n appended at the end).\n\n If `indices` is a scalar the output shape will be a vector of length `depth`.\n\n If `indices` is a vector of length `features`, the output shape will be:\n ```\n features x depth if axis == -1\n depth x features if axis == 0\n ```\n\n If `indices` is a matrix (batch) with shape `[batch, features]`,\n the output shape will be:\n ```\n batch x features x depth if axis == -1\n batch x depth x features if axis == 1\n depth x batch x features if axis == 0\n ```\n\n\n Examples\n =========\n\n Suppose that\n ```\n indices = [0, 2, -1, 1]\n depth = 3\n on_value = 5.0\n off_value = 0.0\n axis = -1\n ```\n\n Then output is `[4 x 3]`:\n ```\n output =\n [5.0 0.0 0.0] // one_hot(0)\n [0.0 0.0 5.0] // one_hot(2)\n [0.0 0.0 0.0] // one_hot(-1)\n [0.0 5.0 0.0] // one_hot(1)\n ```\n\n Suppose that\n ```\n indices = [0, 2, -1, 1]\n depth = 3\n on_value = 0.0\n off_value = 3.0\n axis = 0\n ```\n\n Then output is `[3 x 4]`:\n ```\n output =\n [0.0 3.0 3.0 3.0]\n [3.0 3.0 3.0 0.0]\n [3.0 3.0 3.0 3.0]\n [3.0 0.0 3.0 3.0]\n // ^ one_hot(0)\n // ^ one_hot(2)\n // ^ one_hot(-1)\n // ^ one_hot(1)\n ```\n\n Suppose that\n ```\n indices = [[0, 2], [1, -1]]\n depth = 3\n on_value = 1.0\n off_value = 0.0\n axis = -1\n ```\n\n Then output is `[2 x 2 x 3]`:\n ```\n output =\n [\n [1.0, 0.0, 0.0] // one_hot(0)\n [0.0, 0.0, 1.0] // one_hot(2)\n ][\n [0.0, 1.0, 0.0] // one_hot(1)\n [0.0, 0.0, 0.0] // one_hot(-1)\n ]\n ```\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `uint8`, `int32`, `int64`.\n A tensor of indices.\n depth: A `Tensor` of type `int32`.\n A scalar defining the depth of the one hot dimension.\n on_value: A `Tensor`.\n A scalar defining the value to fill in output when `indices[j] = i`.\n off_value: A `Tensor`. Must have the same type as `on_value`.\n A scalar defining the value to fill in output when `indices[j] != i`.\n axis: An optional `int`. Defaults to `-1`.\n The axis to fill (default: -1, a new inner-most axis).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `on_value`.\n ", "desc": "Returns a one-hot tensor.", "type": "API"}, {"name": "tf.raw_ops.OneShotIterator", "docs": "Makes a \"one-shot\" iterator that can be iterated only once.\n\n A one-shot iterator bundles the logic for defining the dataset and\n the state of the iterator in a single op, which allows simple input\n pipelines to be defined without an additional initialization\n (\"MakeIterator\") step.\n\n One-shot iterators have the following limitations:\n\n * They do not support parameterization: all logic for creating the underlying\n dataset must be bundled in the `dataset_factory` function.\n * They are not resettable. Once a one-shot iterator reaches the end of its\n underlying dataset, subsequent \"IteratorGetNext\" operations on that\n iterator will always produce an `OutOfRange` error.\n\n For greater flexibility, use \"Iterator\" and \"MakeIterator\" to define\n an iterator using an arbitrary subgraph, which may capture tensors\n (including fed values) as parameters, and which may be reset multiple\n times by rerunning \"MakeIterator\".\n\n Args:\n dataset_factory: A function decorated with @Defun.\n A function of type `() -> DT_VARIANT`, where the returned\n DT_VARIANT is a dataset.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Makes a \"one-shot\" iterator that can be iterated only once.", "type": "API"}, {"name": "tf.raw_ops.OnesLike", "docs": "Returns a tensor of ones with the same shape and type as x.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`, `complex64`, `complex128`, `bool`.\n a tensor of type T.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns a tensor of ones with the same shape and type as x.", "type": "API"}, {"name": "tf.raw_ops.OptimizeDataset", "docs": "Creates a dataset by applying optimizations to `input_dataset`.\n\n Creates a dataset by applying optimizations to `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n optimizations: A `Tensor` of type `string`.\n A `tf.string` vector `tf.Tensor` identifying optimizations to use.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n optimization_configs: An optional list of `strings`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset by applying optimizations to `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.OptimizeDatasetV2", "docs": "Creates a dataset by applying related optimizations to `input_dataset`.\n\n Creates a dataset by applying related optimizations to `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n optimizations_enabled: A `Tensor` of type `string`.\n A `tf.string` vector `tf.Tensor` identifying user enabled optimizations.\n optimizations_disabled: A `Tensor` of type `string`.\n A `tf.string` vector `tf.Tensor` identifying user disabled optimizations.\n optimizations_default: A `Tensor` of type `string`.\n A `tf.string` vector `tf.Tensor` identifying optimizations by default.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n optimization_configs: An optional list of `strings`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset by applying related optimizations to `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.OptionalFromValue", "docs": "Constructs an Optional variant from a tuple of tensors.\n\n Args:\n components: A list of `Tensor` objects.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Constructs an Optional variant from a tuple of tensors.", "type": "API"}, {"name": "tf.raw_ops.OptionalGetValue", "docs": "Returns the value stored in an Optional variant or raises an error if none exists.\n\n Args:\n optional: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Returns the value stored in an Optional variant or raises an error if none exists.", "type": "API"}, {"name": "tf.raw_ops.OptionalHasValue", "docs": "Returns true if and only if the given Optional variant has a value.\n\n Args:\n optional: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns true if and only if the given Optional variant has a value.", "type": "API"}, {"name": "tf.raw_ops.OptionalNone", "docs": "Creates an Optional variant with no value.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates an Optional variant with no value.", "type": "API"}, {"name": "tf.raw_ops.OptionsDataset", "docs": "Creates a dataset by attaching tf.data.Options to `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n serialized_options: A `string`.\n A `tf.string` scalar `tf.Tensor` of serialized `tf.data.Options` protocol buffer.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset by attaching tf.data.Options to `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.OrderedMapClear", "docs": "Op removes all elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Op removes all elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.OrderedMapIncompleteSize", "docs": "Op returns the number of incomplete elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Op returns the number of incomplete elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.OrderedMapPeek", "docs": "Op peeks at the values at the specified key. If the\n\n underlying container does not contain this key\n this op will block until it does. This Op is optimized for\n performance.\n\n Args:\n key: A `Tensor` of type `int64`.\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op peeks at the values at the specified key. If the", "type": "API"}, {"name": "tf.raw_ops.OrderedMapSize", "docs": "Op returns the number of elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Op returns the number of elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.OrderedMapStage", "docs": "Stage (key, values) in the underlying container which behaves like a ordered\n\n associative container. Elements are ordered by key.\n\n Args:\n key: A `Tensor` of type `int64`. int64\n indices: A `Tensor` of type `int32`.\n values: A list of `Tensor` objects. a list of tensors\n dtypes A list of data types that inserted values should adhere to.\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n Maximum number of elements in the Staging Area. If > 0, inserts\n on the container will block when the capacity is reached.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container. Otherwise,\n a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n It is necessary to match this name to the matching Unstage Op.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Stage (key, values) in the underlying container which behaves like a ordered", "type": "API"}, {"name": "tf.raw_ops.OrderedMapUnstage", "docs": "Op removes and returns the values associated with the key\n\n from the underlying container. If the underlying container\n does not contain this key, the op will block until it does.\n\n Args:\n key: A `Tensor` of type `int64`.\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op removes and returns the values associated with the key", "type": "API"}, {"name": "tf.raw_ops.OrderedMapUnstageNoKey", "docs": "Op removes and returns the (key, value) element with the smallest\n\n key from the underlying container. If the underlying container\n does not contain elements, the op will block until it does.\n\n Args:\n indices: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, values).\n\n key: A `Tensor` of type `int64`.\n values: A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op removes and returns the (key, value) element with the smallest", "type": "API"}, {"name": "tf.raw_ops.OutfeedDequeue", "docs": "Retrieves a single tensor from the computation outfeed.\n\n This operation will block indefinitely until data is available.\n\n Args:\n dtype: A `tf.DType`. The type of elements in the tensor.\n shape: A `tf.TensorShape` or list of `ints`. The shape of the tensor.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. This should be -1 when the Op\n is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Retrieves a single tensor from the computation outfeed.", "type": "API"}, {"name": "tf.raw_ops.OutfeedDequeueTuple", "docs": "Retrieve multiple values from the computation outfeed.\n\n This operation will block indefinitely until data is available. Output `i`\n corresponds to XLA tuple element `i`.\n\n Args:\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n The element types of each element in `outputs`.\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of each tensor in `outputs`.\n device_ordinal: An optional `int`. Defaults to `-1`.\n The TPU device to use. This should be -1 when the Op\n is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Retrieve multiple values from the computation outfeed.", "type": "API"}, {"name": "tf.raw_ops.OutfeedDequeueTupleV2", "docs": "Retrieve multiple values from the computation outfeed. Device ordinal is a\ntensor allowing dynamic outfeed.\n\n This operation will block indefinitely until data is available. Output `i`\n corresponds to XLA tuple element `i`.\n\n Args:\n device_ordinal: A `Tensor` of type `int32`.\n An int scalar tensor, representing the TPU device to use. This should be -1 when\n the Op is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n The element types of each element in `outputs`.\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of each tensor in `outputs`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Retrieve multiple values from the computation outfeed. Device ordinal is a", "type": "API"}, {"name": "tf.raw_ops.OutfeedDequeueV2", "docs": "Retrieves a single tensor from the computation outfeed. Device ordinal is a\ntensor allowing dynamic outfeed.\n\n This operation will block indefinitely until data is available.\n\n Args:\n device_ordinal: A `Tensor` of type `int32`.\n An int scalar tensor, representing the TPU device to use. This should be -1 when\n the Op is running on a TPU device, and >= 0 when the Op is running on the CPU\n device.\n dtype: A `tf.DType`. The type of elements in the tensor.\n shape: A `tf.TensorShape` or list of `ints`. The shape of the tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Retrieves a single tensor from the computation outfeed. Device ordinal is a", "type": "API"}, {"name": "tf.raw_ops.OutfeedEnqueue", "docs": "Enqueue a Tensor on the computation outfeed.\n\n Args:\n input: A `Tensor`. A tensor that will be inserted into the outfeed queue.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueue a Tensor on the computation outfeed.", "type": "API"}, {"name": "tf.raw_ops.OutfeedEnqueueTuple", "docs": "Enqueue multiple Tensor values on the computation outfeed.\n\n Args:\n inputs: A list of `Tensor` objects.\n A list of tensors that will be inserted into the outfeed queue as an\n XLA tuple.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueue multiple Tensor values on the computation outfeed.", "type": "API"}, {"name": "tf.raw_ops.Pack", "docs": "Packs a list of `N` rank-`R` tensors into one rank-`(R+1)` tensor.\n\n Packs the `N` tensors in `values` into a tensor with rank one higher than each\n tensor in `values`, by packing them along the `axis` dimension.\n Given a list of tensors of shape `(A, B, C)`;\n\n if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.\n if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.\n Etc.\n\n For example:\n\n ```\n # 'x' is [1, 4]\n # 'y' is [2, 5]\n # 'z' is [3, 6]\n pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.\n pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]\n ```\n\n This is the opposite of `unpack`.\n\n Args:\n values: A list of at least 1 `Tensor` objects with the same type.\n Must be of same shape and type.\n axis: An optional `int`. Defaults to `0`.\n Dimension along which to pack. Negative values wrap around, so the\n valid range is `[-(R+1), R+1)`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `values`.\n ", "desc": "Packs a list of `N` rank-`R` tensors into one rank-`(R+1)` tensor.", "type": "API"}, {"name": "tf.raw_ops.Pad", "docs": "Pads a tensor with zeros.\n\n This operation pads a `input` with zeros according to the `paddings` you\n specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is the\n rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates\n how many zeros to add before the contents of `input` in that dimension, and\n `paddings[D, 1]` indicates how many zeros to add after the contents of `input`\n in that dimension.\n\n The padded size of each dimension D of the output is:\n\n `paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`\n\n For example:\n\n ```\n # 't' is [[1, 1], [2, 2]]\n # 'paddings' is [[1, 1], [2, 2]]\n # rank of 't' is 2\n pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]\n [0, 0, 1, 1, 0, 0]\n [0, 0, 2, 2, 0, 0]\n [0, 0, 0, 0, 0, 0]]\n ```\n\n Args:\n input: A `Tensor`.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Pads a tensor with zeros.", "type": "API"}, {"name": "tf.raw_ops.PaddedBatchDataset", "docs": "Creates a dataset that batches and pads `batch_size` elements from the input.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch.\n padded_shapes: A list of at least 1 `Tensor` objects with type `int64`.\n A list of int64 tensors representing the desired padded shapes\n of the corresponding output components. These shapes may be partially\n specified, using `-1` to indicate that a particular dimension should be\n padded to the maximum size of all batch elements.\n padding_values: A list of `Tensor` objects.\n A list of scalars containing the padding value to use for\n each of the outputs.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches and pads `batch_size` elements from the input.", "type": "API"}, {"name": "tf.raw_ops.PaddedBatchDatasetV2", "docs": "Creates a dataset that batches and pads `batch_size` elements from the input.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n batch_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements to accumulate in a\n batch.\n padded_shapes: A list of at least 1 `Tensor` objects with type `int64`.\n A list of int64 tensors representing the desired padded shapes\n of the corresponding output components. These shapes may be partially\n specified, using `-1` to indicate that a particular dimension should be\n padded to the maximum size of all batch elements.\n padding_values: A list of `Tensor` objects.\n A list of scalars containing the padding value to use for\n each of the outputs.\n drop_remainder: A `Tensor` of type `bool`.\n A scalar representing whether the last batch should be dropped in case its size\n is smaller than desired.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n parallel_copy: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that batches and pads `batch_size` elements from the input.", "type": "API"}, {"name": "tf.raw_ops.PaddingFIFOQueue", "docs": "A queue that produces elements in first-in first-out order.\n\n Variable-size shapes are allowed by setting the corresponding shape dimensions\n to 0 in the shape attr. In this case DequeueMany will pad up to the maximum\n size of any given element in the minibatch. See below for details.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types.\n Shapes of fixed rank but variable size are allowed by setting\n any shape dimension to -1. In this case, the inputs' shape may vary along\n the given dimension, and DequeueMany will pad the given dimension with\n zeros up to the maximum shape of all elements in the given batch.\n If the length of this attr is 0, different queue elements may have\n different ranks and shapes, but only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A queue that produces elements in first-in first-out order.", "type": "API"}, {"name": "tf.raw_ops.PaddingFIFOQueueV2", "docs": "A queue that produces elements in first-in first-out order.\n\n Variable-size shapes are allowed by setting the corresponding shape dimensions\n to 0 in the shape attr. In this case DequeueMany will pad up to the maximum\n size of any given element in the minibatch. See below for details.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types.\n Shapes of fixed rank but variable size are allowed by setting\n any shape dimension to -1. In this case, the inputs' shape may vary along\n the given dimension, and DequeueMany will pad the given dimension with\n zeros up to the maximum shape of all elements in the given batch.\n If the length of this attr is 0, different queue elements may have\n different ranks and shapes, but only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A queue that produces elements in first-in first-out order.", "type": "API"}, {"name": "tf.raw_ops.PadV2", "docs": "Pads a tensor.\n\n This operation pads `input` according to the `paddings` and `constant_values`\n you specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is\n the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates\n how many padding values to add before the contents of `input` in that dimension,\n and `paddings[D, 1]` indicates how many padding values to add after the contents\n of `input` in that dimension. `constant_values` is a scalar tensor of the same\n type as `input` that indicates the value to use for padding `input`.\n\n The padded size of each dimension D of the output is:\n\n `paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`\n\n For example:\n\n ```\n # 't' is [[1, 1], [2, 2]]\n # 'paddings' is [[1, 1], [2, 2]]\n # 'constant_values' is 0\n # rank of 't' is 2\n pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]\n [0, 0, 1, 1, 0, 0]\n [0, 0, 2, 2, 0, 0]\n [0, 0, 0, 0, 0, 0]]\n ```\n\n Args:\n input: A `Tensor`.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n constant_values: A `Tensor`. Must have the same type as `input`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Pads a tensor.", "type": "API"}, {"name": "tf.raw_ops.ParallelBatchDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n batch_size: A `Tensor` of type `int64`.\n num_parallel_calls: A `Tensor` of type `int64`.\n drop_remainder: A `Tensor` of type `bool`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n parallel_copy: An optional `bool`. Defaults to `False`.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ParallelConcat", "docs": "Concatenates a list of `N` tensors along the first dimension.\n\n The input tensors are all required to have size 1 in the first dimension.\n\n For example:\n\n ```\n # 'x' is [[1, 4]]\n # 'y' is [[2, 5]]\n # 'z' is [[3, 6]]\n parallel_concat([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.\n ```\n\n The difference between concat and parallel_concat is that concat requires all\n of the inputs be computed before the operation will begin but doesn't require\n that the input shapes be known during graph construction. Parallel concat\n will copy pieces of the input into the output as they become available, in\n some situations this can provide a performance benefit.\n\n Args:\n values: A list of at least 1 `Tensor` objects with the same type.\n Tensors to be concatenated. All must have size 1 in the first dimension\n and same shape.\n shape: A `tf.TensorShape` or list of `ints`.\n the final shape of the result; should be equal to the shapes of any input\n but with the number of input values in the first dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `values`.\n ", "desc": "Concatenates a list of `N` tensors along the first dimension.", "type": "API"}, {"name": "tf.raw_ops.ParallelDynamicStitch", "docs": "Interleave the values from the `data` tensors into a single tensor.\n\n Builds a merged tensor such that\n\n ```python\n merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]\n ```\n\n For example, if each `indices[m]` is scalar or vector, we have\n\n ```python\n # Scalar indices:\n merged[indices[m], ...] = data[m][...]\n\n # Vector indices:\n merged[indices[m][i], ...] = data[m][i, ...]\n ```\n\n Each `data[i].shape` must start with the corresponding `indices[i].shape`,\n and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we\n must have `data[i].shape = indices[i].shape + constant`. In terms of this\n `constant`, the output shape is\n\n merged.shape = [max(indices)] + constant\n\n Values may be merged in parallel, so if an index appears in both `indices[m][i]`\n and `indices[n][j]`, the result may be invalid. This differs from the normal\n DynamicStitch operator that defines the behavior in that case.\n\n For example:\n\n ```python\n indices[0] = 6\n indices[1] = [4, 1]\n indices[2] = [[5, 2], [0, 3]]\n data[0] = [61, 62]\n data[1] = [[41, 42], [11, 12]]\n data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]\n merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],\n [51, 52], [61, 62]]\n ```\n\n This method can be used to merge partitions created by `dynamic_partition`\n as illustrated on the following example:\n\n ```python\n # Apply function (increments x_i) on elements for which a certain condition\n # apply (x_i != -1 in this example).\n x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])\n condition_mask=tf.not_equal(x,tf.constant(-1.))\n partitioned_data = tf.dynamic_partition(\n x, tf.cast(condition_mask, tf.int32) , 2)\n partitioned_data[1] = partitioned_data[1] + 1.0\n condition_indices = tf.dynamic_partition(\n tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)\n x = tf.dynamic_stitch(condition_indices, partitioned_data)\n # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain\n # unchanged.\n ```\n\n
\n \n
\n\n Args:\n indices: A list of at least 1 `Tensor` objects with type `int32`.\n data: A list with the same length as `indices` of `Tensor` objects with the same type.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Interleave the values from the `data` tensors into a single tensor.", "type": "API"}, {"name": "tf.raw_ops.ParallelInterleaveDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, with the exception\n that if retrieving the next value from a dataset would cause the requester to\n block, it will skip that input dataset. This dataset is especially useful\n when loading data from a variable-latency datastores (e.g. HDFS, GCS), as it\n allows the training step to proceed so long as some data is available.\n\n !! WARNING !! If the `sloppy` parameter is set to `True`, the operation of this\n dataset will not be deterministic!\n\n This dataset has been superseded by `ParallelInterleaveDatasetV2`. New code\n should use `ParallelInterleaveDatasetV2`.\n\n The Python API `tf.data.experimental.parallel_interleave` creates instances of\n this op. `tf.data.experimental.parallel_interleave` is a deprecated API.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n Dataset that produces a stream of arguments for the function `f`.\n other_arguments: A list of `Tensor` objects.\n Additional arguments to pass to `f` beyond those produced by `input_dataset`.\n Evaluated once when the dataset is instantiated.\n cycle_length: A `Tensor` of type `int64`.\n Number of datasets (each created by applying `f` to the elements of\n `input_dataset`) among which the `ParallelInterleaveDataset` will cycle in a\n round-robin fashion.\n block_length: A `Tensor` of type `int64`.\n Number of elements at a time to produce from each interleaved invocation of a\n dataset returned by `f`.\n sloppy: A `Tensor` of type `bool`.\n If `True`, return elements as they become available, even if that means returning\n these elements in a non-deterministic order. Sloppy operation may result in better\n performance in the presence of stragglers, but the dataset will still block if\n all of its open streams are blocked.\n If `False`, always return elements in a deterministic order.\n buffer_output_elements: A `Tensor` of type `int64`.\n The number of elements each iterator being interleaved should buffer (similar\n to the `.prefetch()` transformation for each interleaved iterator).\n prefetch_input_elements: A `Tensor` of type `int64`.\n Determines the number of iterators to prefetch, allowing buffers to warm up and\n data to be pre-fetched without blocking the main thread.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParallelInterleaveDatasetV2", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, except that the\n dataset will fetch records from the interleaved datasets in parallel.\n\n The `tf.data` Python API creates instances of this op from\n `Dataset.interleave()` when the `num_parallel_calls` parameter of that method\n is set to any value other than `None`.\n\n By default, the output of this dataset will be deterministic, which may result\n in the dataset blocking if the next data item to be returned isn't available.\n In order to avoid head-of-line blocking, one can set the\n `experimental_deterministic` parameter of `tf.data.Options` to `False`,\n which can improve performance at the expense of non-determinism.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n Dataset that produces a stream of arguments for the function `f`.\n other_arguments: A list of `Tensor` objects.\n Additional arguments to pass to `f` beyond those produced by `input_dataset`.\n Evaluated once when the dataset is instantiated.\n cycle_length: A `Tensor` of type `int64`.\n Number of datasets (each created by applying `f` to the elements of\n `input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a\n round-robin fashion.\n block_length: A `Tensor` of type `int64`.\n Number of elements at a time to produce from each interleaved invocation of a\n dataset returned by `f`.\n num_parallel_calls: A `Tensor` of type `int64`.\n Determines the number of threads that should be used for fetching data from\n input datasets in parallel. The Python API `tf.data.experimental.AUTOTUNE`\n constant can be used to indicate that the level of parallelism should be autotuned.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n sloppy: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParallelInterleaveDatasetV3", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, except that the\n dataset will fetch records from the interleaved datasets in parallel.\n\n The `tf.data` Python API creates instances of this op from\n `Dataset.interleave()` when the `num_parallel_calls` parameter of that method\n is set to any value other than `None`.\n\n By default, the output of this dataset will be deterministic, which may result\n in the dataset blocking if the next data item to be returned isn't available.\n In order to avoid head-of-line blocking, one can either set the `deterministic`\n attribute to \"false\", or leave it as \"default\" and set the\n `experimental_deterministic` parameter of `tf.data.Options` to `False`.\n This can improve performance at the expense of non-determinism.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n Dataset that produces a stream of arguments for the function `f`.\n other_arguments: A list of `Tensor` objects.\n Additional arguments to pass to `f` beyond those produced by `input_dataset`.\n Evaluated once when the dataset is instantiated.\n cycle_length: A `Tensor` of type `int64`.\n Number of datasets (each created by applying `f` to the elements of\n `input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a\n round-robin fashion.\n block_length: A `Tensor` of type `int64`.\n Number of elements at a time to produce from each interleaved invocation of a\n dataset returned by `f`.\n num_parallel_calls: A `Tensor` of type `int64`.\n Determines the number of threads that should be used for fetching data from\n input datasets in parallel. The Python API `tf.data.experimental.AUTOTUNE`\n constant can be used to indicate that the level of parallelism should be autotuned.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n A string indicating the op-level determinism to use. Deterministic controls\n whether the interleave is allowed to return elements out of order if the next\n element to be returned isn't available, but a later element is. Options are\n \"true\", \"false\", and \"default\". \"default\" indicates that determinism should be\n decided by the `experimental_deterministic` parameter of `tf.data.Options`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParallelInterleaveDatasetV4", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n The resulting dataset is similar to the `InterleaveDataset`, except that the\n dataset will fetch records from the interleaved datasets in parallel.\n\n The `tf.data` Python API creates instances of this op from\n `Dataset.interleave()` when the `num_parallel_calls` parameter of that method\n is set to any value other than `None`.\n\n By default, the output of this dataset will be deterministic, which may result\n in the dataset blocking if the next data item to be returned isn't available.\n In order to avoid head-of-line blocking, one can either set the `deterministic`\n attribute to \"false\", or leave it as \"default\" and set the\n `experimental_deterministic` parameter of `tf.data.Options` to `False`.\n This can improve performance at the expense of non-determinism.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n Dataset that produces a stream of arguments for the function `f`.\n other_arguments: A list of `Tensor` objects.\n Additional arguments to pass to `f` beyond those produced by `input_dataset`.\n Evaluated once when the dataset is instantiated.\n cycle_length: A `Tensor` of type `int64`.\n Number of datasets (each created by applying `f` to the elements of\n `input_dataset`) among which the `ParallelInterleaveDatasetV2` will cycle in a\n round-robin fashion.\n block_length: A `Tensor` of type `int64`.\n Number of elements at a time to produce from each interleaved invocation of a\n dataset returned by `f`.\n buffer_output_elements: A `Tensor` of type `int64`.\n The number of elements each iterator being interleaved should buffer (similar\n to the `.prefetch()` transformation for each interleaved iterator).\n prefetch_input_elements: A `Tensor` of type `int64`.\n Determines the number of iterators to prefetch, allowing buffers to warm up and\n data to be pre-fetched without blocking the main thread.\n num_parallel_calls: A `Tensor` of type `int64`.\n Determines the number of threads that should be used for fetching data from\n input datasets in parallel. The Python API `tf.data.experimental.AUTOTUNE`\n constant can be used to indicate that the level of parallelism should be autotuned.\n f: A function decorated with @Defun.\n A function mapping elements of `input_dataset`, concatenated with\n `other_arguments`, to a Dataset variant that contains elements matching\n `output_types` and `output_shapes`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n A string indicating the op-level determinism to use. Deterministic controls\n whether the interleave is allowed to return elements out of order if the next\n element to be returned isn't available, but a later element is. Options are\n \"true\", \"false\", and \"default\". \"default\" indicates that determinism should be\n decided by the `experimental_deterministic` parameter of `tf.data.Options`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParallelMapDataset", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Unlike a \"MapDataset\", which applies `f` sequentially, this dataset invokes up\n to `num_parallel_calls` copies of `f` in parallel.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n num_parallel_calls: A `Tensor` of type `int32`.\n The number of concurrent invocations of `f` that process\n elements from `input_dataset` in parallel.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_inter_op_parallelism: An optional `bool`. Defaults to `True`.\n sloppy: An optional `bool`. Defaults to `False`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParallelMapDatasetV2", "docs": "Creates a dataset that applies `f` to the outputs of `input_dataset`.\n\n Unlike a \"MapDataset\", which applies `f` sequentially, this dataset invokes up\n to `num_parallel_calls` copies of `f` in parallel.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n num_parallel_calls: A `Tensor` of type `int64`.\n The number of concurrent invocations of `f` that process\n elements from `input_dataset` in parallel.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_inter_op_parallelism: An optional `bool`. Defaults to `True`.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that applies `f` to the outputs of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ParameterizedTruncatedNormal", "docs": "Outputs random values from a normal distribution. The parameters may each be a\n\n scalar which applies to the entire output, or a vector of length shape[0] which\n stores the parameters for each batch.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor. Batches are indexed by the 0th dimension.\n means: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The mean parameter of each batch.\n stdevs: A `Tensor`. Must have the same type as `means`.\n The standard deviation parameter of each batch. Must be greater than 0.\n minvals: A `Tensor`. Must have the same type as `means`.\n The minimum cutoff. May be -infinity.\n maxvals: A `Tensor`. Must have the same type as `means`.\n The maximum cutoff. May be +infinity, and must be more than the minval\n for each batch.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `means`.\n ", "desc": "Outputs random values from a normal distribution. The parameters may each be a", "type": "API"}, {"name": "tf.raw_ops.ParseExample", "docs": "Transforms a vector of brain.Example protos (as strings) into typed tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A vector containing a batch of binary serialized Example protos.\n names: A `Tensor` of type `string`.\n A vector containing the names of the serialized protos.\n May contain, for example, table key (descriptive) names for the\n corresponding serialized protos. These are purely useful for debugging\n purposes, and the presence of values here has no effect on the output.\n May also be an empty vector if no names are available.\n If non-empty, this vector must be the same length as \"serialized\".\n sparse_keys: A list of `Tensor` objects with type `string`.\n A list of Nsparse string Tensors (scalars).\n The keys expected in the Examples' features associated with sparse values.\n dense_keys: A list of `Tensor` objects with type `string`.\n A list of Ndense string Tensors (scalars).\n The keys expected in the Examples' features associated with dense values.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Ndense Tensors (some may be empty).\n dense_defaults[j] provides default values\n when the example's feature_map lacks dense_key[j]. If an empty Tensor is\n provided for dense_defaults[j], then the Feature dense_keys[j] is required.\n The input type is inferred from dense_defaults[j], even when it's empty.\n If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,\n then the shape of dense_defaults[j] must match that of dense_shapes[j].\n If dense_shapes[j] has an undefined major dimension (variable strides dense\n feature), dense_defaults[j] must contain a single element:\n the padding element.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of Nsparse types; the data types of data in each Feature\n given in sparse_keys.\n Currently the ParseExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n A list of Ndense shapes; the shapes of data in each Feature\n given in dense_keys.\n The number of elements in the Feature corresponding to dense_key[j]\n must always equal dense_shapes[j].NumEntries().\n If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output\n Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):\n The dense outputs are just the inputs row-stacked by batch.\n This works for dense_shapes[j] = (-1, D1, ..., DN). In this case\n the shape of the output Tensor dense_values[j] will be\n (|serialized|, M, D1, .., DN), where M is the maximum number of blocks\n of elements of length D1 * .... * DN, across all minibatch entries\n in the input. Any minibatch entry with less than M blocks of elements of\n length D1 * ... * DN will be padded with the corresponding default_value\n scalar element along the second dimension.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values).\n\n sparse_indices: A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`.\n sparse_values: A list of `Tensor` objects of type `sparse_types`.\n sparse_shapes: A list with the same length as `sparse_keys` of `Tensor` objects with type `int64`.\n dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.\n ", "desc": "Transforms a vector of brain.Example protos (as strings) into typed tensors.", "type": "API"}, {"name": "tf.raw_ops.ParseExampleDataset", "docs": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_parallel_calls: A `Tensor` of type `int64`.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A dict mapping string keys to `Tensor`s.\n The keys of the dict must match the dense_keys of the feature.\n sparse_keys: A list of `strings`.\n A list of string keys in the examples features.\n The results for these keys will be returned as `SparseTensor` objects.\n dense_keys: A list of `strings`.\n A list of Ndense string Tensors (scalars).\n The keys expected in the Examples features associated with dense values.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `DTypes` of the same length as `sparse_keys`.\n Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),\n and `tf.string` (`BytesList`) are supported.\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n List of tuples with the same length as `dense_keys`.\n The shape of the data for each dense feature referenced by `dense_keys`.\n Required for any input tensors identified by `dense_keys`. Must be\n either fully defined, or may contain an unknown first dimension.\n An unknown first dimension means the feature is treated as having\n a variable number of blocks, and the output shape along this dimension\n is considered unknown at graph build time. Padding is applied for\n minibatch elements smaller than the maximum number of blocks for the\n given feature along this dimension.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n The list of shapes being produced.\n sloppy: An optional `bool`. Defaults to `False`.\n ragged_keys: An optional list of `strings`. Defaults to `[]`.\n ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.", "type": "API"}, {"name": "tf.raw_ops.ParseExampleDatasetV2", "docs": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_parallel_calls: A `Tensor` of type `int64`.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A dict mapping string keys to `Tensor`s.\n The keys of the dict must match the dense_keys of the feature.\n sparse_keys: A list of `strings`.\n A list of string keys in the examples features.\n The results for these keys will be returned as `SparseTensor` objects.\n dense_keys: A list of `strings`.\n A list of Ndense string Tensors (scalars).\n The keys expected in the Examples features associated with dense values.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `DTypes` of the same length as `sparse_keys`.\n Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),\n and `tf.string` (`BytesList`) are supported.\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n List of tuples with the same length as `dense_keys`.\n The shape of the data for each dense feature referenced by `dense_keys`.\n Required for any input tensors identified by `dense_keys`. Must be\n either fully defined, or may contain an unknown first dimension.\n An unknown first dimension means the feature is treated as having\n a variable number of blocks, and the output shape along this dimension\n is considered unknown at graph build time. Padding is applied for\n minibatch elements smaller than the maximum number of blocks for the\n given feature along this dimension.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n The list of shapes being produced.\n deterministic: An optional `string`. Defaults to `\"default\"`.\n A string indicating the op-level determinism to use. Deterministic controls\n whether the dataset is allowed to return elements out of order if the next\n element to be returned isn't available, but a later element is. Options are\n \"true\", \"false\", and \"default\". \"default\" indicates that determinism should be\n decided by the `experimental_deterministic` parameter of `tf.data.Options`.\n ragged_keys: An optional list of `strings`. Defaults to `[]`.\n ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.", "type": "API"}, {"name": "tf.raw_ops.ParseExampleV2", "docs": "Transforms a vector of tf.Example protos (as strings) into typed tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar or vector containing binary serialized Example protos.\n names: A `Tensor` of type `string`.\n A tensor containing the names of the serialized protos.\n Corresponds 1:1 with the `serialized` tensor.\n May contain, for example, table key (descriptive) names for the\n corresponding serialized protos. These are purely useful for debugging\n purposes, and the presence of values here has no effect on the output.\n May also be an empty vector if no names are available.\n If non-empty, this tensor must have the same shape as \"serialized\".\n sparse_keys: A `Tensor` of type `string`. Vector of strings.\n The keys expected in the Examples' features associated with sparse values.\n dense_keys: A `Tensor` of type `string`. Vector of strings.\n The keys expected in the Examples' features associated with dense values.\n ragged_keys: A `Tensor` of type `string`. Vector of strings.\n The keys expected in the Examples' features associated with ragged values.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Tensors (some may be empty). Corresponds 1:1 with `dense_keys`.\n dense_defaults[j] provides default values\n when the example's feature_map lacks dense_key[j]. If an empty Tensor is\n provided for dense_defaults[j], then the Feature dense_keys[j] is required.\n The input type is inferred from dense_defaults[j], even when it's empty.\n If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,\n then the shape of dense_defaults[j] must match that of dense_shapes[j].\n If dense_shapes[j] has an undefined major dimension (variable strides dense\n feature), dense_defaults[j] must contain a single element:\n the padding element.\n num_sparse: An `int` that is `>= 0`. The number of sparse keys.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `num_sparse` types; the data types of data in each Feature\n given in sparse_keys.\n Currently the ParseExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n ragged_value_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `num_ragged` types; the data types of data in each Feature\n given in ragged_keys (where `num_ragged = sparse_keys.size()`).\n Currently the ParseExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n ragged_split_types: A list of `tf.DTypes` from: `tf.int32, tf.int64`.\n A list of `num_ragged` types; the data types of row_splits in each Feature\n given in ragged_keys (where `num_ragged = sparse_keys.size()`).\n May be DT_INT32 or DT_INT64.\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n A list of `num_dense` shapes; the shapes of data in each Feature\n given in dense_keys (where `num_dense = dense_keys.size()`).\n The number of elements in the Feature corresponding to dense_key[j]\n must always equal dense_shapes[j].NumEntries().\n If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output\n Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN):\n The dense outputs are just the inputs row-stacked by batch.\n This works for dense_shapes[j] = (-1, D1, ..., DN). In this case\n the shape of the output Tensor dense_values[j] will be\n (|serialized|, M, D1, .., DN), where M is the maximum number of blocks\n of elements of length D1 * .... * DN, across all minibatch entries\n in the input. Any minibatch entry with less than M blocks of elements of\n length D1 * ... * DN will be padded with the corresponding default_value\n scalar element along the second dimension.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values, ragged_values, ragged_row_splits).\n\n sparse_indices: A list of `num_sparse` `Tensor` objects with type `int64`.\n sparse_values: A list of `Tensor` objects of type `sparse_types`.\n sparse_shapes: A list of `num_sparse` `Tensor` objects with type `int64`.\n dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.\n ragged_values: A list of `Tensor` objects of type `ragged_value_types`.\n ragged_row_splits: A list of `Tensor` objects of type `ragged_split_types`.\n ", "desc": "Transforms a vector of tf.Example protos (as strings) into typed tensors.", "type": "API"}, {"name": "tf.raw_ops.ParseSequenceExample", "docs": "Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A vector containing binary serialized SequenceExample protos.\n debug_name: A `Tensor` of type `string`.\n A vector containing the names of the serialized protos.\n May contain, for example, table key (descriptive) name for the\n corresponding serialized proto. This is purely useful for debugging\n purposes, and the presence of values here has no effect on the output.\n May also be an empty vector if no name is available.\n context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Ncontext_dense Tensors (some may be empty).\n context_dense_defaults[j] provides default values\n when the SequenceExample's context map lacks context_dense_key[j].\n If an empty Tensor is provided for context_dense_defaults[j],\n then the Feature context_dense_keys[j] is required.\n The input type is inferred from context_dense_defaults[j], even when it's\n empty. If context_dense_defaults[j] is not empty, its shape must match\n context_dense_shapes[j].\n feature_list_dense_missing_assumed_empty: A list of `strings`.\n A vector listing the\n FeatureList keys which may be missing from the SequenceExamples. If the\n associated FeatureList is missing, it is treated as empty. By default,\n any FeatureList not listed in this vector must exist in the SequenceExamples.\n context_sparse_keys: A list of `strings`.\n A list of Ncontext_sparse string Tensors (scalars).\n The keys expected in the Examples' features associated with context_sparse\n values.\n context_dense_keys: A list of `strings`.\n A list of Ncontext_dense string Tensors (scalars).\n The keys expected in the SequenceExamples' context features associated with\n dense values.\n feature_list_sparse_keys: A list of `strings`.\n A list of Nfeature_list_sparse string Tensors\n (scalars). The keys expected in the FeatureLists associated with sparse\n values.\n feature_list_dense_keys: A list of `strings`.\n A list of Nfeature_list_dense string Tensors (scalars).\n The keys expected in the SequenceExamples' feature_lists associated\n with lists of dense values.\n Ncontext_sparse: An optional `int` that is `>= 0`. Defaults to `0`.\n Ncontext_dense: An optional `int` that is `>= 0`. Defaults to `0`.\n Nfeature_list_sparse: An optional `int` that is `>= 0`. Defaults to `0`.\n Nfeature_list_dense: An optional `int` that is `>= 0`. Defaults to `0`.\n context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Ncontext_sparse types; the data types of data in\n each context Feature given in context_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Ncontext_dense shapes; the shapes of data in\n each context Feature given in context_dense_keys.\n The number of elements in the Feature corresponding to context_dense_key[j]\n must always equal context_dense_shapes[j].NumEntries().\n The shape of context_dense_values[j] will match context_dense_shapes[j].\n feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Nfeature_list_sparse types; the data types\n of data in each FeatureList given in feature_list_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Nfeature_list_dense shapes; the shapes of\n data in each FeatureList given in feature_list_dense_keys.\n The shape of each Feature in the FeatureList corresponding to\n feature_list_dense_key[j] must always equal\n feature_list_dense_shapes[j].NumEntries().\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths).\n\n context_sparse_indices: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.\n context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.\n context_sparse_shapes: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.\n context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.\n feature_list_sparse_indices: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.\n feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.\n feature_list_sparse_shapes: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.\n feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.\n feature_list_dense_lengths: A list of `Nfeature_list_dense` `Tensor` objects with type `int64`.\n ", "desc": "Transforms a vector of brain.SequenceExample protos (as strings) into typed tensors.", "type": "API"}, {"name": "tf.raw_ops.ParseSequenceExampleV2", "docs": "Transforms a vector of tf.io.SequenceExample protos (as strings) into\ntyped tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar or vector containing binary serialized SequenceExample protos.\n debug_name: A `Tensor` of type `string`.\n A scalar or vector containing the names of the serialized protos.\n May contain, for example, table key (descriptive) name for the\n corresponding serialized proto. This is purely useful for debugging\n purposes, and the presence of values here has no effect on the output.\n May also be an empty vector if no name is available.\n context_sparse_keys: A `Tensor` of type `string`.\n The keys expected in the Examples' features associated with context_sparse\n values.\n context_dense_keys: A `Tensor` of type `string`.\n The keys expected in the SequenceExamples' context features associated with\n dense values.\n context_ragged_keys: A `Tensor` of type `string`.\n The keys expected in the Examples' features associated with context_ragged\n values.\n feature_list_sparse_keys: A `Tensor` of type `string`.\n The keys expected in the FeatureLists associated with sparse values.\n feature_list_dense_keys: A `Tensor` of type `string`.\n The keys expected in the SequenceExamples' feature_lists associated\n with lists of dense values.\n feature_list_ragged_keys: A `Tensor` of type `string`.\n The keys expected in the FeatureLists associated with ragged values.\n feature_list_dense_missing_assumed_empty: A `Tensor` of type `bool`.\n A vector corresponding 1:1 with feature_list_dense_keys, indicating which\n features may be missing from the SequenceExamples. If the associated\n FeatureList is missing, it is treated as empty.\n context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Ncontext_dense Tensors (some may be empty).\n context_dense_defaults[j] provides default values\n when the SequenceExample's context map lacks context_dense_key[j].\n If an empty Tensor is provided for context_dense_defaults[j],\n then the Feature context_dense_keys[j] is required.\n The input type is inferred from context_dense_defaults[j], even when it's\n empty. If context_dense_defaults[j] is not empty, its shape must match\n context_dense_shapes[j].\n Ncontext_sparse: An optional `int` that is `>= 0`. Defaults to `0`.\n context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Ncontext_sparse types; the data types of data in\n each context Feature given in context_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n context_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n RaggedTensor.value dtypes for the ragged context features.\n context_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.\n RaggedTensor.row_split dtypes for the ragged context features.\n context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Ncontext_dense shapes; the shapes of data in\n each context Feature given in context_dense_keys.\n The number of elements in the Feature corresponding to context_dense_key[j]\n must always equal context_dense_shapes[j].NumEntries().\n The shape of context_dense_values[j] will match context_dense_shapes[j].\n Nfeature_list_sparse: An optional `int` that is `>= 0`. Defaults to `0`.\n Nfeature_list_dense: An optional `int` that is `>= 0`. Defaults to `0`.\n feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Nfeature_list_sparse types; the data types\n of data in each FeatureList given in feature_list_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n feature_list_ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n RaggedTensor.value dtypes for the ragged FeatureList features.\n feature_list_ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.\n RaggedTensor.row_split dtypes for the ragged FeatureList features.\n feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Nfeature_list_dense shapes; the shapes of\n data in each FeatureList given in feature_list_dense_keys.\n The shape of each Feature in the FeatureList corresponding to\n feature_list_dense_key[j] must always equal\n feature_list_dense_shapes[j].NumEntries().\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, context_ragged_values, context_ragged_row_splits, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values, feature_list_dense_lengths, feature_list_ragged_values, feature_list_ragged_outer_splits, feature_list_ragged_inner_splits).\n\n context_sparse_indices: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.\n context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.\n context_sparse_shapes: A list of `Ncontext_sparse` `Tensor` objects with type `int64`.\n context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.\n context_ragged_values: A list of `Tensor` objects of type `context_ragged_value_types`.\n context_ragged_row_splits: A list of `Tensor` objects of type `context_ragged_split_types`.\n feature_list_sparse_indices: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.\n feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.\n feature_list_sparse_shapes: A list of `Nfeature_list_sparse` `Tensor` objects with type `int64`.\n feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.\n feature_list_dense_lengths: A list of `Nfeature_list_dense` `Tensor` objects with type `int64`.\n feature_list_ragged_values: A list of `Tensor` objects of type `feature_list_ragged_value_types`.\n feature_list_ragged_outer_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`.\n feature_list_ragged_inner_splits: A list of `Tensor` objects of type `feature_list_ragged_split_types`.\n ", "desc": "Transforms a vector of tf.io.SequenceExample protos (as strings) into", "type": "API"}, {"name": "tf.raw_ops.ParseSingleExample", "docs": "Transforms a tf.Example proto (as a string) into typed tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A vector containing a batch of binary serialized Example protos.\n dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Tensors (some may be empty), whose length matches\n the length of `dense_keys`. dense_defaults[j] provides default values\n when the example's feature_map lacks dense_key[j]. If an empty Tensor is\n provided for dense_defaults[j], then the Feature dense_keys[j] is required.\n The input type is inferred from dense_defaults[j], even when it's empty.\n If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined,\n then the shape of dense_defaults[j] must match that of dense_shapes[j].\n If dense_shapes[j] has an undefined major dimension (variable strides dense\n feature), dense_defaults[j] must contain a single element:\n the padding element.\n num_sparse: An `int` that is `>= 0`.\n The number of sparse features to be parsed from the example. This\n must match the lengths of `sparse_keys` and `sparse_types`.\n sparse_keys: A list of `strings`. A list of `num_sparse` strings.\n The keys expected in the Examples' features associated with sparse values.\n dense_keys: A list of `strings`.\n The keys expected in the Examples' features associated with dense\n values.\n sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.\n A list of `num_sparse` types; the data types of data in each\n Feature given in sparse_keys.\n Currently the ParseSingleExample op supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of data in each Feature given in dense_keys.\n The length of this list must match the length of `dense_keys`. The\n number of elements in the Feature corresponding to dense_key[j] must\n always equal dense_shapes[j].NumEntries(). If dense_shapes[j] ==\n (D0, D1, ..., DN) then the shape of output Tensor dense_values[j]\n will be (D0, D1, ..., DN): In the case dense_shapes[j] = (-1, D1,\n ..., DN), the shape of the output Tensor dense_values[j] will be (M,\n D1, .., DN), where M is the number of blocks of elements of length\n D1 * .... * DN, in the input.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shapes, dense_values).\n\n sparse_indices: A list of `num_sparse` `Tensor` objects with type `int64`.\n sparse_values: A list of `Tensor` objects of type `sparse_types`.\n sparse_shapes: A list of `num_sparse` `Tensor` objects with type `int64`.\n dense_values: A list of `Tensor` objects. Has the same type as `dense_defaults`.\n ", "desc": "Transforms a tf.Example proto (as a string) into typed tensors.", "type": "API"}, {"name": "tf.raw_ops.ParseSingleSequenceExample", "docs": "Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar containing a binary serialized SequenceExample proto.\n feature_list_dense_missing_assumed_empty: A `Tensor` of type `string`.\n A vector listing the\n FeatureList keys which may be missing from the SequenceExample. If the\n associated FeatureList is missing, it is treated as empty. By default,\n any FeatureList not listed in this vector must exist in the SequenceExample.\n context_sparse_keys: A list of `Tensor` objects with type `string`.\n A list of Ncontext_sparse string Tensors (scalars).\n The keys expected in the Examples' features associated with context_sparse\n values.\n context_dense_keys: A list of `Tensor` objects with type `string`.\n A list of Ncontext_dense string Tensors (scalars).\n The keys expected in the SequenceExamples' context features associated with\n dense values.\n feature_list_sparse_keys: A list of `Tensor` objects with type `string`.\n A list of Nfeature_list_sparse string Tensors\n (scalars). The keys expected in the FeatureLists associated with sparse\n values.\n feature_list_dense_keys: A list of `Tensor` objects with type `string`.\n A list of Nfeature_list_dense string Tensors (scalars).\n The keys expected in the SequenceExamples' feature_lists associated\n with lists of dense values.\n context_dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.\n A list of Ncontext_dense Tensors (some may be empty).\n context_dense_defaults[j] provides default values\n when the SequenceExample's context map lacks context_dense_key[j].\n If an empty Tensor is provided for context_dense_defaults[j],\n then the Feature context_dense_keys[j] is required.\n The input type is inferred from context_dense_defaults[j], even when it's\n empty. If context_dense_defaults[j] is not empty, its shape must match\n context_dense_shapes[j].\n debug_name: A `Tensor` of type `string`.\n A scalar containing the name of the serialized proto.\n May contain, for example, table key (descriptive) name for the\n corresponding serialized proto. This is purely useful for debugging\n purposes, and the presence of values here has no effect on the output.\n May also be an empty scalar if no name is available.\n context_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Ncontext_sparse types; the data types of data in\n each context Feature given in context_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n feature_list_dense_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n context_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Ncontext_dense shapes; the shapes of data in\n each context Feature given in context_dense_keys.\n The number of elements in the Feature corresponding to context_dense_key[j]\n must always equal context_dense_shapes[j].NumEntries().\n The shape of context_dense_values[j] will match context_dense_shapes[j].\n feature_list_sparse_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.\n A list of Nfeature_list_sparse types; the data types\n of data in each FeatureList given in feature_list_sparse_keys.\n Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList),\n DT_INT64 (Int64List), and DT_STRING (BytesList).\n feature_list_dense_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n A list of Nfeature_list_dense shapes; the shapes of\n data in each FeatureList given in feature_list_dense_keys.\n The shape of each Feature in the FeatureList corresponding to\n feature_list_dense_key[j] must always equal\n feature_list_dense_shapes[j].NumEntries().\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (context_sparse_indices, context_sparse_values, context_sparse_shapes, context_dense_values, feature_list_sparse_indices, feature_list_sparse_values, feature_list_sparse_shapes, feature_list_dense_values).\n\n context_sparse_indices: A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`.\n context_sparse_values: A list of `Tensor` objects of type `context_sparse_types`.\n context_sparse_shapes: A list with the same length as `context_sparse_keys` of `Tensor` objects with type `int64`.\n context_dense_values: A list of `Tensor` objects. Has the same type as `context_dense_defaults`.\n feature_list_sparse_indices: A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`.\n feature_list_sparse_values: A list of `Tensor` objects of type `feature_list_sparse_types`.\n feature_list_sparse_shapes: A list with the same length as `feature_list_sparse_keys` of `Tensor` objects with type `int64`.\n feature_list_dense_values: A list of `Tensor` objects of type `feature_list_dense_types`.\n ", "desc": "Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors.", "type": "API"}, {"name": "tf.raw_ops.ParseTensor", "docs": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.\n\n Args:\n serialized: A `Tensor` of type `string`.\n A scalar string containing a serialized TensorProto proto.\n out_type: A `tf.DType`.\n The type of the serialized tensor. The provided type must match the\n type of the serialized tensor and no implicit conversion will take place.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Transforms a serialized tensorflow.TensorProto proto into a Tensor.", "type": "API"}, {"name": "tf.raw_ops.PartitionedCall", "docs": "returns `f(inputs)`, where `f`'s body is placed and partitioned.\n\n Asynchronously executes a function, potentially across multiple devices but\n within a single process. The kernel places and partitions a given function's\n underlying graph, and executes each of the partitioned subgraphs as a function.\n\n Args:\n args: A list of `Tensor` objects. A list of input tensors.\n Tout: A list of `tf.DTypes`. A list of output types.\n f: A function decorated with @Defun.\n A function that takes 'args', a list of tensors, and returns 'output',\n another list of tensors. Input and output types are specified by 'Tin'\n and 'Tout'. The function body of f will be placed and partitioned across\n devices, setting this op apart from the regular Call op.\n config: An optional `string`. Defaults to `\"\"`.\n config_proto: An optional `string`. Defaults to `\"\"`.\n executor_type: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "returns `f(inputs)`, where `f`'s body is placed and partitioned.", "type": "API"}, {"name": "tf.raw_ops.Placeholder", "docs": "A placeholder op for a value that will be fed into the computation.\n\n N.B. This operation will fail with an error if it is executed. It is\n intended as a way to represent a value that will always be fed, and to\n provide attrs that enable the fed value to be checked at runtime.\n\n Args:\n dtype: A `tf.DType`. The type of elements in the tensor.\n shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n (Optional) The shape of the tensor. If the shape has 0 dimensions, the\n shape is unconstrained.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "A placeholder op for a value that will be fed into the computation.", "type": "API"}, {"name": "tf.raw_ops.PlaceholderV2", "docs": "A placeholder op for a value that will be fed into the computation.\n\n N.B. This operation will fail with an error if it is executed. It is\n intended as a way to represent a value that will always be fed, and to\n provide attrs that enable the fed value to be checked at runtime.\n\n Args:\n dtype: A `tf.DType`. The type of elements in the tensor.\n shape: A `tf.TensorShape` or list of `ints`.\n The shape of the tensor. The shape can be any partially-specified\n shape. To be unconstrained, pass in a shape with unknown rank.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "A placeholder op for a value that will be fed into the computation.", "type": "API"}, {"name": "tf.raw_ops.PlaceholderWithDefault", "docs": "A placeholder op that passes through `input` when its output is not fed.\n\n Args:\n input: A `Tensor`. The default value to produce when `output` is not fed.\n shape: A `tf.TensorShape` or list of `ints`.\n The (possibly partial) shape of the tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "A placeholder op that passes through `input` when its output is not fed.", "type": "API"}, {"name": "tf.raw_ops.Polygamma", "docs": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).\n\n The polygamma function is defined as:\n\n\n \\\\(\\psi^{(a)}(x) = \\frac{d^a}{dx^a} \\psi(x)\\\\)\n\n where \\\\(\\psi(x)\\\\) is the digamma function.\n The polygamma function is defined only for non-negative integer orders \\\\a\\\\.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n x: A `Tensor`. Must have the same type as `a`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a`.\n ", "desc": "Compute the polygamma function \\\\(\\psi^{(n)}(x)\\\\).", "type": "API"}, {"name": "tf.raw_ops.PopulationCount", "docs": "Computes element-wise population count (a.k.a. popcount, bitsum, bitcount).\n\n For each entry in `x`, calculates the number of `1` (on) bits in the binary\n representation of that entry.\n\n **NOTE**: It is more efficient to first `tf.bitcast` your tensors into\n `int32` or `int64` and perform the bitcount on the result, than to feed in\n 8- or 16-bit inputs and then aggregate the resulting counts.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `uint8`.\n ", "desc": "Computes element-wise population count (a.k.a. popcount, bitsum, bitcount).", "type": "API"}, {"name": "tf.raw_ops.Pow", "docs": "Computes the power of one value to another.\n\n Given a tensor `x` and a tensor `y`, this operation computes \\\\(x^y\\\\) for\n corresponding elements in `x` and `y`. For example:\n\n ```\n # tensor 'x' is [[2, 2]], [3, 3]]\n # tensor 'y' is [[8, 16], [2, 3]]\n tf.pow(x, y) ==> [[256, 65536], [9, 27]]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `half`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the power of one value to another.", "type": "API"}, {"name": "tf.raw_ops.PrefetchDataset", "docs": "Creates a dataset that asynchronously prefetches elements from `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n The maximum number of elements to buffer in an iterator over\n this dataset.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n slack_period: An optional `int`. Defaults to `0`.\n legacy_autotune: An optional `bool`. Defaults to `True`.\n buffer_size_min: An optional `int`. Defaults to `0`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that asynchronously prefetches elements from `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Prelinearize", "docs": "An op which linearizes one Tensor value to an opaque variant tensor.\n\n Args:\n input: A `Tensor`. A tensor that will be linearized.\n shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `[]`.\n The shape of the tensor.\n layout: An optional list of `ints`. Defaults to `[]`.\n A vector holding the requested layout in minor-to-major sequence. If a layout\n attribute is passed but its values are all -1 the layout will be computed by\n the infeed operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "An op which linearizes one Tensor value to an opaque variant tensor.", "type": "API"}, {"name": "tf.raw_ops.PrelinearizeTuple", "docs": "An op which linearizes multiple Tensor values to an opaque variant tensor.\n\n Args:\n inputs: A list of `Tensor` objects.\n A list of tensors that will be provided using the infeed mechanism.\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shapes of each tensor in `inputs`.\n layouts: An optional list of `ints`. Defaults to `[]`.\n A vector holding the requested layout in minor-to-major sequence for all the\n tuple shapes in the order the shapes appear in the \"shapes\" input. The layout\n elements for a sub-shape can be set to -1 in which case the corresponding layout\n will be computed by the infeed operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "An op which linearizes multiple Tensor values to an opaque variant tensor.", "type": "API"}, {"name": "tf.raw_ops.PreventGradient", "docs": "An identity op that triggers an error if a gradient is requested.\n\n When executed in a graph, this op outputs its input tensor as-is.\n\n When building ops to compute gradients, the TensorFlow gradient system\n will return an error when trying to lookup the gradient of this op,\n because no gradient must ever be registered for this function. This\n op exists to prevent subtle bugs from silently returning unimplemented\n gradients in some corner cases.\n\n Args:\n input: A `Tensor`. any tensor.\n message: An optional `string`. Defaults to `\"\"`.\n Will be printed in the error when anyone tries to differentiate\n this operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "An identity op that triggers an error if a gradient is requested.", "type": "API"}, {"name": "tf.raw_ops.Print", "docs": "Prints a list of tensors.\n\n Passes `input` through to `output` and prints `data` when evaluating.\n\n Args:\n input: A `Tensor`. The tensor passed to `output`\n data: A list of `Tensor` objects.\n A list of tensors to print out when op is evaluated.\n message: An optional `string`. Defaults to `\"\"`.\n A string, prefix of the error message.\n first_n: An optional `int`. Defaults to `-1`.\n Only log `first_n` number of times. -1 disables logging.\n summarize: An optional `int`. Defaults to `3`.\n Only print this many entries of each tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Prints a list of tensors.", "type": "API"}, {"name": "tf.raw_ops.PrintV2", "docs": "Prints a string scalar.\n\n Prints a string scalar to the desired output_stream.\n\n Args:\n input: A `Tensor` of type `string`. The string scalar to print.\n output_stream: An optional `string`. Defaults to `\"stderr\"`.\n A string specifying the output stream or logging level to print to.\n end: An optional `string`. Defaults to `\"\\n\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Prints a string scalar.", "type": "API"}, {"name": "tf.raw_ops.PriorityQueue", "docs": "A queue that produces elements sorted by the first component value.\n\n Note that the PriorityQueue requires the first component of any element\n to be a scalar int64, in addition to the other elements declared by\n component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue\n and DequeueMany) on a PriorityQueue will all require (resp. output) one extra\n entry in their input (resp. output) lists.\n\n Args:\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n component_types: An optional list of `tf.DTypes`. Defaults to `[]`.\n The type of each component in a value.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A queue that produces elements sorted by the first component value.", "type": "API"}, {"name": "tf.raw_ops.PriorityQueueV2", "docs": "A queue that produces elements sorted by the first component value.\n\n Note that the PriorityQueue requires the first component of any element\n to be a scalar int64, in addition to the other elements declared by\n component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue\n and DequeueMany) on a PriorityQueue will all require (resp. output) one extra\n entry in their input (resp. output) lists.\n\n Args:\n shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n component_types: An optional list of `tf.DTypes`. Defaults to `[]`.\n The type of each component in a value.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A queue that produces elements sorted by the first component value.", "type": "API"}, {"name": "tf.raw_ops.PrivateThreadPoolDataset", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_threads: A `Tensor` of type `int64`.\n Identifies the number of threads to use for the private threadpool.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Prod", "docs": "Computes the product of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the product of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.PyFunc", "docs": "Invokes a python function to compute func(input)->output.\n\n This operation is considered stateful. For a stateless version, see\n PyFuncStateless.\n\n Args:\n input: A list of `Tensor` objects.\n List of Tensors that will provide input to the Op.\n token: A `string`.\n A token representing a registered python function in this address space.\n Tout: A list of `tf.DTypes`. Data types of the outputs from the op.\n The length of the list specifies the number of outputs.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Invokes a python function to compute func(input)->output.", "type": "API"}, {"name": "tf.raw_ops.PyFuncStateless", "docs": "A stateless version of PyFunc.\n\n Args:\n input: A list of `Tensor` objects.\n token: A `string`.\n Tout: A list of `tf.DTypes`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "A stateless version of PyFunc.", "type": "API"}, {"name": "tf.raw_ops.Qr", "docs": "Computes the QR decompositions of one or more matrices.\n\n Computes the QR decomposition of each inner matrix in `tensor` such that\n `tensor[..., :, :] = q[..., :, :] * r[..., :,:])`\n\n Currently, the gradient for the QR decomposition is well-defined only when\n the first `P` columns of the inner matrix are linearly independent, where\n `P` is the minimum of `M` and `N`, the 2 inner-most dimmensions of `tensor`.\n\n ```python\n # a is a tensor.\n # q is a tensor of orthonormal matrices.\n # r is a tensor of upper triangular matrices.\n q, r = qr(a)\n q_full, r_full = qr(a, full_matrices=True)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.\n full_matrices: An optional `bool`. Defaults to `False`.\n If true, compute full-sized `q` and `r`. If false\n (the default), compute only the leading `P` columns of `q`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (q, r).\n\n q: A `Tensor`. Has the same type as `input`.\n r: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the QR decompositions of one or more matrices.", "type": "API"}, {"name": "tf.raw_ops.QuantizeAndDequantize", "docs": "Use QuantizeAndDequantizeV2 instead.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n signed_input: An optional `bool`. Defaults to `True`.\n num_bits: An optional `int`. Defaults to `8`.\n range_given: An optional `bool`. Defaults to `False`.\n input_min: An optional `float`. Defaults to `0`.\n input_max: An optional `float`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Use QuantizeAndDequantizeV2 instead.", "type": "API"}, {"name": "tf.raw_ops.QuantizeAndDequantizeV2", "docs": "Quantizes then dequantizes a tensor.\n\n This op simulates the precision loss from the quantized forward pass by:\n\n 1. Quantizing the tensor to fixed point numbers, which should match the target\n quantization method when it is used in inference.\n 2. Dequantizing it back to floating point numbers for the following ops, most\n likely matmul.\n\n There are different ways to quantize. This version uses only scaling, so 0.0\n maps to 0.\n\n From the specified 'num_bits' in the quantized output type, it determines\n minimum and maximum representable quantized values.\n\n e.g.\n\n * [-128, 127] for signed, num_bits = 8, or\n * [0, 255] for unsigned, num_bits = 8.\n\n If range_given == False, the initial input_min, input_max will be determined\n automatically as the minimum and maximum values in the input tensor, otherwise\n the specified values of input_min, input_max are used.\n\n Note: If the input_min, input_max are specified, they do not need to equal the\n actual minimum and maximum values in the tensor. e.g. in some cases it may be\n beneficial to specify these values such that the low probability extremes of the\n input distribution are clipped.\n\n This op determines the maximum scale_factor that would map the initial\n [input_min, input_max] range to a range that lies within the representable\n quantized range.\n\n It determines the scale from one of input_min and input_max, then updates the\n other one to maximize the representable range.\n\n e.g.\n\n * if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0,\n 5.0]: it would use a scale_factor of -128 / -10.0 = 12.8 In this case, it\n would update input_max to be 127 / 12.8 = 9.921875\n * if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0,\n 10.0]: it would use a scale_factor of 127 / 10.0 = 12.7 In this case, it\n would update input_min to be 128.0 / 12.7 = -10.07874\n * if the output is unsigned, input_min is forced to be 0, and only the\n specified input_max is used.\n\n After determining the scale_factor and updating the input range, it applies the\n following to each value in the 'input' tensor.\n\n output = round(clamp(value, input_min, input_max) * scale_factor) / scale_factor.\n\n The above round function rounds the value based on the given round_mode.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n Tensor to quantize and then dequantize.\n input_min: A `Tensor`. Must have the same type as `input`.\n If `range_given == True`, this specifies the minimum input value that needs to\n be represented, otherwise it is determined from the min value of the `input`\n tensor.\n input_max: A `Tensor`. Must have the same type as `input`.\n If `range_given == True`, this specifies the maximum input value that needs to\n be represented, otherwise it is determined from the max value of the `input`\n tensor.\n signed_input: An optional `bool`. Defaults to `True`.\n Whether the quantization is signed or unsigned. (actually this parameter should\n have been called `signed_output`)\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization.\n range_given: An optional `bool`. Defaults to `False`.\n Whether the range is given or should be determined from the `input` tensor.\n round_mode: An optional `string` from: `\"HALF_TO_EVEN\", \"HALF_UP\"`. Defaults to `\"HALF_TO_EVEN\"`.\n The 'round_mode' attribute controls which rounding tie-breaking algorithm is\n used when rounding float values to their quantized equivalents. The following\n rounding modes are currently supported:\n\n * HALF_TO_EVEN: this is the default round_mode.\n * HALF_UP: round towards positive. In this mode 7.5 rounds up to 8 and -7.5\n rounds up to -7.\n narrow_range: An optional `bool`. Defaults to `False`.\n If True, then the absolute value of the quantized minimum value is the same as\n the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: An optional `int`. Defaults to `-1`.\n If specified, this axis is treated as a channel or slice axis, and a separate\n quantization range is used for each channel or slice along this axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Quantizes then dequantizes a tensor.", "type": "API"}, {"name": "tf.raw_ops.QuantizeAndDequantizeV3", "docs": "Quantizes then dequantizes a tensor.\n\n This is almost identical to QuantizeAndDequantizeV2, except that num_bits is a\n tensor, so its value can change during training.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n input_min: A `Tensor`. Must have the same type as `input`.\n input_max: A `Tensor`. Must have the same type as `input`.\n num_bits: A `Tensor` of type `int32`.\n signed_input: An optional `bool`. Defaults to `True`.\n range_given: An optional `bool`. Defaults to `True`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Quantizes then dequantizes a tensor.", "type": "API"}, {"name": "tf.raw_ops.QuantizeAndDequantizeV4", "docs": "Quantizes then dequantizes a tensor.\n\n This is almost identical to QuantizeAndDequantizeV2, except that it returns a\n gradient of 1 for inputs that are within the quantization range, or 0 otherwise.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n Tensor to quantize and then dequantize.\n input_min: A `Tensor`. Must have the same type as `input`.\n If `range_given == True`, this specifies the minimum input value that needs to\n be represented, otherwise it is determined from the min value of the `input`\n tensor.\n input_max: A `Tensor`. Must have the same type as `input`.\n If `range_given == True`, this specifies the maximum input value that needs to\n be represented, otherwise it is determined from the max value of the `input`\n tensor.\n signed_input: An optional `bool`. Defaults to `True`.\n Whether the quantization is signed or unsigned. (actually this parameter should\n have been called `signed_output`)\n num_bits: An optional `int`. Defaults to `8`.\n The bitwidth of the quantization.\n range_given: An optional `bool`. Defaults to `False`.\n Whether the range is given or should be determined from the `input` tensor.\n round_mode: An optional `string` from: `\"HALF_TO_EVEN\", \"HALF_UP\"`. Defaults to `\"HALF_TO_EVEN\"`.\n The 'round_mode' attribute controls which rounding tie-breaking algorithm is\n used when rounding float values to their quantized equivalents. The following\n rounding modes are currently supported:\n\n * HALF_TO_EVEN: this is the default round_mode.\n * HALF_UP: round towards positive. In this mode 7.5 rounds up to 8 and -7.5\n rounds up to -7.\n narrow_range: An optional `bool`. Defaults to `False`.\n If True, then the absolute value of the quantized minimum value is the same as\n the quantized maximum value, instead of 1 greater.\n i.e. for 8 bit quantization, the minimum value is -127 instead of -128.\n axis: An optional `int`. Defaults to `-1`.\n If specified, this axis is treated as a channel or slice axis, and a separate\n quantization range is used for each channel or slice along this axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Quantizes then dequantizes a tensor.", "type": "API"}, {"name": "tf.raw_ops.QuantizeAndDequantizeV4Grad", "docs": "Returns the gradient of `QuantizeAndDequantizeV4`.\n\n Returns a gradient of 1 for inputs that are within the quantization range,\n or 0 otherwise.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n input: A `Tensor`. Must have the same type as `gradients`.\n input_min: A `Tensor`. Must have the same type as `gradients`.\n input_max: A `Tensor`. Must have the same type as `gradients`.\n axis: An optional `int`. Defaults to `-1`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (input_backprop, input_min_backprop, input_max_backprop).\n\n input_backprop: A `Tensor`. Has the same type as `gradients`.\n input_min_backprop: A `Tensor`. Has the same type as `gradients`.\n input_max_backprop: A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Returns the gradient of `QuantizeAndDequantizeV4`.", "type": "API"}, {"name": "tf.raw_ops.QuantizedAdd", "docs": "Returns x + y element-wise, working on quantized buffers.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n y: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_x: A `Tensor` of type `float32`.\n The float value that the lowest quantized `x` value represents.\n max_x: A `Tensor` of type `float32`.\n The float value that the highest quantized `x` value represents.\n min_y: A `Tensor` of type `float32`.\n The float value that the lowest quantized `y` value represents.\n max_y: A `Tensor` of type `float32`.\n The float value that the highest quantized `y` value represents.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (z, min_z, max_z).\n\n z: A `Tensor` of type `Toutput`.\n min_z: A `Tensor` of type `float32`.\n max_z: A `Tensor` of type `float32`.\n ", "desc": "Returns x + y element-wise, working on quantized buffers.", "type": "API"}, {"name": "tf.raw_ops.QuantizedAvgPool", "docs": "Produces the average pool of the input tensor for quantized types.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n 4-D with shape `[batch, height, width, channels]`.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n ksize: A list of `ints`.\n The size of the window for each dimension of the input tensor.\n The length must be 4 to match the number of dimensions of the input.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor. The length must be 4 to match the number of dimensions of the input.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor`. Has the same type as `input`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Produces the average pool of the input tensor for quantized types.", "type": "API"}, {"name": "tf.raw_ops.QuantizedBatchNormWithGlobalNormalization", "docs": "Quantized Batch normalization.\n\n This op is deprecated and will be removed in the future. Prefer\n `tf.nn.batch_normalization`.\n\n Args:\n t: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A 4D input Tensor.\n t_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized input.\n t_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized input.\n m: A `Tensor`. Must have the same type as `t`.\n A 1D mean Tensor with size matching the last dimension of t.\n This is the first output from tf.nn.moments,\n or a saved moving average thereof.\n m_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized mean.\n m_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized mean.\n v: A `Tensor`. Must have the same type as `t`.\n A 1D variance Tensor with size matching the last dimension of t.\n This is the second output from tf.nn.moments,\n or a saved moving average thereof.\n v_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized variance.\n v_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized variance.\n beta: A `Tensor`. Must have the same type as `t`.\n A 1D beta Tensor with size matching the last dimension of t.\n An offset to be added to the normalized tensor.\n beta_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized offset.\n beta_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized offset.\n gamma: A `Tensor`. Must have the same type as `t`.\n A 1D gamma Tensor with size matching the last dimension of t.\n If \"scale_after_normalization\" is true, this tensor will be multiplied\n with the normalized tensor.\n gamma_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized gamma.\n gamma_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized gamma.\n out_type: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n variance_epsilon: A `float`. A small float number to avoid dividing by 0.\n scale_after_normalization: A `bool`.\n A bool indicating whether the resulted tensor\n needs to be multiplied with gamma.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (result, result_min, result_max).\n\n result: A `Tensor` of type `out_type`.\n result_min: A `Tensor` of type `float32`.\n result_max: A `Tensor` of type `float32`.\n ", "desc": "Quantized Batch normalization.", "type": "API"}, {"name": "tf.raw_ops.QuantizedBiasAdd", "docs": "Adds Tensor 'bias' to Tensor 'input' for Quantized types.\n\n Broadcasts the values of bias on dimensions 0..N-2 of 'input'.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A 1D bias Tensor with size matching the last dimension of 'input'.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n min_bias: A `Tensor` of type `float32`.\n The float value that the lowest quantized bias value represents.\n max_bias: A `Tensor` of type `float32`.\n The float value that the highest quantized bias value represents.\n out_type: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_out, max_out).\n\n output: A `Tensor` of type `out_type`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "Adds Tensor 'bias' to Tensor 'input' for Quantized types.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConcat", "docs": "Concatenates quantized tensors along one dimension.\n\n Args:\n concat_dim: A `Tensor` of type `int32`.\n 0-D. The dimension along which to concatenate. Must be in the\n range [0, rank(values)).\n values: A list of at least 2 `Tensor` objects with the same type.\n The `N` Tensors to concatenate. Their ranks and types must match,\n and their sizes must match in all dimensions except `concat_dim`.\n input_mins: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The minimum scalar values for each of the input tensors.\n input_maxes: A list with the same length as `values` of `Tensor` objects with type `float32`.\n The maximum scalar values for each of the input tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor`. Has the same type as `values`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Concatenates quantized tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2D", "docs": "Computes a 2D convolution given quantized 4D input and filter tensors.\n\n The inputs are quantized tensors where the lowest value represents the real\n number of the associated minimum, and the highest represents the maximum.\n This means that you can only interpret the quantized output in the same way, by\n taking the returned minimum and maximum values into account.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter's input_depth dimension must match input's depth dimensions.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the lowest quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the highest quantized filter value represents.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n 1-D tensor of length 4. The dilation factor for each dimension of\n `input`. If set to k > 1, there will be k-1 skipped cells between each\n filter element on that dimension. The dimension order is determined by the\n value of `data_format`, see above for details. Dilations in the batch and\n depth dimensions must be 1.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes a 2D convolution given quantized 4D input and filter tensors.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DAndRelu", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DAndReluAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DPerChannel", "docs": "Computes QuantizedConv2D per channel.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original filter tensor.\n min_input: A `Tensor` of type `float32`.\n The minimum value of the input tensor\n max_input: A `Tensor` of type `float32`.\n The maximum value of the input tensor.\n min_filter: A `Tensor` of type `float32`.\n The minimum value of the filter tensor.\n max_filter: A `Tensor` of type `float32`.\n The maximum value of the filter tensor.\n strides: A list of `ints`. list of stride values.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n The quantized type of output tensor that needs to be converted.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n list of dilation values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes QuantizedConv2D per channel.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBias", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor` of type `float32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasAndRelu", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor` of type `float32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasAndReluAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasSignedSumAndReluAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n summand: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_summand: A `Tensor` of type `float32`.\n max_summand: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasSumAndRelu", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor` of type `float32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n summand: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedConv2DWithBiasSumAndReluAndRequantize", "docs": "TODO: add doc.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_input: A `Tensor` of type `float32`.\n max_input: A `Tensor` of type `float32`.\n min_filter: A `Tensor` of type `float32`.\n max_filter: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n summand: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_summand: A `Tensor` of type `float32`.\n max_summand: A `Tensor` of type `float32`.\n strides: A list of `ints`.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedDepthwiseConv2D", "docs": "Computes quantized depthwise Conv2D.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original filter tensor.\n min_input: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the minimum quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the maximum quantized filter value represents.\n strides: A list of `ints`. List of stride values.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n The type of the output.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n List of dilation values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes quantized depthwise Conv2D.", "type": "API"}, {"name": "tf.raw_ops.QuantizedDepthwiseConv2DWithBias", "docs": "Computes quantized depthwise Conv2D with Bias.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original filter tensor.\n bias: A `Tensor` of type `float32`. The original bias tensor.\n min_input: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the minimum quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the maximum quantized filter value represents.\n strides: A list of `ints`. List of stride values.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n The type of the output.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n List of dilation values.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes quantized depthwise Conv2D with Bias.", "type": "API"}, {"name": "tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu", "docs": "Computes quantized depthwise Conv2D with Bias and Relu.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original filter tensor.\n bias: A `Tensor` of type `float32`. The original bias tensor.\n min_input: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the minimum quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the maximum quantized filter value represents.\n strides: A list of `ints`. List of stride values.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n The type of the output.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n List of dilation values.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes quantized depthwise Conv2D with Bias and Relu.", "type": "API"}, {"name": "tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize", "docs": "Computes quantized depthwise Conv2D with Bias, Relu and Requantize.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n filter: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original filter tensor.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n The original bias tensor.\n min_input: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n min_filter: A `Tensor` of type `float32`.\n The float value that the minimum quantized filter value represents.\n max_filter: A `Tensor` of type `float32`.\n The float value that the maximum quantized filter value represents.\n min_freezed_output: A `Tensor` of type `float32`.\n The minimum float value of the output tensor.\n max_freezed_output: A `Tensor` of type `float32`.\n The maximum float value of the output tensor.\n strides: A list of `ints`. List of stride values.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n The type of the output.\n dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.\n List of dilation values.\n padding_list: An optional list of `ints`. Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor` of type `out_type`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Computes quantized depthwise Conv2D with Bias, Relu and Requantize.", "type": "API"}, {"name": "tf.raw_ops.QuantizedInstanceNorm", "docs": "Quantized Instance normalization.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A 4D input Tensor.\n x_min: A `Tensor` of type `float32`.\n The value represented by the lowest quantized input.\n x_max: A `Tensor` of type `float32`.\n The value represented by the highest quantized input.\n output_range_given: An optional `bool`. Defaults to `False`.\n If True, `given_y_min` and `given_y_min`\n and `given_y_max` are used as the output range. Otherwise,\n the implementation computes the output range.\n given_y_min: An optional `float`. Defaults to `0`.\n Output in `y_min` if `output_range_given` is True.\n given_y_max: An optional `float`. Defaults to `0`.\n Output in `y_max` if `output_range_given` is True.\n variance_epsilon: An optional `float`. Defaults to `1e-05`.\n A small float number to avoid dividing by 0.\n min_separation: An optional `float`. Defaults to `0.001`.\n Minimum value of `y_max - y_min`\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, y_min, y_max).\n\n y: A `Tensor`. Has the same type as `x`.\n y_min: A `Tensor` of type `float32`.\n y_max: A `Tensor` of type `float32`.\n ", "desc": "Quantized Instance normalization.", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMul", "docs": "Perform a quantized matrix multiplication of `a` by the matrix `b`.\n\n The inputs must be two-dimensional matrices and the inner dimension of\n `a` (after being transposed if `transpose_a` is non-zero) must match the\n outer dimension of `b` (after being transposed if `transposed_b` is\n non-zero).\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n Must be a two-dimensional tensor.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n Must be a two-dimensional tensor.\n min_a: A `Tensor` of type `float32`.\n The float value that the lowest quantized `a` value represents.\n max_a: A `Tensor` of type `float32`.\n The float value that the highest quantized `a` value represents.\n min_b: A `Tensor` of type `float32`.\n The float value that the lowest quantized `b` value represents.\n max_b: A `Tensor` of type `float32`.\n The float value that the highest quantized `b` value represents.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n transpose_a: An optional `bool`. Defaults to `False`.\n If true, `a` is transposed before multiplication.\n transpose_b: An optional `bool`. Defaults to `False`.\n If true, `b` is transposed before multiplication.\n Tactivation: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n The type of output produced by activation function\n following this operation.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, min_out, max_out).\n\n out: A `Tensor` of type `Toutput`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "Perform a quantized matrix multiplication of `a` by the matrix `b`.", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMulWithBias", "docs": "Performs a quantized matrix multiplication of `a` by the matrix `b` with bias\nadd.\n\n The inputs must be two-dimensional matrices and 1D bias vector. And the inner\n dimension of `a` (after being transposed if `transpose_a` is non-zero) must\n match the outer dimension of `b` (after being transposed if `transposed_b` is\n non-zero). Then do broadcast add operation with bias values on the matrix\n multiplication result. The bias size must match inner dimension of `b`.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n A 1D bias tensor with size matching inner dimension of `b` (after being\n transposed if `transposed_b` is non-zero).\n min_a: A `Tensor` of type `float32`.\n The float value that the lowest quantized `a` value represents.\n max_a: A `Tensor` of type `float32`.\n The float value that the highest quantized `a` value represents.\n min_b: A `Tensor` of type `float32`.\n The float value that the lowest quantized `b` value represents.\n max_b: A `Tensor` of type `float32`.\n The float value that the highest quantized `b` value represents.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n transpose_a: An optional `bool`. Defaults to `False`.\n If true, `a` is transposed before multiplication.\n transpose_b: An optional `bool`. Defaults to `False`.\n If true, `b` is transposed before multiplication.\n input_quant_mode: An optional `string` from: `\"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_FIRST\"`.\n Input data quantization mode. Either MIN_FIRST(default) or SCALED.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, min_out, max_out).\n\n out: A `Tensor` of type `Toutput`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "Performs a quantized matrix multiplication of `a` by the matrix `b` with bias", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMulWithBiasAndDequantize", "docs": "TODO: add doc.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_a: A `Tensor` of type `float32`.\n max_a: A `Tensor` of type `float32`.\n min_b: A `Tensor` of type `float32`.\n max_b: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n Toutput: A `tf.DType` from: `tf.float32`.\n transpose_a: An optional `bool`. Defaults to `False`.\n transpose_b: An optional `bool`. Defaults to `False`.\n input_quant_mode: An optional `string` from: `\"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_FIRST\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Toutput`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMulWithBiasAndRelu", "docs": "Perform a quantized matrix multiplication of `a` by the matrix `b` with bias\nadd and relu fusion.\n\n The inputs must be two-dimensional matrices and 1D bias vector. And the inner\n dimension of `a` (after being transposed if `transpose_a` is non-zero) must\n match the outer dimension of `b` (after being transposed if `transposed_b` is\n non-zero). Then do broadcast add operation with bias values on the matrix\n multiplication result. The bias size must match inner dimension of `b`. Then do\n relu activation to get non-negative result.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`.\n bias: A `Tensor` of type `float32`.\n A 1D bias tensor with size matching with inner dimension of `b` (after being\n transposed if `transposed_b` is non-zero).\n min_a: A `Tensor` of type `float32`.\n The float value that the lowest quantized `a` value represents.\n max_a: A `Tensor` of type `float32`.\n The float value that the highest quantized `a` value represents.\n min_b: A `Tensor` of type `float32`.\n The float value that the lowest quantized `b` value represents.\n max_b: A `Tensor` of type `float32`.\n The float value that the highest quantized `b` value represents.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n transpose_a: An optional `bool`. Defaults to `False`.\n If true, `a` is transposed before multiplication.\n transpose_b: An optional `bool`. Defaults to `False`.\n If true, `b` is transposed before multiplication.\n input_quant_mode: An optional `string` from: `\"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_FIRST\"`.\n Input data quantization mode. Either MIN_FIRST(default) or SCALED.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, min_out, max_out).\n\n out: A `Tensor` of type `Toutput`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "Perform a quantized matrix multiplication of `a` by the matrix `b` with bias", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize", "docs": "Perform a quantized matrix multiplication of `a` by the matrix `b` with bias\nadd and relu and requantize fusion.\n\n The inputs must be two-dimensional matrices and 1D bias vector. And the inner\n dimension of `a` (after being transposed if `transpose_a` is non-zero) must\n match the outer dimension of `b` (after being transposed if `transposed_b` is\n non-zero). Then do broadcast add operation with bias values on the matrix\n multiplication result. The bias size must match inner dimension of `b`. Then do\n relu activation to get non-negative result. Then do requantize operation to get\n final uint8 result.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied. Must be a two-dimensional tensor of type `quint8`.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n A matrix to be multiplied and must be a two-dimensional tensor of type `qint8`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n A 1D bias tensor with size matching with inner dimension of `b` (after being\n transposed if `transposed_b` is non-zero).\n min_a: A `Tensor` of type `float32`.\n The float value that the lowest quantized `a` value represents.\n max_a: A `Tensor` of type `float32`.\n The float value that the highest quantized `a` value represents.\n min_b: A `Tensor` of type `float32`.\n The float value that the lowest quantized `b` value represents.\n max_b: A `Tensor` of type `float32`.\n The float value that the highest quantized `b` value represents.\n min_freezed_output: A `Tensor` of type `float32`.\n The float value that the highest quantized output value after requantize.\n max_freezed_output: A `Tensor` of type `float32`.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n transpose_a: An optional `bool`. Defaults to `False`.\n If true, `a` is transposed before multiplication.\n transpose_b: An optional `bool`. Defaults to `False`.\n If true, `b` is transposed before multiplication.\n input_quant_mode: An optional `string` from: `\"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_FIRST\"`.\n Input data quantization mode. Either MIN_FIRST(default) or SCALED.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, min_out, max_out).\n\n out: A `Tensor` of type `Toutput`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "Perform a quantized matrix multiplication of `a` by the matrix `b` with bias", "type": "API"}, {"name": "tf.raw_ops.QuantizedMatMulWithBiasAndRequantize", "docs": "TODO: add doc.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n b: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n bias: A `Tensor`. Must be one of the following types: `float32`, `qint32`.\n min_a: A `Tensor` of type `float32`.\n max_a: A `Tensor` of type `float32`.\n min_b: A `Tensor` of type `float32`.\n max_b: A `Tensor` of type `float32`.\n min_freezed_output: A `Tensor` of type `float32`.\n max_freezed_output: A `Tensor` of type `float32`.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n transpose_a: An optional `bool`. Defaults to `False`.\n transpose_b: An optional `bool`. Defaults to `False`.\n input_quant_mode: An optional `string` from: `\"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_FIRST\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out, min_out, max_out).\n\n out: A `Tensor` of type `Toutput`.\n min_out: A `Tensor` of type `float32`.\n max_out: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.QuantizedMaxPool", "docs": "Produces the max pool of the input tensor for quantized types.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The 4D (batch x rows x cols x depth) Tensor to MaxReduce over.\n min_input: A `Tensor` of type `float32`.\n The float value that the lowest quantized input value represents.\n max_input: A `Tensor` of type `float32`.\n The float value that the highest quantized input value represents.\n ksize: A list of `ints`.\n The size of the window for each dimension of the input tensor.\n The length must be 4 to match the number of dimensions of the input.\n strides: A list of `ints`.\n The stride of the sliding window for each dimension of the input\n tensor. The length must be 4 to match the number of dimensions of the input.\n padding: A `string` from: `\"SAME\", \"VALID\"`.\n The type of padding algorithm to use.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, min_output, max_output).\n\n output: A `Tensor`. Has the same type as `input`.\n min_output: A `Tensor` of type `float32`.\n max_output: A `Tensor` of type `float32`.\n ", "desc": "Produces the max pool of the input tensor for quantized types.", "type": "API"}, {"name": "tf.raw_ops.QuantizedMul", "docs": "Returns x * y element-wise, working on quantized buffers.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n y: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_x: A `Tensor` of type `float32`.\n The float value that the lowest quantized `x` value represents.\n max_x: A `Tensor` of type `float32`.\n The float value that the highest quantized `x` value represents.\n min_y: A `Tensor` of type `float32`.\n The float value that the lowest quantized `y` value represents.\n max_y: A `Tensor` of type `float32`.\n The float value that the highest quantized `y` value represents.\n Toutput: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.qint32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (z, min_z, max_z).\n\n z: A `Tensor` of type `Toutput`.\n min_z: A `Tensor` of type `float32`.\n max_z: A `Tensor` of type `float32`.\n ", "desc": "Returns x * y element-wise, working on quantized buffers.", "type": "API"}, {"name": "tf.raw_ops.QuantizeDownAndShrinkRange", "docs": "Convert the quantized 'input' tensor into a lower-precision 'output', using the\n\n actual distribution of the values to maximize the usage of the lower bit depth\n and adjusting the output min and max ranges accordingly.\n\n [input_min, input_max] are scalar floats that specify the range for the float\n interpretation of the 'input' data. For example, if input_min is -1.0f and\n input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0\n value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.\n\n This operator tries to squeeze as much precision as possible into an output with\n a lower bit depth by calculating the actual min and max values found in the\n data. For example, maybe that quint16 input has no values lower than 16,384 and\n none higher than 49,152. That means only half the range is actually needed, all\n the float interpretations are between -0.5f and 0.5f, so if we want to compress\n the data into a quint8 output, we can use that range rather than the theoretical\n -1.0f to 1.0f that is suggested by the input min and max.\n\n In practice, this is most useful for taking output from operations like\n QuantizedMatMul that can produce higher bit-depth outputs than their inputs and\n may have large potential output ranges, but in practice have a distribution of\n input values that only uses a small fraction of the possible range. By feeding\n that output into this operator, we can reduce it from 32 bits down to 8 with\n minimal loss of accuracy.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n input_min: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n input_max: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n out_type: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n The type of the output. Should be a lower bit depth than Tinput.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `out_type`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Convert the quantized 'input' tensor into a lower-precision 'output', using the", "type": "API"}, {"name": "tf.raw_ops.QuantizedRelu", "docs": "Computes Quantized Rectified Linear: `max(features, 0)`\n\n Args:\n features: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_features: A `Tensor` of type `float32`.\n The float value that the lowest quantized value represents.\n max_features: A `Tensor` of type `float32`.\n The float value that the highest quantized value represents.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (activations, min_activations, max_activations).\n\n activations: A `Tensor` of type `out_type`.\n min_activations: A `Tensor` of type `float32`.\n max_activations: A `Tensor` of type `float32`.\n ", "desc": "Computes Quantized Rectified Linear: `max(features, 0)`", "type": "API"}, {"name": "tf.raw_ops.QuantizedRelu6", "docs": "Computes Quantized Rectified Linear 6: `min(max(features, 0), 6)`\n\n Args:\n features: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n min_features: A `Tensor` of type `float32`.\n The float value that the lowest quantized value represents.\n max_features: A `Tensor` of type `float32`.\n The float value that the highest quantized value represents.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (activations, min_activations, max_activations).\n\n activations: A `Tensor` of type `out_type`.\n min_activations: A `Tensor` of type `float32`.\n max_activations: A `Tensor` of type `float32`.\n ", "desc": "Computes Quantized Rectified Linear 6: `min(max(features, 0), 6)`", "type": "API"}, {"name": "tf.raw_ops.QuantizedReluX", "docs": "Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`\n\n Args:\n features: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n max_value: A `Tensor` of type `float32`.\n min_features: A `Tensor` of type `float32`.\n The float value that the lowest quantized value represents.\n max_features: A `Tensor` of type `float32`.\n The float value that the highest quantized value represents.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (activations, min_activations, max_activations).\n\n activations: A `Tensor` of type `out_type`.\n min_activations: A `Tensor` of type `float32`.\n max_activations: A `Tensor` of type `float32`.\n ", "desc": "Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`", "type": "API"}, {"name": "tf.raw_ops.QuantizedReshape", "docs": "Reshapes a quantized tensor as per the Reshape op.\n\n ```\n\n Args:\n tensor: A `Tensor`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Defines the shape of the output tensor.\n input_min: A `Tensor` of type `float32`. The minimum value of the input.\n input_max: A `Tensor` of type `float32`. The maximum value of the input.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor`. Has the same type as `tensor`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Reshapes a quantized tensor as per the Reshape op.", "type": "API"}, {"name": "tf.raw_ops.QuantizedResizeBilinear", "docs": "Resize quantized `images` to `size` using quantized bilinear interpolation.\n\n Input images and output images must be quantized types.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `quint8`, `qint32`, `float32`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n min: A `Tensor` of type `float32`.\n max: A `Tensor` of type `float32`.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (resized_images, out_min, out_max).\n\n resized_images: A `Tensor`. Has the same type as `images`.\n out_min: A `Tensor` of type `float32`.\n out_max: A `Tensor` of type `float32`.\n ", "desc": "Resize quantized `images` to `size` using quantized bilinear interpolation.", "type": "API"}, {"name": "tf.raw_ops.QuantizeV2", "docs": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.\n\n [min_range, max_range] are scalar floats that specify the range for\n the 'input' data. The 'mode' attribute controls exactly which calculations are\n used to convert the float values to their quantized equivalents. The\n 'round_mode' attribute controls which rounding tie-breaking algorithm is used\n when rounding float values to their quantized equivalents.\n\n In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:\n\n ```\n out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)\n if T == qint8: out[i] -= (range(T) + 1) / 2.0\n ```\n\n here `range(T) = numeric_limits::max() - numeric_limits::min()`\n\n *MIN_COMBINED Mode Example*\n\n Assume the input is type float and has a possible range of [0.0, 6.0] and the\n output type is quint8 ([0, 255]). The min_range and max_range values should be\n specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each\n value of the input by 255/6 and cast to quint8.\n\n If the output type was qint8 ([-128, 127]), the operation will additionally\n subtract each value by 128 prior to casting, so that the range of values aligns\n with the range of qint8.\n\n If the mode is 'MIN_FIRST', then this approach is used:\n\n ```\n num_discrete_values = 1 << (# of bits in T)\n range_adjust = num_discrete_values / (num_discrete_values - 1)\n range = (range_max - range_min) * range_adjust\n range_scale = num_discrete_values / range\n quantized = round(input * range_scale) - round(range_min * range_scale) +\n numeric_limits::min()\n quantized = max(quantized, numeric_limits::min())\n quantized = min(quantized, numeric_limits::max())\n ```\n\n The biggest difference between this and MIN_COMBINED is that the minimum range\n is rounded first, before it's subtracted from the rounded value. With\n MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing\n and dequantizing will introduce a larger and larger error.\n\n *SCALED mode Example*\n\n `SCALED` mode matches the quantization approach used in\n `QuantizeAndDequantize{V2|V3}`.\n\n If the mode is `SCALED`, the quantization is performed by multiplying each\n input value by a scaling_factor.\n The scaling_factor is determined from `min_range` and `max_range` to be as large\n as possible such that the range from `min_range` to `max_range` is representable\n within values of type T.\n\n ```c++\n\n const int min_T = std::numeric_limits::min();\n const int max_T = std::numeric_limits::max();\n const float max_float = std::numeric_limits::max();\n\n const float scale_factor_from_min_side =\n (min_T * min_range > 0) ? min_T / min_range : max_float;\n const float scale_factor_from_max_side =\n (max_T * max_range > 0) ? max_T / max_range : max_float;\n\n const float scale_factor = std::min(scale_factor_from_min_side,\n scale_factor_from_max_side);\n ```\n\n We next use the scale_factor to adjust min_range and max_range as follows:\n\n ```c++\n min_range = min_T / scale_factor;\n max_range = max_T / scale_factor;\n ```\n\n\n e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would\n compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8\n In this case, min_range would remain -10, but max_range would be adjusted to\n 127 / 12.8 = 9.921875\n\n So we will quantize input values in the range (-10, 9.921875) to (-128, 127).\n\n The input tensor can now be quantized by clipping values to the range\n `min_range` to `max_range`, then multiplying by scale_factor as follows:\n\n ```c++\n result = round(min(max_range, max(min_range, input)) * scale_factor)\n ```\n\n The adjusted `min_range` and `max_range` are returned as outputs 2 and 3 of\n this operation. These outputs should be used as the range for any further\n calculations.\n\n\n *narrow_range (bool) attribute*\n\n If true, we do not use the minimum quantized value.\n i.e. for int8 the quantized output, it would be restricted to the range\n -127..127 instead of the full -128..127 range.\n This is provided for compatibility with certain inference backends.\n (Only applies to SCALED mode)\n\n\n *axis (int) attribute*\n\n An optional `axis` attribute can specify a dimension index of the input tensor,\n such that quantization ranges will be calculated and applied separately for each\n slice of the tensor along that dimension. This is useful for per-channel\n quantization.\n\n If axis is specified, min_range and max_range\n\n if `axis`=None, per-tensor quantization is performed as normal.\n\n\n *ensure_minimum_range (float) attribute*\n\n Ensures the minimum quantization range is at least this value.\n The legacy default value for this is 0.01, but it is strongly suggested to\n set it to 0 for new uses.\n\n Args:\n input: A `Tensor` of type `float32`.\n min_range: A `Tensor` of type `float32`.\n The minimum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_min`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n max_range: A `Tensor` of type `float32`.\n The maximum value of the quantization range. This value may be adjusted by the\n op depending on other parameters. The adjusted value is written to `output_max`.\n If the `axis` attribute is specified, this must be a 1-D tensor whose size\n matches the `axis` dimension of the input and output tensors.\n T: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n mode: An optional `string` from: `\"MIN_COMBINED\", \"MIN_FIRST\", \"SCALED\"`. Defaults to `\"MIN_COMBINED\"`.\n round_mode: An optional `string` from: `\"HALF_AWAY_FROM_ZERO\", \"HALF_TO_EVEN\"`. Defaults to `\"HALF_AWAY_FROM_ZERO\"`.\n narrow_range: An optional `bool`. Defaults to `False`.\n axis: An optional `int`. Defaults to `-1`.\n ensure_minimum_range: An optional `float`. Defaults to `0.01`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `T`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.", "type": "API"}, {"name": "tf.raw_ops.QueueClose", "docs": "Closes the given queue.\n\n This operation signals that no more elements will be enqueued in the\n given queue. Subsequent Enqueue(Many) operations will fail.\n Subsequent Dequeue(Many) operations will continue to succeed if\n sufficient elements remain in the queue. Subsequent Dequeue(Many)\n operations that would block will fail immediately.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n cancel_pending_enqueues: An optional `bool`. Defaults to `False`.\n If true, all pending enqueue requests that are\n blocked on the given queue will be canceled.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Closes the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueCloseV2", "docs": "Closes the given queue.\n\n This operation signals that no more elements will be enqueued in the\n given queue. Subsequent Enqueue(Many) operations will fail.\n Subsequent Dequeue(Many) operations will continue to succeed if\n sufficient elements remain in the queue. Subsequent Dequeue(Many)\n operations that would block will fail immediately.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n cancel_pending_enqueues: An optional `bool`. Defaults to `False`.\n If true, all pending enqueue requests that are\n blocked on the given queue will be canceled.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Closes the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeue", "docs": "Dequeues a tuple of one or more tensors from the given queue.\n\n This operation has k outputs, where k is the number of components\n in the tuples stored in the given queue, and output i is the ith\n component of the dequeued tuple.\n\n N.B. If the queue is empty, this operation will block until an element\n has been dequeued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is empty, this operation will block for up to\n timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues a tuple of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeueMany", "docs": "Dequeues `n` tuples of one or more tensors from the given queue.\n\n If the queue is closed and there are fewer than `n` elements, then an\n OutOfRange error is returned.\n\n This operation concatenates queue-element component tensors along the\n 0th dimension to make a single component tensor. All of the components\n in the dequeued tuple will have size `n` in the 0th dimension.\n\n This operation has `k` outputs, where `k` is the number of components in\n the tuples stored in the given queue, and output `i` is the ith\n component of the dequeued tuple.\n\n N.B. If the queue is empty, this operation will block until `n` elements\n have been dequeued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n n: A `Tensor` of type `int32`. The number of tuples to dequeue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue has fewer than n elements, this operation\n will block for up to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues `n` tuples of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeueManyV2", "docs": "Dequeues `n` tuples of one or more tensors from the given queue.\n\n If the queue is closed and there are fewer than `n` elements, then an\n OutOfRange error is returned.\n\n This operation concatenates queue-element component tensors along the\n 0th dimension to make a single component tensor. All of the components\n in the dequeued tuple will have size `n` in the 0th dimension.\n\n This operation has `k` outputs, where `k` is the number of components in\n the tuples stored in the given queue, and output `i` is the ith\n component of the dequeued tuple.\n\n N.B. If the queue is empty, this operation will block until `n` elements\n have been dequeued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n n: A `Tensor` of type `int32`. The number of tuples to dequeue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue has fewer than n elements, this operation\n will block for up to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues `n` tuples of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeueUpTo", "docs": "Dequeues `n` tuples of one or more tensors from the given queue.\n\n This operation is not supported by all queues. If a queue does not support\n DequeueUpTo, then an Unimplemented error is returned.\n\n If the queue is closed and there are more than 0 but less than `n`\n elements remaining, then instead of returning an OutOfRange error like\n QueueDequeueMany, less than `n` elements are returned immediately. If\n the queue is closed and there are 0 elements left in the queue, then\n an OutOfRange error is returned just like in QueueDequeueMany.\n Otherwise the behavior is identical to QueueDequeueMany:\n\n This operation concatenates queue-element component tensors along the\n 0th dimension to make a single component tensor. All of the components\n in the dequeued tuple will have size `n` in the 0th dimension.\n\n This operation has k outputs, where `k` is the number of components in\n the tuples stored in the given queue, and output `i` is the ith\n component of the dequeued tuple.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n n: A `Tensor` of type `int32`. The number of tuples to dequeue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue has fewer than n elements, this operation\n will block for up to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues `n` tuples of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeueUpToV2", "docs": "Dequeues `n` tuples of one or more tensors from the given queue.\n\n This operation is not supported by all queues. If a queue does not support\n DequeueUpTo, then an Unimplemented error is returned.\n\n If the queue is closed and there are more than 0 but less than `n`\n elements remaining, then instead of returning an OutOfRange error like\n QueueDequeueMany, less than `n` elements are returned immediately. If\n the queue is closed and there are 0 elements left in the queue, then\n an OutOfRange error is returned just like in QueueDequeueMany.\n Otherwise the behavior is identical to QueueDequeueMany:\n\n This operation concatenates queue-element component tensors along the\n 0th dimension to make a single component tensor. All of the components\n in the dequeued tuple will have size n in the 0th dimension.\n\n This operation has `k` outputs, where `k` is the number of components in\n the tuples stored in the given queue, and output `i` is the ith\n component of the dequeued tuple.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n n: A `Tensor` of type `int32`. The number of tuples to dequeue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue has fewer than n elements, this operation\n will block for up to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues `n` tuples of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueDequeueV2", "docs": "Dequeues a tuple of one or more tensors from the given queue.\n\n This operation has k outputs, where k is the number of components\n in the tuples stored in the given queue, and output i is the ith\n component of the dequeued tuple.\n\n N.B. If the queue is empty, this operation will block until an element\n has been dequeued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a tuple.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is empty, this operation will block for up to\n timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `component_types`.\n ", "desc": "Dequeues a tuple of one or more tensors from the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueEnqueue", "docs": "Enqueues a tuple of one or more tensors in the given queue.\n\n The components input has k elements, which correspond to the components of\n tuples stored in the given queue.\n\n N.B. If the queue is full, this operation will block until the given\n element has been enqueued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n components: A list of `Tensor` objects.\n One or more tensors from which the enqueued tensors should be taken.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is full, this operation will block for up to\n timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueues a tuple of one or more tensors in the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueEnqueueMany", "docs": "Enqueues zero or more tuples of one or more tensors in the given queue.\n\n This operation slices each component tensor along the 0th dimension to\n make multiple queue elements. All of the tuple components must have the\n same size in the 0th dimension.\n\n The components input has k elements, which correspond to the components of\n tuples stored in the given queue.\n\n N.B. If the queue is full, this operation will block until the given\n elements have been enqueued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n components: A list of `Tensor` objects.\n One or more tensors from which the enqueued tensors should\n be taken.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is too full, this operation will block for up\n to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueues zero or more tuples of one or more tensors in the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueEnqueueManyV2", "docs": "Enqueues zero or more tuples of one or more tensors in the given queue.\n\n This operation slices each component tensor along the 0th dimension to\n make multiple queue elements. All of the tuple components must have the\n same size in the 0th dimension.\n\n The components input has k elements, which correspond to the components of\n tuples stored in the given queue.\n\n N.B. If the queue is full, this operation will block until the given\n elements have been enqueued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n components: A list of `Tensor` objects.\n One or more tensors from which the enqueued tensors should\n be taken.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is too full, this operation will block for up\n to timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueues zero or more tuples of one or more tensors in the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueEnqueueV2", "docs": "Enqueues a tuple of one or more tensors in the given queue.\n\n The components input has k elements, which correspond to the components of\n tuples stored in the given queue.\n\n N.B. If the queue is full, this operation will block until the given\n element has been enqueued (or 'timeout_ms' elapses, if specified).\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n components: A list of `Tensor` objects.\n One or more tensors from which the enqueued tensors should be taken.\n timeout_ms: An optional `int`. Defaults to `-1`.\n If the queue is full, this operation will block for up to\n timeout_ms milliseconds.\n Note: This option is not supported yet.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Enqueues a tuple of one or more tensors in the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueIsClosed", "docs": "Returns true if queue is closed.\n\n This operation returns true if the queue is closed and false if the queue\n is open.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns true if queue is closed.", "type": "API"}, {"name": "tf.raw_ops.QueueIsClosedV2", "docs": "Returns true if queue is closed.\n\n This operation returns true if the queue is closed and false if the queue\n is open.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Returns true if queue is closed.", "type": "API"}, {"name": "tf.raw_ops.QueueSize", "docs": "Computes the number of elements in the given queue.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a queue.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Computes the number of elements in the given queue.", "type": "API"}, {"name": "tf.raw_ops.QueueSizeV2", "docs": "Computes the number of elements in the given queue.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a queue.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Computes the number of elements in the given queue.", "type": "API"}, {"name": "tf.raw_ops.RaggedBincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n Outputs a vector with length `size` and the same dtype as `weights`. If\n `weights` are empty, then index `i` stores the number of times the value `i` is\n counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of\n the value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Values in `arr` outside of the range [0, size) are ignored.\n\n Args:\n splits: A `Tensor` of type `int64`. 1D int64 `Tensor`.\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2D int `Tensor`.\n size: A `Tensor`. Must have the same type as `values`.\n non-negative int scalar `Tensor`.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n is an int32, int64, float32, or float64 `Tensor` with the same\n shape as `input`, or a length-0 `Tensor`, in which case it acts as all weights\n equal to 1.\n binary_output: An optional `bool`. Defaults to `False`.\n bool; Whether the kernel should count the appearance or number of occurrences.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.raw_ops.RaggedCountSparseOutput", "docs": "Performs sparse-output bin counting for a ragged tensor input.\n\n Counts the number of times each value occurs in the input.\n\n Args:\n splits: A `Tensor` of type `int64`.\n Tensor containing the row splits of the ragged tensor to count.\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Tensor containing values of the sparse tensor to count.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n A Tensor of the same shape as indices containing per-index weight values.\n May also be the empty tensor if no weights are used.\n binary_output: A `bool`.\n Whether to output the number of occurrences of each value or 1.\n minlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Minimum value to count. Can be set to -1 for no minimum.\n maxlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Maximum value to count. Can be set to -1 for no maximum.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_dense_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `weights`.\n output_dense_shape: A `Tensor` of type `int64`.\n ", "desc": "Performs sparse-output bin counting for a ragged tensor input.", "type": "API"}, {"name": "tf.raw_ops.RaggedCross", "docs": "Generates a feature cross from a list of tensors, and returns it as a\nRaggedTensor. See `tf.ragged.cross` for more details.\n\n Args:\n ragged_values: A list of `Tensor` objects with types from: `int64`, `string`.\n The values tensor for each RaggedTensor input.\n ragged_row_splits: A list of `Tensor` objects with types from: `int32`, `int64`.\n The row_splits tensor for each RaggedTensor input.\n sparse_indices: A list of `Tensor` objects with type `int64`.\n The indices tensor for each SparseTensor input.\n sparse_values: A list of `Tensor` objects with types from: `int64`, `string`.\n The values tensor for each SparseTensor input.\n sparse_shape: A list with the same length as `sparse_indices` of `Tensor` objects with type `int64`.\n The dense_shape tensor for each SparseTensor input.\n dense_inputs: A list of `Tensor` objects with types from: `int64`, `string`.\n The tf.Tensor inputs.\n input_order: A `string`.\n String specifying the tensor type for each input. The `i`th character in\n this string specifies the type of the `i`th input, and is one of: 'R' (ragged),\n 'D' (dense), or 'S' (sparse). This attr is used to ensure that the crossed\n values are combined in the order of the inputs from the call to tf.ragged.cross.\n hashed_output: A `bool`.\n num_buckets: An `int` that is `>= 0`.\n hash_key: An `int`.\n out_values_type: A `tf.DType` from: `tf.int64, tf.string`.\n out_row_splits_type: A `tf.DType` from: `tf.int32, tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_values, output_row_splits).\n\n output_values: A `Tensor` of type `out_values_type`.\n output_row_splits: A `Tensor` of type `out_row_splits_type`.\n ", "desc": "Generates a feature cross from a list of tensors, and returns it as a", "type": "API"}, {"name": "tf.raw_ops.RaggedGather", "docs": "Gather ragged slices from `params` axis `0` according to `indices`.\n\n Outputs a `RaggedTensor` output composed from `output_dense_values` and\n `output_nested_splits`, such that:\n\n ```python\n output.shape = indices.shape + params.shape[1:]\n output.ragged_rank = indices.shape.ndims + params.ragged_rank\n output[i...j, d0...dn] = params[indices[i...j], d0...dn]\n ```\n\n where\n\n * `params =\n ragged.from_nested_row_splits(params_dense_values, params_nested_splits)`\n provides the values that should be gathered.\n * `indices` ia a dense tensor with dtype `int32` or `int64`, indicating which\n values should be gathered.\n * `output =\n ragged.from_nested_row_splits(output_dense_values, output_nested_splits)`\n is the output tensor.\n\n (Note: This c++ op is used to implement the higher-level python\n `tf.ragged.gather` op, which also supports ragged indices.)\n\n Args:\n params_nested_splits: A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`.\n The `nested_row_splits` tensors that define the row-partitioning for the\n `params` RaggedTensor input.\n params_dense_values: A `Tensor`.\n The `flat_values` for the `params` RaggedTensor. There was a terminology change\n at the python level from dense_values to flat_values, so dense_values is the\n deprecated name.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Indices in the outermost dimension of `params` of the values that should be\n gathered.\n OUTPUT_RAGGED_RANK: An `int` that is `>= 0`.\n The ragged rank of the output RaggedTensor. `output_nested_splits` will contain\n this number of `row_splits` tensors. This value should equal\n `indices.shape.ndims + params.ragged_rank - 1`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_nested_splits, output_dense_values).\n\n output_nested_splits: A list of `OUTPUT_RAGGED_RANK` `Tensor` objects with the same type as `params_nested_splits`.\n output_dense_values: A `Tensor`. Has the same type as `params_dense_values`.\n ", "desc": "Gather ragged slices from `params` axis `0` according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.RaggedRange", "docs": "Returns a `RaggedTensor` containing the specified sequences of numbers.\n\n \n Returns a `RaggedTensor` `result` composed from `rt_dense_values` and\n `rt_nested_splits`, such that\n `result[i] = range(starts[i], limits[i], deltas[i])`.\n\n ```python\n (rt_nested_splits, rt_dense_values) = ragged_range(\n starts=[2, 5, 8], limits=[3, 5, 12], deltas=1)\n result = tf.ragged.from_row_splits(rt_dense_values, rt_nested_splits)\n print(result)\n \n ```\n\n The input tensors `starts`, `limits`, and `deltas` may be scalars or vectors.\n The vector inputs must all have the same size. Scalar inputs are broadcast\n to match the size of the vector inputs.\n\n Args:\n starts: A `Tensor`. Must be one of the following types: `bfloat16`, `float32`, `float64`, `int32`, `int64`.\n The starts of each range.\n limits: A `Tensor`. Must have the same type as `starts`.\n The limits of each range.\n deltas: A `Tensor`. Must have the same type as `starts`.\n The deltas of each range.\n Tsplits: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (rt_nested_splits, rt_dense_values).\n\n rt_nested_splits: A `Tensor` of type `Tsplits`.\n rt_dense_values: A `Tensor`. Has the same type as `starts`.\n ", "desc": "Returns a `RaggedTensor` containing the specified sequences of numbers.", "type": "API"}, {"name": "tf.raw_ops.RaggedTensorFromVariant", "docs": "Decodes a `variant` Tensor into a `RaggedTensor`.\n\n Decodes the given `variant` Tensor and returns a `RaggedTensor`. The input\n could be a scalar, meaning it encodes a single `RaggedTensor` with ragged_rank\n `output_ragged_rank`. It could also have an arbitrary rank, in which case each\n element is decoded into a `RaggedTensor` with ragged_rank `input_ragged_rank`\n and these are then stacked according to the input shape to output a single\n `RaggedTensor` with ragged_rank `output_ragged_rank`. Each `variant` element in\n the input Tensor is decoded by retrieving from the element a 1-D `variant`\n Tensor with `input_ragged_rank + 1` Tensors, corresponding to the splits and\n values of the decoded `RaggedTensor`. If `input_ragged_rank` is -1, then it is\n inferred as `output_ragged_rank` - `rank(encoded_ragged)`. See\n `RaggedTensorToVariant` for the corresponding encoding logic.\n\n Args:\n encoded_ragged: A `Tensor` of type `variant`.\n A `variant` Tensor containing encoded `RaggedTensor`s.\n input_ragged_rank: An `int` that is `>= -1`.\n The ragged rank of each encoded `RaggedTensor` component in the input. If set to\n -1, this is inferred as `output_ragged_rank` - `rank(encoded_ragged)`\n output_ragged_rank: An `int` that is `>= 0`.\n The expected ragged rank of the output `RaggedTensor`. The following must hold:\n `output_ragged_rank = rank(encoded_ragged) + input_ragged_rank`.\n Tvalues: A `tf.DType`.\n Tsplits: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_nested_splits, output_dense_values).\n\n output_nested_splits: A list of `output_ragged_rank` `Tensor` objects with type `Tsplits`.\n output_dense_values: A `Tensor` of type `Tvalues`.\n ", "desc": "Decodes a `variant` Tensor into a `RaggedTensor`.", "type": "API"}, {"name": "tf.raw_ops.RaggedTensorToSparse", "docs": "Converts a `RaggedTensor` into a `SparseTensor` with the same values.\n\n input=ragged.from_nested_row_splits(rt_dense_values, rt_nested_splits)\n output=SparseTensor(indices=sparse_indices, values=sparse_values,\n dense_shape=sparse_dense_shape)\n\n Args:\n rt_nested_splits: A list of at least 1 `Tensor` objects with the same type in: `int32`, `int64`.\n The `row_splits` for the `RaggedTensor`.\n rt_dense_values: A `Tensor`. The `flat_values` for the `RaggedTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_dense_shape).\n\n sparse_indices: A `Tensor` of type `int64`.\n sparse_values: A `Tensor`. Has the same type as `rt_dense_values`.\n sparse_dense_shape: A `Tensor` of type `int64`.\n ", "desc": "Converts a `RaggedTensor` into a `SparseTensor` with the same values.", "type": "API"}, {"name": "tf.raw_ops.RaggedTensorToTensor", "docs": "Create a dense tensor from a ragged tensor, possibly altering its shape.\n\n The `ragged_to_dense` op creates a dense tensor from a list of row partition\n tensors, a value vector, and default values. If the shape is unspecified, the\n minimal shape required to contain all the elements in the ragged tensor (the\n natural shape) will be used. If some dimensions are left unspecified, then the\n size of the natural shape is used in that dimension.\n\n The default_value will be broadcast to the output shape. After that, the values\n from the ragged tensor overwrite the default values. Note that the default_value\n must have less dimensions than the value.\n\n The row partition tensors are in the order of the dimensions.\n At present, the types can be:\n * \"ROW_SPLITS\": the row_splits tensor from the ragged tensor.\n * \"VALUE_ROWIDS\": the value_rowids tensor from the ragged tensor.\n * \"FIRST_DIM_SIZE\": if value_rowids is used for the first dimension, then it\n is preceded by \"FIRST_DIM_SIZE\".\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int64`, `int32`.\n The desired shape of the output tensor. If left unspecified (empty),\n the minimal shape required to contain all the elements in the ragged tensor\n (the natural shape) will be used. If some dimensions are left unspecified, then\n the size of the natural shape is used in that dimension.\n\n Note that dense dimensions cannot be modified by the shape argument. Trying to\n change the size of a dense dimension will cause the op to fail.\n Examples:\n natural shape: [4, 5, 6]\n shape: -1\n output shape: [4, 5, 6]\n\n natural shape: [4, 5, 6]\n shape: [3, -1, 2]\n output shape: [3, 5, 2]\n\n natural shape: [4, 5, 6]\n shape: [3, 7, 2]\n output shape: [3, 7, 2]\n values: A `Tensor`.\n A 1D tensor representing the values of the ragged tensor.\n default_value: A `Tensor`. Must have the same type as `values`.\n The default_value when the shape is larger than the ragged tensor. The\n default_value is broadcast until it is the shape of the output tensor, and\n then overwritten by values in the ragged tensor. The default value must be\n compatible with this broadcast operation, and must have fewer dimensions than\n the value tensor.\n row_partition_tensors: A list of at least 1 `Tensor` objects with the same type in: `int64`, `int32`.\n row_partition_types: A list of `strings`.\n The types of the row partition tensors. At present, these can be:\n * \"ROW_SPLITS\": the row_splits tensor from the ragged tensor.\n * \"VALUE_ROWIDS\": the value_rowids tensor from the ragged tensor.\n * \"FIRST_DIM_SIZE\": if value_rowids is used for the first dimension, then it\n is preceeded by \"FIRST_DIM_SIZE\".\n The tensors are in the order of the dimensions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `values`.\n ", "desc": "Create a dense tensor from a ragged tensor, possibly altering its shape.", "type": "API"}, {"name": "tf.raw_ops.RaggedTensorToVariant", "docs": "Encodes a `RaggedTensor` into a `variant` Tensor.\n\n \n Encodes the given `RaggedTensor` and returns a `variant` Tensor. If\n `batched_input` is True, then input `RaggedTensor` is unbatched along the\n zero-th dimension, each component `RaggedTensor` is encoded into a scalar\n `variant` Tensor, and these are stacked to return a 1-D `variant` Tensor.\n If `batched_input` is False, then the input `RaggedTensor` is encoded as is and\n a scalar `variant` Tensor is returned. A `RaggedTensor` is encoded by first\n creating a 1-D `variant` Tensor with `ragged_rank + 1` elements, containing the\n splits and values Tensors of the `RaggedTensor`. Then the 1-D `variant` Tensor\n is wrapped in a scalar `variant` Tensor. See `RaggedTensorFromVariant` for the\n corresponding decoding logic.\n\n Args:\n rt_nested_splits: A list of `Tensor` objects with the same type in: `int32`, `int64`.\n A list of one or more Tensors representing the splits of the input\n `RaggedTensor`.\n rt_dense_values: A `Tensor`.\n A Tensor representing the values of the input `RaggedTensor`.\n batched_input: A `bool`.\n A `bool` denoting whether the input is a batched `RaggedTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Encodes a `RaggedTensor` into a `variant` Tensor.", "type": "API"}, {"name": "tf.raw_ops.RaggedTensorToVariantGradient", "docs": "Helper used to compute the gradient for `RaggedTensorToVariant`.\n\n Computes the gradient for the dense_values input to the RaggedTensorToVariant\n op, given the variant-encoded ragged gradients of the outputs, along with\n the outer row-splits and the shape of the dense-values that were provided as\n inputs to the RaggedTensorToVariant op.\n\n Args:\n encoded_ragged_grad: A `Tensor` of type `variant`.\n A `variant` Tensor containing encoded `RaggedTensor` gradients.\n row_splits: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Outermost row-splits that were used as input to the RaggedTensorToVariant op.\n dense_values_shape: A `Tensor` of type `int32`.\n Shape of the dense_values that was used as an input to the\n RaggedTensorToVariant op.\n Tvalues: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tvalues`.\n ", "desc": "Helper used to compute the gradient for `RaggedTensorToVariant`.", "type": "API"}, {"name": "tf.raw_ops.RandomCrop", "docs": "Randomly crop `image`.\n\n `size` is a 1-D int64 tensor with 2 elements representing the crop height and\n width. The values must be non negative.\n\n This Op picks a random location in `image` and crops a `height` by `width`\n rectangle from that location. The random location is picked so the cropped\n area will fit inside the original image.\n\n Args:\n image: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `float32`, `float64`.\n 3-D of shape `[height, width, channels]`.\n size: A `Tensor` of type `int64`.\n 1-D of length 2 containing: `crop_height`, `crop_width`..\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `image`.\n ", "desc": "Randomly crop `image`.", "type": "API"}, {"name": "tf.raw_ops.RandomDataset", "docs": "Creates a Dataset that returns pseudorandom numbers.\n\n Creates a Dataset that returns a stream of uniformly distributed\n pseudorandom 64-bit signed integers.\n\n In the TensorFlow Python API, you can instantiate this dataset via the\n class `tf.data.experimental.RandomDataset`.\n\n Instances of this dataset are also created as a result of the\n `hoist_random_uniform` static optimization. Whether this optimization is\n performed is determined by the `experimental_optimization.hoist_random_uniform`\n option of `tf.data.Options`.\n\n Args:\n seed: A `Tensor` of type `int64`.\n A scalar seed for the random number generator. If either seed or\n seed2 is set to be non-zero, the random number generator is seeded\n by the given seed. Otherwise, a random seed is used.\n seed2: A `Tensor` of type `int64`.\n A second scalar seed to avoid seed collision.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a Dataset that returns pseudorandom numbers.", "type": "API"}, {"name": "tf.raw_ops.RandomGamma", "docs": "Outputs random values from the Gamma distribution(s) described by alpha.\n\n This op uses the algorithm by Marsaglia et al. to acquire samples via\n transformation-rejection from pairs of uniform and normal random variables.\n See http://dl.acm.org/citation.cfm?id=358414\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D integer tensor. Shape of independent samples to draw from each\n distribution described by the shape parameters given in alpha.\n alpha: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n A tensor in which each scalar is a \"shape\" parameter describing the\n associated gamma distribution.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `alpha`.\n ", "desc": "Outputs random values from the Gamma distribution(s) described by alpha.", "type": "API"}, {"name": "tf.raw_ops.RandomGammaGrad", "docs": "Computes the derivative of a Gamma random sample w.r.t. `alpha`.\n\n Args:\n alpha: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n sample: A `Tensor`. Must have the same type as `alpha`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `alpha`.\n ", "desc": "Computes the derivative of a Gamma random sample w.r.t. `alpha`.", "type": "API"}, {"name": "tf.raw_ops.RandomPoisson", "docs": "Use RandomPoissonV2 instead.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n rate: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `rate`.\n ", "desc": "Use RandomPoissonV2 instead.", "type": "API"}, {"name": "tf.raw_ops.RandomPoissonV2", "docs": "Outputs random values from the Poisson distribution(s) described by rate.\n\n This op uses two algorithms, depending on rate. If rate >= 10, then\n the algorithm by Hormann is used to acquire samples via\n transformation-rejection.\n See http://www.sciencedirect.com/science/article/pii/0167668793909974.\n\n Otherwise, Knuth's algorithm is used to acquire samples via multiplying uniform\n random variables.\n See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer\n Programming, Volume 2. Addison Wesley\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D integer tensor. Shape of independent samples to draw from each\n distribution described by the shape parameters given in rate.\n rate: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n A tensor in which each scalar is a \"rate\" parameter describing the\n associated poisson distribution.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n dtype: An optional `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from the Poisson distribution(s) described by rate.", "type": "API"}, {"name": "tf.raw_ops.RandomShuffle", "docs": "Randomly shuffles a tensor along its first dimension.\n\n The tensor is shuffled along dimension 0, such that each `value[j]` is mapped\n to one and only one `output[i]`. For example, a mapping that might occur for a\n 3x2 tensor is:\n\n ```\n [[1, 2], [[5, 6],\n [3, 4], ==> [1, 2],\n [5, 6]] [3, 4]]\n ```\n\n Args:\n value: A `Tensor`. The tensor to be shuffled.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `value`.\n ", "desc": "Randomly shuffles a tensor along its first dimension.", "type": "API"}, {"name": "tf.raw_ops.RandomShuffleQueue", "docs": "A queue that randomizes the order of elements.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n min_after_dequeue: An optional `int`. Defaults to `0`.\n Dequeue will block unless there would be this\n many elements after the dequeue or the queue is closed. This\n ensures a minimum level of mixing of elements.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 is set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, a random seed is used.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A queue that randomizes the order of elements.", "type": "API"}, {"name": "tf.raw_ops.RandomShuffleQueueV2", "docs": "A queue that randomizes the order of elements.\n\n Args:\n component_types: A list of `tf.DTypes` that has length `>= 1`.\n The type of each component in a value.\n shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n The shape of each component in a value. The length of this attr must\n be either 0 or the same as the length of component_types. If the length of\n this attr is 0, the shapes of queue elements are not constrained, and\n only one element may be dequeued at a time.\n capacity: An optional `int`. Defaults to `-1`.\n The upper bound on the number of elements in this queue.\n Negative numbers mean no limit.\n min_after_dequeue: An optional `int`. Defaults to `0`.\n Dequeue will block unless there would be this\n many elements after the dequeue or the queue is closed. This\n ensures a minimum level of mixing of elements.\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 is set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, a random seed is used.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue will be shared under the given name\n across multiple sessions.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A queue that randomizes the order of elements.", "type": "API"}, {"name": "tf.raw_ops.RandomStandardNormal", "docs": "Outputs random values from a normal distribution.\n\n The generated values will have mean 0 and standard deviation 1.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n dtype: A `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`.\n The type of the output.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a normal distribution.", "type": "API"}, {"name": "tf.raw_ops.RandomUniform", "docs": "Outputs random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[0, 1)`. The\n lower bound 0 is included in the range, while the upper bound 1 is excluded.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n dtype: A `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`.\n The type of the output.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.RandomUniformInt", "docs": "Outputs random integers from a uniform distribution.\n\n The generated values are uniform integers in the range `[minval, maxval)`.\n The lower bound `minval` is included in the range, while the upper bound\n `maxval` is excluded.\n\n The random integers are slightly biased unless `maxval - minval` is an exact\n power of two. The bias is small for values of `maxval - minval` significantly\n smaller than the range of the output (either `2^32` or `2^64`).\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n minval: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D. Inclusive lower bound on the generated integers.\n maxval: A `Tensor`. Must have the same type as `minval`.\n 0-D. Exclusive upper bound on the generated integers.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `minval`.\n ", "desc": "Outputs random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.Range", "docs": "Creates a sequence of numbers.\n\n This operation creates a sequence of numbers that begins at `start` and\n extends by increments of `delta` up to but not including `limit`.\n\n For example:\n\n ```\n # 'start' is 3\n # 'limit' is 18\n # 'delta' is 3\n tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]\n ```\n\n Args:\n start: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint16`, `uint32`.\n 0-D (scalar). First entry in the sequence.\n limit: A `Tensor`. Must have the same type as `start`.\n 0-D (scalar). Upper limit of sequence, exclusive.\n delta: A `Tensor`. Must have the same type as `start`.\n 0-D (scalar). Optional. Default is 1. Number that increments `start`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `start`.\n ", "desc": "Creates a sequence of numbers.", "type": "API"}, {"name": "tf.raw_ops.RangeDataset", "docs": "Creates a dataset with a range of values. Corresponds to python's xrange.\n\n Args:\n start: A `Tensor` of type `int64`.\n corresponds to start in python's xrange().\n stop: A `Tensor` of type `int64`.\n corresponds to stop in python's xrange().\n step: A `Tensor` of type `int64`.\n corresponds to step in python's xrange().\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset with a range of values. Corresponds to python's xrange.", "type": "API"}, {"name": "tf.raw_ops.Rank", "docs": "Returns the rank of a tensor.\n\n This operation returns an integer representing the rank of `input`.\n\n For example:\n\n ```\n # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]\n # shape of tensor 't' is [2, 2, 3]\n rank(t) ==> 3\n ```\n\n **Note**: The rank of a tensor is not the same as the rank of a matrix. The rank\n of a tensor is the number of indices required to uniquely select each element\n of the tensor. Rank is also known as \"order\", \"degree\", or \"ndims.\"\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Returns the rank of a tensor.", "type": "API"}, {"name": "tf.raw_ops.ReaderNumRecordsProduced", "docs": "Returns the number of records this Reader has produced.\n\n This is the same as the number of ReaderRead executions that have\n succeeded.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the number of records this Reader has produced.", "type": "API"}, {"name": "tf.raw_ops.ReaderNumRecordsProducedV2", "docs": "Returns the number of records this Reader has produced.\n\n This is the same as the number of ReaderRead executions that have\n succeeded.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the number of records this Reader has produced.", "type": "API"}, {"name": "tf.raw_ops.ReaderNumWorkUnitsCompleted", "docs": "Returns the number of work units this Reader has finished processing.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the number of work units this Reader has finished processing.", "type": "API"}, {"name": "tf.raw_ops.ReaderNumWorkUnitsCompletedV2", "docs": "Returns the number of work units this Reader has finished processing.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns the number of work units this Reader has finished processing.", "type": "API"}, {"name": "tf.raw_ops.ReaderRead", "docs": "Returns the next record (key, value pair) produced by a Reader.\n\n Will dequeue from the input queue if necessary (e.g. when the\n Reader needs to start reading from a new file since it has finished\n with the previous file).\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n queue_handle: A `Tensor` of type mutable `string`.\n Handle to a Queue, with string work items.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, value).\n\n key: A `Tensor` of type `string`.\n value: A `Tensor` of type `string`.\n ", "desc": "Returns the next record (key, value pair) produced by a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReaderReadUpTo", "docs": "Returns up to `num_records` (key, value) pairs produced by a Reader.\n\n Will dequeue from the input queue if necessary (e.g. when the\n Reader needs to start reading from a new file since it has finished\n with the previous file).\n It may return less than `num_records` even before the last batch.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a `Reader`.\n queue_handle: A `Tensor` of type mutable `string`.\n Handle to a `Queue`, with string work items.\n num_records: A `Tensor` of type `int64`.\n number of records to read from `Reader`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (keys, values).\n\n keys: A `Tensor` of type `string`.\n values: A `Tensor` of type `string`.\n ", "desc": "Returns up to `num_records` (key, value) pairs produced by a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReaderReadUpToV2", "docs": "Returns up to `num_records` (key, value) pairs produced by a Reader.\n\n Will dequeue from the input queue if necessary (e.g. when the\n Reader needs to start reading from a new file since it has finished\n with the previous file).\n It may return less than `num_records` even before the last batch.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a `Reader`.\n queue_handle: A `Tensor` of type `resource`.\n Handle to a `Queue`, with string work items.\n num_records: A `Tensor` of type `int64`.\n number of records to read from `Reader`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (keys, values).\n\n keys: A `Tensor` of type `string`.\n values: A `Tensor` of type `string`.\n ", "desc": "Returns up to `num_records` (key, value) pairs produced by a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReaderReadV2", "docs": "Returns the next record (key, value pair) produced by a Reader.\n\n Will dequeue from the input queue if necessary (e.g. when the\n Reader needs to start reading from a new file since it has finished\n with the previous file).\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n queue_handle: A `Tensor` of type `resource`.\n Handle to a Queue, with string work items.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, value).\n\n key: A `Tensor` of type `string`.\n value: A `Tensor` of type `string`.\n ", "desc": "Returns the next record (key, value pair) produced by a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReaderReset", "docs": "Restore a Reader to its initial clean state.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Restore a Reader to its initial clean state.", "type": "API"}, {"name": "tf.raw_ops.ReaderResetV2", "docs": "Restore a Reader to its initial clean state.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Restore a Reader to its initial clean state.", "type": "API"}, {"name": "tf.raw_ops.ReaderRestoreState", "docs": "Restore a reader to a previously saved state.\n\n Not all Readers support being restored, so this can produce an\n Unimplemented error.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n state: A `Tensor` of type `string`.\n Result of a ReaderSerializeState of a Reader with type\n matching reader_handle.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Restore a reader to a previously saved state.", "type": "API"}, {"name": "tf.raw_ops.ReaderRestoreStateV2", "docs": "Restore a reader to a previously saved state.\n\n Not all Readers support being restored, so this can produce an\n Unimplemented error.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n state: A `Tensor` of type `string`.\n Result of a ReaderSerializeState of a Reader with type\n matching reader_handle.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Restore a reader to a previously saved state.", "type": "API"}, {"name": "tf.raw_ops.ReaderSerializeState", "docs": "Produce a string tensor that encodes the state of a Reader.\n\n Not all Readers support being serialized, so this can produce an\n Unimplemented error.\n\n Args:\n reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Produce a string tensor that encodes the state of a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReaderSerializeStateV2", "docs": "Produce a string tensor that encodes the state of a Reader.\n\n Not all Readers support being serialized, so this can produce an\n Unimplemented error.\n\n Args:\n reader_handle: A `Tensor` of type `resource`. Handle to a Reader.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Produce a string tensor that encodes the state of a Reader.", "type": "API"}, {"name": "tf.raw_ops.ReadFile", "docs": "Reads and outputs the entire contents of the input filename.\n\n Args:\n filename: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Reads and outputs the entire contents of the input filename.", "type": "API"}, {"name": "tf.raw_ops.ReadVariableOp", "docs": "Reads the value of a variable.\n\n The tensor returned by this operation is immutable.\n\n The value returned by this operation is guaranteed to be influenced by all the\n writes on which this operation depends directly or indirectly, and to not be\n influenced by any of the writes which depend directly or indirectly on this\n operation.\n\n Args:\n resource: A `Tensor` of type `resource`.\n handle to the resource in which to store the variable.\n dtype: A `tf.DType`. the dtype of the value.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Reads the value of a variable.", "type": "API"}, {"name": "tf.raw_ops.Real", "docs": "Returns the real part of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n type `float` that is the real part of each element in `input`. All elements in\n `input` must be complex numbers of the form \\\\(a + bj\\\\), where *a* is the real\n part returned by this operation and *b* is the imaginary part.\n\n For example:\n\n ```\n # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]\n tf.real(input) ==> [-2.25, 3.25]\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n Tout: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tout`.\n ", "desc": "Returns the real part of a complex number.", "type": "API"}, {"name": "tf.raw_ops.RealDiv", "docs": "Returns x / y element-wise for real types.\n\n If `x` and `y` are reals, this will return the floating-point division.\n\n *NOTE*: `Div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for real types.", "type": "API"}, {"name": "tf.raw_ops.RebatchDataset", "docs": "Creates a dataset that changes the batch size.\n\n Creates a dataset that changes the batch size of the dataset to current batch\n size // num_workers.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n num_replicas: A `Tensor` of type `int64`.\n A scalar representing the number of replicas to distribute this batch across. As\n a result of this transformation the current batch size would end up being\n divided by this parameter.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_fallback: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that changes the batch size.", "type": "API"}, {"name": "tf.raw_ops.RebatchDatasetV2", "docs": "Creates a dataset that changes the batch size.\n\n Creates a dataset that rebatches elements from `input_dataset` into new batch\n sizes.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n batch_sizes: A `Tensor` of type `int64`.\n A vector of integers representing the size of batches to produce. These values\n are cycled through in order.\n drop_remainder: A `Tensor` of type `bool`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that changes the batch size.", "type": "API"}, {"name": "tf.raw_ops.Reciprocal", "docs": "Computes the reciprocal of x element-wise.\n\n I.e., \\\\(y = 1 / x\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes the reciprocal of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.ReciprocalGrad", "docs": "Computes the gradient for the inverse of `x` wrt its input.\n\n Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy`\n is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient for the inverse of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.RecordInput", "docs": "Emits randomized records.\n\n Args:\n file_pattern: A `string`. Glob pattern for the data files.\n file_random_seed: An optional `int`. Defaults to `301`.\n Random seeds used to produce randomized records.\n file_shuffle_shift_ratio: An optional `float`. Defaults to `0`.\n Shifts the list of files after the list is randomly\n shuffled.\n file_buffer_size: An optional `int`. Defaults to `10000`.\n The randomization shuffling buffer.\n file_parallelism: An optional `int`. Defaults to `16`.\n How many sstables are opened and concurrently iterated over.\n batch_size: An optional `int`. Defaults to `32`. The batch size.\n compression_type: An optional `string`. Defaults to `\"\"`.\n The type of compression for the file. Currently ZLIB and\n GZIP are supported. Defaults to none.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Emits randomized records.", "type": "API"}, {"name": "tf.raw_ops.Recv", "docs": "Receives the named tensor from send_device on recv_device.\n\n Args:\n tensor_type: A `tf.DType`.\n tensor_name: A `string`. The name of the tensor to receive.\n send_device: A `string`. The name of the device sending the tensor.\n send_device_incarnation: An `int`. The current incarnation of send_device.\n recv_device: A `string`. The name of the device receiving the tensor.\n client_terminated: An optional `bool`. Defaults to `False`.\n If set to true, this indicates that the node was added\n to the graph as a result of a client-side feed or fetch of Tensor data,\n in which case the corresponding send or recv is expected to be managed\n locally by the caller.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `tensor_type`.\n ", "desc": "Receives the named tensor from send_device on recv_device.", "type": "API"}, {"name": "tf.raw_ops.RecvTPUEmbeddingActivations", "docs": "An op that receives embedding activations on the TPU.\n\n The TPU system performs the embedding lookups and aggregations specified by\n the arguments to TPUEmbeddingEnqueue(Integer/Sparse/SparseTensor)Batch. The\n results of these aggregations are visible to the Tensorflow Graph as the\n outputs of a RecvTPUEmbeddingActivations op. This op returns a list containing\n one Tensor of activations per table specified in the model. There can be at\n most one RecvTPUEmbeddingActivations op in the TPU graph.\n\n Args:\n num_outputs: An `int` that is `>= 1`.\n The number of output activation tensors, equal to the number of\n embedding tables in the model.\n config: A `string`. Serialized TPUEmbeddingConfiguration proto.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_outputs` `Tensor` objects with type `float32`.\n ", "desc": "An op that receives embedding activations on the TPU.", "type": "API"}, {"name": "tf.raw_ops.ReduceDataset", "docs": "Reduces the input dataset to a singleton using a reduce function.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n initial_state: A list of `Tensor` objects.\n A nested structure of tensors, representing the initial state of the\n transformation.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n A function that maps `(old_state, input_element)` to `new_state`. It must take\n two arguments and return a nested structures of tensors. The structure of\n `new_state` must match the structure of `initial_state`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n use_inter_op_parallelism: An optional `bool`. Defaults to `True`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Reduces the input dataset to a singleton using a reduce function.", "type": "API"}, {"name": "tf.raw_ops.ReduceJoin", "docs": "Joins a string Tensor across the given dimensions.\n\n Computes the string join across dimensions in the given string Tensor of shape\n `[\\\\(d_0, d_1, ..., d_{n-1}\\\\)]`. Returns a new Tensor created by joining the input\n strings with the given separator (default: empty string). Negative indices are\n counted backwards from the end, with `-1` being equivalent to `n - 1`. If\n indices are not specified, joins across all dimensions beginning from `n - 1`\n through `0`.\n\n For example:\n\n ```python\n # tensor `a` is [[\"a\", \"b\"], [\"c\", \"d\"]]\n tf.reduce_join(a, 0) ==> [\"ac\", \"bd\"]\n tf.reduce_join(a, 1) ==> [\"ab\", \"cd\"]\n tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> [\"ac\", \"bd\"]\n tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> [\"ab\", \"cd\"]\n tf.reduce_join(a, 0, keep_dims=True) ==> [[\"ac\", \"bd\"]]\n tf.reduce_join(a, 1, keep_dims=True) ==> [[\"ab\"], [\"cd\"]]\n tf.reduce_join(a, 0, separator=\".\") ==> [\"a.c\", \"b.d\"]\n tf.reduce_join(a, [0, 1]) ==> \"acbd\"\n tf.reduce_join(a, [1, 0]) ==> \"abcd\"\n tf.reduce_join(a, []) ==> [[\"a\", \"b\"], [\"c\", \"d\"]]\n tf.reduce_join(a) = tf.reduce_join(a, [1, 0]) ==> \"abcd\"\n ```\n\n Args:\n inputs: A `Tensor` of type `string`.\n The input to be joined. All reduced indices must have non-zero size.\n reduction_indices: A `Tensor` of type `int32`.\n The dimensions to reduce over. Dimensions are reduced in the\n order specified. Omitting `reduction_indices` is equivalent to passing\n `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported.\n keep_dims: An optional `bool`. Defaults to `False`.\n If `True`, retain reduced dimensions with length `1`.\n separator: An optional `string`. Defaults to `\"\"`.\n The separator to use when joining.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Joins a string Tensor across the given dimensions.", "type": "API"}, {"name": "tf.raw_ops.RefEnter", "docs": "Creates or finds a child frame, and makes `data` available to the child frame.\n\n The unique `frame_name` is used by the `Executor` to identify frames. If\n `is_constant` is true, `output` is a constant in the child frame; otherwise\n it may be changed in the child frame. At most `parallel_iterations` iterations\n are run in parallel in the child frame.\n\n Args:\n data: A mutable `Tensor`.\n The tensor to be made available to the child frame.\n frame_name: A `string`. The name of the child frame.\n is_constant: An optional `bool`. Defaults to `False`.\n If true, the output is constant within the child frame.\n parallel_iterations: An optional `int`. Defaults to `10`.\n The number of iterations allowed to run in parallel.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `data`.\n ", "desc": "Creates or finds a child frame, and makes `data` available to the child frame.", "type": "API"}, {"name": "tf.raw_ops.RefExit", "docs": "Exits the current frame to its parent frame.\n\n Exit makes its input `data` available to the parent frame.\n\n Args:\n data: A mutable `Tensor`.\n The tensor to be made available to the parent frame.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `data`.\n ", "desc": "Exits the current frame to its parent frame.", "type": "API"}, {"name": "tf.raw_ops.RefIdentity", "docs": "Return the same ref tensor as the input ref tensor.\n\n Args:\n input: A mutable `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `input`.\n ", "desc": "Return the same ref tensor as the input ref tensor.", "type": "API"}, {"name": "tf.raw_ops.RefMerge", "docs": "Forwards the value of an available tensor from `inputs` to `output`.\n\n `Merge` waits for at least one of the tensors in `inputs` to become available.\n It is usually combined with `Switch` to implement branching.\n\n `Merge` forwards the first tensor for become available to `output`, and sets\n `value_index` to its index in `inputs`.\n\n Args:\n inputs: A list of at least 1 mutable `Tensor` objects with the same type.\n The input tensors, exactly one of which will become available.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, value_index).\n\n output: A mutable `Tensor`. Has the same type as `inputs`.\n value_index: A `Tensor` of type `int32`.\n ", "desc": "Forwards the value of an available tensor from `inputs` to `output`.", "type": "API"}, {"name": "tf.raw_ops.RefNextIteration", "docs": "Makes its input available to the next iteration.\n\n Args:\n data: A mutable `Tensor`.\n The tensor to be made available to the next iteration.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `data`.\n ", "desc": "Makes its input available to the next iteration.", "type": "API"}, {"name": "tf.raw_ops.RefSelect", "docs": "Forwards the `index`th element of `inputs` to `output`.\n\n Args:\n index: A `Tensor` of type `int32`.\n A scalar that determines the input that gets selected.\n inputs: A list of at least 1 mutable `Tensor` objects with the same type.\n A list of ref tensors, one of which will be forwarded to `output`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `inputs`.\n ", "desc": "Forwards the `index`th element of `inputs` to `output`.", "type": "API"}, {"name": "tf.raw_ops.RefSwitch", "docs": "Forwards the ref tensor `data` to the output port determined by `pred`.\n\n If `pred` is true, the `data` input is forwarded to `output_true`. Otherwise,\n the data goes to `output_false`.\n\n See also `Switch` and `Merge`.\n\n Args:\n data: A mutable `Tensor`.\n The ref tensor to be forwarded to the appropriate output.\n pred: A `Tensor` of type `bool`.\n A scalar that specifies which output port will receive data.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_false, output_true).\n\n output_false: A mutable `Tensor`. Has the same type as `data`.\n output_true: A mutable `Tensor`. Has the same type as `data`.\n ", "desc": "Forwards the ref tensor `data` to the output port determined by `pred`.", "type": "API"}, {"name": "tf.raw_ops.RegexFullMatch", "docs": "Check if the input matches the regex pattern.\n\n The input is a string tensor of any shape. The pattern is a scalar\n string tensor which is applied to every element of the input tensor.\n The boolean values (True or False) of the output tensor indicate\n if the input matches the regex pattern provided.\n\n The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Examples:\n\n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*lib$\")\n \n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*TF$\")\n \n\n Args:\n input: A `Tensor` of type `string`.\n A string tensor of the text to be processed.\n pattern: A `Tensor` of type `string`.\n A scalar string tensor containing the regular expression to match the input.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Check if the input matches the regex pattern.", "type": "API"}, {"name": "tf.raw_ops.RegexReplace", "docs": "Replaces matches of the `pattern` regular expression in `input` with the\nreplacement string provided in `rewrite`.\n\n It follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Args:\n input: A `Tensor` of type `string`. The text to be processed.\n pattern: A `Tensor` of type `string`.\n The regular expression to be matched in the `input` strings.\n rewrite: A `Tensor` of type `string`.\n The rewrite string to be substituted for the `pattern` expression where it is\n matched in the `input` strings.\n replace_global: An optional `bool`. Defaults to `True`.\n If True, the replacement is global (that is, all matches of the `pattern` regular\n expression in each input string are rewritten), otherwise the `rewrite`\n substitution is only made for the first `pattern` match.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Replaces matches of the `pattern` regular expression in `input` with the", "type": "API"}, {"name": "tf.raw_ops.RegisterDataset", "docs": "Registers a dataset with the tf.data service.\n\n Args:\n dataset: A `Tensor` of type `variant`.\n address: A `Tensor` of type `string`.\n protocol: A `Tensor` of type `string`.\n external_state_policy: An `int`.\n element_spec: An optional `string`. Defaults to `\"\"`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Registers a dataset with the tf.data service.", "type": "API"}, {"name": "tf.raw_ops.Relu", "docs": "Computes rectified linear: `max(features, 0)`.\n\n See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)\n Example usage:\n >>> tf.nn.relu([-2., 0., 3.]).numpy()\n array([0., 0., 3.], dtype=float32)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `qint8`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes rectified linear: `max(features, 0)`.", "type": "API"}, {"name": "tf.raw_ops.Relu6", "docs": "Computes rectified linear 6: `min(max(features, 0), 6)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes rectified linear 6: `min(max(features, 0), 6)`.", "type": "API"}, {"name": "tf.raw_ops.Relu6Grad", "docs": "Computes rectified linear 6 gradients for a Relu6 operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The backpropagated gradients to the corresponding Relu6 operation.\n features: A `Tensor`. Must have the same type as `gradients`.\n The features passed as input to the corresponding Relu6 operation, or\n its output; using either one produces the same result.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes rectified linear 6 gradients for a Relu6 operation.", "type": "API"}, {"name": "tf.raw_ops.ReluGrad", "docs": "Computes rectified linear gradients for a Relu operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n The backpropagated gradients to the corresponding Relu operation.\n features: A `Tensor`. Must have the same type as `gradients`.\n The features passed as input to the corresponding Relu operation, OR\n the outputs of that operation (both work equivalently).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes rectified linear gradients for a Relu operation.", "type": "API"}, {"name": "tf.raw_ops.RemoteCall", "docs": "Runs function `f` on a remote device indicated by `target`.\n\n Args:\n target: A `Tensor` of type `string`.\n A fully specified device name where we want to run the function.\n args: A list of `Tensor` objects. A list of arguments for the function.\n Tout: A list of `tf.DTypes` that has length `>= 1`.\n The type list for the return values.\n f: A function decorated with @Defun. The function to run remotely.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Runs function `f` on a remote device indicated by `target`.", "type": "API"}, {"name": "tf.raw_ops.RepeatDataset", "docs": "Creates a dataset that emits the outputs of `input_dataset` `count` times.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n count: A `Tensor` of type `int64`.\n A scalar representing the number of times that `input_dataset` should\n be repeated. A value of `-1` indicates that it should be repeated infinitely.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits the outputs of `input_dataset` `count` times.", "type": "API"}, {"name": "tf.raw_ops.RequantizationRange", "docs": "Computes a range that covers the actual values present in a quantized tensor.\n\n Given a quantized tensor described by `(input, input_min, input_max)`, outputs a\n range that covers the actual values present in that tensor. This op is typically\n used to produce the `requested_output_min` and `requested_output_max` for\n `Requantize`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n input_min: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n input_max: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_min, output_max).\n\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Computes a range that covers the actual values present in a quantized tensor.", "type": "API"}, {"name": "tf.raw_ops.RequantizationRangePerChannel", "docs": "Computes requantization range per channel.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n input_min: A `Tensor` of type `float32`.\n The minimum value of the input tensor\n input_max: A `Tensor` of type `float32`.\n The maximum value of the input tensor.\n clip_value_max: A `float`.\n The maximum value of the output that needs to be clipped.\n Example: set this to 6 for Relu6.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_min, output_max).\n\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Computes requantization range per channel.", "type": "API"}, {"name": "tf.raw_ops.Requantize", "docs": "Converts the quantized `input` tensor into a lower-precision `output`.\n\n Converts the quantized `input` tensor into a lower-precision `output`, using the\n output range specified with `requested_output_min` and `requested_output_max`.\n\n `[input_min, input_max]` are scalar floats that specify the range for the float\n interpretation of the `input` data. For example, if `input_min` is -1.0f and\n `input_max` is 1.0f, and we are dealing with `quint16` quantized data, then a 0\n value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n input_min: A `Tensor` of type `float32`.\n The float value that the minimum quantized input value represents.\n input_max: A `Tensor` of type `float32`.\n The float value that the maximum quantized input value represents.\n requested_output_min: A `Tensor` of type `float32`.\n The float value that the minimum quantized output value represents.\n requested_output_max: A `Tensor` of type `float32`.\n The float value that the maximum quantized output value represents.\n out_type: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`.\n The type of the output. Should be a lower bit depth than Tinput.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `out_type`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Converts the quantized `input` tensor into a lower-precision `output`.", "type": "API"}, {"name": "tf.raw_ops.RequantizePerChannel", "docs": "Requantizes input with min and max values known per channel.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`.\n The original input tensor.\n input_min: A `Tensor` of type `float32`.\n The minimum value of the input tensor\n input_max: A `Tensor` of type `float32`.\n The maximum value of the input tensor.\n requested_output_min: A `Tensor` of type `float32`.\n The minimum value of the output tensor requested.\n requested_output_max: A `Tensor` of type `float32`.\n The maximum value of the output tensor requested.\n out_type: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. Defaults to `tf.quint8`.\n The quantized type of output tensor that needs to be converted.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output, output_min, output_max).\n\n output: A `Tensor` of type `out_type`.\n output_min: A `Tensor` of type `float32`.\n output_max: A `Tensor` of type `float32`.\n ", "desc": "Requantizes input with min and max values known per channel.", "type": "API"}, {"name": "tf.raw_ops.Reshape", "docs": "Reshapes a tensor.\n\n Given `tensor`, this operation returns a tensor that has the same values\n as `tensor` with shape `shape`.\n\n If one component of 1-D tensor `shape` is the special value -1, the size of that\n dimension is computed so that the total size remains constant. In particular, a\n `shape` of `[-1]` flattens into 1-D. At most one component of `shape` may be\n unknown.\n\n The `shape` must be 1-D and the operation returns a tensor with shape\n `shape` filled with the values of `tensor`. In this case, the number of elements\n implied by `shape` must be the same as the number of elements in `tensor`.\n\n It is an error if `shape` is not 1-D.\n\n For example:\n\n ```\n # tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]\n # tensor 't' has shape [9]\n reshape(t, [3, 3]) ==> [[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]]\n\n # tensor 't' is [[[1, 1], [2, 2]],\n # [[3, 3], [4, 4]]]\n # tensor 't' has shape [2, 2, 2]\n reshape(t, [2, 4]) ==> [[1, 1, 2, 2],\n [3, 3, 4, 4]]\n\n # tensor 't' is [[[1, 1, 1],\n # [2, 2, 2]],\n # [[3, 3, 3],\n # [4, 4, 4]],\n # [[5, 5, 5],\n # [6, 6, 6]]]\n # tensor 't' has shape [3, 2, 3]\n # pass '[-1]' to flatten 't'\n reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]\n\n # -1 can also be used to infer the shape\n\n # -1 is inferred to be 9:\n reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],\n [4, 4, 4, 5, 5, 5, 6, 6, 6]]\n # -1 is inferred to be 2:\n reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],\n [4, 4, 4, 5, 5, 5, 6, 6, 6]]\n # -1 is inferred to be 3:\n reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]],\n [[4, 4, 4],\n [5, 5, 5],\n [6, 6, 6]]]\n\n # tensor 't' is [7]\n # shape `[]` reshapes to a scalar\n reshape(t, []) ==> 7\n ```\n\n Args:\n tensor: A `Tensor`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Defines the shape of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reshapes a tensor.", "type": "API"}, {"name": "tf.raw_ops.ResizeArea", "docs": "Resize `images` to `size` using area interpolation.\n\n Input images can be of different types but output images are always float.\n\n The range of pixel values for the output image might be slightly different\n from the range for the input image because of limited numerical precision.\n To guarantee an output range, for example `[0.0, 1.0]`, apply\n `tf.clip_by_value` to the output.\n\n Each output pixel is computed by first transforming the pixel's footprint into\n the input tensor and then averaging the pixels that intersect the footprint. An\n input pixel's contribution to the average is weighted by the fraction of its\n area that intersects the footprint. This is the same as OpenCV's INTER_AREA.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Resize `images` to `size` using area interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeBicubic", "docs": "Resize `images` to `size` using bicubic interpolation.\n\n Input images can be of different types but output images are always float.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Resize `images` to `size` using bicubic interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeBicubicGrad", "docs": "Computes the gradient of bicubic interpolation.\n\n Args:\n grads: A `Tensor` of type `float32`.\n 4-D with shape `[batch, height, width, channels]`.\n original_image: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n 4-D with shape `[batch, orig_height, orig_width, channels]`,\n The image tensor that was resized.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and grad tensors are\n aligned. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `original_image`.\n ", "desc": "Computes the gradient of bicubic interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeBilinear", "docs": "Resize `images` to `size` using bilinear interpolation.\n\n Input images can be of different types but output images are always float.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Resize `images` to `size` using bilinear interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeBilinearGrad", "docs": "Computes the gradient of bilinear interpolation.\n\n Args:\n grads: A `Tensor` of type `float32`.\n 4-D with shape `[batch, height, width, channels]`.\n original_image: A `Tensor`. Must be one of the following types: `float32`, `bfloat16`, `half`, `float64`.\n 4-D with shape `[batch, orig_height, orig_width, channels]`,\n The image tensor that was resized.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and grad tensors are\n aligned. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `original_image`.\n ", "desc": "Computes the gradient of bilinear interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeNearestNeighbor", "docs": "Resize `images` to `size` using nearest neighbor interpolation.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The\n new size for the images.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and output tensors are\n aligned, preserving the values at the corner pixels. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Resize `images` to `size` using nearest neighbor interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResizeNearestNeighborGrad", "docs": "Computes the gradient of nearest neighbor interpolation.\n\n Args:\n grads: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `half`, `float32`, `float64`, `bfloat16`.\n 4-D with shape `[batch, height, width, channels]`.\n size: A 1-D int32 Tensor of 2 elements: `orig_height, orig_width`. The\n original input size.\n align_corners: An optional `bool`. Defaults to `False`.\n If true, the centers of the 4 corner pixels of the input and grad tensors are\n aligned. Defaults to false.\n half_pixel_centers: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grads`.\n ", "desc": "Computes the gradient of nearest neighbor interpolation.", "type": "API"}, {"name": "tf.raw_ops.ResourceAccumulatorApplyGradient", "docs": "Applies a gradient to a given accumulator.\n\n Does not add if local_step is lesser than the accumulator's global_step.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a accumulator.\n local_step: A `Tensor` of type `int64`.\n The local_step value at which the gradient was computed.\n gradient: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of the gradient to be accumulated.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies a gradient to a given accumulator.", "type": "API"}, {"name": "tf.raw_ops.ResourceAccumulatorNumAccumulated", "docs": "Returns the number of gradients aggregated in the given accumulators.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to an accumulator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Returns the number of gradients aggregated in the given accumulators.", "type": "API"}, {"name": "tf.raw_ops.ResourceAccumulatorSetGlobalStep", "docs": "Updates the accumulator with a new value for global_step.\n\n Logs warning if the accumulator's value is already higher than\n new_global_step.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to an accumulator.\n new_global_step: A `Tensor` of type `int64`.\n The new global_step value to set.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Updates the accumulator with a new value for global_step.", "type": "API"}, {"name": "tf.raw_ops.ResourceAccumulatorTakeGradient", "docs": "Extracts the average gradient in the given ConditionalAccumulator.\n\n The op blocks until sufficient (i.e., more than num_required)\n gradients have been accumulated. If the accumulator has already\n aggregated more than num_required gradients, it returns the average of\n the accumulated gradients. Also automatically increments the recorded\n global_step in the accumulator by 1, and resets the aggregate to 0.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to an accumulator.\n num_required: A `Tensor` of type `int32`.\n Number of gradients required before we return an aggregate.\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The data type of accumulated gradients. Needs to correspond to the type\n of the accumulator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Extracts the average gradient in the given ConditionalAccumulator.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdadelta", "docs": "Update '*var' according to the adadelta scheme.\n\n accum = rho() * accum + (1 - rho()) * grad.square();\n update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad;\n update_accum = rho() * update_accum + (1 - rho()) * update.square();\n var -= update;\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n accum_update: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var, accum and update_accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the adadelta scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdagrad", "docs": "Update '*var' according to the adagrad scheme.\n\n accum += grad * grad\n var -= lr * grad * (1 / sqrt(accum))\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdagradDA", "docs": "Update '*var' according to the proximal adagrad scheme.\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n gradient_accumulator: A `Tensor` of type `resource`.\n Should be from a Variable().\n gradient_squared_accumulator: A `Tensor` of type `resource`.\n Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n lr: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 regularization. Must be a scalar.\n global_step: A `Tensor` of type `int64`.\n Training step number. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the proximal adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdagradV2", "docs": "Update '*var' according to the adagrad scheme.\n\n accum += grad * grad\n var -= lr * grad * (1 / (sqrt(accum) + epsilon))\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdam", "docs": "Update '*var' according to the Adam algorithm.\n\n $$\\text{lr}_t := \\mathrm{lr} \\cdot \\frac{\\sqrt{1 - \\beta_2^t}}{1 - \\beta_1^t}$$\n $$m_t := \\beta_1 \\cdot m_{t-1} + (1 - \\beta_1) \\cdot g$$\n $$v_t := \\beta_2 \\cdot v_{t-1} + (1 - \\beta_2) \\cdot g^2$$\n $$\\text{var} := \\begin{cases} \\text{var} - (m_t \\beta_1 + g \\cdot (1 - \\beta_1))\\cdot\\text{lr}_t/(\\sqrt{v_t} + \\epsilon), &\\text{if use_nesterov}\\\\\\\\ \\text{var} - m_t \\cdot \\text{lr}_t /(\\sqrt{v_t} + \\epsilon), &\\text{otherwise} \\end{cases}$$\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n m: A `Tensor` of type `resource`. Should be from a Variable().\n v: A `Tensor` of type `resource`. Should be from a Variable().\n beta1_power: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Must be a scalar.\n beta2_power: A `Tensor`. Must have the same type as `beta1_power`.\n Must be a scalar.\n lr: A `Tensor`. Must have the same type as `beta1_power`.\n Scaling factor. Must be a scalar.\n beta1: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n beta2: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `beta1_power`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `beta1_power`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, m, and v tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, uses the nesterov update.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the Adam algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdaMax", "docs": "Update '*var' according to the AdaMax algorithm.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n v_t <- max(beta2 * v_{t-1}, abs(g))\n variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n m: A `Tensor` of type `resource`. Should be from a Variable().\n v: A `Tensor` of type `resource`. Should be from a Variable().\n beta1_power: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Must be a scalar.\n lr: A `Tensor`. Must have the same type as `beta1_power`.\n Scaling factor. Must be a scalar.\n beta1: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n beta2: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `beta1_power`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `beta1_power`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, m, and v tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the AdaMax algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAdamWithAmsgrad", "docs": "Update '*var' according to the Adam algorithm.\n\n $$\\text{lr}_t := \\mathrm{learning_rate} * \\sqrt{1 - \\beta_2^t} / (1 - \\beta_1^t)$$\n $$m_t := \\beta_1 * m_{t-1} + (1 - \\beta_1) * g$$\n $$v_t := \\beta_2 * v_{t-1} + (1 - \\beta_2) * g * g$$\n $$\\hat{v}_t := max{\\hat{v}_{t-1}, v_t}$$\n $$\\text{variable} := \\text{variable} - \\text{lr}_t * m_t / (\\sqrt{\\hat{v}_t} + \\epsilon)$$\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n m: A `Tensor` of type `resource`. Should be from a Variable().\n v: A `Tensor` of type `resource`. Should be from a Variable().\n vhat: A `Tensor` of type `resource`. Should be from a Variable().\n beta1_power: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Must be a scalar.\n beta2_power: A `Tensor`. Must have the same type as `beta1_power`.\n Must be a scalar.\n lr: A `Tensor`. Must have the same type as `beta1_power`.\n Scaling factor. Must be a scalar.\n beta1: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n beta2: A `Tensor`. Must have the same type as `beta1_power`.\n Momentum factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `beta1_power`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `beta1_power`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, m, and v tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the Adam algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyAddSign", "docs": "Update '*var' according to the AddSign update.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n update <- (alpha + sign_decay * sign(g) *sign(m)) * g\n variable <- variable - lr_t * update\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n m: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n alpha: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n sign_decay: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n beta: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and m tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the AddSign update.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyCenteredRMSProp", "docs": "Update '*var' according to the centered RMSProp algorithm.\n\n The centered RMSProp algorithm uses an estimate of the centered second moment\n (i.e., the variance) for normalization, as opposed to regular RMSProp, which\n uses the (uncentered) second moment. This often helps with training, but is\n slightly more expensive in terms of computation and memory.\n\n Note that in dense implementation of this algorithm, mg, ms, and mom will\n update even if the grad is zero, but in this sparse implementation, mg, ms,\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n mean_grad = decay * mean_grad + (1-decay) * gradient\n\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)\n\n mg <- rho * mg_{t-1} + (1-rho) * grad\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)\n var <- var - mom\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n mg: A `Tensor` of type `resource`. Should be from a Variable().\n ms: A `Tensor` of type `resource`. Should be from a Variable().\n mom: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `lr`.\n Momentum Scale. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, mg, ms, and mom tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the centered RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyFtrl", "docs": "Update '*var' according to the Ftrl-proximal scheme.\n\n accum_new = accum + grad * grad\n linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n linear: A `Tensor` of type `resource`. Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n lr: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 regularization. Must be a scalar.\n lr_power: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyFtrlV2", "docs": "Update '*var' according to the Ftrl-proximal scheme.\n\n accum_new = accum + grad * grad\n grad_with_shrinkage = grad + 2 * l2_shrinkage * var\n linear += grad_with_shrinkage +\n (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n linear: A `Tensor` of type `resource`. Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n lr: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 shrinkage regularization. Must be a scalar.\n l2_shrinkage: A `Tensor`. Must have the same type as `grad`.\n lr_power: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyGradientDescent", "docs": "Update '*var' by subtracting 'alpha' * 'delta' from it.\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n alpha: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n delta: A `Tensor`. Must have the same type as `alpha`. The change.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' by subtracting 'alpha' * 'delta' from it.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyKerasMomentum", "docs": "Update '*var' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n accum = accum * momentum - lr * grad\n var += accum\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n momentum: A `Tensor`. Must have the same type as `lr`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var + momentum * accum, so in the end, the var you get is actually\n var + momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyMomentum", "docs": "Update '*var' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n accum = accum * momentum + grad\n var -= lr * accum\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n momentum: A `Tensor`. Must have the same type as `lr`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var - lr * momentum * accum, so in the end, the var you get is actually\n var - lr * momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyPowerSign", "docs": "Update '*var' according to the AddSign update.\n\n m_t <- beta1 * m_{t-1} + (1 - beta1) * g\n update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g\n variable <- variable - lr_t * update\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n m: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n logbase: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n sign_decay: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n beta: A `Tensor`. Must have the same type as `lr`. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and m tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the AddSign update.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyProximalAdagrad", "docs": "Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.\n\n accum += grad * grad\n prox_v = var - lr * grad * (1 / sqrt(accum))\n var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `lr`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `lr`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyProximalGradientDescent", "docs": "Update '*var' as FOBOS algorithm with fixed learning rate.\n\n prox_v = var - alpha * delta\n var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n alpha: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `alpha`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `alpha`.\n L2 regularization. Must be a scalar.\n delta: A `Tensor`. Must have the same type as `alpha`. The change.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' as FOBOS algorithm with fixed learning rate.", "type": "API"}, {"name": "tf.raw_ops.ResourceApplyRMSProp", "docs": "Update '*var' according to the RMSProp algorithm.\n\n Note that in dense implementation of this algorithm, ms and mom will\n update even if the grad is zero, but in this sparse implementation, ms\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon)\n\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)\n var <- var - mom\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n ms: A `Tensor` of type `resource`. Should be from a Variable().\n mom: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `lr`.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, ms, and mom tensors is protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceConditionalAccumulator", "docs": "A conditional accumulator for aggregating gradients.\n\n The accumulator accepts gradients marked with local_step greater or\n equal to the most recent global_step known to the accumulator. The\n average can be extracted from the accumulator, provided sufficient\n gradients have been accumulated. Extracting the average automatically\n resets the aggregate to 0, and increments the global_step recorded by\n the accumulator.\n This is a resource version of ConditionalAccumulator that will work in TF2.0\n with tf.cond version 2.\n\n Args:\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The type of the value being accumulated.\n shape: A `tf.TensorShape` or list of `ints`.\n The shape of the values, can be [], in which case shape is unknown.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator will be shared under the\n given name across multiple sessions.\n reduction_type: An optional `string` from: `\"MEAN\", \"SUM\"`. Defaults to `\"MEAN\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A conditional accumulator for aggregating gradients.", "type": "API"}, {"name": "tf.raw_ops.ResourceCountUpTo", "docs": "Increments variable pointed to by 'resource' until it reaches 'limit'.\n\n Args:\n resource: A `Tensor` of type `resource`.\n Should be from a scalar `Variable` node.\n limit: An `int`.\n If incrementing ref would bring it above limit, instead generates an\n 'OutOfRange' error.\n T: A `tf.DType` from: `tf.int32, tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `T`.\n ", "desc": "Increments variable pointed to by 'resource' until it reaches 'limit'.", "type": "API"}, {"name": "tf.raw_ops.ResourceGather", "docs": "Gather slices from the variable pointed to by `resource` according to `indices`.\n\n `indices` must be an integer tensor of any dimension (usually 0-D or 1-D).\n Produces an output tensor with shape `indices.shape + params.shape[1:]` where:\n\n ```python\n # Scalar indices\n output[:, ..., :] = params[indices, :, ... :]\n\n # Vector indices\n output[i, :, ..., :] = params[indices[i], :, ... :]\n\n # Higher rank indices\n output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]\n ```\n\n Args:\n resource: A `Tensor` of type `resource`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n dtype: A `tf.DType`.\n batch_dims: An optional `int`. Defaults to `0`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Gather slices from the variable pointed to by `resource` according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.ResourceGatherNd", "docs": "TODO: add doc.\n\n Args:\n resource: A `Tensor` of type `resource`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterAdd", "docs": "Adds sparse updates to the variable referenced by `resource`.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] += updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] += updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Adds sparse updates to the variable referenced by `resource`.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterDiv", "docs": "Divides sparse updates into the variable referenced by `resource`.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] /= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] /= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions multiply.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Divides sparse updates into the variable referenced by `resource`.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterMax", "docs": "Reduces sparse updates into the variable referenced by `resource` using the `max` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = max(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions are combined.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Reduces sparse updates into the variable referenced by `resource` using the `max` operation.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterMin", "docs": "Reduces sparse updates into the variable referenced by `resource` using the `min` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = min(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions are combined.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Reduces sparse updates into the variable referenced by `resource` using the `min` operation.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterMul", "docs": "Multiplies sparse updates into the variable referenced by `resource`.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] *= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] *= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions multiply.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Multiplies sparse updates into the variable referenced by `resource`.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterNdAdd", "docs": "Applies sparse addition to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to add 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that addition would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n add = tf.scatter_nd_add(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(add)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 13, 3, 14, 14, 6, 7, 20]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A `Tensor` of type `resource`.\n A resource handle. Must be from a VarHandleOp.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. A Tensor. Must have the same type as ref. A tensor of\n values to add to ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies sparse addition to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterNdMax", "docs": "TODO: add doc.\n\n Args:\n ref: A `Tensor` of type `resource`.\n A resource handle. Must be from a VarHandleOp.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. A Tensor. Must have the same type as ref. A tensor of\n values whose element wise max is taken with ref\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterNdMin", "docs": "TODO: add doc.\n\n Args:\n ref: A `Tensor` of type `resource`.\n A resource handle. Must be from a VarHandleOp.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. A Tensor. Must have the same type as ref. A tensor of\n values whose element wise min is taken with ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterNdSub", "docs": "Applies sparse subtraction to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to subtract 4 scattered elements from a rank-1 tensor\n with 8 elements. In Python, that subtraction would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n sub = tf.scatter_nd_sub(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(sub)\n ```\n\n The resulting update to ref would look like this:\n\n [1, -9, 3, -6, -4, 6, 7, -4]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A `Tensor` of type `resource`.\n A resource handle. Must be from a VarHandleOp.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. A Tensor. Must have the same type as ref. A tensor of\n values to add to ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies sparse subtraction to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterNdUpdate", "docs": "Applies sparse `updates` to individual values or slices within a given\n\n variable according to `indices`.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].\n ```\n\n For example, say we want to update 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that update would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1] ,[7]])\n updates = tf.constant([9, 10, 11, 12])\n update = tf.scatter_nd_update(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(update)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 11, 3, 10, 9, 6, 7, 12]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A `Tensor` of type `resource`.\n A resource handle. Must be from a VarHandleOp.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`.\n A Tensor. Must have the same type as ref. A tensor of updated\n values to add to ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies sparse `updates` to individual values or slices within a given", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterSub", "docs": "Subtracts sparse updates from the variable referenced by `resource`.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] -= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] -= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Subtracts sparse updates from the variable referenced by `resource`.", "type": "API"}, {"name": "tf.raw_ops.ResourceScatterUpdate", "docs": "Assigns sparse updates to the variable referenced by `resource`.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] = updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]\n\n Args:\n resource: A `Tensor` of type `resource`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. A tensor of updated values to add to `ref`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Assigns sparse updates to the variable referenced by `resource`.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyAdadelta", "docs": "var: Should be from a Variable().\n\n Args:\n var: A `Tensor` of type `resource`.\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n accum_update: A `Tensor` of type `resource`.\n : Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "var: Should be from a Variable().", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyAdagrad", "docs": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.\n\n That is for rows we have grad for, we update var and accum as follows:\n accum += grad * grad\n var -= lr * grad * (1 / sqrt(accum))\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyAdagradDA", "docs": "Update entries in '*var' and '*accum' according to the proximal adagrad scheme.\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n gradient_accumulator: A `Tensor` of type `resource`.\n Should be from a Variable().\n gradient_squared_accumulator: A `Tensor` of type `resource`.\n Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `grad`.\n Learning rate. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 regularization. Must be a scalar.\n global_step: A `Tensor` of type `int64`.\n Training step number. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update entries in '*var' and '*accum' according to the proximal adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyAdagradV2", "docs": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.\n\n That is for rows we have grad for, we update var and accum as follows:\n accum += grad * grad\n var -= lr * grad * (1 / sqrt(accum))\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyCenteredRMSProp", "docs": "Update '*var' according to the centered RMSProp algorithm.\n\n The centered RMSProp algorithm uses an estimate of the centered second moment\n (i.e., the variance) for normalization, as opposed to regular RMSProp, which\n uses the (uncentered) second moment. This often helps with training, but is\n slightly more expensive in terms of computation and memory.\n\n Note that in dense implementation of this algorithm, mg, ms, and mom will\n update even if the grad is zero, but in this sparse implementation, mg, ms,\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n mean_grad = decay * mean_grad + (1-decay) * gradient\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)\n\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)\n var <- var - mom\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n mg: A `Tensor` of type `resource`. Should be from a Variable().\n ms: A `Tensor` of type `resource`. Should be from a Variable().\n mom: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `lr`.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var, ms and mom.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, mg, ms, and mom tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the centered RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyFtrl", "docs": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.\n\n That is for rows we have grad for, we update var, accum and linear as follows:\n accum_new = accum + grad * grad\n linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n linear: A `Tensor` of type `resource`. Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 regularization. Must be a scalar.\n lr_power: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyFtrlV2", "docs": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.\n\n That is for rows we have grad for, we update var, accum and linear as follows:\n grad_with_shrinkage = grad + 2 * l2_shrinkage * var\n accum_new = accum + grad_with_shrinkage * grad_with_shrinkage\n linear += grad_with_shrinkage +\n (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n linear: A `Tensor` of type `resource`. Should be from a Variable().\n grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `grad`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `grad`.\n L2 shrinkage regularization. Must be a scalar.\n l2_shrinkage: A `Tensor`. Must have the same type as `grad`.\n lr_power: A `Tensor`. Must have the same type as `grad`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyKerasMomentum", "docs": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n That is for rows we have grad for, we update var and accum as follows:\n\n accum = accum * momentum - lr * grad\n var += accum\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n momentum: A `Tensor`. Must have the same type as `lr`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var + momentum * accum, so in the end, the var you get is actually\n var + momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyMomentum", "docs": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n That is for rows we have grad for, we update var and accum as follows:\n\n accum = accum * momentum + grad\n var -= lr * accum\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n momentum: A `Tensor`. Must have the same type as `lr`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var - lr * momentum * accum, so in the end, the var you get is actually\n var - lr * momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyProximalAdagrad", "docs": "Sparse update entries in '*var' and '*accum' according to FOBOS algorithm.\n\n That is for rows we have grad for, we update var and accum as follows:\n accum += grad * grad\n prox_v = var\n prox_v -= lr * grad * (1 / sqrt(accum))\n var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n accum: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Learning rate. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `lr`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `lr`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Sparse update entries in '*var' and '*accum' according to FOBOS algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyProximalGradientDescent", "docs": "Sparse update '*var' as FOBOS algorithm with fixed learning rate.\n\n That is for rows we have grad for, we update var as follows:\n prox_v = var - alpha * grad\n var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n alpha: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `alpha`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `alpha`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `alpha`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Sparse update '*var' as FOBOS algorithm with fixed learning rate.", "type": "API"}, {"name": "tf.raw_ops.ResourceSparseApplyRMSProp", "docs": "Update '*var' according to the RMSProp algorithm.\n\n Note that in dense implementation of this algorithm, ms and mom will\n update even if the grad is zero, but in this sparse implementation, ms\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon)\n\n ms <- rho * ms_{t-1} + (1-rho) * grad * grad\n mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)\n var <- var - mom\n\n Args:\n var: A `Tensor` of type `resource`. Should be from a Variable().\n ms: A `Tensor` of type `resource`. Should be from a Variable().\n mom: A `Tensor` of type `resource`. Should be from a Variable().\n lr: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `lr`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `lr`.\n epsilon: A `Tensor`. Must have the same type as `lr`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `lr`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var, ms and mom.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, ms, and mom tensors is protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Update '*var' according to the RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.ResourceStridedSliceAssign", "docs": "Assign `value` to the sliced l-value reference of `ref`.\n\n The values of `value` are assigned to the positions in the variable\n `ref` that are selected by the slice parameters. The slice parameters\n `begin, `end`, `strides`, etc. work exactly as in `StridedSlice`.\n\n NOTE this op currently does not support broadcasting and so `value`'s\n shape must be exactly the shape produced by the slice of `ref`.\n\n Args:\n ref: A `Tensor` of type `resource`.\n begin: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n end: A `Tensor`. Must have the same type as `begin`.\n strides: A `Tensor`. Must have the same type as `begin`.\n value: A `Tensor`.\n begin_mask: An optional `int`. Defaults to `0`.\n end_mask: An optional `int`. Defaults to `0`.\n ellipsis_mask: An optional `int`. Defaults to `0`.\n new_axis_mask: An optional `int`. Defaults to `0`.\n shrink_axis_mask: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Assign `value` to the sliced l-value reference of `ref`.", "type": "API"}, {"name": "tf.raw_ops.Restore", "docs": "Restores a tensor from checkpoint files.\n\n Reads a tensor stored in one or several files. If there are several files (for\n instance because a tensor was saved as slices), `file_pattern` may contain\n wildcard symbols (`*` and `?`) in the filename portion only, not in the\n directory portion.\n\n If a `file_pattern` matches several files, `preferred_shard` can be used to hint\n in which file the requested tensor is likely to be found. This op will first\n open the file at index `preferred_shard` in the list of matching files and try\n to restore tensors from that file. Only if some tensors or tensor slices are\n not found in that first file, then the Op opens all the files. Setting\n `preferred_shard` to match the value passed as the `shard` input\n of a matching `Save` Op may speed up Restore. This attribute only affects\n performance, not correctness. The default value -1 means files are processed in\n order.\n\n See also `RestoreSlice`.\n\n Args:\n file_pattern: A `Tensor` of type `string`.\n Must have a single element. The pattern of the files from\n which we read the tensor.\n tensor_name: A `Tensor` of type `string`.\n Must have a single element. The name of the tensor to be\n restored.\n dt: A `tf.DType`. The type of the tensor to be restored.\n preferred_shard: An optional `int`. Defaults to `-1`.\n Index of file to open first if multiple files match\n `file_pattern`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dt`.\n ", "desc": "Restores a tensor from checkpoint files.", "type": "API"}, {"name": "tf.raw_ops.RestoreSlice", "docs": "Restores a tensor from checkpoint files.\n\n This is like `Restore` except that restored tensor can be listed as filling\n only a slice of a larger tensor. `shape_and_slice` specifies the shape of the\n larger tensor and the slice that the restored tensor covers.\n\n The `shape_and_slice` input has the same format as the\n elements of the `shapes_and_slices` input of the `SaveSlices` op.\n\n Args:\n file_pattern: A `Tensor` of type `string`.\n Must have a single element. The pattern of the files from\n which we read the tensor.\n tensor_name: A `Tensor` of type `string`.\n Must have a single element. The name of the tensor to be\n restored.\n shape_and_slice: A `Tensor` of type `string`.\n Scalar. The shapes and slice specifications to use when\n restoring a tensors.\n dt: A `tf.DType`. The type of the tensor to be restored.\n preferred_shard: An optional `int`. Defaults to `-1`.\n Index of file to open first if multiple files match\n `file_pattern`. See the documentation for `Restore`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dt`.\n ", "desc": "Restores a tensor from checkpoint files.", "type": "API"}, {"name": "tf.raw_ops.RestoreV2", "docs": "Restores tensors from a V2 checkpoint.\n\n For backward compatibility with the V1 format, this Op currently allows\n restoring from a V1 checkpoint as well:\n - This Op first attempts to find the V2 index file pointed to by \"prefix\", and\n if found proceed to read it as a V2 checkpoint;\n - Otherwise the V1 read path is invoked.\n Relying on this behavior is not recommended, as the ability to fall back to read\n V1 might be deprecated and eventually removed.\n\n By default, restores the named tensors in full. If the caller wishes to restore\n specific slices of stored tensors, \"shape_and_slices\" should be non-empty\n strings and correspondingly well-formed.\n\n Callers must ensure all the named tensors are indeed stored in the checkpoint.\n\n Args:\n prefix: A `Tensor` of type `string`.\n Must have a single element. The prefix of a V2 checkpoint.\n tensor_names: A `Tensor` of type `string`.\n shape {N}. The names of the tensors to be restored.\n shape_and_slices: A `Tensor` of type `string`.\n shape {N}. The slice specs of the tensors to be restored.\n Empty strings indicate that they are non-partitioned tensors.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n shape {N}. The list of expected dtype for the tensors. Must match\n those stored in the checkpoint.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Restores tensors from a V2 checkpoint.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters", "docs": "Retrieve Adadelta embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, accumulators, updates).\n\n parameters: A `Tensor` of type `float32`.\n accumulators: A `Tensor` of type `float32`.\n updates: A `Tensor` of type `float32`.\n ", "desc": "Retrieve Adadelta embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters", "docs": "Retrieve Adagrad embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, accumulators).\n\n parameters: A `Tensor` of type `float32`.\n accumulators: A `Tensor` of type `float32`.\n ", "desc": "Retrieve Adagrad embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingADAMParameters", "docs": "Retrieve ADAM embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, momenta, velocities).\n\n parameters: A `Tensor` of type `float32`.\n momenta: A `Tensor` of type `float32`.\n velocities: A `Tensor` of type `float32`.\n ", "desc": "Retrieve ADAM embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters", "docs": "Retrieve centered RMSProp embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, ms, mom, mg).\n\n parameters: A `Tensor` of type `float32`.\n ms: A `Tensor` of type `float32`.\n mom: A `Tensor` of type `float32`.\n mg: A `Tensor` of type `float32`.\n ", "desc": "Retrieve centered RMSProp embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingFrequencyEstimatorParameters", "docs": "Retrieve frequency estimator embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, last_hit_step).\n\n parameters: A `Tensor` of type `float32`.\n last_hit_step: A `Tensor` of type `float32`.\n ", "desc": "Retrieve frequency estimator embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters", "docs": "Retrieve FTRL embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, accumulators, linears).\n\n parameters: A `Tensor` of type `float32`.\n accumulators: A `Tensor` of type `float32`.\n linears: A `Tensor` of type `float32`.\n ", "desc": "Retrieve FTRL embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters", "docs": "Retrieve MDL Adagrad Light embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, accumulators, weights, benefits).\n\n parameters: A `Tensor` of type `float32`.\n accumulators: A `Tensor` of type `float32`.\n weights: A `Tensor` of type `float32`.\n benefits: A `Tensor` of type `float32`.\n ", "desc": "Retrieve MDL Adagrad Light embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters", "docs": "Retrieve Momentum embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, momenta).\n\n parameters: A `Tensor` of type `float32`.\n momenta: A `Tensor` of type `float32`.\n ", "desc": "Retrieve Momentum embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters", "docs": "Retrieve proximal Adagrad embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, accumulators).\n\n parameters: A `Tensor` of type `float32`.\n accumulators: A `Tensor` of type `float32`.\n ", "desc": "Retrieve proximal Adagrad embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingProximalYogiParameters", "docs": "TODO: add doc.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, v, m).\n\n parameters: A `Tensor` of type `float32`.\n v: A `Tensor` of type `float32`.\n m: A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters", "docs": "Retrieve RMSProp embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (parameters, ms, mom).\n\n parameters: A `Tensor` of type `float32`.\n ms: A `Tensor` of type `float32`.\n mom: A `Tensor` of type `float32`.\n ", "desc": "Retrieve RMSProp embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters", "docs": "Retrieve SGD embedding parameters.\n\n An op that retrieves optimization parameters from embedding to host\n memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up\n the correct embedding table configuration. For example, this op is\n used to retrieve updated parameters before saving a checkpoint.\n\n Args:\n num_shards: An `int`.\n shard_id: An `int`.\n table_id: An optional `int`. Defaults to `-1`.\n table_name: An optional `string`. Defaults to `\"\"`.\n config: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Retrieve SGD embedding parameters.", "type": "API"}, {"name": "tf.raw_ops.Reverse", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `bool` tensor `dims` representing the dimensions\n of `tensor`, this operation reverses each dimension i of `tensor` where\n `dims[i]` is `True`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions\n of `tensor` must equal the number of elements in `dims`. In other words:\n\n `rank(tensor) = size(dims)`\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [False, False, False, True]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is [False, True, False, False]\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is [False, False, True, False]\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `uint32`, `int32`, `uint64`, `int64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n dims: A `Tensor` of type `bool`. 1-D. The dimensions to reverse.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.ReverseSequence", "docs": "Reverses variable length slices.\n\n This op first slices `input` along the dimension `batch_dim`, and for each\n slice `i`, reverses the first `seq_lengths[i]` elements along\n the dimension `seq_dim`.\n\n The elements of `seq_lengths` must obey `seq_lengths[i] <= input.dims[seq_dim]`,\n and `seq_lengths` must be a vector of length `input.dims[batch_dim]`.\n\n The output slice `i` along dimension `batch_dim` is then given by input\n slice `i`, with the first `seq_lengths[i]` slices along dimension\n `seq_dim` reversed.\n\n For example:\n\n ```\n # Given this:\n batch_dim = 0\n seq_dim = 1\n input.dims = (4, 8, ...)\n seq_lengths = [7, 2, 3, 5]\n\n # then slices of input are reversed on seq_dim, but only up to seq_lengths:\n output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]\n output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]\n output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]\n output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]\n\n # while entries past seq_lens are copied through:\n output[0, 7:, :, ...] = input[0, 7:, :, ...]\n output[1, 2:, :, ...] = input[1, 2:, :, ...]\n output[2, 3:, :, ...] = input[2, 3:, :, ...]\n output[3, 2:, :, ...] = input[3, 2:, :, ...]\n ```\n\n In contrast, if:\n\n ```\n # Given this:\n batch_dim = 2\n seq_dim = 0\n input.dims = (8, ?, 4, ...)\n seq_lengths = [7, 2, 3, 5]\n\n # then slices of input are reversed on seq_dim, but only up to seq_lengths:\n output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]\n output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]\n output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]\n output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]\n\n # while entries past seq_lens are copied through:\n output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]\n output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]\n output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]\n output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...]\n ```\n\n Args:\n input: A `Tensor`. The input to reverse.\n seq_lengths: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with length `input.dims(batch_dim)` and\n `max(seq_lengths) <= input.dims(seq_dim)`\n seq_dim: An `int`. The dimension which is partially reversed.\n batch_dim: An optional `int`. Defaults to `0`.\n The dimension along which reversal is performed.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Reverses variable length slices.", "type": "API"}, {"name": "tf.raw_ops.ReverseV2", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `int32` tensor `axis` representing the set of\n dimensions of `tensor` to reverse. This operation reverses each dimension\n `i` for which there exists `j` s.t. `axis[j] == i`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions specified\n in `axis` may be 0 or more entries. If an index is specified more than\n once, a InvalidArgument error is raised.\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [3] or 'dims' is [-1]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is '[1]' (or 'dims' is '[-3]')\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is '[2]' (or 'dims' is '[-2]')\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `int64`, `uint64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. The indices of the dimensions to reverse. Must be in the range\n `[-rank(tensor), rank(tensor))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.RFFT", "docs": "Real-valued fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most dimension of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the\n `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term,\n followed by the `fft_length / 2` positive-frequency terms.\n\n Along the axis `RFFT` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n Tcomplex: An optional `tf.DType` from: `tf.complex64, tf.complex128`. Defaults to `tf.complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "Real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.RFFT2D", "docs": "2D real-valued fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 2 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n Tcomplex: An optional `tf.DType` from: `tf.complex64, tf.complex128`. Defaults to `tf.complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.RFFT3D", "docs": "3D real-valued fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 3 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n Tcomplex: An optional `tf.DType` from: `tf.complex64, tf.complex128`. Defaults to `tf.complex64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.raw_ops.RGBToHSV", "docs": "Converts one or more images from RGB to HSV.\n\n Outputs a tensor of the same shape as the `images` tensor, containing the HSV\n value of the pixels. The output is only well defined if the value in `images`\n are in `[0,1]`.\n\n `output[..., 0]` contains hue, `output[..., 1]` contains saturation, and\n `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0\n corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.\n\n Usage Example:\n\n >>> blue_image = tf.stack([\n ... tf.zeros([5,5]),\n ... tf.zeros([5,5]),\n ... tf.ones([5,5])],\n ... axis=-1)\n >>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image)\n >>> blue_hsv_image[0,0].numpy()\n array([0.6666667, 1. , 1. ], dtype=float32)\n\n Args:\n images: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 1-D or higher rank. RGB data to convert. Last dimension must be size 3.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `images`.\n ", "desc": "Converts one or more images from RGB to HSV.", "type": "API"}, {"name": "tf.raw_ops.RightShift", "docs": "Elementwise computes the bitwise right-shift of `x` and `y`.\n\n Performs a logical shift for unsigned integer types, and an arithmetic shift\n for signed integer types.\n\n If `y` is negative, or greater than or equal to than the width of `x` in bits\n the result is implementation defined.\n\n Example:\n\n ```python\n import tensorflow as tf\n from tensorflow.python.ops import bitwise_ops\n import numpy as np\n dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]\n\n for dtype in dtype_list:\n lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)\n rhs = tf.constant([5, 0, 7, 11], dtype=dtype)\n\n right_shift_result = bitwise_ops.right_shift(lhs, rhs)\n\n print(right_shift_result)\n\n # This will print:\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)\n # tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)\n\n lhs = np.array([-2, 64, 101, 32], dtype=np.int8)\n rhs = np.array([-1, -5, -3, -14], dtype=np.int8)\n bitwise_ops.right_shift(lhs, rhs)\n # \n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Elementwise computes the bitwise right-shift of `x` and `y`.", "type": "API"}, {"name": "tf.raw_ops.Rint", "docs": "Returns element-wise integer closest to x.\n\n If the result is midway between two representable values,\n the even representable is chosen.\n For example:\n\n ```\n rint(-1.5) ==> -2.0\n rint(0.5000001) ==> 1.0\n rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise integer closest to x.", "type": "API"}, {"name": "tf.raw_ops.RngReadAndSkip", "docs": "Advance the counter of a counter-based RNG.\n\n The state of the RNG after\n `rng_read_and_skip(n)` will be the same as that after `uniform([n])`\n (or any other distribution). The actual increment added to the\n counter is an unspecified implementation choice.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n alg: A `Tensor` of type `int32`. The RNG algorithm.\n delta: A `Tensor` of type `uint64`. The amount of advancement.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Advance the counter of a counter-based RNG.", "type": "API"}, {"name": "tf.raw_ops.RngSkip", "docs": "Advance the counter of a counter-based RNG.\n\n The state of the RNG after\n `rng_skip(n)` will be the same as that after `stateful_uniform([n])`\n (or any other distribution). The actual increment added to the\n counter is an unspecified implementation detail.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n delta: A `Tensor` of type `int64`. The amount of advancement.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Advance the counter of a counter-based RNG.", "type": "API"}, {"name": "tf.raw_ops.Roll", "docs": "Rolls the elements of a tensor along an axis.\n\n The elements are shifted positively (towards larger indices) by the offset of\n `shift` along the dimension of `axis`. Negative `shift` values will shift\n elements in the opposite direction. Elements that roll passed the last position\n will wrap around to the first and vice versa. Multiple shifts along multiple\n axes may be specified.\n\n For example:\n\n ```\n # 't' is [0, 1, 2, 3, 4]\n roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]\n\n # shifting along multiple dimensions\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]\n\n # shifting along the same axis multiple times\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]\n ```\n\n Args:\n input: A `Tensor`.\n shift: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which\n elements are shifted positively (towards larger indices) along the dimension\n specified by `axis[i]`. Negative shifts will roll the elements in the opposite\n direction.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension that the shift\n `shift[i]` should occur. If the same axis is referenced more than once, the\n total shift for that axis will be the sum of all the shifts that belong to that\n axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Rolls the elements of a tensor along an axis.", "type": "API"}, {"name": "tf.raw_ops.Round", "docs": "Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use std::cint.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Rounds the values of a tensor to the nearest integer, element-wise.", "type": "API"}, {"name": "tf.raw_ops.Rsqrt", "docs": "Computes reciprocal of square root of x element-wise.\n\n I.e., \\\\(y = 1 / \\sqrt{x}\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes reciprocal of square root of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.RsqrtGrad", "docs": "Computes the gradient for the rsqrt of `x` wrt its input.\n\n Specifically, `grad = dy * -0.5 * y^3`, where `y = rsqrt(x)`, and `dy`\n is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient for the rsqrt of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.SampleDistortedBoundingBox", "docs": "Generate a single randomly distorted bounding box for an image.\n\n Bounding box annotations are often supplied in addition to ground-truth labels\n in image recognition or object localization tasks. A common technique for\n training such a system is to randomly distort an image while preserving\n its content, i.e. *data augmentation*. This Op outputs a randomly distorted\n localization of an object, i.e. bounding box, given an `image_size`,\n `bounding_boxes` and a series of constraints.\n\n The output of this Op is a single bounding box that may be used to crop the\n original image. The output is returned as 3 tensors: `begin`, `size` and\n `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\n image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize\n what the bounding box looks like.\n\n Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The\n bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\n height of the underlying image.\n\n For example,\n\n ```python\n # Generate a single distorted bounding box.\n begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(\n tf.shape(image),\n bounding_boxes=bounding_boxes)\n\n # Draw the bounding box in an image summary.\n image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),\n bbox_for_draw)\n tf.summary.image('images_with_box', image_with_box)\n\n # Employ the bounding box to distort the image.\n distorted_image = tf.slice(image, begin, size)\n ```\n\n Note that if no bounding box information is available, setting\n `use_image_if_no_bounding_boxes = true` will assume there is a single implicit\n bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\n false and no bounding boxes are supplied, an error is raised.\n\n Args:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.\n 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`.\n 3-D with shape `[batch, N, 4]` describing the N bounding boxes\n associated with the image.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to non-zero, the random number\n generator is seeded by the given `seed`. Otherwise, it is seeded by a random\n seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n min_object_covered: An optional `float`. Defaults to `0.1`.\n The cropped area of the image must contain at least this\n fraction of any bounding box supplied. The value of this parameter should be\n non-negative. In the case of 0, the cropped area does not need to overlap\n any of the bounding boxes supplied.\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75, 1.33]`.\n The cropped area of the image must have an aspect ratio =\n width / height within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`.\n The cropped area of the image must contain a fraction of the\n supplied image within this range.\n max_attempts: An optional `int`. Defaults to `100`.\n Number of attempts at generating a cropped region of the image\n of the specified constraints. After `max_attempts` failures, return the entire\n image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied.\n If true, assume an implicit bounding box covering the whole input. If false,\n raise an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`.\n size: A `Tensor`. Has the same type as `image_size`.\n bboxes: A `Tensor` of type `float32`.\n ", "desc": "Generate a single randomly distorted bounding box for an image.", "type": "API"}, {"name": "tf.raw_ops.SampleDistortedBoundingBoxV2", "docs": "Generate a single randomly distorted bounding box for an image.\n\n Bounding box annotations are often supplied in addition to ground-truth labels\n in image recognition or object localization tasks. A common technique for\n training such a system is to randomly distort an image while preserving\n its content, i.e. *data augmentation*. This Op outputs a randomly distorted\n localization of an object, i.e. bounding box, given an `image_size`,\n `bounding_boxes` and a series of constraints.\n\n The output of this Op is a single bounding box that may be used to crop the\n original image. The output is returned as 3 tensors: `begin`, `size` and\n `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\n image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize\n what the bounding box looks like.\n\n Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The\n bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\n height of the underlying image.\n\n For example,\n\n ```python\n # Generate a single distorted bounding box.\n begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(\n tf.shape(image),\n bounding_boxes=bounding_boxes)\n\n # Draw the bounding box in an image summary.\n image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),\n bbox_for_draw)\n tf.summary.image('images_with_box', image_with_box)\n\n # Employ the bounding box to distort the image.\n distorted_image = tf.slice(image, begin, size)\n ```\n\n Note that if no bounding box information is available, setting\n `use_image_if_no_bounding_boxes = true` will assume there is a single implicit\n bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\n false and no bounding boxes are supplied, an error is raised.\n\n Args:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.\n 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`.\n 3-D with shape `[batch, N, 4]` describing the N bounding boxes\n associated with the image.\n min_object_covered: A `Tensor` of type `float32`.\n The cropped area of the image must contain at least this\n fraction of any bounding box supplied. The value of this parameter should be\n non-negative. In the case of 0, the cropped area does not need to overlap\n any of the bounding boxes supplied.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to non-zero, the random number\n generator is seeded by the given `seed`. Otherwise, it is seeded by a random\n seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75, 1.33]`.\n The cropped area of the image must have an aspect ratio =\n width / height within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`.\n The cropped area of the image must contain a fraction of the\n supplied image within this range.\n max_attempts: An optional `int`. Defaults to `100`.\n Number of attempts at generating a cropped region of the image\n of the specified constraints. After `max_attempts` failures, return the entire\n image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied.\n If true, assume an implicit bounding box covering the whole input. If false,\n raise an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`.\n size: A `Tensor`. Has the same type as `image_size`.\n bboxes: A `Tensor` of type `float32`.\n ", "desc": "Generate a single randomly distorted bounding box for an image.", "type": "API"}, {"name": "tf.raw_ops.SamplingDataset", "docs": "Creates a dataset that takes a Bernoulli sample of the contents of another dataset.\n\n There is no transformation in the `tf.data` Python API for creating this dataset.\n Instead, it is created as a result of the `filter_with_random_uniform_fusion`\n static optimization. Whether this optimization is performed is determined by the\n `experimental_optimization.filter_with_random_uniform_fusion` option of\n `tf.data.Options`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n rate: A `Tensor` of type `float32`.\n A scalar representing the sample rate. Each element of `input_dataset` is\n retained with this probability, independent of all other elements.\n seed: A `Tensor` of type `int64`.\n A scalar representing seed of random number generator.\n seed2: A `Tensor` of type `int64`.\n A scalar representing seed2 of random number generator.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that takes a Bernoulli sample of the contents of another dataset.", "type": "API"}, {"name": "tf.raw_ops.Save", "docs": "Saves the input tensors to disk.\n\n The size of `tensor_names` must match the number of tensors in `data`. `data[i]`\n is written to `filename` with name `tensor_names[i]`.\n\n See also `SaveSlices`.\n\n Args:\n filename: A `Tensor` of type `string`.\n Must have a single element. The name of the file to which we write\n the tensor.\n tensor_names: A `Tensor` of type `string`.\n Shape `[N]`. The names of the tensors to be saved.\n data: A list of `Tensor` objects. `N` tensors to save.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Saves the input tensors to disk.", "type": "API"}, {"name": "tf.raw_ops.SaveDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n path: A `Tensor` of type `string`.\n shard_func_other_args: A list of `Tensor` objects.\n shard_func: A function decorated with @Defun.\n compression: An optional `string`. Defaults to `\"\"`.\n use_shard_func: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.SaveSlices", "docs": "Saves input tensors slices to disk.\n\n This is like `Save` except that tensors can be listed in the saved file as being\n a slice of a larger tensor. `shapes_and_slices` specifies the shape of the\n larger tensor and the slice that this tensor covers. `shapes_and_slices` must\n have as many elements as `tensor_names`.\n\n Elements of the `shapes_and_slices` input must either be:\n\n * The empty string, in which case the corresponding tensor is\n saved normally.\n * A string of the form `dim0 dim1 ... dimN-1 slice-spec` where the\n `dimI` are the dimensions of the larger tensor and `slice-spec`\n specifies what part is covered by the tensor to save.\n\n `slice-spec` itself is a `:`-separated list: `slice0:slice1:...:sliceN-1`\n where each `sliceI` is either:\n\n * The string `-` meaning that the slice covers all indices of this dimension\n * `start,length` where `start` and `length` are integers. In that\n case the slice covers `length` indices starting at `start`.\n\n See also `Save`.\n\n Args:\n filename: A `Tensor` of type `string`.\n Must have a single element. The name of the file to which we write the\n tensor.\n tensor_names: A `Tensor` of type `string`.\n Shape `[N]`. The names of the tensors to be saved.\n shapes_and_slices: A `Tensor` of type `string`.\n Shape `[N]`. The shapes and slice specifications to use when\n saving the tensors.\n data: A list of `Tensor` objects. `N` tensors to save.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Saves input tensors slices to disk.", "type": "API"}, {"name": "tf.raw_ops.SaveV2", "docs": "Saves tensors in V2 checkpoint format.\n\n By default, saves the named tensors in full. If the caller wishes to save\n specific slices of full tensors, \"shape_and_slices\" should be non-empty strings\n and correspondingly well-formed.\n\n Args:\n prefix: A `Tensor` of type `string`.\n Must have a single element. The prefix of the V2 checkpoint to which we\n write the tensors.\n tensor_names: A `Tensor` of type `string`.\n shape {N}. The names of the tensors to be saved.\n shape_and_slices: A `Tensor` of type `string`.\n shape {N}. The slice specs of the tensors to be saved.\n Empty strings indicate that they are non-partitioned tensors.\n tensors: A list of `Tensor` objects. `N` tensors to save.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Saves tensors in V2 checkpoint format.", "type": "API"}, {"name": "tf.raw_ops.ScalarSummary", "docs": "Outputs a `Summary` protocol buffer with scalar values.\n\n The input `tags` and `values` must have the same shape. The generated summary\n has a summary value for each tag-value pair in `tags` and `values`.\n\n Args:\n tags: A `Tensor` of type `string`. Tags for the summary.\n values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n Same shape as `tags. Values for the summary.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with scalar values.", "type": "API"}, {"name": "tf.raw_ops.ScaleAndTranslate", "docs": "TODO: add doc.\n\n Args:\n images: A `Tensor`. Must be one of the following types: `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`.\n size: A `Tensor` of type `int32`.\n scale: A `Tensor` of type `float32`.\n translation: A `Tensor` of type `float32`.\n kernel_type: An optional `string`. Defaults to `\"lanczos3\"`.\n antialias: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ScaleAndTranslateGrad", "docs": "TODO: add doc.\n\n Args:\n grads: A `Tensor`. Must be one of the following types: `float32`.\n original_image: A `Tensor`. Must have the same type as `grads`.\n scale: A `Tensor` of type `float32`.\n translation: A `Tensor` of type `float32`.\n kernel_type: An optional `string`. Defaults to `\"lanczos3\"`.\n antialias: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grads`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ScanDataset", "docs": "Creates a dataset successively reduces `f` over the elements of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n initial_state: A list of `Tensor` objects.\n other_arguments: A list of `Tensor` objects.\n f: A function decorated with @Defun.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n preserve_cardinality: An optional `bool`. Defaults to `False`.\n use_default_device: An optional `bool`. Defaults to `True`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset successively reduces `f` over the elements of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ScatterAdd", "docs": "Adds sparse updates to a variable reference.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] += updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] += updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to add to `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the addition will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Adds sparse updates to a variable reference.", "type": "API"}, {"name": "tf.raw_ops.ScatterDiv", "docs": "Divides a variable reference by sparse updates.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] /= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] /= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions divide.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of values that `ref` is divided by.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the operation will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Divides a variable reference by sparse updates.", "type": "API"}, {"name": "tf.raw_ops.ScatterMax", "docs": "Reduces sparse updates into a variable reference using the `max` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = max(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions combine.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to reduce into `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the update will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Reduces sparse updates into a variable reference using the `max` operation.", "type": "API"}, {"name": "tf.raw_ops.ScatterMin", "docs": "Reduces sparse updates into a variable reference using the `min` operation.\n\n This operation computes\n\n # Scalar indices\n ref[indices, ...] = min(ref[indices, ...], updates[...])\n\n # Vector indices (for each i)\n ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions combine.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to reduce into `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the update will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Reduces sparse updates into a variable reference using the `min` operation.", "type": "API"}, {"name": "tf.raw_ops.ScatterMul", "docs": "Multiplies sparse updates into a variable reference.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] *= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] *= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their contributions multiply.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to multiply to `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the operation will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Multiplies sparse updates into a variable reference.", "type": "API"}, {"name": "tf.raw_ops.ScatterNd", "docs": "Scatters `updates` into a tensor of shape `shape` according to `indices`.\n\n Update the input tensor by scattering sparse `updates` according to individual values at the specified `indices`.\n This op returns an `output` tensor with the `shape` you specify. This op is the\n inverse of the `tf.gather_nd` operator which extracts values or slices from a\n given tensor.\n\n This operation is similar to `tf.tensor_scatter_nd_add`, except that the tensor\n is zero-initialized. Calling `tf.scatter_nd(indices, values, shape)`\n is identical to calling\n `tf.tensor_scatter_nd_add(tf.zeros(shape, values.dtype), indices, values)`\n\n If `indices` contains duplicates, the duplicate `values` are accumulated\n (summed).\n\n **WARNING**: The order in which updates are applied is nondeterministic, so the\n output will be nondeterministic if `indices` contains duplicates;\n numbers summed in different order may yield different results because of some\n numerical approximation issues.\n\n `indices` is an integer tensor of shape `shape`. The last dimension\n of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices of elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`.\n\n `updates` is a tensor with shape:\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of the scatter op is to insert individual elements in\n a tensor by index. Consider an example where you want to insert 4 scattered\n elements in a rank-1 tensor with 8 elements.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n shape = tf.constant([8])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [0, 11, 0, 10, 9, 0, 0, 12]\n\n You can also insert entire slices of a higher rank tensor all at once. For\n example, you can insert two slices in the first dimension of a rank-3 tensor\n with two matrices of new values.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n shape = tf.constant([4, 4, 4])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],\n [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Tensor of indices.\n updates: A `Tensor`. Values to scatter into the output tensor.\n shape: A `Tensor`. Must have the same type as `indices`.\n 1-D. The shape of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `updates`.\n ", "desc": "Scatters `updates` into a tensor of shape `shape` according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.ScatterNdAdd", "docs": "Applies sparse addition to individual values or slices in a Variable.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to add 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that addition would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n add = tf.scatter_nd_add(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(add)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 13, 3, 14, 14, 6, 7, 20]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated values\n to add to ref.\n use_locking: An optional `bool`. Defaults to `False`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse addition to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.raw_ops.ScatterNdMax", "docs": "Computes element-wise maximum.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated values\n to add to ref.\n use_locking: An optional `bool`. Defaults to `False`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Computes element-wise maximum.", "type": "API"}, {"name": "tf.raw_ops.ScatterNdMin", "docs": "Computes element-wise minimum.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated values\n to add to ref.\n use_locking: An optional `bool`. Defaults to `False`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Computes element-wise minimum.", "type": "API"}, {"name": "tf.raw_ops.ScatterNdNonAliasingAdd", "docs": "Applies sparse addition to `input` using individual values or slices\n\n from `updates` according to indices `indices`. The updates are non-aliasing:\n `input` is only modified in-place if no other operations will use it.\n Otherwise, a copy of `input` is made. This operation has a gradient with\n respect to both `input` and `updates`.\n\n `input` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `input`.\n It must be shape \\\\([d_0, ..., d_{Q-2}, K]\\\\) where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or `(P-K)`-dimensional slices\n (if `K < P`) along the `K`th dimension of `input`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n $$[d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]].$$\n\n For example, say we want to add 4 scattered elements to a rank-1 tensor to 8\n elements. In Python, that addition would look like this:\n\n input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n output = tf.scatter_nd_non_aliasing_add(input, indices, updates)\n with tf.Session() as sess:\n print(sess.run(output))\n\n The resulting value `output` would look like this:\n\n [1, 13, 3, 14, 14, 6, 7, 20]\n\n See `tf.scatter_nd` for more details about how to make updates to slices.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n A Tensor.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into `input`.\n updates: A `Tensor`. Must have the same type as `input`.\n A Tensor. Must have the same type as ref. A tensor of updated values\n to add to `input`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Applies sparse addition to `input` using individual values or slices", "type": "API"}, {"name": "tf.raw_ops.ScatterNdSub", "docs": "Applies sparse subtraction to individual values or slices in a Variable.\n\n within a given variable according to `indices`.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n ```\n [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]\n ```\n\n For example, say we want to subtract 4 scattered elements from a rank-1 tensor\n with 8 elements. In Python, that subtraction would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n sub = tf.scatter_nd_sub(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(sub)\n ```\n\n The resulting update to ref would look like this:\n\n [1, -9, 3, -6, -4, 6, 7, -4]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated values\n to subtract from ref.\n use_locking: An optional `bool`. Defaults to `False`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse subtraction to individual values or slices in a Variable.", "type": "API"}, {"name": "tf.raw_ops.ScatterNdUpdate", "docs": "Applies sparse `updates` to individual values or slices within a given\n\n variable according to `indices`.\n\n `ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.\n\n `indices` must be integer tensor, containing indices into `ref`.\n It must be shape \\\\([d_0, ..., d_{Q-2}, K]\\\\) where `0 < K <= P`.\n\n The innermost dimension of `indices` (with length `K`) corresponds to\n indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th\n dimension of `ref`.\n\n `updates` is `Tensor` of rank `Q-1+P-K` with shape:\n\n $$[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].$$\n\n For example, say we want to update 4 scattered elements to a rank-1 tensor to\n 8 elements. In Python, that update would look like this:\n\n ```python\n ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])\n indices = tf.constant([[4], [3], [1] ,[7]])\n updates = tf.constant([9, 10, 11, 12])\n update = tf.scatter_nd_update(ref, indices, updates)\n with tf.Session() as sess:\n print sess.run(update)\n ```\n\n The resulting update to ref would look like this:\n\n [1, 11, 3, 10, 9, 6, 7, 12]\n\n See `tf.scatter_nd` for more details about how to make updates to\n slices.\n\n See also `tf.scatter_update` and `tf.batch_scatter_update`.\n\n Args:\n ref: A mutable `Tensor`. A mutable Tensor. Should be from a Variable node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A Tensor. Must be one of the following types: int32, int64.\n A tensor of indices into ref.\n updates: A `Tensor`. Must have the same type as `ref`.\n A Tensor. Must have the same type as ref. A tensor of updated\n values to add to ref.\n use_locking: An optional `bool`. Defaults to `True`.\n An optional bool. Defaults to True. If True, the assignment will\n be protected by a lock; otherwise the behavior is undefined,\n but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse `updates` to individual values or slices within a given", "type": "API"}, {"name": "tf.raw_ops.ScatterSub", "docs": "Subtracts sparse updates to a variable reference.\n\n ```python\n # Scalar indices\n ref[indices, ...] -= updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] -= updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n Duplicate entries are handled correctly: if multiple `indices` reference\n the same location, their (negated) contributions add.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n Args:\n ref: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to subtract from `ref`.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Subtracts sparse updates to a variable reference.", "type": "API"}, {"name": "tf.raw_ops.ScatterUpdate", "docs": "Applies sparse updates to a variable reference.\n\n This operation computes\n\n ```python\n # Scalar indices\n ref[indices, ...] = updates[...]\n\n # Vector indices (for each i)\n ref[indices[i], ...] = updates[i, ...]\n\n # High rank indices (for each i, ..., j)\n ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]\n ```\n\n This operation outputs `ref` after the update is done.\n This makes it easier to chain operations that need to use the reset value.\n\n If values in `ref` is to be updated more than once, because there are\n duplicate entries in `indices`, the order at which the updates happen\n for each value is undefined.\n\n Requires `updates.shape = indices.shape + ref.shape[1:]` or `updates.shape = []`.\n\n
\n \n
\n\n See also `tf.batch_scatter_update` and `tf.scatter_nd_update`.\n\n Args:\n ref: A mutable `Tensor`. Should be from a `Variable` node.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor of indices into the first dimension of `ref`.\n updates: A `Tensor`. Must have the same type as `ref`.\n A tensor of updated values to store in `ref`.\n use_locking: An optional `bool`. Defaults to `True`.\n If True, the assignment will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Applies sparse updates to a variable reference.", "type": "API"}, {"name": "tf.raw_ops.SdcaFprint", "docs": "Computes fingerprints of the input strings.\n\n Args:\n input: A `Tensor` of type `string`.\n vector of strings to compute fingerprints on.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Computes fingerprints of the input strings.", "type": "API"}, {"name": "tf.raw_ops.SdcaOptimizer", "docs": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for\n\n linear models with L1 + L2 regularization. As global optimization objective is\n strongly-convex, the optimizer optimizes the dual objective at each step. The\n optimizer applies each update one example at a time. Examples are sampled\n uniformly, and the optimizer is learning rate free and enjoys linear convergence\n rate.\n\n [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
\n Shai Shalev-Shwartz, Tong Zhang. 2012\n\n $$Loss Objective = \\sum f_{i} (wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$\n\n [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
\n Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan,\n Peter Richtarik, Martin Takac. 2015\n\n [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
\n Dominik Csiba, Zheng Qu, Peter Richtarik. 2015\n\n Args:\n sparse_example_indices: A list of `Tensor` objects with type `int64`.\n a list of vectors which contain example indices.\n sparse_feature_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors which contain feature indices.\n sparse_feature_values: A list of `Tensor` objects with type `float32`.\n a list of vectors which contains feature value\n associated with each feature group.\n dense_features: A list of `Tensor` objects with type `float32`.\n a list of matrices which contains the dense feature values.\n example_weights: A `Tensor` of type `float32`.\n a vector which contains the weight associated with each\n example.\n example_labels: A `Tensor` of type `float32`.\n a vector which contains the label/target associated with each\n example.\n sparse_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors where each value is the indices which has\n corresponding weights in sparse_weights. This field maybe omitted for the\n dense approach.\n sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n a list of vectors where each value is the weight associated with\n a sparse feature group.\n dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n a list of vectors where the values are the weights associated\n with a dense feature group.\n example_state_data: A `Tensor` of type `float32`.\n a list of vectors containing the example state data.\n loss_type: A `string` from: `\"logistic_loss\", \"squared_loss\", \"hinge_loss\", \"smooth_hinge_loss\", \"poisson_loss\"`.\n Type of the primal loss. Currently SdcaSolver supports logistic,\n squared and hinge losses.\n l1: A `float`. Symmetric l1 regularization strength.\n l2: A `float`. Symmetric l2 regularization strength.\n num_loss_partitions: An `int` that is `>= 1`.\n Number of partitions of the global loss function.\n num_inner_iterations: An `int` that is `>= 1`.\n Number of iterations per mini-batch.\n adaptative: An optional `bool`. Defaults to `True`.\n Whether to use Adaptive SDCA for the inner loop.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).\n\n out_example_state_data: A `Tensor` of type `float32`.\n out_delta_sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n out_delta_dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n ", "desc": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for", "type": "API"}, {"name": "tf.raw_ops.SdcaOptimizerV2", "docs": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for\n\n linear models with L1 + L2 regularization. As global optimization objective is\n strongly-convex, the optimizer optimizes the dual objective at each step. The\n optimizer applies each update one example at a time. Examples are sampled\n uniformly, and the optimizer is learning rate free and enjoys linear convergence\n rate.\n\n [Proximal Stochastic Dual Coordinate Ascent](http://arxiv.org/pdf/1211.2717v1.pdf).
\n Shai Shalev-Shwartz, Tong Zhang. 2012\n\n $$Loss Objective = \\sum f_{i} (wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$\n\n [Adding vs. Averaging in Distributed Primal-Dual Optimization](http://arxiv.org/abs/1502.03508).
\n Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan,\n Peter Richtarik, Martin Takac. 2015\n\n [Stochastic Dual Coordinate Ascent with Adaptive Probabilities](https://arxiv.org/abs/1502.08053).
\n Dominik Csiba, Zheng Qu, Peter Richtarik. 2015\n\n Args:\n sparse_example_indices: A list of `Tensor` objects with type `int64`.\n a list of vectors which contain example indices.\n sparse_feature_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors which contain feature indices.\n sparse_feature_values: A list of `Tensor` objects with type `float32`.\n a list of vectors which contains feature value\n associated with each feature group.\n dense_features: A list of `Tensor` objects with type `float32`.\n a list of matrices which contains the dense feature values.\n example_weights: A `Tensor` of type `float32`.\n a vector which contains the weight associated with each\n example.\n example_labels: A `Tensor` of type `float32`.\n a vector which contains the label/target associated with each\n example.\n sparse_indices: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `int64`.\n a list of vectors where each value is the indices which has\n corresponding weights in sparse_weights. This field maybe omitted for the\n dense approach.\n sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n a list of vectors where each value is the weight associated with\n a sparse feature group.\n dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n a list of vectors where the values are the weights associated\n with a dense feature group.\n example_state_data: A `Tensor` of type `float32`.\n a list of vectors containing the example state data.\n loss_type: A `string` from: `\"logistic_loss\", \"squared_loss\", \"hinge_loss\", \"smooth_hinge_loss\", \"poisson_loss\"`.\n Type of the primal loss. Currently SdcaSolver supports logistic,\n squared and hinge losses.\n l1: A `float`. Symmetric l1 regularization strength.\n l2: A `float`. Symmetric l2 regularization strength.\n num_loss_partitions: An `int` that is `>= 1`.\n Number of partitions of the global loss function.\n num_inner_iterations: An `int` that is `>= 1`.\n Number of iterations per mini-batch.\n adaptive: An optional `bool`. Defaults to `True`.\n Whether to use Adaptive SDCA for the inner loop.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights).\n\n out_example_state_data: A `Tensor` of type `float32`.\n out_delta_sparse_weights: A list with the same length as `sparse_example_indices` of `Tensor` objects with type `float32`.\n out_delta_dense_weights: A list with the same length as `dense_features` of `Tensor` objects with type `float32`.\n ", "desc": "Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for", "type": "API"}, {"name": "tf.raw_ops.SdcaShrinkL1", "docs": "Applies L1 regularization shrink step on the parameters.\n\n Args:\n weights: A list of `Tensor` objects with type mutable `float32`.\n a list of vectors where each value is the weight associated with a\n feature group.\n l1: A `float`. Symmetric l1 regularization strength.\n l2: A `float`.\n Symmetric l2 regularization strength. Should be a positive float.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies L1 regularization shrink step on the parameters.", "type": "API"}, {"name": "tf.raw_ops.SegmentMax", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\max_j(data_j)\\\\) where `max` is over `j` such\n that `segment_ids[j] == i`.\n\n If the max is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SegmentMean", "docs": "Computes the mean along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\frac{\\sum_j data_j}{N}\\\\) where `mean` is\n over `j` such that `segment_ids[j] == i` and `N` is the total number of\n values summed.\n\n If the mean is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as a smaller following index when computing the numerator\n of the mean.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy()\n array([[2.5, 2.5, 2.5, 2.5],\n [5., 6., 7., 8.]], dtype=float32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SegmentMin", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\min_j(data_j)\\\\) where `min` is over `j` such\n that `segment_ids[j] == i`.\n\n If the min is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SegmentProd", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\prod_j data_j\\\\) where the product is over `j` such\n that `segment_ids[j] == i`.\n\n If the product is empty for a given segment ID `i`, `output[i] = 1`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SegmentSum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output_i = \\sum_j data_j\\\\) where sum is over `j` such\n that `segment_ids[j] == i`.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n\n Caution: On CPU, values in `segment_ids` are always validated to be sorted,\n and an error is thrown for indices that are not increasing. On GPU, this\n does not throw an error for unsorted indices. On GPU, out-of-order indices\n result in safe but unspecified behavior, which may include treating\n out-of-order indices as the same as a smaller following index.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])\n >>> tf.math.segment_sum(c, tf.constant([0, 0, 1])).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor whose size is equal to the size of `data`'s\n first dimension. Values should be sorted and can be repeated.\n\n Caution: The values are always validated to be sorted on CPU, never validated\n on GPU.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Select", "docs": "Selects elements from `x` or `y`, depending on `condition`.\n\n The `x`, and `y` tensors must all have the same shape, and the\n output will also have that shape.\n\n The `condition` tensor must be a scalar if `x` and `y` are scalars.\n If `x` and `y` are vectors or higher rank, then `condition` must be either a\n scalar, a vector with size matching the first dimension of `x`, or must have\n the same shape as `x`.\n\n The `condition` tensor acts as a mask that chooses, based on the value at each\n element, whether the corresponding element / row in the output should be\n taken from `x` (if true) or `y` (if false).\n\n If `condition` is a vector and `x` and `y` are higher rank matrices, then\n it chooses which row (outer dimension) to copy from `x` and `y`.\n If `condition` has the same shape as `x` and `y`, then it chooses which\n element to copy from `x` and `y`.\n\n For example:\n\n ```python\n # 'condition' tensor is [[True, False]\n # [False, True]]\n # 't' is [[1, 2],\n # [3, 4]]\n # 'e' is [[5, 6],\n # [7, 8]]\n select(condition, t, e) # => [[1, 6], [7, 4]]\n\n\n # 'condition' tensor is [True, False]\n # 't' is [[1, 2],\n # [3, 4]]\n # 'e' is [[5, 6],\n # [7, 8]]\n select(condition, t, e) ==> [[1, 2],\n [7, 8]]\n\n ```\n\n Args:\n condition: A `Tensor` of type `bool`.\n x: A `Tensor` which may have the same shape as `condition`.\n If `condition` is rank 1, `x` may have higher rank,\n but its first dimension must match the size of `condition`.\n y: A `Tensor` with the same type and shape as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "Selects elements from `x` or `y`, depending on `condition`.", "type": "API"}, {"name": "tf.raw_ops.SelectV2", "docs": "TODO: add doc.\n\n Args:\n condition: A `Tensor` of type `bool`.\n t: A `Tensor`.\n e: A `Tensor`. Must have the same type as `t`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `t`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.SelfAdjointEig", "docs": "Computes the Eigen Decomposition of a batch of square self-adjoint matrices.\n\n The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions\n form square matrices, with the same constraints as the single matrix\n SelfAdjointEig.\n\n The result is a [..., M+1, M] matrix with [..., 0,:] containing the\n eigenvalues, and subsequent [...,1:, :] containing the eigenvectors. The eigenvalues\n are sorted in non-decreasing order.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`.\n Shape is `[..., M, M]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the Eigen Decomposition of a batch of square self-adjoint matrices.", "type": "API"}, {"name": "tf.raw_ops.SelfAdjointEigV2", "docs": "Computes the eigen decomposition of one or more square self-adjoint matrices.\n\n Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in\n `input` such that `input[..., :, :] = v[..., :, :] * diag(e[..., :])`. The eigenvalues\n are sorted in non-decreasing order.\n\n ```python\n # a is a tensor.\n # e is a tensor of eigenvalues.\n # v is a tensor of eigenvectors.\n e, v = self_adjoint_eig(a)\n e = self_adjoint_eig(a, compute_v=False)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n `Tensor` input of shape `[N, N]`.\n compute_v: An optional `bool`. Defaults to `True`.\n If `True` then eigenvectors will be computed and returned in `v`.\n Otherwise, only the eigenvalues will be computed.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (e, v).\n\n e: A `Tensor`. Has the same type as `input`.\n v: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the eigen decomposition of one or more square self-adjoint matrices.", "type": "API"}, {"name": "tf.raw_ops.Selu", "docs": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`\n\n if < 0, `scale * features` otherwise.\n\n To be used together with\n `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`.\n For correct dropout, use `tf.contrib.nn.alpha_dropout`.\n\n See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)`", "type": "API"}, {"name": "tf.raw_ops.SeluGrad", "docs": "Computes gradients for the scaled exponential linear (Selu) operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The backpropagated gradients to the corresponding Selu operation.\n outputs: A `Tensor`. Must have the same type as `gradients`.\n The outputs of the corresponding Selu operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes gradients for the scaled exponential linear (Selu) operation.", "type": "API"}, {"name": "tf.raw_ops.Send", "docs": "Sends the named tensor from send_device to recv_device.\n\n Args:\n tensor: A `Tensor`. The tensor to send.\n tensor_name: A `string`. The name of the tensor to send.\n send_device: A `string`. The name of the device sending the tensor.\n send_device_incarnation: An `int`. The current incarnation of send_device.\n recv_device: A `string`. The name of the device receiving the tensor.\n client_terminated: An optional `bool`. Defaults to `False`.\n If set to true, this indicates that the node was added\n to the graph as a result of a client-side feed or fetch of Tensor data,\n in which case the corresponding send or recv is expected to be managed\n locally by the caller.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Sends the named tensor from send_device to recv_device.", "type": "API"}, {"name": "tf.raw_ops.SendTPUEmbeddingGradients", "docs": "Performs gradient updates of embedding tables.\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with type `float32`.\n A TensorList of gradients with which to update embedding tables.\n This argument has the same length and shapes as the return value of\n RecvTPUEmbeddingActivations, but contains gradients of the model's loss\n with respect to the embedding activations. The embedding tables are updated\n from these gradients via the optimizer specified in the TPU embedding\n configuration given to tpu.initialize_system.\n learning_rates: A list of `Tensor` objects with type `float32`.\n A TensorList of float32 scalars, one for each dynamic learning\n rate tag: see the comments in\n //third_party/tensorflow/core/protobuf/tpu/optimization_parameters.proto.\n Multiple tables can share the same dynamic learning rate tag as specified\n in the configuration. If the learning rates for all tables are constant,\n this list should be empty.\n config: A `string`. Serialized TPUEmbeddingConfiguration proto.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Performs gradient updates of embedding tables.", "type": "API"}, {"name": "tf.raw_ops.SerializeIterator", "docs": "Converts the given `resource_handle` representing an iterator to a variant tensor.\n\n Args:\n resource_handle: A `Tensor` of type `resource`.\n A handle to an iterator resource.\n external_state_policy: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Converts the given `resource_handle` representing an iterator to a variant tensor.", "type": "API"}, {"name": "tf.raw_ops.SerializeManySparse", "docs": "Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor` object.\n\n The `SparseTensor` must have rank `R` greater than 1, and the first dimension\n is treated as the minibatch dimension. Elements of the `SparseTensor`\n must be sorted in increasing order of this first dimension. The serialized\n `SparseTensor` objects going into each row of `serialized_sparse` will have\n rank `R-1`.\n\n The minibatch size `N` is extracted from `sparse_shape[0]`.\n\n Args:\n sparse_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the minibatch `SparseTensor`.\n sparse_values: A `Tensor`.\n 1-D. The `values` of the minibatch `SparseTensor`.\n sparse_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the minibatch `SparseTensor`.\n out_type: An optional `tf.DType` from: `tf.string, tf.variant`. Defaults to `tf.string`.\n The `dtype` to use for serialization; the supported types are `string`\n (default) and `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Serialize an `N`-minibatch `SparseTensor` into an `[N, 3]` `Tensor` object.", "type": "API"}, {"name": "tf.raw_ops.SerializeSparse", "docs": "Serialize a `SparseTensor` into a `[3]` `Tensor` object.\n\n Args:\n sparse_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the `SparseTensor`.\n sparse_values: A `Tensor`. 1-D. The `values` of the `SparseTensor`.\n sparse_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the `SparseTensor`.\n out_type: An optional `tf.DType` from: `tf.string, tf.variant`. Defaults to `tf.string`.\n The `dtype` to use for serialization; the supported types are `string`\n (default) and `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Serialize a `SparseTensor` into a `[3]` `Tensor` object.", "type": "API"}, {"name": "tf.raw_ops.SerializeTensor", "docs": "Transforms a Tensor into a serialized TensorProto proto.\n\n Args:\n tensor: A `Tensor`. A Tensor of type `T`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Transforms a Tensor into a serialized TensorProto proto.", "type": "API"}, {"name": "tf.raw_ops.SetSize", "docs": "Number of unique elements along last dimension of input `set`.\n\n Input `set` is a `SparseTensor` represented by `set_indices`, `set_values`,\n and `set_shape`. The last dimension contains values in a set, duplicates are\n allowed but ignored.\n\n If `validate_indices` is `True`, this op validates the order and range of `set`\n indices.\n\n Args:\n set_indices: A `Tensor` of type `int64`.\n 2D `Tensor`, indices of a `SparseTensor`.\n set_values: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`.\n 1D `Tensor`, values of a `SparseTensor`.\n set_shape: A `Tensor` of type `int64`.\n 1D `Tensor`, shape of a `SparseTensor`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Number of unique elements along last dimension of input `set`.", "type": "API"}, {"name": "tf.raw_ops.SetStatsAggregatorDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n stats_aggregator: A `Tensor` of type `resource`.\n tag: A `Tensor` of type `string`.\n counter_prefix: A `Tensor` of type `string`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Shape", "docs": "Returns the shape of a tensor.\n\n This operation returns a 1-D integer tensor representing the shape of `input`.\n\n For example:\n\n ```\n # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]\n shape(t) ==> [2, 2, 3]\n ```\n\n Args:\n input: A `Tensor`.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Returns the shape of a tensor.", "type": "API"}, {"name": "tf.raw_ops.ShapeN", "docs": "Returns shape of tensors.\n\n This operation returns N 1-D integer tensors representing shape of `input[i]s`.\n\n Args:\n input: A list of at least 1 `Tensor` objects with the same type.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `input` of `Tensor` objects with type `out_type`.\n ", "desc": "Returns shape of tensors.", "type": "API"}, {"name": "tf.raw_ops.ShardDataset", "docs": "Creates a `Dataset` that includes only 1/`num_shards` of this dataset.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n num_shards: A `Tensor` of type `int64`.\n An integer representing the number of shards operating in parallel.\n index: A `Tensor` of type `int64`.\n An integer representing the current worker index.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n require_non_empty: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a `Dataset` that includes only 1/`num_shards` of this dataset.", "type": "API"}, {"name": "tf.raw_ops.ShardedFilename", "docs": "Generate a sharded filename. The filename is printf formatted as\n\n %s-%05d-of-%05d, basename, shard, num_shards.\n\n Args:\n basename: A `Tensor` of type `string`.\n shard: A `Tensor` of type `int32`.\n num_shards: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Generate a sharded filename. The filename is printf formatted as", "type": "API"}, {"name": "tf.raw_ops.ShardedFilespec", "docs": "Generate a glob pattern matching all sharded file names.\n\n Args:\n basename: A `Tensor` of type `string`.\n num_shards: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Generate a glob pattern matching all sharded file names.", "type": "API"}, {"name": "tf.raw_ops.ShuffleAndRepeatDataset", "docs": "Creates a dataset that shuffles and repeats elements from `input_dataset`\n\n pseudorandomly.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n The number of output elements to buffer in an iterator over\n this dataset. Compare with the `min_after_dequeue` attr when creating a\n `RandomShuffleQueue`.\n seed: A `Tensor` of type `int64`.\n A scalar seed for the random number generator. If either `seed` or\n `seed2` is set to be non-zero, the random number generator is seeded\n by the given seed. Otherwise, a random seed is used.\n seed2: A `Tensor` of type `int64`.\n A second scalar seed to avoid seed collision.\n count: A `Tensor` of type `int64`.\n A scalar representing the number of times the underlying dataset\n should be repeated. The default is `-1`, which results in infinite repetition.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reshuffle_each_iteration: An optional `bool`. Defaults to `True`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that shuffles and repeats elements from `input_dataset`", "type": "API"}, {"name": "tf.raw_ops.ShuffleAndRepeatDatasetV2", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n seed: A `Tensor` of type `int64`.\n seed2: A `Tensor` of type `int64`.\n count: A `Tensor` of type `int64`.\n seed_generator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reshuffle_each_iteration: An optional `bool`. Defaults to `True`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ShuffleDataset", "docs": "Creates a dataset that shuffles elements from `input_dataset` pseudorandomly.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n The number of output elements to buffer in an iterator over\n this dataset. Compare with the `min_after_dequeue` attr when creating a\n `RandomShuffleQueue`.\n seed: A `Tensor` of type `int64`.\n A scalar seed for the random number generator. If either `seed` or\n `seed2` is set to be non-zero, the random number generator is seeded\n by the given seed. Otherwise, a random seed is used.\n seed2: A `Tensor` of type `int64`.\n A second scalar seed to avoid seed collision.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reshuffle_each_iteration: An optional `bool`. Defaults to `True`.\n If true, each iterator over this dataset will be given\n a different pseudorandomly generated seed, based on a sequence seeded by the\n `seed` and `seed2` inputs. If false, each iterator will be given the same\n seed, and repeated iteration over this dataset will yield the exact same\n sequence of results.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that shuffles elements from `input_dataset` pseudorandomly.", "type": "API"}, {"name": "tf.raw_ops.ShuffleDatasetV2", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n seed_generator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ShuffleDatasetV3", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n buffer_size: A `Tensor` of type `int64`.\n seed: A `Tensor` of type `int64`.\n seed2: A `Tensor` of type `int64`.\n seed_generator: A `Tensor` of type `resource`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reshuffle_each_iteration: An optional `bool`. Defaults to `True`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.ShutdownDistributedTPU", "docs": "Shuts down a running distributed TPU system.\n\n The op returns an error if no system is running.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Shuts down a running distributed TPU system.", "type": "API"}, {"name": "tf.raw_ops.Sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Specifically, `y = 1 / (1 + exp(-x))`.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.raw_ops.SigmoidGrad", "docs": "Computes the gradient of the sigmoid of `x` wrt its input.\n\n Specifically, `grad = dy * y * (1 - y)`, where `y = sigmoid(x)`, and\n `dy` is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient of the sigmoid of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.Sign", "docs": "Returns an element-wise indication of the sign of a number.\n\n `y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`.\n\n For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`.\n\n Example usage:\n >>> tf.math.sign([0., 2., -3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns an element-wise indication of the sign of a number.", "type": "API"}, {"name": "tf.raw_ops.Sin", "docs": "Computes sine of x element-wise.\n\n Given an input tensor, this function computes sine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10, float(\"inf\")])\n tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Sinh", "docs": "Computes hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic sine of every\n element in the tensor. Input range is `[-inf,inf]` and output range\n is `[-inf,inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Size", "docs": "Returns the size of a tensor.\n\n This operation returns an integer representing the number of elements in\n `input`.\n\n For example:\n\n ```\n # 't' is [[[1, 1,, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]]\n size(t) ==> 12\n ```\n\n Args:\n input: A `Tensor`.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Returns the size of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SkipDataset", "docs": "Creates a dataset that skips `count` elements from the `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n count: A `Tensor` of type `int64`.\n A scalar representing the number of elements from the `input_dataset`\n that should be skipped. If count is -1, skips everything.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that skips `count` elements from the `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.SleepDataset", "docs": "TODO: add doc.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n sleep_microseconds: A `Tensor` of type `int64`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Slice", "docs": "Return a slice from 'input'.\n\n The output tensor is a tensor with dimensions described by 'size'\n whose values are extracted from 'input' starting at the offsets in\n 'begin'.\n\n *Requirements*:\n 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)\n\n Args:\n input: A `Tensor`.\n begin: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n begin[i] specifies the offset into the 'i'th dimension of\n 'input' to slice from.\n size: A `Tensor`. Must have the same type as `begin`.\n size[i] specifies the number of elements of the 'i'th dimension\n of 'input' to slice. If size[i] is -1, all remaining elements in dimension\n i are included in the slice (i.e. this is equivalent to setting\n size[i] = input.dim_size(i) - begin[i]).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Return a slice from 'input'.", "type": "API"}, {"name": "tf.raw_ops.SlidingWindowDataset", "docs": "Creates a dataset that passes a sliding window over `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n window_size: A `Tensor` of type `int64`.\n A scalar representing the number of elements in the\n sliding window.\n window_shift: A `Tensor` of type `int64`.\n A scalar representing the steps moving the sliding window\n forward in one iteration. It must be positive.\n window_stride: A `Tensor` of type `int64`.\n A scalar representing the stride of the input elements of the sliding window.\n It must be positive.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n drop_remainder: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that passes a sliding window over `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.Snapshot", "docs": "Returns a copy of the input tensor.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns a copy of the input tensor.", "type": "API"}, {"name": "tf.raw_ops.SnapshotDataset", "docs": "Creates a dataset that will write to / read from a snapshot.\n\n This dataset attempts to determine whether a valid snapshot exists at the\n `snapshot_path`, and reads from the snapshot in lieu of using `input_dataset`.\n If not, it will run the preprocessing pipeline as usual, and write out a\n snapshot of the data processed for future use.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n path: A `Tensor` of type `string`.\n The path we should write snapshots to / read snapshots from.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n compression: An optional `string`. Defaults to `\"\"`.\n reader_path_prefix: An optional `string`. Defaults to `\"\"`.\n writer_path_prefix: An optional `string`. Defaults to `\"\"`.\n shard_size_bytes: An optional `int`. Defaults to `10737418240`.\n pending_snapshot_expiry_seconds: An optional `int`. Defaults to `86400`.\n num_reader_threads: An optional `int`. Defaults to `1`.\n reader_buffer_size: An optional `int`. Defaults to `1`.\n num_writer_threads: An optional `int`. Defaults to `1`.\n writer_buffer_size: An optional `int`. Defaults to `1`.\n shuffle_on_read: An optional `bool`. Defaults to `False`.\n seed: An optional `int`. Defaults to `0`.\n seed2: An optional `int`. Defaults to `0`.\n mode: An optional `string`. Defaults to `\"auto\"`.\n snapshot_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that will write to / read from a snapshot.", "type": "API"}, {"name": "tf.raw_ops.SnapshotDatasetV2", "docs": "Creates a dataset that will write to / read from a snapshot.\n\n This dataset attempts to determine whether a valid snapshot exists at the\n `snapshot_path`, and reads from the snapshot in lieu of using `input_dataset`.\n If not, it will run the preprocessing pipeline as usual, and write out a\n snapshot of the data processed for future use.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n A variant tensor representing the input dataset.\n path: A `Tensor` of type `string`.\n The path we should write snapshots to / read snapshots from.\n reader_func_other_args: A list of `Tensor` objects.\n shard_func_other_args: A list of `Tensor` objects.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n reader_func: A function decorated with @Defun.\n Optional. A function to control how to read data from snapshot shards.\n shard_func: A function decorated with @Defun.\n Optional. A function to control how to shard data when writing a snapshot.\n compression: An optional `string`. Defaults to `\"\"`.\n The type of compression to be applied to the saved snapshot files.\n reader_prefix: An optional `string`. Defaults to `\"\"`.\n writer_prefix: An optional `string`. Defaults to `\"\"`.\n hash_valid: An optional `bool`. Defaults to `False`.\n hash: An optional `int`. Defaults to `0`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that will write to / read from a snapshot.", "type": "API"}, {"name": "tf.raw_ops.SobolSample", "docs": "Generates points from the Sobol sequence.\n\n Creates a Sobol sequence with `num_results` samples. Each sample has dimension\n `dim`. Skips the first `skip` samples.\n\n Args:\n dim: A `Tensor` of type `int32`.\n Positive scalar `Tensor` representing each sample's dimension.\n num_results: A `Tensor` of type `int32`.\n Positive scalar `Tensor` of dtype int32. The number of Sobol points to return\n in the output.\n skip: A `Tensor` of type `int32`.\n Positive scalar `Tensor` of dtype int32. The number of initial points of the\n Sobol sequence to skip.\n dtype: An optional `tf.DType` from: `tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the sample. One of: `float32` or `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Generates points from the Sobol sequence.", "type": "API"}, {"name": "tf.raw_ops.Softmax", "docs": "Computes softmax activations.\n\n For each batch `i` and class `j` we have\n\n $$softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))$$\n\n Args:\n logits: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n 2-D with shape `[batch_size, num_classes]`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `logits`.\n ", "desc": "Computes softmax activations.", "type": "API"}, {"name": "tf.raw_ops.SoftmaxCrossEntropyWithLogits", "docs": "Computes softmax cross entropy cost and gradients to backpropagate.\n\n Inputs are the logits, not probabilities.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n batch_size x num_classes matrix\n labels: A `Tensor`. Must have the same type as `features`.\n batch_size x num_classes matrix\n The caller must ensure that each batch of labels represents a valid\n probability distribution.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (loss, backprop).\n\n loss: A `Tensor`. Has the same type as `features`.\n backprop: A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softmax cross entropy cost and gradients to backpropagate.", "type": "API"}, {"name": "tf.raw_ops.Softplus", "docs": "TODO: add doc.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.SoftplusGrad", "docs": "Computes softplus gradients for a softplus operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The backpropagated gradients to the corresponding softplus operation.\n features: A `Tensor`. Must have the same type as `gradients`.\n The features passed as input to the corresponding softplus operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes softplus gradients for a softplus operation.", "type": "API"}, {"name": "tf.raw_ops.Softsign", "docs": "Computes softsign: `features / (abs(features) + 1)`.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softsign: `features / (abs(features) + 1)`.", "type": "API"}, {"name": "tf.raw_ops.SoftsignGrad", "docs": "Computes softsign gradients for a softsign operation.\n\n Args:\n gradients: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n The backpropagated gradients to the corresponding softsign operation.\n features: A `Tensor`. Must have the same type as `gradients`.\n The features passed as input to the corresponding softsign operation.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `gradients`.\n ", "desc": "Computes softsign gradients for a softsign operation.", "type": "API"}, {"name": "tf.raw_ops.SpaceToBatch", "docs": "SpaceToBatch for 4-D tensors of type T.\n\n This is a legacy version of the more general SpaceToBatchND.\n\n Zero-pads and then rearranges (permutes) blocks of spatial data into batch.\n More specifically, this op outputs a copy of the input tensor where values from\n the `height` and `width` dimensions are moved to the `batch` dimension. After\n the zero-padding, both `height` and `width` of the input must be divisible by the\n block size.\n\n The attr `block_size` must be greater than one. It indicates the block size.\n\n * Non-overlapping blocks of size `block_size x block size` in the height and\n width dimensions are rearranged into the batch dimension at each location.\n * The batch of the output tensor is `batch * block_size * block_size`.\n * Both height_pad and width_pad must be divisible by block_size.\n\n The shape of the output will be:\n\n [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,\n depth]\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],\n [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies\n the padding of the input with zeros across the spatial dimensions as follows:\n\n paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]\n\n The effective spatial dimensions of the zero-padded input tensor will be:\n\n height_pad = pad_top + height + pad_bottom\n width_pad = pad_left + width + pad_right\n block_size: An `int` that is `>= 2`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for 4-D tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.SpaceToBatchND", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.SpaceToDepth", "docs": "SpaceToDepth for tensors of type T.\n\n Rearranges blocks of spatial data, into depth. More specifically,\n this op outputs a copy of the input tensor where values from the `height`\n and `width` dimensions are moved to the `depth` dimension.\n The attr `block_size` indicates the input block size.\n\n * Non-overlapping blocks of size `block_size x block size` are rearranged\n into depth at each location.\n * The depth of the output tensor is `block_size * block_size * input_depth`.\n * The Y, X coordinates within each block of the input become the high order\n component of the output channel index.\n * The input tensor's height and width must be divisible by block_size.\n\n The `data_format` attr specifies the layout of the input and output tensors\n with the following options:\n \"NHWC\": `[ batch, height, width, channels ]`\n \"NCHW\": `[ batch, channels, height, width ]`\n \"NCHW_VECT_C\":\n `qint8 [ batch, channels / 4, height, width, 4 ]`\n\n It is useful to consider the operation as transforming a 6-D Tensor.\n e.g. for data_format = NHWC,\n Each element in the input tensor can be specified via 6 coordinates,\n ordered by decreasing memory layout significance as:\n n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates\n within the output image, bX, bY means coordinates\n within the input block, iC means input channels).\n The output would be a transpose to the following layout:\n n,oY,oX,bY,bX,iC\n\n This operation is useful for resizing the activations between convolutions\n (but keeping all data), e.g. instead of pooling. It is also useful for training\n purely convolutional models.\n\n For example, given an input of shape `[1, 2, 2, 1]`, data_format = \"NHWC\" and\n block_size = 2:\n\n ```\n x = [[[[1], [2]],\n [[3], [4]]]]\n ```\n\n This operation will output a tensor of shape `[1, 1, 1, 4]`:\n\n ```\n [[[[1, 2, 3, 4]]]]\n ```\n\n Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`,\n the corresponding output will have a single element (i.e. width and height are\n both 1) and will have a depth of 4 channels (1 * block_size * block_size).\n The output element shape is `[1, 1, 4]`.\n\n For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n This operation, for block_size of 2, will return the following tensor of shape\n `[1, 1, 1, 12]`\n\n ```\n [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]\n ```\n\n Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:\n\n ```\n x = [[[[1], [2], [5], [6]],\n [[3], [4], [7], [8]],\n [[9], [10], [13], [14]],\n [[11], [12], [15], [16]]]]\n ```\n\n the operator will return the following tensor of shape `[1 2 2 4]`:\n\n ```\n x = [[[[1, 2, 3, 4],\n [5, 6, 7, 8]],\n [[9, 10, 11, 12],\n [13, 14, 15, 16]]]]\n ```\n\n Args:\n input: A `Tensor`.\n block_size: An `int` that is `>= 2`. The size of the spatial block.\n data_format: An optional `string` from: `\"NHWC\", \"NCHW\", \"NCHW_VECT_C\"`. Defaults to `\"NHWC\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToDepth for tensors of type T.", "type": "API"}, {"name": "tf.raw_ops.SparseAccumulatorApplyGradient", "docs": "Applies a sparse gradient to a given accumulator.\n\n Does not add if local_step is smaller than the accumulator's\n global_step.\n\n Args:\n handle: A `Tensor` of type mutable `string`. The handle to a accumulator.\n local_step: A `Tensor` of type `int64`.\n The local_step value at which the sparse gradient was computed.\n gradient_indices: A `Tensor` of type `int64`.\n Indices of the sparse gradient to be accumulated. Must be a\n vector.\n gradient_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Values are the non-zero slices of the gradient, and must have\n the same first dimension as indices, i.e., the nnz represented by indices and\n values must be consistent.\n gradient_shape: A `Tensor` of type `int64`.\n Shape of the sparse gradient to be accumulated.\n has_known_shape: A `bool`.\n Boolean indicating whether gradient_shape is unknown, in which\n case the input is ignored during validation.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Applies a sparse gradient to a given accumulator.", "type": "API"}, {"name": "tf.raw_ops.SparseAccumulatorTakeGradient", "docs": "Extracts the average sparse gradient in a SparseConditionalAccumulator.\n\n The op will blocks until sufficient (i.e., more than num_required)\n gradients have been accumulated. If the accumulator has already\n aggregated more than num_required gradients, it will return its\n average of the accumulated gradients. Also automatically increments\n the recorded global_step in the accumulator by 1, and resets the\n aggregate to 0.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n The handle to a SparseConditionalAccumulator.\n num_required: A `Tensor` of type `int32`.\n Number of gradients required before we return an aggregate.\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The data type of accumulated gradients. Needs to correspond to the type\n of the accumulator.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, values, shape).\n\n indices: A `Tensor` of type `int64`.\n values: A `Tensor` of type `dtype`.\n shape: A `Tensor` of type `int64`.\n ", "desc": "Extracts the average sparse gradient in a SparseConditionalAccumulator.", "type": "API"}, {"name": "tf.raw_ops.SparseAdd", "docs": "Adds two `SparseTensor` objects to produce another `SparseTensor`.\n\n The input `SparseTensor` objects' indices are assumed ordered in standard\n lexicographic order. If this is not the case, before this step run\n `SparseReorder` to restore index ordering.\n\n By default, if two values sum to zero at some index, the output `SparseTensor`\n would still include that particular location in its index, storing a zero in the\n corresponding value slot. To override this, callers can specify `thresh`,\n indicating that if the sum has a magnitude strictly smaller than `thresh`, its\n corresponding value and index would then not be included. In particular,\n `thresh == 0` (default) means everything is kept and actual thresholding happens\n only for a positive value.\n\n In the following shapes, `nnz` is the count after taking `thresh` into account.\n\n Args:\n a_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the first `SparseTensor`, size `[nnz, ndims]` Matrix.\n a_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. The `values` of the first `SparseTensor`, size `[nnz]` Vector.\n a_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the first `SparseTensor`, size `[ndims]` Vector.\n b_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the second `SparseTensor`, size `[nnz, ndims]` Matrix.\n b_values: A `Tensor`. Must have the same type as `a_values`.\n 1-D. The `values` of the second `SparseTensor`, size `[nnz]` Vector.\n b_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the second `SparseTensor`, size `[ndims]` Vector.\n thresh: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 0-D. The magnitude threshold that determines if an output value/index\n pair takes space.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sum_indices, sum_values, sum_shape).\n\n sum_indices: A `Tensor` of type `int64`.\n sum_values: A `Tensor`. Has the same type as `a_values`.\n sum_shape: A `Tensor` of type `int64`.\n ", "desc": "Adds two `SparseTensor` objects to produce another `SparseTensor`.", "type": "API"}, {"name": "tf.raw_ops.SparseAddGrad", "docs": "The gradient operator for the SparseAdd op.\n\n The SparseAdd op calculates A + B, where A, B, and the sum are all represented\n as `SparseTensor` objects. This op takes in the upstream gradient w.r.t.\n non-empty values of the sum, and outputs the gradients w.r.t. the non-empty\n values of A and B.\n\n Args:\n backprop_val_grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D with shape `[nnz(sum)]`. The gradient with respect to\n the non-empty values of the sum.\n a_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the `SparseTensor` A, size `[nnz(A), ndims]`.\n b_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the `SparseTensor` B, size `[nnz(B), ndims]`.\n sum_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the sum `SparseTensor`, size\n `[nnz(sum), ndims]`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (a_val_grad, b_val_grad).\n\n a_val_grad: A `Tensor`. Has the same type as `backprop_val_grad`.\n b_val_grad: A `Tensor`. Has the same type as `backprop_val_grad`.\n ", "desc": "The gradient operator for the SparseAdd op.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyAdadelta", "docs": "var: Should be from a Variable().\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n accum_update: A mutable `Tensor`. Must have the same type as `var`.\n : Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay factor. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "var: Should be from a Variable().", "type": "API"}, {"name": "tf.raw_ops.SparseApplyAdagrad", "docs": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.\n\n That is for rows we have grad for, we update var and accum as follows:\n $$accum += grad * grad$$\n $$var -= lr * grad * (1 / sqrt(accum))$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyAdagradDA", "docs": "Update entries in '*var' and '*accum' according to the proximal adagrad scheme.\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n gradient_accumulator: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n gradient_squared_accumulator: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n global_step: A `Tensor` of type `int64`.\n Training step number. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update entries in '*var' and '*accum' according to the proximal adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyAdagradV2", "docs": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.\n\n That is for rows we have grad for, we update var and accum as follows:\n $$accum += grad * grad$$\n $$var -= lr * grad * (1 / sqrt(accum))$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Constant factor. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n update_slots: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the adagrad scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyCenteredRMSProp", "docs": "Update '*var' according to the centered RMSProp algorithm.\n\n The centered RMSProp algorithm uses an estimate of the centered second moment\n (i.e., the variance) for normalization, as opposed to regular RMSProp, which\n uses the (uncentered) second moment. This often helps with training, but is\n slightly more expensive in terms of computation and memory.\n\n Note that in dense implementation of this algorithm, mg, ms, and mom will\n update even if the grad is zero, but in this sparse implementation, mg, ms,\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n mean_grad = decay * mean_grad + (1-decay) * gradient\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)\n\n $$ms <- rho * ms_{t-1} + (1-rho) * grad * grad$$\n $$mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)$$\n $$var <- var - mom$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n mg: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n ms: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n mom: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `var`.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var, ms and mom.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, mg, ms, and mom tensors is\n protected by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the centered RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyFtrl", "docs": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.\n\n That is for rows we have grad for, we update var, accum and linear as follows:\n $$accum_new = accum + grad * grad$$\n $$linear += grad + (accum_{new}^{-lr_{power}} - accum^{-lr_{power}} / lr * var$$\n $$quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2$$\n $$var = (sign(linear) * l1 - linear) / quadratic\\ if\\ |linear| > l1\\ else\\ 0.0$$\n $$accum = accum_{new}$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n linear: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n lr_power: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyFtrlV2", "docs": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.\n\n That is for rows we have grad for, we update var, accum and linear as follows:\n grad_with_shrinkage = grad + 2 * l2_shrinkage * var\n accum_new = accum + grad * grad\n linear += grad_with_shrinkage -\n (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var\n quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2\n var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0\n accum = accum_new\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n linear: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 shrinkage regularization. Must be a scalar.\n l2_shrinkage: A `Tensor`. Must have the same type as `var`.\n lr_power: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n multiply_linear_by_lr: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update relevant entries in '*var' according to the Ftrl-proximal scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyMomentum", "docs": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.\n\n Set use_nesterov = True if you want to use Nesterov momentum.\n\n That is for rows we have grad for, we update var and accum as follows:\n\n $$accum = accum * momentum + grad$$\n $$var -= lr * accum$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n momentum: A `Tensor`. Must have the same type as `var`.\n Momentum. Must be a scalar.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var and accum tensors will be protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n use_nesterov: An optional `bool`. Defaults to `False`.\n If `True`, the tensor passed to compute grad will be\n var - lr * momentum * accum, so in the end, the var you get is actually\n var - lr * momentum * accum.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update relevant entries in '*var' and '*accum' according to the momentum scheme.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyProximalAdagrad", "docs": "Sparse update entries in '*var' and '*accum' according to FOBOS algorithm.\n\n That is for rows we have grad for, we update var and accum as follows:\n $$accum += grad * grad$$\n $$prox_v = var$$\n $$prox_v -= lr * grad * (1 / sqrt(accum))$$\n $$var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n accum: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Learning rate. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, updating of the var and accum tensors will be protected by\n a lock; otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Sparse update entries in '*var' and '*accum' according to FOBOS algorithm.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyProximalGradientDescent", "docs": "Sparse update '*var' as FOBOS algorithm with fixed learning rate.\n\n That is for rows we have grad for, we update var as follows:\n $$prox_v = var - alpha * grad$$\n $$var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n alpha: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n l1: A `Tensor`. Must have the same type as `var`.\n L1 regularization. Must be a scalar.\n l2: A `Tensor`. Must have the same type as `var`.\n L2 regularization. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var and accum.\n use_locking: An optional `bool`. Defaults to `False`.\n If True, the subtraction will be protected by a lock;\n otherwise the behavior is undefined, but may exhibit less contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Sparse update '*var' as FOBOS algorithm with fixed learning rate.", "type": "API"}, {"name": "tf.raw_ops.SparseApplyRMSProp", "docs": "Update '*var' according to the RMSProp algorithm.\n\n Note that in dense implementation of this algorithm, ms and mom will\n update even if the grad is zero, but in this sparse implementation, ms\n and mom will not update in iterations during which the grad is zero.\n\n mean_square = decay * mean_square + (1-decay) * gradient ** 2\n Delta = learning_rate * gradient / sqrt(mean_square + epsilon)\n\n $$ms <- rho * ms_{t-1} + (1-rho) * grad * grad$$\n $$mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)$$\n $$var <- var - mom$$\n\n Args:\n var: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n Should be from a Variable().\n ms: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n mom: A mutable `Tensor`. Must have the same type as `var`.\n Should be from a Variable().\n lr: A `Tensor`. Must have the same type as `var`.\n Scaling factor. Must be a scalar.\n rho: A `Tensor`. Must have the same type as `var`.\n Decay rate. Must be a scalar.\n momentum: A `Tensor`. Must have the same type as `var`.\n epsilon: A `Tensor`. Must have the same type as `var`.\n Ridge term. Must be a scalar.\n grad: A `Tensor`. Must have the same type as `var`. The gradient.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A vector of indices into the first dimension of var, ms and mom.\n use_locking: An optional `bool`. Defaults to `False`.\n If `True`, updating of the var, ms, and mom tensors is protected\n by a lock; otherwise the behavior is undefined, but may exhibit less\n contention.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `var`.\n ", "desc": "Update '*var' according to the RMSProp algorithm.", "type": "API"}, {"name": "tf.raw_ops.SparseBincount", "docs": "Counts the number of occurrences of each value in an integer array.\n\n Outputs a vector with length `size` and the same dtype as `weights`. If\n `weights` are empty, then index `i` stores the number of times the value `i` is\n counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of\n the value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Values in `arr` outside of the range [0, size) are ignored.\n\n Args:\n indices: A `Tensor` of type `int64`. 2D int64 `Tensor`.\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1D int `Tensor`.\n dense_shape: A `Tensor` of type `int64`. 1D int64 `Tensor`.\n size: A `Tensor`. Must have the same type as `values`.\n non-negative int scalar `Tensor`.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n is an int32, int64, float32, or float64 `Tensor` with the same\n shape as `input`, or a length-0 `Tensor`, in which case it acts as all weights\n equal to 1.\n binary_output: An optional `bool`. Defaults to `False`.\n bool; Whether the kernel should count the appearance or number of occurrences.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `weights`.\n ", "desc": "Counts the number of occurrences of each value in an integer array.", "type": "API"}, {"name": "tf.raw_ops.SparseConcat", "docs": "Concatenates a list of `SparseTensor` along the specified dimension.\n\n Concatenation is with respect to the dense versions of these sparse tensors.\n It is assumed that each input is a `SparseTensor` whose elements are ordered\n along increasing dimension number.\n\n All inputs' shapes must match, except for the concat dimension. The\n `indices`, `values`, and `shapes` lists must have the same length.\n\n The output shape is identical to the inputs', except along the concat\n dimension, where it is the sum of the inputs' sizes along that dimension.\n\n The output elements will be resorted to preserve the sort order along\n increasing dimension number.\n\n This op runs in `O(M log M)` time, where `M` is the total number of non-empty\n values across all inputs. This is due to the need for an internal sort in\n order to concatenate efficiently across an arbitrary dimension.\n\n For example, if `concat_dim = 1` and the inputs are\n\n sp_inputs[0]: shape = [2, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\n then the output will be\n\n shape = [2, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n Graphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b c ] [ ] [b c ]\n\n Args:\n indices: A list of at least 2 `Tensor` objects with type `int64`.\n 2-D. Indices of each input `SparseTensor`.\n values: A list with the same length as `indices` of `Tensor` objects with the same type.\n 1-D. Non-empty values of each `SparseTensor`.\n shapes: A list with the same length as `indices` of `Tensor` objects with type `int64`.\n 1-D. Shapes of each `SparseTensor`.\n concat_dim: An `int`.\n Dimension to concatenate along. Must be in range [-rank, rank),\n where rank is the number of dimensions in each input `SparseTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `values`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Concatenates a list of `SparseTensor` along the specified dimension.", "type": "API"}, {"name": "tf.raw_ops.SparseConditionalAccumulator", "docs": "A conditional accumulator for aggregating sparse gradients.\n\n The accumulator accepts gradients marked with local_step greater or\n equal to the most recent global_step known to the accumulator. The\n average can be extracted from the accumulator, provided sufficient\n gradients have been accumulated. Extracting the average automatically\n resets the aggregate to 0, and increments the global_step recorded by\n the accumulator.\n\n Args:\n dtype: A `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64`.\n The type of the value being accumulated.\n shape: A `tf.TensorShape` or list of `ints`. The shape of the values.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this accumulator will be shared under the given name\n across multiple sessions.\n reduction_type: An optional `string` from: `\"MEAN\", \"SUM\"`. Defaults to `\"MEAN\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A conditional accumulator for aggregating sparse gradients.", "type": "API"}, {"name": "tf.raw_ops.SparseCountSparseOutput", "docs": "Performs sparse-output bin counting for a sparse tensor input.\n\n Counts the number of times each value occurs in the input.\n\n Args:\n indices: A `Tensor` of type `int64`.\n Tensor containing the indices of the sparse tensor to count.\n values: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Tensor containing values of the sparse tensor to count.\n dense_shape: A `Tensor` of type `int64`.\n Tensor containing the dense shape of the sparse tensor to count.\n weights: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.\n A Tensor of the same shape as indices containing per-index weight values.\n May also be the empty tensor if no weights are used.\n binary_output: A `bool`.\n Whether to output the number of occurrences of each value or 1.\n minlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Minimum value to count. Can be set to -1 for no minimum.\n maxlength: An optional `int` that is `>= -1`. Defaults to `-1`.\n Maximum value to count. Can be set to -1 for no maximum.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_dense_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `weights`.\n output_dense_shape: A `Tensor` of type `int64`.\n ", "desc": "Performs sparse-output bin counting for a sparse tensor input.", "type": "API"}, {"name": "tf.raw_ops.SparseCross", "docs": "Generates sparse cross from a list of sparse and dense tensors.\n\n The op takes two lists, one of 2D `SparseTensor` and one of 2D `Tensor`, each\n representing features of one feature column. It outputs a 2D `SparseTensor` with\n the batchwise crosses of these features.\n\n For example, if the inputs are\n\n inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n\n inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be\n\n shape = [2, 2]\n [0, 0]: \"a_X_d_X_f\"\n [1, 0]: \"b_X_e_X_g\"\n [1, 1]: \"c_X_e_X_g\"\n\n if hashed_output=true then the output will be\n\n shape = [2, 2]\n [0, 0]: FingerprintCat64(\n Fingerprint64(\"f\"), FingerprintCat64(\n Fingerprint64(\"d\"), Fingerprint64(\"a\")))\n [1, 0]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"b\")))\n [1, 1]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"c\")))\n\n Args:\n indices: A list of `Tensor` objects with type `int64`.\n 2-D. Indices of each input `SparseTensor`.\n values: A list of `Tensor` objects with types from: `int64`, `string`.\n 1-D. values of each `SparseTensor`.\n shapes: A list with the same length as `indices` of `Tensor` objects with type `int64`.\n 1-D. Shapes of each `SparseTensor`.\n dense_inputs: A list of `Tensor` objects with types from: `int64`, `string`.\n 2-D. Columns represented by dense `Tensor`.\n hashed_output: A `bool`.\n If true, returns the hash of the cross instead of the string.\n This will allow us avoiding string manipulations.\n num_buckets: An `int` that is `>= 0`. It is used if hashed_output is true.\n output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.\n hash_key: An `int`.\n Specify the hash_key that will be used by the `FingerprintCat64`\n function to combine the crosses fingerprints.\n out_type: A `tf.DType` from: `tf.int64, tf.string`.\n internal_type: A `tf.DType` from: `tf.int64, tf.string`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor` of type `out_type`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Generates sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.raw_ops.SparseCrossHashed", "docs": "Generates sparse cross from a list of sparse and dense tensors.\n\n The op takes two lists, one of 2D `SparseTensor` and one of 2D `Tensor`, each\n representing features of one feature column. It outputs a 2D `SparseTensor` with\n the batchwise crosses of these features.\n\n For example, if the inputs are\n\n inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n\n inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be\n\n shape = [2, 2]\n [0, 0]: \"a_X_d_X_f\"\n [1, 0]: \"b_X_e_X_g\"\n [1, 1]: \"c_X_e_X_g\"\n\n if hashed_output=true then the output will be\n\n shape = [2, 2]\n [0, 0]: FingerprintCat64(\n Fingerprint64(\"f\"), FingerprintCat64(\n Fingerprint64(\"d\"), Fingerprint64(\"a\")))\n [1, 0]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"b\")))\n [1, 1]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"c\")))\n\n Args:\n indices: A list of `Tensor` objects with type `int64`.\n 2-D. Indices of each input `SparseTensor`.\n values: A list of `Tensor` objects with types from: `int64`, `string`.\n 1-D. values of each `SparseTensor`.\n shapes: A list with the same length as `indices` of `Tensor` objects with type `int64`.\n 1-D. Shapes of each `SparseTensor`.\n dense_inputs: A list of `Tensor` objects with types from: `int64`, `string`.\n 2-D. Columns represented by dense `Tensor`.\n num_buckets: A `Tensor` of type `int64`.\n It is used if hashed_output is true.\n output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.\n strong_hash: A `Tensor` of type `bool`.\n boolean, if true, siphash with salt will be used instead of farmhash.\n salt: A `Tensor` of type `int64`.\n Specify the salt that will be used by the siphash function.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor` of type `int64`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Generates sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.raw_ops.SparseCrossV2", "docs": "Generates sparse cross from a list of sparse and dense tensors.\n\n The op takes two lists, one of 2D `SparseTensor` and one of 2D `Tensor`, each\n representing features of one feature column. It outputs a 2D `SparseTensor` with\n the batchwise crosses of these features.\n\n For example, if the inputs are\n\n inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n\n inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be\n\n shape = [2, 2]\n [0, 0]: \"a_X_d_X_f\"\n [1, 0]: \"b_X_e_X_g\"\n [1, 1]: \"c_X_e_X_g\"\n\n if hashed_output=true then the output will be\n\n shape = [2, 2]\n [0, 0]: FingerprintCat64(\n Fingerprint64(\"f\"), FingerprintCat64(\n Fingerprint64(\"d\"), Fingerprint64(\"a\")))\n [1, 0]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"b\")))\n [1, 1]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"c\")))\n\n Args:\n indices: A list of `Tensor` objects with type `int64`.\n 2-D. Indices of each input `SparseTensor`.\n values: A list of `Tensor` objects with types from: `int64`, `string`.\n 1-D. values of each `SparseTensor`.\n shapes: A list with the same length as `indices` of `Tensor` objects with type `int64`.\n 1-D. Shapes of each `SparseTensor`.\n dense_inputs: A list of `Tensor` objects with types from: `int64`, `string`.\n 2-D. Columns represented by dense `Tensor`.\n sep: A `Tensor` of type `string`.\n string used when joining a list of string inputs, can be used as separator later.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor` of type `string`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Generates sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.raw_ops.SparseDenseCwiseAdd", "docs": "Adds up a SparseTensor and a dense Tensor, using these special rules:\n\n (1) Broadcasts the dense side to have the same shape as the sparse side, if\n eligible;\n (2) Then, only the dense values pointed to by the indices of the SparseTensor\n participate in the cwise addition.\n\n By these rules, the result is a logical SparseTensor with exactly the same\n indices and shape, but possibly with different non-zero values. The output of\n this Op is the resultant non-zero values.\n\n Args:\n sp_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n sp_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `sp_indices`.\n sp_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n dense: A `Tensor`. Must have the same type as `sp_values`.\n `R`-D. The dense Tensor operand.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `sp_values`.\n ", "desc": "Adds up a SparseTensor and a dense Tensor, using these special rules:", "type": "API"}, {"name": "tf.raw_ops.SparseDenseCwiseDiv", "docs": "Component-wise divides a SparseTensor by a dense Tensor.\n\n *Limitation*: this Op only broadcasts the dense side to the sparse side, but not\n the other direction.\n\n Args:\n sp_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n sp_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `sp_indices`.\n sp_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n dense: A `Tensor`. Must have the same type as `sp_values`.\n `R`-D. The dense Tensor operand.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `sp_values`.\n ", "desc": "Component-wise divides a SparseTensor by a dense Tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseDenseCwiseMul", "docs": "Component-wise multiplies a SparseTensor by a dense Tensor.\n\n The output locations corresponding to the implicitly zero elements in the sparse\n tensor will be zero (i.e., will not take up storage space), regardless of the\n contents of the dense tensor (even if it's +/-INF and that INF*0 == NaN).\n\n *Limitation*: this Op only broadcasts the dense side to the sparse side, but not\n the other direction.\n\n Args:\n sp_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n sp_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `sp_indices`.\n sp_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n dense: A `Tensor`. Must have the same type as `sp_values`.\n `R`-D. The dense Tensor operand.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `sp_values`.\n ", "desc": "Component-wise multiplies a SparseTensor by a dense Tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseFillEmptyRows", "docs": "Fills empty rows in the input 2-D `SparseTensor` with a default value.\n\n The input `SparseTensor` is represented via the tuple of inputs\n (`indices`, `values`, `dense_shape`). The output `SparseTensor` has the\n same `dense_shape` but with indices `output_indices` and values\n `output_values`.\n\n This op inserts a single entry for every row that doesn't have any values.\n The index is created as `[row, 0, ..., 0]` and the inserted value\n is `default_value`.\n\n For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:\n\n [0, 1]: a\n [0, 3]: b\n [1, 0]: default_value\n [2, 0]: c\n [3, 1]: d\n [4, 0]: default_value\n\n The output `SparseTensor` will be in row-major order and will have the\n same shape as the input.\n\n This op also returns an indicator vector shaped `[dense_shape[0]]` such that\n\n empty_row_indicator[i] = True iff row i was an empty row.\n\n And a reverse index map vector shaped `[indices.shape[0]]` that is used during\n backpropagation,\n\n reverse_index_map[j] = out_j s.t. indices[j, :] == output_indices[out_j, :]\n\n Args:\n indices: A `Tensor` of type `int64`.\n 2-D. the indices of the sparse tensor.\n values: A `Tensor`. 1-D. the values of the sparse tensor.\n dense_shape: A `Tensor` of type `int64`.\n 1-D. the shape of the sparse tensor.\n default_value: A `Tensor`. Must have the same type as `values`.\n 0-D. default value to insert into location `[row, 0, ..., 0]`\n for rows missing from the input sparse tensor.\n output indices: 2-D. the indices of the filled sparse tensor.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, empty_row_indicator, reverse_index_map).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `values`.\n empty_row_indicator: A `Tensor` of type `bool`.\n reverse_index_map: A `Tensor` of type `int64`.\n ", "desc": "Fills empty rows in the input 2-D `SparseTensor` with a default value.", "type": "API"}, {"name": "tf.raw_ops.SparseFillEmptyRowsGrad", "docs": "The gradient of SparseFillEmptyRows.\n\n Takes vectors reverse_index_map, shaped `[N]`, and grad_values,\n shaped `[N_full]`, where `N_full >= N` and copies data into either\n `d_values` or `d_default_value`. Here `d_values` is shaped `[N]` and\n `d_default_value` is a scalar.\n\n d_values[j] = grad_values[reverse_index_map[j]]\n d_default_value = sum_{k : 0 .. N_full - 1} (\n grad_values[k] * 1{k not in reverse_index_map})\n\n Args:\n reverse_index_map: A `Tensor` of type `int64`.\n 1-D. The reverse index map from SparseFillEmptyRows.\n grad_values: A `Tensor`. 1-D. The gradients from backprop.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (d_values, d_default_value).\n\n d_values: A `Tensor`. Has the same type as `grad_values`.\n d_default_value: A `Tensor`. Has the same type as `grad_values`.\n ", "desc": "The gradient of SparseFillEmptyRows.", "type": "API"}, {"name": "tf.raw_ops.SparseMatMul", "docs": "Multiply matrix \"a\" by matrix \"b\".\n\n The inputs must be two-dimensional matrices and the inner dimension of \"a\" must\n match the outer dimension of \"b\". Both \"a\" and \"b\" must be `Tensor`s not\n `SparseTensor`s. This op is optimized for the case where at least one of \"a\" or\n \"b\" is sparse, in the sense that they have a large proportion of zero values.\n The breakeven for using this versus a dense matrix multiply on one platform was\n 30% zero values in the sparse matrix.\n\n The gradient computation of this operation will only take advantage of sparsity\n in the input gradient when that gradient comes from a Relu.\n\n Args:\n a: A `Tensor`. Must be one of the following types: `float32`, `bfloat16`.\n b: A `Tensor`. Must be one of the following types: `float32`, `bfloat16`.\n transpose_a: An optional `bool`. Defaults to `False`.\n transpose_b: An optional `bool`. Defaults to `False`.\n a_is_sparse: An optional `bool`. Defaults to `False`.\n b_is_sparse: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Multiply matrix \"a\" by matrix \"b\".", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixAdd", "docs": "Sparse addition of two CSR matrices, C = alpha * A + beta * B.\n\n The gradients of SparseMatrixAdd outputs with respect to alpha and beta are not\n currently defined (TensorFlow will return zeros for these entries).\n\n Args:\n a: A `Tensor` of type `variant`. A CSRSparseMatrix.\n b: A `Tensor` of type `variant`. A CSRSparseMatrix.\n alpha: A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`.\n A constant scalar.\n beta: A `Tensor`. Must have the same type as `alpha`. A constant scalar.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Sparse addition of two CSR matrices, C = alpha * A + beta * B.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixMatMul", "docs": "Matrix-multiplies a sparse matrix with a dense matrix.\n\n Returns a dense matrix.\n For inputs A and B, where A is CSR and B is dense; this op returns a dense C;\n\n If transpose_output is false, returns:\n ```\n C = A . B\n ```\n\n If transpose_output is `true`, returns:\n ```\n C = transpose(A . B) = transpose(B) . transpose(A)\n ```\n where the transposition is performed along the two innermost (matrix)\n dimensions.\n\n If conjugate_output is `true`, returns:\n ```\n C = conjugate(A . B) = conjugate(A) . conjugate(B)\n ```\n\n If both conjugate_output and transpose_output are `true`, returns:\n ```\n C = conjugate(transpose(A . B)) = conjugate(transpose(B)) .\n conjugate(transpose(A))\n ```\n\n Args:\n a: A `Tensor` of type `variant`. A CSRSparseMatrix.\n b: A `Tensor`. A dense tensor.\n transpose_a: An optional `bool`. Defaults to `False`.\n Indicates whether `a` should be transposed.\n transpose_b: An optional `bool`. Defaults to `False`.\n Indicates whether `b` should be transposed.\n adjoint_a: An optional `bool`. Defaults to `False`.\n Indicates whether `a` should be conjugate-transposed.\n adjoint_b: An optional `bool`. Defaults to `False`.\n Indicates whether `b` should be conjugate-transposed.\n transpose_output: An optional `bool`. Defaults to `False`.\n Transposes the product of `a` and `b`.\n conjugate_output: An optional `bool`. Defaults to `False`.\n Conjugates the product of `a` and `b`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `b`.\n ", "desc": "Matrix-multiplies a sparse matrix with a dense matrix.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixMul", "docs": "Element-wise multiplication of a sparse matrix with a dense tensor.\n\n Returns a sparse matrix.\n\n The dense tensor `b` may be either a scalar; otherwise `a` must be a rank-3\n `SparseMatrix`; in this case `b` must be shaped `[batch_size, 1, 1]` and the\n multiply operation broadcasts.\n\n **NOTE** even if `b` is zero, the sparsity structure of the output does not\n change.\n\n Args:\n a: A `Tensor` of type `variant`. A CSRSparseMatrix.\n b: A `Tensor`. A dense tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Element-wise multiplication of a sparse matrix with a dense tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixNNZ", "docs": "Returns the number of nonzeroes of `sparse_matrix`.\n\n Args:\n sparse_matrix: A `Tensor` of type `variant`. A CSRSparseMatrix.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Returns the number of nonzeroes of `sparse_matrix`.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixOrderingAMD", "docs": "Computes the Approximate Minimum Degree (AMD) ordering of `input`.\n\n Computes the Approximate Minimum Degree (AMD) ordering for a sparse matrix.\n\n The returned permutation may be used to permute the rows and columns of the\n given sparse matrix. This typically results in permuted sparse matrix's sparse\n Cholesky (or other decompositions) in having fewer zero fill-in compared to\n decomposition of the original matrix.\n\n The input sparse matrix may have rank 2 or rank 3. The output Tensor,\n representing would then have rank 1 or 2 respectively, with the same batch\n shape as the input.\n\n Each component of the input sparse matrix must represent a square symmetric\n matrix; only the lower triangular part of the matrix is read. The values of the\n sparse matrix does not affect the returned permutation, only the sparsity\n pattern of the sparse matrix is used. Hence, a single AMD ordering may be\n reused for the Cholesky decompositions of sparse matrices with the same sparsity\n pattern but with possibly different values.\n\n Each batch component of the output permutation represents a permutation of `N`\n elements, where the input sparse matrix components each have `N` rows. That is,\n the component contains each of the integers `{0, .. N-1}` exactly once. The\n `i`th element represents the row index that the `i`th row maps to.\n\n Usage example:\n\n ```python\n from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops\n\n a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]])\n a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32)\n a_dense_shape = [4, 4]\n\n with tf.Session() as sess:\n # Define (COO format) SparseTensor over Numpy array.\n a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape)\n\n # Convert SparseTensors to CSR SparseMatrix.\n a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(\n a_st.indices, a_st.values, a_st.dense_shape)\n\n # Obtain the AMD Ordering for the CSR SparseMatrix.\n ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(sparse_matrix)\n\n ordering_amd_value = sess.run(ordering_amd)\n ```\n\n `ordering_amd_value` stores the AMD ordering: `[1 2 3 0]`.\n\n input: A `CSRSparseMatrix`.\n\n Args:\n input: A `Tensor` of type `variant`. A `CSRSparseMatrix`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Computes the Approximate Minimum Degree (AMD) ordering of `input`.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixSoftmax", "docs": "Calculates the softmax of a CSRSparseMatrix.\n\n Calculate the softmax of the innermost dimensions of a SparseMatrix.\n\n Missing values are treated as `-inf` (i.e., logits of zero probability); and\n the output has the same sparsity structure as the input (though missing values\n in the output may now be treated as having probability zero).\n\n Args:\n logits: A `Tensor` of type `variant`. A CSRSparseMatrix.\n type: A `tf.DType` from: `tf.float32, tf.float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Calculates the softmax of a CSRSparseMatrix.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixSoftmaxGrad", "docs": "Calculates the gradient of the SparseMatrixSoftmax op.\n\n Args:\n softmax: A `Tensor` of type `variant`. A CSRSparseMatrix.\n grad_softmax: A `Tensor` of type `variant`. The gradient of `softmax`.\n type: A `tf.DType` from: `tf.float32, tf.float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Calculates the gradient of the SparseMatrixSoftmax op.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixSparseCholesky", "docs": "Computes the sparse Cholesky decomposition of `input`.\n\n Computes the Sparse Cholesky decomposition of a sparse matrix, with the given\n fill-in reducing permutation.\n\n The input sparse matrix and the fill-in reducing permutation `permutation` must\n have compatible shapes. If the sparse matrix has rank 3; with the batch\n dimension `B`, then the `permutation` must be of rank 2; with the same batch\n dimension `B`. There is no support for broadcasting.\n\n Furthermore, each component vector of `permutation` must be of length `N`,\n containing each of the integers {0, 1, ..., N - 1} exactly once, where `N` is\n the number of rows of each component of the sparse matrix.\n\n Each component of the input sparse matrix must represent a symmetric positive\n definite (SPD) matrix; although only the lower triangular part of the matrix is\n read. If any individual component is not SPD, then an InvalidArgument error is\n thrown.\n\n The returned sparse matrix has the same dense shape as the input sparse matrix.\n For each component `A` of the input sparse matrix, the corresponding output\n sparse matrix represents `L`, the lower triangular Cholesky factor satisfying\n the following identity:\n\n ```\n A = L * Lt\n ```\n\n where Lt denotes the transpose of L (or its conjugate transpose, if `type` is\n `complex64` or `complex128`).\n\n The `type` parameter denotes the type of the matrix elements. The supported\n types are: `float32`, `float64`, `complex64` and `complex128`.\n\n Usage example:\n\n ```python\n from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops\n\n a_indices = np.array([[0, 0], [1, 1], [2, 1], [2, 2], [3, 3]])\n a_values = np.array([1.0, 2.0, 1.0, 3.0, 4.0], np.float32)\n a_dense_shape = [4, 4]\n\n with tf.Session() as sess:\n # Define (COO format) SparseTensor over Numpy array.\n a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape)\n\n # Convert SparseTensors to CSR SparseMatrix.\n a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(\n a_st.indices, a_st.values, a_st.dense_shape)\n\n # Obtain the Sparse Cholesky factor using AMD Ordering for reducing zero\n # fill-in (number of structural non-zeros in the sparse Cholesky factor).\n ordering_amd = sparse_csr_matrix_ops.sparse_matrix_ordering_amd(sparse_matrix)\n cholesky_sparse_matrices = (\n sparse_csr_matrix_ops.sparse_matrix_sparse_cholesky(\n sparse_matrix, ordering_amd, type=tf.float32))\n\n # Convert the CSRSparseMatrix Cholesky factor to a dense Tensor\n dense_cholesky = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense(\n cholesky_sparse_matrices, tf.float32)\n\n # Evaluate the dense Tensor value.\n dense_cholesky_value = sess.run(dense_cholesky)\n ```\n\n `dense_cholesky_value` stores the dense Cholesky factor:\n\n ```\n [[ 1. 0. 0. 0.]\n [ 0. 1.41 0. 0.]\n [ 0. 0.70 1.58 0.]\n [ 0. 0. 0. 2.]]\n ```\n\n\n input: A `CSRSparseMatrix`.\n permutation: A `Tensor`.\n type: The type of `input`.\n\n Args:\n input: A `Tensor` of type `variant`. A `CSRSparseMatrix`.\n permutation: A `Tensor` of type `int32`.\n A fill-in reducing permutation matrix.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Computes the sparse Cholesky decomposition of `input`.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixSparseMatMul", "docs": "Sparse-matrix-multiplies two CSR matrices `a` and `b`.\n\n Performs a matrix multiplication of a sparse matrix `a` with a sparse matrix\n `b`; returns a sparse matrix `a * b`, unless either `a` or `b` is transposed or\n adjointed.\n\n Each matrix may be transposed or adjointed (conjugated and transposed)\n according to the Boolean parameters `transpose_a`, `adjoint_a`, `transpose_b`\n and `adjoint_b`. At most one of `transpose_a` or `adjoint_a` may be True.\n Similarly, at most one of `transpose_b` or `adjoint_b` may be True.\n\n The inputs must have compatible shapes. That is, the inner dimension of `a`\n must be equal to the outer dimension of `b`. This requirement is adjusted\n according to whether either `a` or `b` is transposed or adjointed.\n\n The `type` parameter denotes the type of the matrix elements. Both `a` and `b`\n must have the same type. The supported types are: `float32`, `float64`,\n `complex64` and `complex128`.\n\n Both `a` and `b` must have the same rank. Broadcasting is not supported. If they\n have rank 3, each batch of 2D CSRSparseMatrices within `a` and `b` must have the\n same dense shape.\n\n The sparse matrix product may have numeric (non-structural) zeros.\n TODO(anudhyan): Consider adding a boolean attribute to control whether to prune\n zeros.\n\n Usage example:\n\n ```python\n from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops\n\n a_indices = np.array([[0, 0], [2, 3], [2, 4], [3, 0]])\n a_values = np.array([1.0, 5.0, -1.0, -2.0], np.float32)\n a_dense_shape = [4, 5]\n\n b_indices = np.array([[0, 0], [3, 0], [3, 1]])\n b_values = np.array([2.0, 7.0, 8.0], np.float32)\n b_dense_shape = [5, 3]\n\n with tf.Session() as sess:\n # Define (COO format) Sparse Tensors over Numpy arrays\n a_st = tf.sparse.SparseTensor(a_indices, a_values, a_dense_shape)\n b_st = tf.sparse.SparseTensor(b_indices, b_values, b_dense_shape)\n\n # Convert SparseTensors to CSR SparseMatrix\n a_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(\n a_st.indices, a_st.values, a_st.dense_shape)\n b_sm = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(\n b_st.indices, b_st.values, b_st.dense_shape)\n\n # Compute the CSR SparseMatrix matrix multiplication\n c_sm = sparse_csr_matrix_ops.sparse_matrix_sparse_mat_mul(\n a=a_sm, b=b_sm, type=tf.float32)\n\n # Convert the CSR SparseMatrix product to a dense Tensor\n c_sm_dense = sparse_csr_matrix_ops.csr_sparse_matrix_to_dense(\n c_sm, tf.float32)\n # Evaluate the dense Tensor value\n c_sm_dense_value = sess.run(c_sm_dense)\n ```\n\n `c_sm_dense_value` stores the dense matrix product:\n\n ```\n [[ 2. 0. 0.]\n [ 0. 0. 0.]\n [ 35. 40. 0.]\n [ -4. 0. 0.]]\n ```\n\n a: A `CSRSparseMatrix`.\n b: A `CSRSparseMatrix` with the same type and rank as `a`.\n type: The type of both `a` and `b`.\n transpose_a: If True, `a` transposed before multiplication.\n transpose_b: If True, `b` transposed before multiplication.\n adjoint_a: If True, `a` adjointed before multiplication.\n adjoint_b: If True, `b` adjointed before multiplication.\n\n Args:\n a: A `Tensor` of type `variant`. A CSRSparseMatrix.\n b: A `Tensor` of type `variant`. A CSRSparseMatrix.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n transpose_a: An optional `bool`. Defaults to `False`.\n Indicates whether `a` should be transposed.\n transpose_b: An optional `bool`. Defaults to `False`.\n Indicates whether `b` should be transposed.\n adjoint_a: An optional `bool`. Defaults to `False`.\n Indicates whether `a` should be conjugate-transposed.\n adjoint_b: An optional `bool`. Defaults to `False`.\n Indicates whether `b` should be conjugate-transposed.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Sparse-matrix-multiplies two CSR matrices `a` and `b`.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixTranspose", "docs": "Transposes the inner (matrix) dimensions of a CSRSparseMatrix.\n\n Transposes the inner (matrix) dimensions of a SparseMatrix and optionally\n conjugates its values.\n\n Args:\n input: A `Tensor` of type `variant`. A CSRSparseMatrix.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n conjugate: An optional `bool`. Defaults to `False`.\n Indicates whether `input` should be conjugated.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Transposes the inner (matrix) dimensions of a CSRSparseMatrix.", "type": "API"}, {"name": "tf.raw_ops.SparseMatrixZeros", "docs": "Creates an all-zeros CSRSparseMatrix with shape `dense_shape`.\n\n Args:\n dense_shape: A `Tensor` of type `int64`. The desired matrix shape.\n type: A `tf.DType` from: `tf.float32, tf.float64, tf.complex64, tf.complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates an all-zeros CSRSparseMatrix with shape `dense_shape`.", "type": "API"}, {"name": "tf.raw_ops.SparseReduceMax", "docs": "Computes the max of elements across dimensions of a SparseTensor.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_max()`. In particular, this Op also returns a dense `Tensor`\n instead of a sparse one.\n\n Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained\n with length 1.\n\n If `reduction_axes` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n which are interpreted according to the indexing rules in Python.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n input_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `input_indices`.\n input_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n reduction_axes: A `Tensor` of type `int32`.\n 1-D. Length-`K` vector containing the reduction axes.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input_values`.\n ", "desc": "Computes the max of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.SparseReduceMaxSparse", "docs": "Computes the max of elements across dimensions of a SparseTensor.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_max()`. In contrast to SparseReduceMax, this Op returns a\n SparseTensor.\n\n Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained\n with length 1.\n\n If `reduction_axes` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n which are interpreted according to the indexing rules in Python.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n input_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `input_indices`.\n input_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n reduction_axes: A `Tensor` of type `int32`.\n 1-D. Length-`K` vector containing the reduction axes.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `input_values`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Computes the max of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.SparseReduceSum", "docs": "Computes the sum of elements across dimensions of a SparseTensor.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`\n instead of a sparse one.\n\n Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained\n with length 1.\n\n If `reduction_axes` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n which are interpreted according to the indexing rules in Python.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n input_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `input_indices`.\n input_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n reduction_axes: A `Tensor` of type `int32`.\n 1-D. Length-`K` vector containing the reduction axes.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input_values`.\n ", "desc": "Computes the sum of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.SparseReduceSumSparse", "docs": "Computes the sum of elements across dimensions of a SparseTensor.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a\n SparseTensor.\n\n Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained\n with length 1.\n\n If `reduction_axes` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n which are interpreted according to the indexing rules in Python.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n input_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `input_indices`.\n input_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n reduction_axes: A `Tensor` of type `int32`.\n 1-D. Length-`K` vector containing the reduction axes.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `input_values`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Computes the sum of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.raw_ops.SparseReorder", "docs": "Reorders a SparseTensor into the canonical, row-major ordering.\n\n Note that by convention, all sparse ops preserve the canonical ordering along\n increasing dimension number. The only time ordering can be violated is during\n manual manipulation of the indices and values vectors to add entries.\n\n Reordering does not affect the shape of the SparseTensor.\n\n If the tensor has rank `R` and `N` non-empty values, `input_indices` has\n shape `[N, R]`, input_values has length `N`, and input_shape has length `R`.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, possibly not in canonical ordering.\n input_values: A `Tensor`.\n 1-D. `N` non-empty values corresponding to `input_indices`.\n input_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `input_values`.\n ", "desc": "Reorders a SparseTensor into the canonical, row-major ordering.", "type": "API"}, {"name": "tf.raw_ops.SparseReshape", "docs": "Reshapes a SparseTensor to represent values in a new dense shape.\n\n This operation has the same semantics as reshape on the represented dense\n tensor. The `input_indices` are recomputed based on the requested `new_shape`.\n\n If one component of `new_shape` is the special value -1, the size of that\n dimension is computed so that the total dense size remains constant. At\n most one component of `new_shape` can be -1. The number of dense elements\n implied by `new_shape` must be the same as the number of dense elements\n originally implied by `input_shape`.\n\n Reshaping does not affect the order of values in the SparseTensor.\n\n If the input tensor has rank `R_in` and `N` non-empty values, and `new_shape`\n has length `R_out`, then `input_indices` has shape `[N, R_in]`,\n `input_shape` has length `R_in`, `output_indices` has shape `[N, R_out]`, and\n `output_shape` has length `R_out`.\n\n Args:\n input_indices: A `Tensor` of type `int64`.\n 2-D. `N x R_in` matrix with the indices of non-empty values in a\n SparseTensor.\n input_shape: A `Tensor` of type `int64`.\n 1-D. `R_in` vector with the input SparseTensor's dense shape.\n new_shape: A `Tensor` of type `int64`.\n 1-D. `R_out` vector with the requested new dense shape.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Reshapes a SparseTensor to represent values in a new dense shape.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentMean", "docs": "Computes the mean along sparse segments of a tensor.\n\n See `tf.sparse.segment_sum` for usage examples.\n\n Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first\n dimension, selecting a subset of dimension 0, specified by `indices`.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along sparse segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentMeanGrad", "docs": "Computes gradients for SparseSegmentMean.\n\n Returns tensor \"output\" with same shape as grad, except for dimension 0 whose\n value is output_dim0.\n\n Args:\n grad: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n gradient propagated to the SparseSegmentMean op.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n indices passed to the corresponding SparseSegmentMean op.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n segment_ids passed to the corresponding SparseSegmentMean op.\n output_dim0: A `Tensor` of type `int32`.\n dimension 0 of \"data\" passed to SparseSegmentMean op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grad`.\n ", "desc": "Computes gradients for SparseSegmentMean.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentMeanWithNumSegments", "docs": "Computes the mean along sparse segments of a tensor.\n\n Like `SparseSegmentMean`, but allows missing ids in `segment_ids`. If an id is\n missing, the `output` tensor at that position will be zeroed.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Should equal the number of distinct segment IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the mean along sparse segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentSqrtN", "docs": "Computes the sum along sparse segments of a tensor divided by the sqrt of N.\n\n N is the size of the segment being reduced.\n\n See `tf.sparse.segment_sum` for usage examples.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along sparse segments of a tensor divided by the sqrt of N.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentSqrtNGrad", "docs": "Computes gradients for SparseSegmentSqrtN.\n\n Returns tensor \"output\" with same shape as grad, except for dimension 0 whose\n value is output_dim0.\n\n Args:\n grad: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n gradient propagated to the SparseSegmentSqrtN op.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n indices passed to the corresponding SparseSegmentSqrtN op.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n segment_ids passed to the corresponding SparseSegmentSqrtN op.\n output_dim0: A `Tensor` of type `int32`.\n dimension 0 of \"data\" passed to SparseSegmentSqrtN op.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `grad`.\n ", "desc": "Computes gradients for SparseSegmentSqrtN.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentSqrtNWithNumSegments", "docs": "Computes the sum along sparse segments of a tensor divided by the sqrt of N.\n\n N is the size of the segment being reduced.\n\n Like `SparseSegmentSqrtN`, but allows missing ids in `segment_ids`. If an id is\n missing, the `output` tensor at that position will be zeroed.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Should equal the number of distinct segment IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along sparse segments of a tensor divided by the sqrt of N.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentSum", "docs": "Computes the sum along sparse segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first\n dimension, selecting a subset of dimension 0, specified by `indices`.\n\n For example:\n\n ```python\n c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\n\n # Select two rows, one segment.\n tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))\n # => [[0 0 0 0]]\n\n # Select two rows, two segment.\n tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))\n # => [[ 1 2 3 4]\n # [-1 -2 -3 -4]]\n\n # Select all rows, two segments.\n tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))\n # => [[0 0 0 0]\n # [5 6 7 8]]\n\n # Which is equivalent to:\n tf.segment_sum(c, tf.constant([0, 0, 1]))\n ```\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along sparse segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseSegmentSumWithNumSegments", "docs": "Computes the sum along sparse segments of a tensor.\n\n Like `SparseSegmentSum`, but allows missing ids in `segment_ids`. If an id is\n missing, the `output` tensor at that position will be zeroed.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/sparse#Segmentation)\n for an explanation of segments.\n\n For example:\n\n ```python\n c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\n\n tf.sparse_segment_sum_with_num_segments(\n c, tf.constant([0, 1]), tf.constant([0, 0]), num_segments=3)\n # => [[0 0 0 0]\n # [0 0 0 0]\n # [0 0 0 0]]\n\n tf.sparse_segment_sum_with_num_segments(c,\n tf.constant([0, 1]),\n tf.constant([0, 2],\n num_segments=4))\n # => [[ 1 2 3 4]\n # [ 0 0 0 0]\n # [-1 -2 -3 -4]\n # [ 0 0 0 0]]\n ```\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Has same rank as `segment_ids`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1-D tensor. Values should be sorted and can be repeated.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Should equal the number of distinct segment IDs.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along sparse segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseSlice", "docs": "Slice a `SparseTensor` based on the `start` and `size`.\n\n For example, if the input is\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\n Graphically the output tensors are:\n\n sparse_slice([0, 0], [2, 4]) = shape = [2, 4]\n [ a ]\n [b c ]\n\n sparse_slice([0, 4], [2, 3]) = shape = [2, 3]\n [ d e ]\n [ ]\n\n Args:\n indices: A `Tensor` of type `int64`.\n 2-D tensor represents the indices of the sparse tensor.\n values: A `Tensor`. 1-D tensor represents the values of the sparse tensor.\n shape: A `Tensor` of type `int64`.\n 1-D. tensor represents the shape of the sparse tensor.\n start: A `Tensor` of type `int64`.\n 1-D. tensor represents the start of the slice.\n size: A `Tensor` of type `int64`.\n 1-D. tensor represents the size of the slice.\n output indices: A list of 1-D tensors represents the indices of the output\n sparse tensors.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `values`.\n output_shape: A `Tensor` of type `int64`.\n ", "desc": "Slice a `SparseTensor` based on the `start` and `size`.", "type": "API"}, {"name": "tf.raw_ops.SparseSliceGrad", "docs": "The gradient operator for the SparseSlice op.\n\n This op takes in the upstream gradient w.r.t. non-empty values of\n the sliced `SparseTensor`, and outputs the gradients w.r.t.\n the non-empty values of input `SparseTensor`.\n\n Args:\n backprop_val_grad: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. The gradient with respect to\n the non-empty values of the sliced `SparseTensor`.\n input_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the input `SparseTensor`.\n input_start: A `Tensor` of type `int64`.\n 1-D. tensor represents the start of the slice.\n output_indices: A `Tensor` of type `int64`.\n 2-D. The `indices` of the sliced `SparseTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `backprop_val_grad`.\n ", "desc": "The gradient operator for the SparseSlice op.", "type": "API"}, {"name": "tf.raw_ops.SparseSoftmax", "docs": "Applies softmax to a batched N-D `SparseTensor`.\n\n The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`\n (where `N >= 2`), and with indices sorted in the canonical lexicographic order.\n\n This op is equivalent to applying the normal `tf.nn.softmax()` to each innermost\n logical submatrix with shape `[B, C]`, but with the catch that *the implicitly\n zero elements do not participate*. Specifically, the algorithm is equivalent\n to the following:\n\n (1) Applies `tf.nn.softmax()` to a densified view of each innermost submatrix\n with shape `[B, C]`, along the size-C dimension;\n (2) Masks out the original implicitly-zero locations;\n (3) Renormalizes the remaining elements.\n\n Hence, the `SparseTensor` result has exactly the same non-zero indices and\n shape.\n\n Args:\n sp_indices: A `Tensor` of type `int64`.\n 2-D. `NNZ x R` matrix with the indices of non-empty values in a\n SparseTensor, in canonical ordering.\n sp_values: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n 1-D. `NNZ` non-empty values corresponding to `sp_indices`.\n sp_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `sp_values`.\n ", "desc": "Applies softmax to a batched N-D `SparseTensor`.", "type": "API"}, {"name": "tf.raw_ops.SparseSoftmaxCrossEntropyWithLogits", "docs": "Computes softmax cross entropy cost and gradients to backpropagate.\n\n Unlike `SoftmaxCrossEntropyWithLogits`, this operation does not accept\n a matrix of label probabilities, but rather a single label per row\n of features. This label is considered to have probability 1.0 for the\n given row.\n\n Inputs are the logits, not probabilities.\n\n Args:\n features: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`.\n batch_size x num_classes matrix\n labels: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n batch_size vector with values in [0, num_classes).\n This is the label for the given minibatch entry.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (loss, backprop).\n\n loss: A `Tensor`. Has the same type as `features`.\n backprop: A `Tensor`. Has the same type as `features`.\n ", "desc": "Computes softmax cross entropy cost and gradients to backpropagate.", "type": "API"}, {"name": "tf.raw_ops.SparseSparseMaximum", "docs": "Returns the element-wise max of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Args:\n a_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, in the canonical lexicographic ordering.\n a_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `a_indices`.\n a_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n b_indices: A `Tensor` of type `int64`.\n counterpart to `a_indices` for the other operand.\n b_values: A `Tensor`. Must have the same type as `a_values`.\n counterpart to `a_values` for the other operand; must be of the same dtype.\n b_shape: A `Tensor` of type `int64`.\n counterpart to `a_shape` for the other operand; the two shapes must be equal.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `a_values`.\n ", "desc": "Returns the element-wise max of two SparseTensors.", "type": "API"}, {"name": "tf.raw_ops.SparseSparseMinimum", "docs": "Returns the element-wise min of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Args:\n a_indices: A `Tensor` of type `int64`.\n 2-D. `N x R` matrix with the indices of non-empty values in a\n SparseTensor, in the canonical lexicographic ordering.\n a_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. `N` non-empty values corresponding to `a_indices`.\n a_shape: A `Tensor` of type `int64`.\n 1-D. Shape of the input SparseTensor.\n b_indices: A `Tensor` of type `int64`.\n counterpart to `a_indices` for the other operand.\n b_values: A `Tensor`. Must have the same type as `a_values`.\n counterpart to `a_values` for the other operand; must be of the same dtype.\n b_shape: A `Tensor` of type `int64`.\n counterpart to `a_shape` for the other operand; the two shapes must be equal.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values).\n\n output_indices: A `Tensor` of type `int64`.\n output_values: A `Tensor`. Has the same type as `a_values`.\n ", "desc": "Returns the element-wise min of two SparseTensors.", "type": "API"}, {"name": "tf.raw_ops.SparseSplit", "docs": "Split a `SparseTensor` into `num_split` tensors along one dimension.\n\n If the `shape[split_dim]` is not an integer multiple of `num_split`. Slices\n `[0 : shape[split_dim] % num_split]` gets one extra dimension.\n For example, if `split_dim = 1` and `num_split = 2` and the input is\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\n Graphically the output tensors are:\n\n output_tensor[0] = shape = [2, 4]\n [ a ]\n [b c ]\n\n output_tensor[1] = shape = [2, 3]\n [ d e ]\n [ ]\n\n Args:\n split_dim: A `Tensor` of type `int64`.\n 0-D. The dimension along which to split. Must be in the range\n `[0, rank(shape))`.\n indices: A `Tensor` of type `int64`.\n 2-D tensor represents the indices of the sparse tensor.\n values: A `Tensor`. 1-D tensor represents the values of the sparse tensor.\n shape: A `Tensor` of type `int64`.\n 1-D. tensor represents the shape of the sparse tensor.\n output indices: A list of 1-D tensors represents the indices of the output\n sparse tensors.\n num_split: An `int` that is `>= 1`. The number of ways to split.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_indices, output_values, output_shape).\n\n output_indices: A list of `num_split` `Tensor` objects with type `int64`.\n output_values: A list of `num_split` `Tensor` objects with the same type as `values`.\n output_shape: A list of `num_split` `Tensor` objects with type `int64`.\n ", "desc": "Split a `SparseTensor` into `num_split` tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.SparseTensorDenseAdd", "docs": "Adds up a `SparseTensor` and a dense `Tensor`, producing a dense `Tensor`.\n\n This Op does not require `a_indices` be sorted in standard lexicographic order.\n\n Args:\n a_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D. The `indices` of the `SparseTensor`, with shape `[nnz, ndims]`.\n a_values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n 1-D. The `values` of the `SparseTensor`, with shape `[nnz]`.\n a_shape: A `Tensor`. Must have the same type as `a_indices`.\n 1-D. The `shape` of the `SparseTensor`, with shape `[ndims]`.\n b: A `Tensor`. Must have the same type as `a_values`.\n `ndims`-D Tensor. With shape `a_shape`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a_values`.\n ", "desc": "Adds up a `SparseTensor` and a dense `Tensor`, producing a dense `Tensor`.", "type": "API"}, {"name": "tf.raw_ops.SparseTensorDenseMatMul", "docs": "Multiply SparseTensor (of rank 2) \"A\" by dense matrix \"B\".\n\n No validity checking is performed on the indices of A. However, the following\n input format is recommended for optimal behavior:\n\n if adjoint_a == false:\n A should be sorted in lexicographically increasing order. Use SparseReorder\n if you're not sure.\n if adjoint_a == true:\n A should be sorted in order of increasing dimension 1 (i.e., \"column major\"\n order instead of \"row major\" order).\n\n Args:\n a_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D. The `indices` of the `SparseTensor`, size `[nnz, 2]` Matrix.\n a_values: A `Tensor`.\n 1-D. The `values` of the `SparseTensor`, size `[nnz]` Vector.\n a_shape: A `Tensor` of type `int64`.\n 1-D. The `shape` of the `SparseTensor`, size `[2]` Vector.\n b: A `Tensor`. Must have the same type as `a_values`.\n 2-D. A dense Matrix.\n adjoint_a: An optional `bool`. Defaults to `False`.\n Use the adjoint of A in the matrix multiply. If A is complex, this\n is transpose(conj(A)). Otherwise it's transpose(A).\n adjoint_b: An optional `bool`. Defaults to `False`.\n Use the adjoint of B in the matrix multiply. If B is complex, this\n is transpose(conj(B)). Otherwise it's transpose(B).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `a_values`.\n ", "desc": "Multiply SparseTensor (of rank 2) \"A\" by dense matrix \"B\".", "type": "API"}, {"name": "tf.raw_ops.SparseTensorSliceDataset", "docs": "Creates a dataset that splits a SparseTensor into elements row-wise.\n\n Args:\n indices: A `Tensor` of type `int64`.\n values: A `Tensor`.\n dense_shape: A `Tensor` of type `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that splits a SparseTensor into elements row-wise.", "type": "API"}, {"name": "tf.raw_ops.SparseTensorToCSRSparseMatrix", "docs": "Converts a SparseTensor to a (possibly batched) CSRSparseMatrix.\n\n Args:\n indices: A `Tensor` of type `int64`. SparseTensor indices.\n values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `complex64`, `complex128`.\n SparseTensor values.\n dense_shape: A `Tensor` of type `int64`. SparseTensor dense shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Converts a SparseTensor to a (possibly batched) CSRSparseMatrix.", "type": "API"}, {"name": "tf.raw_ops.SparseToDense", "docs": "Converts a sparse representation into a dense tensor.\n\n Builds an array `dense` with shape `output_shape` such that\n\n ```\n # If sparse_indices is scalar\n dense[i] = (i == sparse_indices ? sparse_values : default_value)\n\n # If sparse_indices is a vector, then for each i\n dense[sparse_indices[i]] = sparse_values[i]\n\n # If sparse_indices is an n by d matrix, then for each i in [0, n)\n dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]\n ```\n\n All other values in `dense` are set to `default_value`. If `sparse_values` is a\n scalar, all sparse indices are set to this single value.\n\n Indices should be sorted in lexicographic order, and indices must not\n contain any repeats. If `validate_indices` is true, these properties\n are checked during execution.\n\n Args:\n sparse_indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 0-D, 1-D, or 2-D. `sparse_indices[i]` contains the complete\n index where `sparse_values[i]` will be placed.\n output_shape: A `Tensor`. Must have the same type as `sparse_indices`.\n 1-D. Shape of the dense output tensor.\n sparse_values: A `Tensor`.\n 1-D. Values corresponding to each row of `sparse_indices`,\n or a scalar value to be used for all sparse indices.\n default_value: A `Tensor`. Must have the same type as `sparse_values`.\n Scalar value to set for indices not specified in\n `sparse_indices`.\n validate_indices: An optional `bool`. Defaults to `True`.\n If true, indices are checked to make sure they are sorted in\n lexicographic order and that there are no repeats.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `sparse_values`.\n ", "desc": "Converts a sparse representation into a dense tensor.", "type": "API"}, {"name": "tf.raw_ops.SparseToSparseSetOperation", "docs": "Applies set operation along last dimension of 2 `SparseTensor` inputs.\n\n See SetOperationOp::SetOperationFromContext for values of `set_operation`.\n\n If `validate_indices` is `True`, `SparseToSparseSetOperation` validates the\n order and range of `set1` and `set2` indices.\n\n Input `set1` is a `SparseTensor` represented by `set1_indices`, `set1_values`,\n and `set1_shape`. For `set1` ranked `n`, 1st `n-1` dimensions must be the same\n as `set2`. Dimension `n` contains values in a set, duplicates are allowed but\n ignored.\n\n Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`,\n and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same\n as `set1`. Dimension `n` contains values in a set, duplicates are allowed but\n ignored.\n\n If `validate_indices` is `True`, this op validates the order and range of `set1`\n and `set2` indices.\n\n Output `result` is a `SparseTensor` represented by `result_indices`,\n `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this\n has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth`\n dimension contains the result of `set_operation` applied to the corresponding\n `[0...n-1]` dimension of `set`.\n\n Args:\n set1_indices: A `Tensor` of type `int64`.\n 2D `Tensor`, indices of a `SparseTensor`. Must be in row-major\n order.\n set1_values: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `string`.\n 1D `Tensor`, values of a `SparseTensor`. Must be in row-major\n order.\n set1_shape: A `Tensor` of type `int64`.\n 1D `Tensor`, shape of a `SparseTensor`. `set1_shape[0...n-1]` must\n be the same as `set2_shape[0...n-1]`, `set1_shape[n]` is the\n max set size across `0...n-1` dimensions.\n set2_indices: A `Tensor` of type `int64`.\n 2D `Tensor`, indices of a `SparseTensor`. Must be in row-major\n order.\n set2_values: A `Tensor`. Must have the same type as `set1_values`.\n 1D `Tensor`, values of a `SparseTensor`. Must be in row-major\n order.\n set2_shape: A `Tensor` of type `int64`.\n 1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must\n be the same as `set1_shape[0...n-1]`, `set2_shape[n]` is the\n max set size across `0...n-1` dimensions.\n set_operation: A `string`.\n validate_indices: An optional `bool`. Defaults to `True`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (result_indices, result_values, result_shape).\n\n result_indices: A `Tensor` of type `int64`.\n result_values: A `Tensor`. Has the same type as `set1_values`.\n result_shape: A `Tensor` of type `int64`.\n ", "desc": "Applies set operation along last dimension of 2 `SparseTensor` inputs.", "type": "API"}, {"name": "tf.raw_ops.Spence", "docs": "TODO: add doc.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Split", "docs": "Splits a tensor into `num_split` tensors along one dimension.\n\n Args:\n axis: A `Tensor` of type `int32`.\n 0-D. The dimension along which to split. Must be in the range\n `[-rank(value), rank(value))`.\n value: A `Tensor`. The tensor to split.\n num_split: An `int` that is `>= 1`.\n The number of ways to split. Must evenly divide\n `value.shape[split_dim]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_split` `Tensor` objects with the same type as `value`.\n ", "desc": "Splits a tensor into `num_split` tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.SplitV", "docs": "Splits a tensor into `num_split` tensors along one dimension.\n\n Args:\n value: A `Tensor`. The tensor to split.\n size_splits: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n list containing the sizes of each output tensor along the split\n dimension. Must sum to the dimension of value along split_dim.\n Can contain one -1 indicating that dimension is to be inferred.\n axis: A `Tensor` of type `int32`.\n 0-D. The dimension along which to split. Must be in the range\n `[-rank(value), rank(value))`.\n num_split: An `int` that is `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_split` `Tensor` objects with the same type as `value`.\n ", "desc": "Splits a tensor into `num_split` tensors along one dimension.", "type": "API"}, {"name": "tf.raw_ops.SqlDataset", "docs": "Creates a dataset that executes a SQL query and emits rows of the result set.\n\n Args:\n driver_name: A `Tensor` of type `string`.\n The database type. Currently, the only supported type is 'sqlite'.\n data_source_name: A `Tensor` of type `string`.\n A connection string to connect to the database.\n query: A `Tensor` of type `string`. A SQL query to execute.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that executes a SQL query and emits rows of the result set.", "type": "API"}, {"name": "tf.raw_ops.Sqrt", "docs": "Computes square root of x element-wise.\n\n I.e., \\\\(y = \\sqrt{x} = x^{1/2}\\\\).\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes square root of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.SqrtGrad", "docs": "Computes the gradient for the sqrt of `x` wrt its input.\n\n Specifically, `grad = dy * 0.5 / y`, where `y = sqrt(x)`, and `dy`\n is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient for the sqrt of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.Square", "docs": "Computes square of x element-wise.\n\n I.e., \\\\(y = x * x = x^2\\\\).\n\n >>> tf.math.square([-2., 0., 3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes square of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.SquaredDifference", "docs": "Returns conj(x - y)(x - y) element-wise.\n\n *NOTE*: `math.squared_difference` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns conj(x - y)(x - y) element-wise.", "type": "API"}, {"name": "tf.raw_ops.Squeeze", "docs": "Removes dimensions of size 1 from the shape of a tensor.\n\n Given a tensor `input`, this operation returns a tensor of the same type with\n all dimensions of size 1 removed. If you don't want to remove all size 1\n dimensions, you can remove specific size 1 dimensions by specifying\n `axis`.\n\n For example:\n\n ```\n # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n shape(squeeze(t)) ==> [2, 3]\n ```\n\n Or, to remove specific size 1 dimensions:\n\n ```\n # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]\n ```\n\n Args:\n input: A `Tensor`. The `input` to squeeze.\n axis: An optional list of `ints`. Defaults to `[]`.\n If specified, only squeezes the dimensions listed. The dimension\n index starts at 0. It is an error to squeeze a dimension that is not 1. Must\n be in the range `[-rank(input), rank(input))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Removes dimensions of size 1 from the shape of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Stack", "docs": "Deprecated, use StackV2.\n\n Args:\n elem_type: A `tf.DType`.\n stack_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "Deprecated, use StackV2.", "type": "API"}, {"name": "tf.raw_ops.StackClose", "docs": "Deprecated, use StackCloseV2.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Deprecated, use StackCloseV2.", "type": "API"}, {"name": "tf.raw_ops.StackCloseV2", "docs": "Delete the stack from its resource container.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a stack.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Delete the stack from its resource container.", "type": "API"}, {"name": "tf.raw_ops.StackPop", "docs": "Deprecated, use StackPopV2.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n elem_type: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `elem_type`.\n ", "desc": "Deprecated, use StackPopV2.", "type": "API"}, {"name": "tf.raw_ops.StackPopV2", "docs": "Pop the element at the top of the stack.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a stack.\n elem_type: A `tf.DType`. The type of the elem that is popped.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `elem_type`.\n ", "desc": "Pop the element at the top of the stack.", "type": "API"}, {"name": "tf.raw_ops.StackPush", "docs": "Deprecated, use StackPushV2.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n elem: A `Tensor`.\n swap_memory: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `elem`.\n ", "desc": "Deprecated, use StackPushV2.", "type": "API"}, {"name": "tf.raw_ops.StackPushV2", "docs": "Push an element onto the stack.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a stack.\n elem: A `Tensor`. The tensor to be pushed onto the stack.\n swap_memory: An optional `bool`. Defaults to `False`.\n Swap `elem` to CPU. Default to false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `elem`.\n ", "desc": "Push an element onto the stack.", "type": "API"}, {"name": "tf.raw_ops.StackV2", "docs": "A stack that produces elements in first-in last-out order.\n\n Args:\n max_size: A `Tensor` of type `int32`.\n The maximum size of the stack if non-negative. If negative, the stack\n size is unlimited.\n elem_type: A `tf.DType`. The type of the elements on the stack.\n stack_name: An optional `string`. Defaults to `\"\"`.\n Overrides the name used for the temporary stack resource. Default\n value is the name of the 'Stack' op (which is guaranteed unique).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A stack that produces elements in first-in last-out order.", "type": "API"}, {"name": "tf.raw_ops.Stage", "docs": "Stage values similar to a lightweight Enqueue.\n\n The basic functionality of this Op is similar to a queue with many\n fewer capabilities and options. This Op is optimized for performance.\n\n Args:\n values: A list of `Tensor` objects. a list of tensors\n dtypes A list of data types that inserted values should adhere to.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n Maximum number of elements in the Staging Area. If > 0, inserts\n on the container will block when the capacity is reached.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n The maximum number of bytes allowed for Tensors in the Staging Area.\n If > 0, inserts will block until sufficient space is available.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this queue is placed in the given container. Otherwise,\n a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n It is necessary to match this name to the matching Unstage Op.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Stage values similar to a lightweight Enqueue.", "type": "API"}, {"name": "tf.raw_ops.StageClear", "docs": "Op removes all elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Op removes all elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.StagePeek", "docs": "Op peeks at the values at the specified index. If the\n\n underlying container does not contain sufficient elements\n this op will block until it does. This Op is optimized for\n performance.\n\n Args:\n index: A `Tensor` of type `int32`.\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op peeks at the values at the specified index. If the", "type": "API"}, {"name": "tf.raw_ops.StageSize", "docs": "Op returns the number of elements in the underlying container.\n\n Args:\n dtypes: A list of `tf.DTypes`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Op returns the number of elements in the underlying container.", "type": "API"}, {"name": "tf.raw_ops.StatefulPartitionedCall", "docs": "returns `f(inputs)`, where `f`'s body is placed and partitioned.\n\n Args:\n args: A list of `Tensor` objects. A list of input tensors.\n Tout: A list of `tf.DTypes`. A list of output types.\n f: A function decorated with @Defun.\n A function that takes 'args', a list of tensors, and returns 'output',\n another list of tensors. Input and output types are specified by 'Tin'\n and 'Tout'. The function body of f will be placed and partitioned across\n devices, setting this op apart from the regular Call op. This op is\n stateful.\n config: An optional `string`. Defaults to `\"\"`.\n config_proto: An optional `string`. Defaults to `\"\"`.\n executor_type: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "returns `f(inputs)`, where `f`'s body is placed and partitioned.", "type": "API"}, {"name": "tf.raw_ops.StatefulRandomBinomial", "docs": "TODO: add doc.\n\n Args:\n resource: A `Tensor` of type `resource`.\n algorithm: A `Tensor` of type `int64`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n counts: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n probs: A `Tensor`. Must have the same type as `counts`.\n dtype: An optional `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.StatefulStandardNormal", "docs": "Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2'\n\n The generated values will have mean 0 and standard deviation 1.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a normal distribution. This op is deprecated in favor of op 'StatefulStandardNormalV2'", "type": "API"}, {"name": "tf.raw_ops.StatefulStandardNormalV2", "docs": "Outputs random values from a normal distribution.\n\n The generated values will have mean 0 and standard deviation 1.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatefulTruncatedNormal", "docs": "Outputs random values from a truncated normal distribution.\n\n The generated values follow a normal distribution with mean 0 and standard\n deviation 1, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatefulUniform", "docs": "Outputs random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[0, 1)`. The\n lower bound 0 is included in the range, while the upper bound 1 is excluded.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatefulUniformFullInt", "docs": "Outputs random integers from a uniform distribution.\n\n The generated values are uniform integers covering the whole range of `dtype`.\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n shape: A `Tensor`. The shape of the output tensor.\n dtype: An optional `tf.DType`. Defaults to `tf.uint64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatefulUniformInt", "docs": "Outputs random integers from a uniform distribution.\n\n The generated values are uniform integers in the range `[minval, maxval)`.\n The lower bound `minval` is included in the range, while the upper bound\n `maxval` is excluded.\n\n The random integers are slightly biased unless `maxval - minval` is an exact\n power of two. The bias is small for values of `maxval - minval` significantly\n smaller than the range of the output (either `2^32` or `2^64`).\n\n Args:\n resource: A `Tensor` of type `resource`.\n The handle of the resource variable that stores the state of the RNG.\n algorithm: A `Tensor` of type `int64`. The RNG algorithm.\n shape: A `Tensor`. The shape of the output tensor.\n minval: A `Tensor`. Minimum value (inclusive, scalar).\n maxval: A `Tensor`. Must have the same type as `minval`.\n Maximum value (exclusive, scalar).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `minval`.\n ", "desc": "Outputs random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessCase", "docs": "An n-way switch statement which calls a single branch function.\n\n An n-way switch statement, implementing the following:\n ```\n switch (branch_index) {\n case 0:\n output = branches[0](input);\n break;\n case 1:\n output = branches[1](input);\n break;\n ...\n case [[nbranches-1]]:\n default:\n output = branches[nbranches-1](input);\n break;\n }\n ```\n\n This should only be used when the none of branches has stateful ops.\n\n Args:\n branch_index: A `Tensor` of type `int32`.\n The branch selector, an int32 Tensor.\n input: A list of `Tensor` objects.\n A list of input tensors passed to the branch function.\n Tout: A list of `tf.DTypes`. A list of output types.\n branches: A list of functions decorated with @Defun that has length `>= 1`.\n A list of functions each of which takes 'inputs' and returns a list of\n tensors, whose types are the same as what every other branch returns.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "An n-way switch statement which calls a single branch function.", "type": "API"}, {"name": "tf.raw_ops.StatelessIf", "docs": "output = cond ? then_branch(input) : else_branch(input)\n\n Args:\n cond: A `Tensor`.\n A Tensor. If the tensor is a scalar of non-boolean type, the\n scalar is converted to a boolean according to the\n following rule: if the scalar is a numerical value, non-zero means\n `True` and zero means False; if the scalar is a string, non-empty\n means `True` and empty means `False`. If the tensor is not a scalar,\n being empty means False and being non-empty means True.\n\n This should only be used when the if then/else body functions do not\n have stateful ops.\n input: A list of `Tensor` objects. A list of input tensors.\n Tout: A list of `tf.DTypes`. A list of output types.\n then_branch: A function decorated with @Defun.\n A function that takes 'inputs' and returns a list of tensors, whose\n types are the same as what else_branch returns.\n else_branch: A function decorated with @Defun.\n A function that takes 'inputs' and returns a list of tensors, whose\n types are the same as what then_branch returns.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "output = cond ? then_branch(input) : else_branch(input)", "type": "API"}, {"name": "tf.raw_ops.StatelessMultinomial", "docs": "Draws samples from a multinomial distribution.\n\n Args:\n logits: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]`\n represents the unnormalized log probabilities for all classes.\n num_samples: A `Tensor` of type `int32`.\n 0-D. Number of independent samples to draw for each row slice.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n output_dtype: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `output_dtype`.\n ", "desc": "Draws samples from a multinomial distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessParameterizedTruncatedNormal", "docs": "TODO: add doc.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n means: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n The mean parameter of each batch.\n stddevs: A `Tensor`. Must have the same type as `means`.\n The standard deviation parameter of each batch. Must be greater than 0.\n minvals: A `Tensor`. Must have the same type as `means`.\n The minimum cutoff. May be -infinity.\n maxvals: A `Tensor`. Must have the same type as `means`.\n The maximum cutoff. May be +infinity, and must be more than the minval\n for each batch.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `means`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomBinomial", "docs": "Outputs deterministic pseudorandom random numbers from a binomial distribution.\n\n Outputs random values from a binomial distribution.\n\n The outputs are a deterministic function of `shape`, `seed`, `counts`, and `probs`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n counts: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n The counts of the binomial distribution. Must be broadcastable with `probs`,\n and broadcastable with the rightmost dimensions of `shape`.\n probs: A `Tensor`. Must have the same type as `counts`.\n The probability of success for the binomial distribution. Must be broadcastable\n with `counts` and broadcastable with the rightmost dimensions of `shape`.\n dtype: An optional `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.int64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random numbers from a binomial distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomGammaV2", "docs": "Outputs deterministic pseudorandom random numbers from a gamma distribution.\n\n Outputs random values from a gamma distribution.\n\n The outputs are a deterministic function of `shape`, `seed`, and `alpha`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n alpha: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.\n The concentration of the gamma distribution. Shape must match the rightmost\n dimensions of `shape`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `alpha`.\n ", "desc": "Outputs deterministic pseudorandom random numbers from a gamma distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomGetAlg", "docs": "Picks the best counter-based RNG algorithm based on device.\n\n This op picks the best counter-based RNG algorithm based on device.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Picks the best counter-based RNG algorithm based on device.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomGetKeyCounter", "docs": "Scrambles seed into key and counter, using the best algorithm based on device.\n\n This op scrambles a shape-[2] seed into a key and a counter, both needed by counter-based RNG algorithms. The scrambing uses the best algorithm based on device. The scrambling is opaque but approximately satisfies the property that different seed results in different key/counter pair (which will in turn result in different random numbers).\n\n Args:\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, counter).\n\n key: A `Tensor` of type `uint64`.\n counter: A `Tensor` of type `uint64`.\n ", "desc": "Scrambles seed into key and counter, using the best algorithm based on device.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomGetKeyCounterAlg", "docs": "Picks the best algorithm based on device, and scrambles seed into key and counter.\n\n This op picks the best counter-based RNG algorithm based on device, and scrambles a shape-[2] seed into a key and a counter, both needed by the counter-based algorithm. The scrambling is opaque but approximately satisfies the property that different seed results in different key/counter pair (which will in turn result in different random numbers).\n\n Args:\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (key, counter, alg).\n\n key: A `Tensor` of type `uint64`.\n counter: A `Tensor` of type `uint64`.\n alg: A `Tensor` of type `int32`.\n ", "desc": "Picks the best algorithm based on device, and scrambles seed into key and counter.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomNormal", "docs": "Outputs deterministic pseudorandom values from a normal distribution.\n\n The generated values will have mean 0 and standard deviation 1.\n\n The outputs are a deterministic function of `shape` and `seed`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom values from a normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomNormalV2", "docs": "Outputs deterministic pseudorandom values from a normal distribution.\n\n The generated values will have mean 0 and standard deviation 1.\n\n The outputs are a deterministic function of `shape`, `key`, `counter` and `alg`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n key: A `Tensor` of type `uint64`.\n Key for the counter-based RNG algorithm (shape uint64[1]).\n counter: A `Tensor` of type `uint64`.\n Initial counter for the counter-based RNG algorithm (shape uint64[2] or uint64[1] depending on the algorithm). If a larger vector is given, only the needed portion on the left (i.e. [:N]) will be used.\n alg: A `Tensor` of type `int32`. The RNG algorithm (shape int32[]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom values from a normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomPoisson", "docs": "Outputs deterministic pseudorandom random numbers from a Poisson distribution.\n\n Outputs random values from a Poisson distribution.\n\n The outputs are a deterministic function of `shape`, `seed`, and `lam`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n lam: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.\n The rate of the Poisson distribution. Shape must match the rightmost dimensions\n of `shape`.\n dtype: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.int64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random numbers from a Poisson distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniform", "docs": "Outputs deterministic pseudorandom random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[0, 1)`. The\n lower bound 0 is included in the range, while the upper bound 1 is excluded.\n\n The outputs are a deterministic function of `shape` and `seed`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random values from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniformFullInt", "docs": "Outputs deterministic pseudorandom random integers from a uniform distribution.\n\n The generated values are uniform integers covering the whole range of `dtype`.\n\n The outputs are a deterministic function of `shape` and `seed`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`, `uint32`, `uint64`.\n 2 seeds (shape [2]).\n dtype: An optional `tf.DType` from: `tf.int32, tf.int64, tf.uint32, tf.uint64`. Defaults to `tf.uint64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniformFullIntV2", "docs": "Outputs deterministic pseudorandom random integers from a uniform distribution.\n\n The generated values are uniform integers covering the whole range of `dtype`.\n\n The outputs are a deterministic function of `shape`, `key`, `counter` and `alg`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n key: A `Tensor` of type `uint64`.\n Key for the counter-based RNG algorithm (shape uint64[1]).\n counter: A `Tensor` of type `uint64`.\n Initial counter for the counter-based RNG algorithm (shape uint64[2] or uint64[1] depending on the algorithm). If a larger vector is given, only the needed portion on the left (i.e. [:N]) will be used.\n alg: A `Tensor` of type `int32`. The RNG algorithm (shape int32[]).\n dtype: An optional `tf.DType` from: `tf.int32, tf.int64, tf.uint32, tf.uint64`. Defaults to `tf.uint64`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniformInt", "docs": "Outputs deterministic pseudorandom random integers from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[minval, maxval)`.\n\n The outputs are a deterministic function of `shape`, `seed`, `minval`, and `maxval`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n minval: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Minimum value (inclusive, scalar).\n maxval: A `Tensor`. Must have the same type as `minval`.\n Maximum value (exclusive, scalar).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `minval`.\n ", "desc": "Outputs deterministic pseudorandom random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniformIntV2", "docs": "Outputs deterministic pseudorandom random integers from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[minval, maxval)`.\n\n The outputs are a deterministic function of `shape`, `key`, `counter`, `alg`, `minval` and `maxval`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n key: A `Tensor` of type `uint64`.\n Key for the counter-based RNG algorithm (shape uint64[1]).\n counter: A `Tensor` of type `uint64`.\n Initial counter for the counter-based RNG algorithm (shape uint64[2] or uint64[1] depending on the algorithm). If a larger vector is given, only the needed portion on the left (i.e. [:N]) will be used.\n alg: A `Tensor` of type `int32`. The RNG algorithm (shape int32[]).\n minval: A `Tensor`. Must be one of the following types: `int32`, `int64`, `uint32`, `uint64`.\n Minimum value (inclusive, scalar).\n maxval: A `Tensor`. Must have the same type as `minval`.\n Maximum value (exclusive, scalar).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `minval`.\n ", "desc": "Outputs deterministic pseudorandom random integers from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessRandomUniformV2", "docs": "Outputs deterministic pseudorandom random values from a uniform distribution.\n\n The generated values follow a uniform distribution in the range `[0, 1)`. The\n lower bound 0 is included in the range, while the upper bound 1 is excluded.\n\n The outputs are a deterministic function of `shape`, `key`, `counter` and `alg`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n key: A `Tensor` of type `uint64`.\n Key for the counter-based RNG algorithm (shape uint64[1]).\n counter: A `Tensor` of type `uint64`.\n Initial counter for the counter-based RNG algorithm (shape uint64[2] or uint64[1] depending on the algorithm). If a larger vector is given, only the needed portion on the left (i.e. [:N]) will be used.\n alg: A `Tensor` of type `int32`. The RNG algorithm (shape int32[]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom random values from a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessSampleDistortedBoundingBox", "docs": "Generate a randomly distorted bounding box for an image deterministically.\n\n Bounding box annotations are often supplied in addition to ground-truth labels\n in image recognition or object localization tasks. A common technique for\n training such a system is to randomly distort an image while preserving its\n content, i.e. *data augmentation*. This Op, given the same `seed`,\n deterministically outputs a randomly distorted localization of an object, i.e.\n bounding box, given an `image_size`, `bounding_boxes` and a series of\n constraints.\n\n The output of this Op is a single bounding box that may be used to crop the\n original image. The output is returned as 3 tensors: `begin`, `size` and\n `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the\n image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize\n what the bounding box looks like.\n\n Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The\n bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and\n the height of the underlying image.\n\n The output of this Op is guaranteed to be the same given the same `seed` and is\n independent of how many times the function is called, and independent of global\n seed settings (e.g. `tf.random.set_seed`).\n\n Example usage:\n\n >>> image = np.array([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])\n >>> bbox = tf.constant(\n ... [0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])\n >>> seed = (1, 2)\n >>> # Generate a single distorted bounding box.\n >>> bbox_begin, bbox_size, bbox_draw = (\n ... tf.image.stateless_sample_distorted_bounding_box(\n ... tf.shape(image), bounding_boxes=bbox, seed=seed))\n >>> # Employ the bounding box to distort the image.\n >>> tf.slice(image, bbox_begin, bbox_size)\n \n >>> # Draw the bounding box in an image summary.\n >>> colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])\n >>> tf.image.draw_bounding_boxes(\n ... tf.expand_dims(tf.cast(image, tf.float32),0), bbox_draw, colors)\n \n\n Note that if no bounding box information is available, setting\n `use_image_if_no_bounding_boxes = true` will assume there is a single implicit\n bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is\n false and no bounding boxes are supplied, an error is raised.\n\n Args:\n image_size: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.\n 1-D, containing `[height, width, channels]`.\n bounding_boxes: A `Tensor` of type `float32`.\n 3-D with shape `[batch, N, 4]` describing the N bounding boxes\n associated with the image.\n min_object_covered: A `Tensor` of type `float32`.\n The cropped area of the image must contain at least this\n fraction of any bounding box supplied. The value of this parameter should be\n non-negative. In the case of 0, the cropped area does not need to overlap\n any of the bounding boxes supplied.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[2]`. The seed to the random number generator. Must have dtype\n `int32` or `int64`. (When using XLA, only `int32` is allowed.)\n aspect_ratio_range: An optional list of `floats`. Defaults to `[0.75, 1.33]`.\n The cropped area of the image must have an aspect ratio =\n width / height within this range.\n area_range: An optional list of `floats`. Defaults to `[0.05, 1]`.\n The cropped area of the image must contain a fraction of the\n supplied image within this range.\n max_attempts: An optional `int`. Defaults to `100`.\n Number of attempts at generating a cropped region of the image\n of the specified constraints. After `max_attempts` failures, return the entire\n image.\n use_image_if_no_bounding_boxes: An optional `bool`. Defaults to `False`.\n Controls behavior if no bounding boxes supplied.\n If true, assume an implicit bounding box covering the whole input. If false,\n raise an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (begin, size, bboxes).\n\n begin: A `Tensor`. Has the same type as `image_size`.\n size: A `Tensor`. Has the same type as `image_size`.\n bboxes: A `Tensor` of type `float32`.\n ", "desc": "Generate a randomly distorted bounding box for an image deterministically.", "type": "API"}, {"name": "tf.raw_ops.StatelessTruncatedNormal", "docs": "Outputs deterministic pseudorandom values from a truncated normal distribution.\n\n The generated values follow a normal distribution with mean 0 and standard\n deviation 1, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n The outputs are a deterministic function of `shape` and `seed`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n seed: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2 seeds (shape [2]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom values from a truncated normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessTruncatedNormalV2", "docs": "Outputs deterministic pseudorandom values from a truncated normal distribution.\n\n The generated values follow a normal distribution with mean 0 and standard\n deviation 1, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n The outputs are a deterministic function of `shape`, `key`, `counter` and `alg`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n key: A `Tensor` of type `uint64`.\n Key for the counter-based RNG algorithm (shape uint64[1]).\n counter: A `Tensor` of type `uint64`.\n Initial counter for the counter-based RNG algorithm (shape uint64[2] or uint64[1] depending on the algorithm). If a larger vector is given, only the needed portion on the left (i.e. [:N]) will be used.\n alg: A `Tensor` of type `int32`. The RNG algorithm (shape int32[]).\n dtype: An optional `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`. Defaults to `tf.float32`.\n The type of the output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs deterministic pseudorandom values from a truncated normal distribution.", "type": "API"}, {"name": "tf.raw_ops.StatelessWhile", "docs": "output = input; While (Cond(output)) { output = Body(output) }\n\n Args:\n input: A list of `Tensor` objects.\n A list of input tensors whose types are T.\n cond: A function decorated with @Defun.\n A function takes 'input' and returns a tensor. If the tensor is\n a scalar of non-boolean, the scalar is converted to a boolean\n according to the following rule: if the scalar is a numerical\n value, non-zero means True and zero means False; if the scalar is\n a string, non-empty means True and empty means False. If the\n tensor is not a scalar, non-emptiness means True and False\n otherwise.\n\n This should only be used when the while condition and body functions\n do not have stateful ops.\n body: A function decorated with @Defun.\n A function that takes a list of tensors and returns another\n list of tensors. Both lists have the same types as specified\n by T.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n parallel_iterations: An optional `int`. Defaults to `10`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "output = input; While (Cond(output)) { output = Body(output) }", "type": "API"}, {"name": "tf.raw_ops.StaticRegexFullMatch", "docs": "Check if the input matches the regex pattern.\n\n The input is a string tensor of any shape. The pattern is the\n regular expression to be matched with every element of the input tensor.\n The boolean values (True or False) of the output tensor indicate\n if the input matches the regex pattern provided.\n\n The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Args:\n input: A `Tensor` of type `string`.\n A string tensor of the text to be processed.\n pattern: A `string`. The regular expression to match the input.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Check if the input matches the regex pattern.", "type": "API"}, {"name": "tf.raw_ops.StaticRegexReplace", "docs": "Replaces the match of pattern in input with rewrite.\n\n It follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Args:\n input: A `Tensor` of type `string`. The text to be processed.\n pattern: A `string`. The regular expression to match the input.\n rewrite: A `string`. The rewrite to be applied to the matched expression.\n replace_global: An optional `bool`. Defaults to `True`.\n If True, the replacement is global, otherwise the replacement\n is done only on the first match.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Replaces the match of pattern in input with rewrite.", "type": "API"}, {"name": "tf.raw_ops.StatsAggregatorHandle", "docs": "Creates a statistics manager resource.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a statistics manager resource.", "type": "API"}, {"name": "tf.raw_ops.StatsAggregatorHandleV2", "docs": "TODO: add doc.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.StatsAggregatorSetSummaryWriter", "docs": "Set a summary_writer_interface to record statistics using given stats_aggregator.\n\n Args:\n stats_aggregator: A `Tensor` of type `resource`.\n summary: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Set a summary_writer_interface to record statistics using given stats_aggregator.", "type": "API"}, {"name": "tf.raw_ops.StatsAggregatorSummary", "docs": "Produces a summary of any statistics recorded by the given statistics manager.\n\n Args:\n iterator: A `Tensor` of type `resource`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Produces a summary of any statistics recorded by the given statistics manager.", "type": "API"}, {"name": "tf.raw_ops.StopGradient", "docs": "Stops gradient computation.\n\n When executed in a graph, this op outputs its input tensor as-is.\n\n When building ops to compute gradients, this op prevents the contribution of\n its inputs to be taken into account. Normally, the gradient generator adds ops\n to a graph to compute the derivatives of a specified 'loss' by recursively\n finding out inputs that contributed to its computation. If you insert this op\n in the graph it inputs are masked from the gradient generator. They are not\n taken into account for computing gradients.\n\n This is useful any time you want to compute a value with TensorFlow but need\n to pretend that the value was a constant. For example, the softmax function\n for a vector x can be written as\n\n ```python\n\n def softmax(x):\n numerator = tf.exp(x)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n This however is susceptible to overflow if the values in x are large. An\n alternative more stable way is to subtract the maximum of x from each of the\n values.\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.reduce_max(x)\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n However, when we backprop through the softmax to x, we dont want to backprop\n through the `tf.reduce_max(x)` (if the max values are not unique then the\n gradient could flow to the wrong input) calculation and treat that as a\n constant. Therefore, we should write this out as\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.stop_gradient(tf.reduce_max(x))\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n Some other examples include:\n\n * The *EM* algorithm where the *M-step* should not involve backpropagation\n through the output of the *E-step*.\n * Contrastive divergence training of Boltzmann machines where, when\n differentiating the energy function, the training must not backpropagate\n through the graph that generated the samples from the model.\n * Adversarial training, where no backprop should happen through the adversarial\n example generation process.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Stops gradient computation.", "type": "API"}, {"name": "tf.raw_ops.StridedSlice", "docs": "Return a strided slice from `input`.\n\n Note, most python users will want to use the Python `Tensor.__getitem__`\n or `Variable.__getitem__` rather than this op directly.\n\n The goal of this op is to produce a new tensor with a subset of\n the elements from the `n` dimensional `input` tensor. The subset is chosen using\n a sequence of `m` sparse range specifications encoded into the arguments\n of this function. Note, in some cases\n `m` could be equal to `n`, but this need not be the case. Each\n range specification entry can be one of the following:\n\n - An ellipsis (...). Ellipses are used to imply zero or more\n dimensions of full-dimension selection and are produced using\n `ellipsis_mask`. For example, `foo[...]` is the identity slice.\n\n - A new axis. This is used to insert a new shape=1 dimension and is\n produced using `new_axis_mask`. For example, `foo[:, ...]` where\n `foo` is shape `(3, 4)` produces a `(1, 3, 4)` tensor.\n\n\n - A range `begin:end:stride`. This is used to specify how much to choose from\n a given dimension. `stride` can be any integer but 0. `begin` is an integer\n which represents the index of the first value to select while `end` represents\n the index of the last value to select. The number of values selected in each\n dimension is `end - begin` if `stride > 0` and `begin - end` if `stride < 0`.\n `begin` and `end` can be negative where `-1` is the last element, `-2` is\n the second to last. `begin_mask` controls whether to replace the explicitly\n given `begin` with an implicit effective value of `0` if `stride > 0` and\n `-1` if `stride < 0`. `end_mask` is analogous but produces the number\n required to create the largest open interval. For example, given a shape\n `(3,)` tensor `foo[:]`, the effective `begin` and `end` are `0` and `3`. Do\n not assume this is equivalent to `foo[0:-1]` which has an effective `begin`\n and `end` of `0` and `2`. Another example is `foo[-2::-1]` which reverses the\n first dimension of a tensor while dropping the last two (in the original\n order elements). For example `foo = [1,2,3,4]; foo[-2::-1]` is `[4,3]`.\n\n - A single index. This is used to keep only elements that have a given\n index. For example (`foo[2, :]` on a shape `(5,6)` tensor produces a\n shape `(6,)` tensor. This is encoded in `begin` and `end` and\n `shrink_axis_mask`.\n\n Each conceptual range specification is encoded in the op's argument. This\n encoding is best understand by considering a non-trivial example. In\n particular,\n `foo[1, 2:4, None, ..., :-3:-1, :]` will be encoded as\n\n ```\n begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0)\n end = [2, 4, x, x, -3, x]\n strides = [1, 1, x, x, -1, 1]\n begin_mask = 1<<4 | 1<<5 = 48\n end_mask = 1<<5 = 32\n ellipsis_mask = 1<<3 = 8\n new_axis_mask = 1<<2 = 4\n shrink_axis_mask = 1<<0 = 1\n ```\n\n In this case if `foo.shape` is (5, 5, 5, 5, 5, 5) the final shape of\n the slice becomes (2, 1, 5, 5, 2, 5).\n Let us walk step by step through each argument specification.\n\n 1. The first argument in the example slice is turned into `begin = 1` and\n `end = begin + 1 = 2`. To disambiguate from the original spec `2:4` we\n also set the appropriate bit in `shrink_axis_mask`.\n\n 2. `2:4` is contributes 2, 4, 1 to begin, end, and stride. All masks have\n zero bits contributed.\n\n 3. None is a synonym for `tf.newaxis`. This means insert a dimension of size 1\n dimension in the final shape. Dummy values are contributed to begin,\n end and stride, while the new_axis_mask bit is set.\n\n 4. `...` grab the full ranges from as many dimensions as needed to\n fully specify a slice for every dimension of the input shape.\n\n 5. `:-3:-1` shows the use of negative indices. A negative index `i` associated\n with a dimension that has shape `s` is converted to a positive index\n `s + i`. So `-1` becomes `s-1` (i.e. the last element). This conversion\n is done internally so begin, end and strides receive x, -3, and -1.\n The appropriate begin_mask bit is set to indicate the start range is the\n full range (ignoring the x).\n\n 6. `:` indicates that the entire contents of the corresponding dimension\n is selected. This is equivalent to `::` or `0::1`. begin, end, and strides\n receive 0, 0, and 1, respectively. The appropriate bits in `begin_mask` and\n `end_mask` are also set.\n\n *Requirements*:\n `0 != strides[i] for i in [0, m)`\n `ellipsis_mask must be a power of two (only one ellipsis)`\n\n Args:\n input: A `Tensor`.\n begin: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n `begin[k]` specifies the offset into the `k`th range specification.\n The exact dimension this corresponds to will be determined by context.\n Out-of-bounds values will be silently clamped. If the `k`th bit of\n `begin_mask` then `begin[k]` is ignored and the full range of the\n appropriate dimension is used instead. Negative values causes indexing\n to start from the highest element e.g. If `foo==[1,2,3]` then `foo[-1]==3`.\n end: A `Tensor`. Must have the same type as `begin`.\n `end[i]` is like `begin` with the exception that `end_mask` is\n used to determine full ranges.\n strides: A `Tensor`. Must have the same type as `begin`.\n `strides[i]` specifies the increment in the `i`th specification\n after extracting a given element. Negative indices will reverse\n the original order. Out or range values are\n clamped to `[0,dim[i]) if slice[i]>0` or `[-1,dim[i]-1] if slice[i] < 0`\n begin_mask: An optional `int`. Defaults to `0`.\n a bitmask where a bit i being 1 means to ignore the begin\n value and instead use the largest interval possible. At runtime\n begin[i] will be replaced with `[0, n-1)` if `stride[i] > 0` or\n `[-1, n-1]` if `stride[i] < 0`\n end_mask: An optional `int`. Defaults to `0`. analogous to `begin_mask`\n ellipsis_mask: An optional `int`. Defaults to `0`.\n a bitmask where bit `i` being 1 means the `i`th\n position is actually an ellipsis. One bit at most can be 1.\n If `ellipsis_mask == 0`, then an implicit ellipsis mask of `1 << (m+1)`\n is provided. This means that `foo[3:5] == foo[3:5, ...]`. An ellipsis\n implicitly creates as many range specifications as necessary to fully\n specify the sliced range for every dimension. For example for a 4-dimensional\n tensor `foo` the slice `foo[2, ..., 5:8]` implies `foo[2, :, :, 5:8]`.\n new_axis_mask: An optional `int`. Defaults to `0`.\n a bitmask where bit `i` being 1 means the `i`th\n specification creates a new shape 1 dimension. For example\n `foo[:4, tf.newaxis, :2]` would produce a shape `(4, 1, 2)` tensor.\n shrink_axis_mask: An optional `int`. Defaults to `0`.\n a bitmask where bit `i` implies that the `i`th\n specification should shrink the dimensionality. begin and end\n must imply a slice of size 1 in the dimension. For example in\n python one might do `foo[:, 3, :]` which would result in\n `shrink_axis_mask` being 2.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Return a strided slice from `input`.", "type": "API"}, {"name": "tf.raw_ops.StridedSliceAssign", "docs": "Assign `value` to the sliced l-value reference of `ref`.\n\n The values of `value` are assigned to the positions in the variable\n `ref` that are selected by the slice parameters. The slice parameters\n `begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`.\n\n NOTE this op currently does not support broadcasting and so `value`'s\n shape must be exactly the shape produced by the slice of `ref`.\n\n Args:\n ref: A mutable `Tensor`.\n begin: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n end: A `Tensor`. Must have the same type as `begin`.\n strides: A `Tensor`. Must have the same type as `begin`.\n value: A `Tensor`. Must have the same type as `ref`.\n begin_mask: An optional `int`. Defaults to `0`.\n end_mask: An optional `int`. Defaults to `0`.\n ellipsis_mask: An optional `int`. Defaults to `0`.\n new_axis_mask: An optional `int`. Defaults to `0`.\n shrink_axis_mask: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor`. Has the same type as `ref`.\n ", "desc": "Assign `value` to the sliced l-value reference of `ref`.", "type": "API"}, {"name": "tf.raw_ops.StridedSliceGrad", "docs": "Returns the gradient of `StridedSlice`.\n\n Since `StridedSlice` cuts out pieces of its `input` which is size\n `shape`, its gradient will have the same shape (which is passed here\n as `shape`). The gradient will be zero in any element that the slice\n does not select.\n\n Arguments are the same as StridedSliceGrad with the exception that\n `dy` is the input gradient to be propagated and `shape` is the\n shape of `StridedSlice`'s `input`.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n begin: A `Tensor`. Must have the same type as `shape`.\n end: A `Tensor`. Must have the same type as `shape`.\n strides: A `Tensor`. Must have the same type as `shape`.\n dy: A `Tensor`.\n begin_mask: An optional `int`. Defaults to `0`.\n end_mask: An optional `int`. Defaults to `0`.\n ellipsis_mask: An optional `int`. Defaults to `0`.\n new_axis_mask: An optional `int`. Defaults to `0`.\n shrink_axis_mask: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `dy`.\n ", "desc": "Returns the gradient of `StridedSlice`.", "type": "API"}, {"name": "tf.raw_ops.StringFormat", "docs": "Formats a string template using a list of tensors.\n\n Formats a string template using a list of tensors, pretty-printing tensor summaries.\n\n Args:\n inputs: A list of `Tensor` objects.\n The list of tensors to format into the placeholder string.\n template: An optional `string`. Defaults to `\"%s\"`.\n A string, the template to format tensor summaries into.\n placeholder: An optional `string`. Defaults to `\"%s\"`.\n A string, at each placeholder in the template a subsequent tensor summary will be inserted.\n summarize: An optional `int`. Defaults to `3`.\n When formatting the tensor summaries print the first and last summarize entries of each tensor dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Formats a string template using a list of tensors.", "type": "API"}, {"name": "tf.raw_ops.StringJoin", "docs": "Joins the strings in the given list of string tensors into one tensor;\n\n with the given separator (default is an empty separator).\n\n Examples:\n\n >>> s = [\"hello\", \"world\", \"tensorflow\"]\n >>> tf.strings.join(s, \" \")\n \n\n Args:\n inputs: A list of at least 1 `Tensor` objects with type `string`.\n A list of string tensors. The tensors must all have the same shape,\n or be scalars. Scalars may be mixed in; these will be broadcast to the shape\n of non-scalar inputs.\n separator: An optional `string`. Defaults to `\"\"`.\n string, an optional join separator.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Joins the strings in the given list of string tensors into one tensor;", "type": "API"}, {"name": "tf.raw_ops.StringLength", "docs": "String lengths of `input`.\n\n Computes the length of each string given in the input tensor.\n\n >>> strings = tf.constant(['Hello','TensorFlow', '\\U0001F642'])\n >>> tf.strings.length(strings).numpy() # default counts bytes\n array([ 5, 10, 4], dtype=int32)\n >>> tf.strings.length(strings, unit=\"UTF8_CHAR\").numpy()\n array([ 5, 10, 1], dtype=int32)\n\n Args:\n input: A `Tensor` of type `string`.\n The strings for which to compute the length for each element.\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is counted to compute string length. One of: `\"BYTE\"` (for\n the number of bytes in each string) or `\"UTF8_CHAR\"` (for the number of UTF-8\n encoded Unicode code points in each string). Results are undefined\n if `unit=UTF8_CHAR` and the `input` strings do not contain structurally\n valid UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "String lengths of `input`.", "type": "API"}, {"name": "tf.raw_ops.StringLower", "docs": "Converts all uppercase characters into their respective lowercase replacements.\n\n Example:\n\n >>> tf.strings.lower(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be lower-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all uppercase characters into their respective lowercase replacements.", "type": "API"}, {"name": "tf.raw_ops.StringNGrams", "docs": "Creates ngrams from ragged string data.\n\n This op accepts a ragged tensor with 1 ragged dimension containing only\n strings and outputs a ragged tensor with 1 ragged dimension containing ngrams\n of that string, joined along the innermost axis.\n\n Args:\n data: A `Tensor` of type `string`.\n The values tensor of the ragged string tensor to make ngrams out of. Must be a\n 1D string tensor.\n data_splits: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The splits tensor of the ragged string tensor to make ngrams out of.\n separator: A `string`.\n The string to append between elements of the token. Use \"\" for no separator.\n ngram_widths: A list of `ints`. The sizes of the ngrams to create.\n left_pad: A `string`.\n The string to use to pad the left side of the ngram sequence. Only used if\n pad_width != 0.\n right_pad: A `string`.\n The string to use to pad the right side of the ngram sequence. Only used if\n pad_width != 0.\n pad_width: An `int`.\n The number of padding elements to add to each side of each\n sequence. Note that padding will never be greater than 'ngram_widths'-1\n regardless of this value. If `pad_width=-1`, then add `max(ngram_widths)-1`\n elements.\n preserve_short_sequences: A `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (ngrams, ngrams_splits).\n\n ngrams: A `Tensor` of type `string`.\n ngrams_splits: A `Tensor`. Has the same type as `data_splits`.\n ", "desc": "Creates ngrams from ragged string data.", "type": "API"}, {"name": "tf.raw_ops.StringSplit", "docs": "Split elements of `input` based on `delimiter` into a `SparseTensor`.\n\n Let N be the size of source (typically N will be the batch size). Split each\n element of `input` based on `delimiter` and return a `SparseTensor`\n containing the splitted tokens. Empty tokens are ignored.\n\n `delimiter` can be empty, or a string of split characters. If `delimiter` is an\n empty string, each element of `input` is split into individual single-byte\n character strings, including splitting of UTF-8 multibyte sequences. Otherwise\n every character of `delimiter` is a potential split point.\n\n For example:\n N = 2, input[0] is 'hello world' and input[1] is 'a b c', then the output\n will be\n\n indices = [0, 0;\n 0, 1;\n 1, 0;\n 1, 1;\n 1, 2]\n shape = [2, 3]\n values = ['hello', 'world', 'a', 'b', 'c']\n\n Args:\n input: A `Tensor` of type `string`. 1-D. Strings to split.\n delimiter: A `Tensor` of type `string`.\n 0-D. Delimiter characters (bytes), or empty string.\n skip_empty: An optional `bool`. Defaults to `True`.\n A `bool`. If `True`, skip the empty strings from the result.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, values, shape).\n\n indices: A `Tensor` of type `int64`.\n values: A `Tensor` of type `string`.\n shape: A `Tensor` of type `int64`.\n ", "desc": "Split elements of `input` based on `delimiter` into a `SparseTensor`.", "type": "API"}, {"name": "tf.raw_ops.StringSplitV2", "docs": "Split elements of `source` based on `sep` into a `SparseTensor`.\n\n Let N be the size of source (typically N will be the batch size). Split each\n element of `source` based on `sep` and return a `SparseTensor`\n containing the split tokens. Empty tokens are ignored.\n\n For example, N = 2, source[0] is 'hello world' and source[1] is 'a b c',\n then the output will be\n ```\n st.indices = [0, 0;\n 0, 1;\n 1, 0;\n 1, 1;\n 1, 2]\n st.shape = [2, 3]\n st.values = ['hello', 'world', 'a', 'b', 'c']\n ```\n\n If `sep` is given, consecutive delimiters are not grouped together and are\n deemed to delimit empty strings. For example, source of `\"1<>2<><>3\"` and\n sep of `\"<>\"` returns `[\"1\", \"2\", \"\", \"3\"]`. If `sep` is None or an empty\n string, consecutive whitespace are regarded as a single separator, and the\n result will contain no empty strings at the startor end if the string has\n leading or trailing whitespace.\n\n Note that the above mentioned behavior matches python's str.split.\n\n Args:\n input: A `Tensor` of type `string`.\n `1-D` string `Tensor`, the strings to split.\n sep: A `Tensor` of type `string`.\n `0-D` string `Tensor`, the delimiter character.\n maxsplit: An optional `int`. Defaults to `-1`.\n An `int`. If `maxsplit > 0`, limit of the split of the result.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (indices, values, shape).\n\n indices: A `Tensor` of type `int64`.\n values: A `Tensor` of type `string`.\n shape: A `Tensor` of type `int64`.\n ", "desc": "Split elements of `source` based on `sep` into a `SparseTensor`.", "type": "API"}, {"name": "tf.raw_ops.StringStrip", "docs": "Strip leading and trailing whitespaces from the Tensor.\n\n Examples:\n\n >>> tf.strings.strip([\"\\nTensorFlow\", \" The python library \"]).numpy()\n array([b'TensorFlow', b'The python library'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`. A string `Tensor` of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Strip leading and trailing whitespaces from the Tensor.", "type": "API"}, {"name": "tf.raw_ops.StringToHashBucket", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process.\n\n Note that the hash function may change from time to time.\n This functionality will be deprecated and it's recommended to use\n `tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`.\n\n Args:\n string_tensor: A `Tensor` of type `string`.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.raw_ops.StringToHashBucketFast", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process and will never change. However, it is not suitable for cryptography.\n This function may be used when CPU time is scarce and inputs are trusted or\n unimportant. There is a risk of adversaries constructing inputs that all hash\n to the same bucket. To prevent this problem, use a strong hash function with\n `tf.string_to_hash_bucket_strong`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_fast([\"Hello\", \"TensorFlow\", \"2.x\"], 3).numpy()\n array([0, 2, 2])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.raw_ops.StringToHashBucketStrong", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process. The hash function is a keyed hash function, where attribute `key`\n defines the key of the hash function. `key` is an array of 2 elements.\n\n A strong hash is important when inputs may be malicious, e.g. URLs with\n additional components. Adversaries could try to make their inputs hash to the\n same bucket for a denial-of-service attack or to skew the results. A strong\n hash can be used to make it difficult to find inputs with a skewed hash value\n distribution over buckets. This requires that the hash function is\n seeded by a high-entropy (random) \"key\" unknown to the adversary.\n\n The additional robustness comes at a cost of roughly 4x higher compute\n time than `tf.string_to_hash_bucket_fast`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_strong([\"Hello\", \"TF\"], 3, [1, 2]).numpy()\n array([2, 0])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n key: A list of `ints`.\n The key used to seed the hash function, passed as a list of two uint64\n elements.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.raw_ops.StringToNumber", "docs": "Converts each string in the input Tensor to the specified numeric type.\n\n (Note that int32 overflow results in an error while float overflow\n results in a rounded value.)\n\n Example:\n\n >>> strings = [\"5.0\", \"3.0\", \"7.0\"]\n >>> tf.strings.to_number(strings)\n \n\n Args:\n string_tensor: A `Tensor` of type `string`.\n out_type: An optional `tf.DType` from: `tf.float32, tf.float64, tf.int32, tf.int64`. Defaults to `tf.float32`.\n The numeric type to interpret each string in `string_tensor` as.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Converts each string in the input Tensor to the specified numeric type.", "type": "API"}, {"name": "tf.raw_ops.StringUpper", "docs": "Converts all lowercase characters into their respective uppercase replacements.\n\n Example:\n\n >>> tf.strings.upper(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be upper-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all lowercase characters into their respective uppercase replacements.", "type": "API"}, {"name": "tf.raw_ops.Sub", "docs": "Returns x - y element-wise.\n\n *NOTE*: `tf.subtract` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Both input and output have a range `(-inf, inf)`.\n\n Example usages below.\n\n Subtract operation between an array and a scalar:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.subtract(x, y)\n \n >>> tf.subtract(y, x)\n \n\n Note that binary `-` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x - y\n \n\n Subtract operation between an array and a tensor of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([5, 4, 3, 2, 1])\n >>> tf.subtract(y, x)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**8 + 1, 2**8 + 2]\n >>> tf.subtract(x, y)\n \n\n When subtracting two input values of different shapes, `tf.subtract` follows the\n [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules)\n . The two input array shapes are compared element-wise. Starting with the\n trailing dimensions, the two dimensions either have to be equal or one of them\n needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(2, 1, 3)\n >>> tf.subtract(x, y)\n \n\n Example with inputs of different dimensions:\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(1, 6)\n >>> tf.subtract(x, y)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x - y element-wise.", "type": "API"}, {"name": "tf.raw_ops.Substr", "docs": "Return substrings from `Tensor` of strings.\n\n For each string in the input `Tensor`, creates a substring starting at index\n `pos` with a total length of `len`.\n\n If `len` defines a substring that would extend beyond the length of the input\n string, or if `len` is negative, then as many characters as possible are used.\n\n A negative `pos` indicates distance within the string backwards from the end.\n\n If `pos` specifies an index which is out of range for any of the input strings,\n then an `InvalidArgumentError` is thrown.\n\n `pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on\n Op creation.\n\n *NOTE*: `Substr` supports broadcasting up to two dimensions. More about\n broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n ---\n\n Examples\n\n Using scalar `pos` and `len`:\n\n ```python\n input = [b'Hello', b'World']\n position = 1\n length = 3\n\n output = [b'ell', b'orl']\n ```\n\n Using `pos` and `len` with same shape as `input`:\n\n ```python\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen']]\n position = [[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]\n length = [[2, 3, 4],\n [4, 3, 2],\n [5, 5, 5]]\n\n output = [[b'en', b'eve', b'lve'],\n [b'hirt', b'urt', b'te'],\n [b'ixtee', b'vente', b'hteen']]\n ```\n\n Broadcasting `pos` and `len` onto `input`:\n\n ```\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen'],\n [b'nineteen', b'twenty', b'twentyone']]\n position = [1, 2, 3]\n length = [1, 2, 3]\n\n output = [[b'e', b'ev', b'lve'],\n [b'h', b'ur', b'tee'],\n [b'i', b've', b'hte'],\n [b'i', b'en', b'nty']]\n ```\n\n Broadcasting `input` onto `pos` and `len`:\n\n ```\n input = b'thirteen'\n position = [1, 5, 7]\n length = [3, 2, 1]\n\n output = [b'hir', b'ee', b'n']\n ```\n\n Raises:\n\n * `ValueError`: If the first argument cannot be converted to a\n Tensor of `dtype string`.\n * `InvalidArgumentError`: If indices are out of range.\n * `ValueError`: If `pos` and `len` are not the same shape.\n\n Args:\n input: A `Tensor` of type `string`. Tensor of strings\n pos: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Scalar defining the position of first character in each substring\n len: A `Tensor`. Must have the same type as `pos`.\n Scalar defining the number of characters to include in each substring\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is used to create the substring. One of: `\"BYTE\"` (for\n defining position and length by bytes) or `\"UTF8_CHAR\"` (for the UTF-8\n encoded Unicode code points). The default is `\"BYTE\"`. Results are undefined if\n `unit=UTF8_CHAR` and the `input` strings do not contain structurally valid\n UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Return substrings from `Tensor` of strings.", "type": "API"}, {"name": "tf.raw_ops.Sum", "docs": "Computes the sum of elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`. Unless\n `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keep_dims` is true, the reduced dimensions are\n retained with length 1.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n The tensor to reduce.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The dimensions to reduce. Must be in the range\n `[-rank(input), rank(input))`.\n keep_dims: An optional `bool`. Defaults to `False`.\n If true, retain reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the sum of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.raw_ops.SummaryWriter", "docs": "TODO: add doc.\n\n Args:\n shared_name: An optional `string`. Defaults to `\"\"`.\n container: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.Svd", "docs": "Computes the singular value decompositions of one or more matrices.\n\n Computes the SVD of each inner matrix in `input` such that\n `input[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])`\n\n ```python\n # a is a tensor containing a batch of matrices.\n # s is a tensor of singular values for each matrix.\n # u is the tensor containing the left singular vectors for each matrix.\n # v is the tensor containing the right singular vectors for each matrix.\n s, u, v = svd(a)\n s, _, _ = svd(a, compute_uv=False)\n ```\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float64`, `float32`, `half`, `complex64`, `complex128`.\n A tensor of shape `[..., M, N]` whose inner-most 2 dimensions\n form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.\n compute_uv: An optional `bool`. Defaults to `True`.\n If true, left and right singular vectors will be\n computed and returned in `u` and `v`, respectively.\n If false, `u` and `v` are not set and should never referenced.\n full_matrices: An optional `bool`. Defaults to `False`.\n If true, compute full-sized `u` and `v`. If false\n (the default), compute only the leading `P` singular vectors.\n Ignored if `compute_uv` is `False`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (s, u, v).\n\n s: A `Tensor`. Has the same type as `input`.\n u: A `Tensor`. Has the same type as `input`.\n v: A `Tensor`. Has the same type as `input`.\n ", "desc": "Computes the singular value decompositions of one or more matrices.", "type": "API"}, {"name": "tf.raw_ops.Switch", "docs": "Forwards `data` to the output port determined by `pred`.\n\n If `pred` is true, the `data` input is forwarded to `output_true`. Otherwise,\n the data goes to `output_false`.\n\n See also `RefSwitch` and `Merge`.\n\n Args:\n data: A `Tensor`. The tensor to be forwarded to the appropriate output.\n pred: A `Tensor` of type `bool`.\n A scalar that specifies which output port will receive data.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_false, output_true).\n\n output_false: A `Tensor`. Has the same type as `data`.\n output_true: A `Tensor`. Has the same type as `data`.\n ", "desc": "Forwards `data` to the output port determined by `pred`.", "type": "API"}, {"name": "tf.raw_ops.SymbolicGradient", "docs": "Computes the gradient function for function f via backpropagation.\n\n Args:\n input: A list of `Tensor` objects. a list of input tensors of size N + M;\n Tout: A list of `tf.DTypes` that has length `>= 1`.\n the type list for the input list.\n f: A function decorated with @Defun.\n The function we want to compute the gradient for.\n\n The function 'f' must be a numerical function which takes N inputs and\n produces M outputs. Its gradient function 'g', which is computed by\n this SymbolicGradient op is a function taking N + M inputs and\n produces N outputs.\n\n I.e. if we have\n (y1, y2, ..., y_M) = f(x1, x2, ..., x_N),\n then, g is\n (dL/dx1, dL/dx2, ..., dL/dx_N) = g(x1, x2, ..., x_N,\n dL/dy1, dL/dy2, ..., dL/dy_M),\n\n where L is a scalar-value function of (x1, x2, ..., xN) (e.g., the\n loss function). dL/dx_i is the partial derivative of L with respect\n to x_i.\n\n (Needs some math expert to say the comment above better.)\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Computes the gradient function for function f via backpropagation.", "type": "API"}, {"name": "tf.raw_ops.TakeDataset", "docs": "Creates a dataset that contains `count` elements from the `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n count: A `Tensor` of type `int64`.\n A scalar representing the number of elements from the `input_dataset`\n that should be taken. A value of `-1` indicates that all of `input_dataset`\n is taken.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that contains `count` elements from the `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.TakeManySparseFromTensorsMap", "docs": "Read `SparseTensors` from a `SparseTensorsMap` and concatenate them.\n\n The input `sparse_handles` must be an `int64` matrix of shape `[N, 1]` where\n `N` is the minibatch size and the rows correspond to the output handles of\n `AddSparseToTensorsMap` or `AddManySparseToTensorsMap`. The ranks of the\n original `SparseTensor` objects that went into the given input ops must all\n match. When the final `SparseTensor` is created, it has rank one\n higher than the ranks of the incoming `SparseTensor` objects\n (they have been concatenated along a new row dimension on the left).\n\n The output `SparseTensor` object's shape values for all dimensions but the\n first are the max across the input `SparseTensor` objects' shape values\n for the corresponding dimensions. Its first shape value is `N`, the minibatch\n size.\n\n The input `SparseTensor` objects' indices are assumed ordered in\n standard lexicographic order. If this is not the case, after this\n step run `SparseReorder` to restore index ordering.\n\n For example, if the handles represent an input, which is a `[2, 3]` matrix\n representing two original `SparseTensor` objects:\n\n ```\n index = [ 0]\n [10]\n [20]\n values = [1, 2, 3]\n shape = [50]\n ```\n\n and\n\n ```\n index = [ 2]\n [10]\n values = [4, 5]\n shape = [30]\n ```\n\n then the final `SparseTensor` will be:\n\n ```\n index = [0 0]\n [0 10]\n [0 20]\n [1 2]\n [1 10]\n values = [1, 2, 3, 4, 5]\n shape = [2 50]\n ```\n\n Args:\n sparse_handles: A `Tensor` of type `int64`.\n 1-D, The `N` serialized `SparseTensor` objects.\n Shape: `[N]`.\n dtype: A `tf.DType`.\n The `dtype` of the `SparseTensor` objects stored in the\n `SparseTensorsMap`.\n container: An optional `string`. Defaults to `\"\"`.\n The container name for the `SparseTensorsMap` read by this op.\n shared_name: An optional `string`. Defaults to `\"\"`.\n The shared name for the `SparseTensorsMap` read by this op.\n It should not be blank; rather the `shared_name` or unique Operation name\n of the Op that created the original `SparseTensorsMap` should be used.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sparse_indices, sparse_values, sparse_shape).\n\n sparse_indices: A `Tensor` of type `int64`.\n sparse_values: A `Tensor` of type `dtype`.\n sparse_shape: A `Tensor` of type `int64`.\n ", "desc": "Read `SparseTensors` from a `SparseTensorsMap` and concatenate them.", "type": "API"}, {"name": "tf.raw_ops.TakeWhileDataset", "docs": "Creates a dataset that stops iteration when predicate` is false.\n\n The `predicate` function must return a scalar boolean and accept the\n following arguments:\n\n * One tensor for each component of an element of `input_dataset`.\n * One tensor for each value in `other_arguments`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n other_arguments: A list of `Tensor` objects.\n A list of tensors, typically values that were captured when\n building a closure for `predicate`.\n predicate: A function decorated with @Defun.\n A function returning a scalar boolean.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that stops iteration when predicate` is false.", "type": "API"}, {"name": "tf.raw_ops.Tan", "docs": "Computes tan of x element-wise.\n\n Given an input tensor, this function computes tangent of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `(-inf, inf)`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes tan of x element-wise.", "type": "API"}, {"name": "tf.raw_ops.Tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.raw_ops.TanhGrad", "docs": "Computes the gradient for the tanh of `x` wrt its input.\n\n Specifically, `grad = dy * (1 - y*y)`, where `y = tanh(x)`, and `dy`\n is the corresponding input gradient.\n\n Args:\n y: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n dy: A `Tensor`. Must have the same type as `y`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `y`.\n ", "desc": "Computes the gradient for the tanh of `x` wrt its input.", "type": "API"}, {"name": "tf.raw_ops.TemporaryVariable", "docs": "Returns a tensor that may be mutated, but only persists within a single step.\n\n This is an experimental op for internal use only and it is possible to use this\n op in unsafe ways. DO NOT USE unless you fully understand the risks.\n\n It is the caller's responsibility to ensure that 'ref' is eventually passed to a\n matching 'DestroyTemporaryVariable' op after all other uses have completed.\n\n Outputs a ref to the tensor state so it may be read or modified.\n\n E.g.\n var = state_ops._temporary_variable([1, 2], types.float_)\n var_name = var.op.name\n var = state_ops.assign(var, [[4.0, 5.0]])\n var = state_ops.assign_add(var, [[6.0, 7.0]])\n final = state_ops._destroy_temporary_variable(var, var_name=var_name)\n\n Args:\n shape: A `tf.TensorShape` or list of `ints`.\n The shape of the variable tensor.\n dtype: A `tf.DType`. The type of elements in the variable tensor.\n var_name: An optional `string`. Defaults to `\"\"`.\n Overrides the name used for the temporary variable resource. Default\n value is the name of the 'TemporaryVariable' op (which is guaranteed unique).\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor` of type `dtype`.\n ", "desc": "Returns a tensor that may be mutated, but only persists within a single step.", "type": "API"}, {"name": "tf.raw_ops.TensorArray", "docs": "TODO: add doc.\n\n Args:\n size: A `Tensor` of type `int32`.\n dtype: A `tf.DType`.\n dynamic_size: An optional `bool`. Defaults to `False`.\n clear_after_read: An optional `bool`. Defaults to `True`.\n tensor_array_name: An optional `string`. Defaults to `\"\"`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayClose", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayCloseV2", "docs": "Deprecated. Use TensorArrayCloseV3\n\n Args:\n handle: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Deprecated. Use TensorArrayCloseV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayCloseV3", "docs": "Delete the TensorArray from its resource container.\n\n This enables the user to close and release the resource in the middle\n of a step/run.\n\n Args:\n handle: A `Tensor` of type `resource`.\n The handle to a TensorArray (output of TensorArray or TensorArrayGrad).\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Delete the TensorArray from its resource container.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayConcat", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n element_shape_except0: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (value, lengths).\n\n value: A `Tensor` of type `dtype`.\n lengths: A `Tensor` of type `int64`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayConcatV2", "docs": "Deprecated. Use TensorArrayConcatV3\n\n Args:\n handle: A `Tensor` of type `string`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n element_shape_except0: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (value, lengths).\n\n value: A `Tensor` of type `dtype`.\n lengths: A `Tensor` of type `int64`.\n ", "desc": "Deprecated. Use TensorArrayConcatV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayConcatV3", "docs": "Concat the elements from the TensorArray into value `value`.\n\n Takes `T` elements of shapes\n\n ```\n (n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...)\n ```\n\n and concatenates them into a Tensor of shape:\n\n ```(n0 + n1 + ... + n(T-1) x d0 x d1 x ...)```\n\n All elements must have the same shape (excepting the first dimension).\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n dtype: A `tf.DType`. The type of the elem that is returned.\n element_shape_except0: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n The expected shape of an element, if known,\n excluding the first dimension. Used to validate the shapes of\n TensorArray elements. If this shape is not fully specified, concatenating\n zero-size TensorArrays is an error.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (value, lengths).\n\n value: A `Tensor` of type `dtype`.\n lengths: A `Tensor` of type `int64`.\n ", "desc": "Concat the elements from the TensorArray into value `value`.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGather", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n indices: A `Tensor` of type `int32`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGatherV2", "docs": "Deprecated. Use TensorArrayGatherV3\n\n Args:\n handle: A `Tensor` of type `string`.\n indices: A `Tensor` of type `int32`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Deprecated. Use TensorArrayGatherV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGatherV3", "docs": "Gather specific elements from the TensorArray into output `value`.\n\n All elements selected by `indices` must have the same shape.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n indices: A `Tensor` of type `int32`.\n The locations in the TensorArray from which to read tensor elements.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n dtype: A `tf.DType`. The type of the elem that is returned.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n The expected shape of an element, if known. Used to\n validate the shapes of TensorArray elements. If this shape is not\n fully specified, gathering zero-size TensorArrays is an error.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Gather specific elements from the TensorArray into output `value`.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGrad", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type `string`.\n flow_in: A `Tensor` of type `float32`.\n source: A `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGradV2", "docs": "Deprecated. Use TensorArrayGradV3\n\n Args:\n handle: A `Tensor` of type `string`.\n flow_in: A `Tensor` of type `float32`.\n source: A `string`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Deprecated. Use TensorArrayGradV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGradV3", "docs": "Creates a TensorArray for storing the gradients of values in the given handle.\n\n If the given TensorArray gradient already exists, returns a reference to it.\n\n Locks the size of the original TensorArray by disabling its dynamic size flag.\n\n **A note about the input flow_in:**\n\n The handle flow_in forces the execution of the gradient lookup to occur\n only after certain other operations have occurred. For example, when\n the forward TensorArray is dynamically sized, writes to this TensorArray\n may resize the object. The gradient TensorArray is statically sized based\n on the size of the forward TensorArray when this operation executes.\n Furthermore, the size of the forward TensorArray is frozen by this call.\n As a result, the flow is used to ensure that the call to generate the gradient\n TensorArray only happens after all writes are executed.\n\n In the case of dynamically sized TensorArrays, gradient computation should\n only be performed on read operations that have themselves been chained via\n flow to occur only after all writes have executed. That way the final size\n of the forward TensorArray is known when this operation is called.\n\n **A note about the source attribute:**\n\n TensorArray gradient calls use an accumulator TensorArray object. If\n multiple gradients are calculated and run in the same session, the multiple\n gradient nodes may accidentally flow through the same accumulator TensorArray.\n This double counts and generally breaks the TensorArray gradient flow.\n\n The solution is to identify which gradient call this particular\n TensorArray gradient is being called in. This is performed by identifying\n a unique string (e.g. \"gradients\", \"gradients_1\", ...) from the input\n gradient Tensor's name. This string is used as a suffix when creating\n the TensorArray gradient object here (the attribute `source`).\n\n The attribute `source` is added as a suffix to the forward TensorArray's\n name when performing the creation / lookup, so that each separate gradient\n calculation gets its own TensorArray accumulator.\n\n Args:\n handle: A `Tensor` of type `resource`.\n The handle to the forward TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n source: A `string`.\n The gradient source string, used to decide which gradient TensorArray\n to return.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (grad_handle, flow_out).\n\n grad_handle: A `Tensor` of type `resource`.\n flow_out: A `Tensor` of type `float32`.\n ", "desc": "Creates a TensorArray for storing the gradients of values in the given handle.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayGradWithShape", "docs": "Creates a TensorArray for storing multiple gradients of values in the given handle.\n\n Similar to TensorArrayGradV3. However it creates an accumulator with an\n expanded shape compared to the input TensorArray whose gradient is being\n computed. This enables multiple gradients for the same TensorArray to be\n calculated using the same accumulator.\n\n Args:\n handle: A `Tensor` of type `resource`.\n The handle to the forward TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n shape_to_prepend: A `Tensor` of type `int32`.\n An int32 vector representing a shape. Elements in the gradient accumulator will\n have shape which is this shape_to_prepend value concatenated with shape of the\n elements in the TensorArray corresponding to the input handle.\n source: A `string`.\n The gradient source string, used to decide which gradient TensorArray\n to return.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (grad_handle, flow_out).\n\n grad_handle: A `Tensor` of type `resource`.\n flow_out: A `Tensor` of type `float32`.\n ", "desc": "Creates a TensorArray for storing multiple gradients of values in the given handle.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayPack", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayRead", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n index: A `Tensor` of type `int32`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayReadV2", "docs": "Deprecated. Use TensorArrayReadV3\n\n Args:\n handle: A `Tensor` of type `string`.\n index: A `Tensor` of type `int32`.\n flow_in: A `Tensor` of type `float32`.\n dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Deprecated. Use TensorArrayReadV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayReadV3", "docs": "Read an element from the TensorArray into output `value`.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n index: A `Tensor` of type `int32`.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n dtype: A `tf.DType`. The type of the elem that is returned.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Read an element from the TensorArray into output `value`.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayScatter", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n indices: A `Tensor` of type `int32`.\n value: A `Tensor`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayScatterV2", "docs": "Deprecated. Use TensorArrayScatterV3\n\n Args:\n handle: A `Tensor` of type `string`.\n indices: A `Tensor` of type `int32`.\n value: A `Tensor`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Deprecated. Use TensorArrayScatterV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayScatterV3", "docs": "Scatter the data from the input value into specific TensorArray elements.\n\n `indices` must be a vector, its length must match the first dim of `value`.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n indices: A `Tensor` of type `int32`.\n The locations at which to write the tensor elements.\n value: A `Tensor`. The concatenated tensor to write to the TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Scatter the data from the input value into specific TensorArray elements.", "type": "API"}, {"name": "tf.raw_ops.TensorArraySize", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArraySizeV2", "docs": "Deprecated. Use TensorArraySizeV3\n\n Args:\n handle: A `Tensor` of type `string`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Deprecated. Use TensorArraySizeV3", "type": "API"}, {"name": "tf.raw_ops.TensorArraySizeV3", "docs": "Get the current size of the TensorArray.\n\n Args:\n handle: A `Tensor` of type `resource`.\n The handle to a TensorArray (output of TensorArray or TensorArrayGrad).\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Get the current size of the TensorArray.", "type": "API"}, {"name": "tf.raw_ops.TensorArraySplit", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n value: A `Tensor`.\n lengths: A `Tensor` of type `int64`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArraySplitV2", "docs": "Deprecated. Use TensorArraySplitV3\n\n Args:\n handle: A `Tensor` of type `string`.\n value: A `Tensor`.\n lengths: A `Tensor` of type `int64`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Deprecated. Use TensorArraySplitV3", "type": "API"}, {"name": "tf.raw_ops.TensorArraySplitV3", "docs": "Split the data from the input value into TensorArray elements.\n\n Assuming that `lengths` takes on values\n\n ```(n0, n1, ..., n(T-1))```\n\n and that `value` has shape\n\n ```(n0 + n1 + ... + n(T-1) x d0 x d1 x ...)```,\n\n this splits values into a TensorArray with T tensors.\n\n TensorArray index t will be the subtensor of values with starting position\n\n ```(n0 + n1 + ... + n(t-1), 0, 0, ...)```\n\n and having size\n\n ```nt x d0 x d1 x ...```\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n value: A `Tensor`. The concatenated tensor to write to the TensorArray.\n lengths: A `Tensor` of type `int64`.\n The vector of lengths, how to split the rows of value into the\n TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Split the data from the input value into TensorArray elements.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayUnpack", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n value: A `Tensor`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayV2", "docs": "Deprecated. Use TensorArrayV3\n\n Args:\n size: A `Tensor` of type `int32`.\n dtype: A `tf.DType`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n dynamic_size: An optional `bool`. Defaults to `False`.\n clear_after_read: An optional `bool`. Defaults to `True`.\n tensor_array_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Deprecated. Use TensorArrayV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayV3", "docs": "An array of Tensors of given size.\n\n Write data via Write and read via Read or Pack.\n\n Args:\n size: A `Tensor` of type `int32`. The size of the array.\n dtype: A `tf.DType`. The type of the elements on the tensor_array.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n The expected shape of an element, if known. Used to\n validate the shapes of TensorArray elements. If this shape is not\n fully specified, gathering zero-size TensorArrays is an error.\n dynamic_size: An optional `bool`. Defaults to `False`.\n A boolean that determines whether writes to the TensorArray\n are allowed to grow the size. By default, this is not allowed.\n clear_after_read: An optional `bool`. Defaults to `True`.\n If true (default), Tensors in the TensorArray are cleared\n after being read. This disables multiple read semantics but allows early\n release of memory.\n identical_element_shapes: An optional `bool`. Defaults to `False`.\n If true (default is false), then all\n elements in the TensorArray will be expected to have identical shapes.\n This allows certain behaviors, like dynamically checking for\n consistent shapes on write, and being able to fill in properly\n shaped zero tensors on stack -- even if the element_shape attribute\n is not fully defined.\n tensor_array_name: An optional `string`. Defaults to `\"\"`.\n Overrides the name used for the temporary tensor_array\n resource. Default value is the name of the 'TensorArray' op (which\n is guaranteed unique).\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (handle, flow).\n\n handle: A `Tensor` of type `resource`.\n flow: A `Tensor` of type `float32`.\n ", "desc": "An array of Tensors of given size.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayWrite", "docs": "TODO: add doc.\n\n Args:\n handle: A `Tensor` of type mutable `string`.\n index: A `Tensor` of type `int32`.\n value: A `Tensor`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorArrayWriteV2", "docs": "Deprecated. Use TensorArrayGradV3\n\n Args:\n handle: A `Tensor` of type `string`.\n index: A `Tensor` of type `int32`.\n value: A `Tensor`.\n flow_in: A `Tensor` of type `float32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Deprecated. Use TensorArrayGradV3", "type": "API"}, {"name": "tf.raw_ops.TensorArrayWriteV3", "docs": "Push an element onto the tensor_array.\n\n Args:\n handle: A `Tensor` of type `resource`. The handle to a TensorArray.\n index: A `Tensor` of type `int32`.\n The position to write to inside the TensorArray.\n value: A `Tensor`. The tensor to write to the TensorArray.\n flow_in: A `Tensor` of type `float32`.\n A float scalar that enforces proper chaining of operations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "Push an element onto the tensor_array.", "type": "API"}, {"name": "tf.raw_ops.TensorDataset", "docs": "Creates a dataset that emits `components` as a tuple of tensors once.\n\n Args:\n components: A list of `Tensor` objects.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits `components` as a tuple of tensors once.", "type": "API"}, {"name": "tf.raw_ops.TensorListConcat", "docs": "Concats all tensors in the list along the 0th dimension.\n\n Requires that all tensors have the same shape except the first dimension.\n\n input_handle: The input list.\n tensor: The concated result.\n lengths: Output tensor containing sizes of the 0th dimension of tensors in the list, used for computing the gradient.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n element_dtype: A `tf.DType`.\n element_shape: An optional `tf.TensorShape` or list of `ints`. Defaults to `None`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (tensor, lengths).\n\n tensor: A `Tensor` of type `element_dtype`.\n lengths: A `Tensor` of type `int64`.\n ", "desc": "Concats all tensors in the list along the 0th dimension.", "type": "API"}, {"name": "tf.raw_ops.TensorListConcatLists", "docs": "TODO: add doc.\n\n Args:\n input_a: A `Tensor` of type `variant`.\n input_b: A `Tensor` of type `variant`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorListConcatV2", "docs": "Concats all tensors in the list along the 0th dimension.\n\n Requires that all tensors have the same shape except the first dimension.\n\n input_handle: The input list.\n element_shape: The shape of the uninitialized elements in the list. If the first\n dimension is not -1, it is assumed that all list elements have the same\n leading dim.\n leading_dims: The list of leading dims of uninitialized list elements. Used if\n the leading dim of input_handle.element_shape or the element_shape input arg\n is not already set.\n tensor: The concated result.\n lengths: Output tensor containing sizes of the 0th dimension of tensors in the list, used for computing the gradient.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n leading_dims: A `Tensor` of type `int64`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (tensor, lengths).\n\n tensor: A `Tensor` of type `element_dtype`.\n lengths: A `Tensor` of type `int64`.\n ", "desc": "Concats all tensors in the list along the 0th dimension.", "type": "API"}, {"name": "tf.raw_ops.TensorListElementShape", "docs": "The shape of the elements of the given list, as a tensor.\n\n input_handle: the list\n element_shape: the shape of elements of the list\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n shape_type: A `tf.DType` from: `tf.int32, tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `shape_type`.\n ", "desc": "The shape of the elements of the given list, as a tensor.", "type": "API"}, {"name": "tf.raw_ops.TensorListFromTensor", "docs": "Creates a TensorList which, when stacked, has the value of `tensor`.\n\n Each tensor in the result list corresponds to one row of the input tensor.\n\n tensor: The input tensor.\n output_handle: The list.\n\n Args:\n tensor: A `Tensor`.\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a TensorList which, when stacked, has the value of `tensor`.", "type": "API"}, {"name": "tf.raw_ops.TensorListGather", "docs": "Creates a Tensor by indexing into the TensorList.\n\n Each row in the produced Tensor corresponds to the element in the TensorList\n specified by the given index (see `tf.gather`).\n\n input_handle: The input tensor list.\n indices: The indices used to index into the list.\n values: The tensor.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n indices: A `Tensor` of type `int32`.\n element_shape: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `element_dtype`.\n ", "desc": "Creates a Tensor by indexing into the TensorList.", "type": "API"}, {"name": "tf.raw_ops.TensorListGetItem", "docs": "Returns the item in the list with the given index.\n\n input_handle: the list\n index: the position in the list from which an element will be retrieved\n item: the element at that position\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n index: A `Tensor` of type `int32`.\n element_shape: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `element_dtype`.\n ", "desc": "Returns the item in the list with the given index.", "type": "API"}, {"name": "tf.raw_ops.TensorListLength", "docs": "Returns the number of tensors in the input tensor list.\n\n input_handle: the input list\n length: the number of tensors in the list\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Returns the number of tensors in the input tensor list.", "type": "API"}, {"name": "tf.raw_ops.TensorListPopBack", "docs": "Returns the last element of the input list as well as a list with all but that element.\n\n Fails if the list is empty.\n\n input_handle: the input list\n tensor: the withdrawn last element of the list\n element_dtype: the type of elements in the list\n element_shape: the shape of the output tensor\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n element_shape: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (output_handle, tensor).\n\n output_handle: A `Tensor` of type `variant`.\n tensor: A `Tensor` of type `element_dtype`.\n ", "desc": "Returns the last element of the input list as well as a list with all but that element.", "type": "API"}, {"name": "tf.raw_ops.TensorListPushBack", "docs": "Returns a list which has the passed-in `Tensor` as last element and the other elements of the given list in `input_handle`.\n\n tensor: The tensor to put on the list.\n input_handle: The old list.\n output_handle: A list with the elements of the old list followed by tensor.\n element_dtype: the type of elements in the list.\n element_shape: a shape compatible with that of elements in the list.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n tensor: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Returns a list which has the passed-in `Tensor` as last element and the other elements of the given list in `input_handle`.", "type": "API"}, {"name": "tf.raw_ops.TensorListPushBackBatch", "docs": "TODO: add doc.\n\n Args:\n input_handles: A `Tensor` of type `variant`.\n tensor: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorListReserve", "docs": "List of the given size with empty elements.\n\n element_shape: the shape of the future elements of the list\n num_elements: the number of elements to reserve\n handle: the output list\n element_dtype: the desired type of elements in the list.\n\n Args:\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n num_elements: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "List of the given size with empty elements.", "type": "API"}, {"name": "tf.raw_ops.TensorListResize", "docs": "Resizes the list.\n\n \n input_handle: the input list\n size: size of the output list\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n size: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Resizes the list.", "type": "API"}, {"name": "tf.raw_ops.TensorListScatter", "docs": "Creates a TensorList by indexing into a Tensor.\n\n Each member of the TensorList corresponds to one row of the input tensor,\n specified by the given index (see `tf.gather`).\n\n tensor: The input tensor.\n indices: The indices used to index into the list.\n element_shape: The shape of the elements in the list (can be less specified than\n the shape of the tensor).\n output_handle: The TensorList.\n\n Args:\n tensor: A `Tensor`.\n indices: A `Tensor` of type `int32`.\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a TensorList by indexing into a Tensor.", "type": "API"}, {"name": "tf.raw_ops.TensorListScatterIntoExistingList", "docs": "Scatters tensor at indices in an input list.\n\n Each member of the TensorList corresponds to one row of the input tensor,\n specified by the given index (see `tf.gather`).\n\n input_handle: The list to scatter into.\n tensor: The input tensor.\n indices: The indices used to index into the list.\n output_handle: The TensorList.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n tensor: A `Tensor`.\n indices: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Scatters tensor at indices in an input list.", "type": "API"}, {"name": "tf.raw_ops.TensorListScatterV2", "docs": "Creates a TensorList by indexing into a Tensor.\n\n Each member of the TensorList corresponds to one row of the input tensor,\n specified by the given index (see `tf.gather`).\n\n tensor: The input tensor.\n indices: The indices used to index into the list.\n element_shape: The shape of the elements in the list (can be less specified than\n the shape of the tensor).\n num_elements: The size of the output list. Must be large enough to accommodate\n the largest index in indices. If -1, the list is just large enough to include\n the largest index in indices.\n output_handle: The TensorList.\n\n Args:\n tensor: A `Tensor`.\n indices: A `Tensor` of type `int32`.\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n num_elements: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a TensorList by indexing into a Tensor.", "type": "API"}, {"name": "tf.raw_ops.TensorListSetItem", "docs": "Sets the index-th position of the list to contain the given tensor.\n\n input_handle: the list\n index: the position in the list to which the tensor will be assigned\n item: the element to be assigned to that position\n output_handle: the new list, with the element in the proper position\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n index: A `Tensor` of type `int32`.\n item: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Sets the index-th position of the list to contain the given tensor.", "type": "API"}, {"name": "tf.raw_ops.TensorListSplit", "docs": "Splits a tensor into a list.\n\n list[i] corresponds to lengths[i] tensors from the input tensor.\n The tensor must have rank at least 1 and contain exactly sum(lengths) elements.\n\n tensor: The input tensor.\n element_shape: A shape compatible with that of elements in the tensor.\n lengths: Vector of sizes of the 0th dimension of tensors in the list.\n output_handle: The list.\n\n Args:\n tensor: A `Tensor`.\n element_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n lengths: A `Tensor` of type `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Splits a tensor into a list.", "type": "API"}, {"name": "tf.raw_ops.TensorListStack", "docs": "Stacks all tensors in the list.\n\n Requires that all tensors have the same shape.\n\n input_handle: the input list\n tensor: the gathered result\n num_elements: optional. If not -1, the number of elements in the list.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n element_shape: A `Tensor` of type `int32`.\n element_dtype: A `tf.DType`.\n num_elements: An optional `int`. Defaults to `-1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `element_dtype`.\n ", "desc": "Stacks all tensors in the list.", "type": "API"}, {"name": "tf.raw_ops.TensorScatterAdd", "docs": "Adds sparse `updates` to an existing tensor according to `indices`.\n\n This operation creates a new tensor by adding sparse `updates` to the passed\n in `tensor`.\n This operation is very similar to `tf.compat.v1.scatter_nd_add`, except that the\n updates are added onto an existing tensor (as opposed to a variable). If the\n memory for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `tensor.shape`. The last dimension of `indices` can be at most the rank of\n `tensor.shape`:\n\n ```\n indices.shape[-1] <= tensor.shape.rank\n ```\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = tensor.shape.rank`) or slices\n (if `indices.shape[-1] < tensor.shape.rank`) along dimension\n `indices.shape[-1]` of `tensor.shape`. `updates` is a tensor with shape\n\n ```\n indices.shape[:-1] + tensor.shape[indices.shape[-1]:]\n ```\n\n The simplest form of `tensor_scatter_nd_add` is to add individual elements to a\n tensor by index. For example, say we want to add 4 elements in a rank-1\n tensor with 8 elements.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[4], [3], [1], [7]])\n >>> updates = tf.constant([9, 10, 11, 12])\n >>> tensor = tf.ones([8], dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[0], [2]])\n >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]],\n ... [[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]]])\n >>> tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n Note: on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Adds sparse `updates` to an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.TensorScatterMax", "docs": "Apply a sparse update to a tensor taking the element-wise maximum.\n\n Returns a new tensor copied from `tensor` whose values are element-wise maximum between\n tensor and updates according to the indices.\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] \n >>> indices = [[1], [4], [5]]\n >>> updates = [1, -1, 1]\n >>> tf.tensor_scatter_nd_max(tensor, indices, updates).numpy()\n array([0, 1, 0, 0, 0, 1, 0, 0], dtype=int32)\n\n Refer to `tf.tensor_scatter_nd_update` for more details.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Apply a sparse update to a tensor taking the element-wise maximum.", "type": "API"}, {"name": "tf.raw_ops.TensorScatterMin", "docs": "TODO: add doc.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.TensorScatterSub", "docs": "Subtracts sparse `updates` from an existing tensor according to `indices`.\n\n This operation creates a new tensor by subtracting sparse `updates` from the\n passed in `tensor`.\n This operation is very similar to `tf.scatter_nd_sub`, except that the updates\n are subtracted from an existing tensor (as opposed to a variable). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `shape`. The last dimension of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`. `updates` is a tensor with shape\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of tensor_scatter_sub is to subtract individual elements\n from a tensor by index. For example, say we want to insert 4 scattered elements\n in a rank-1 tensor with 8 elements.\n\n In Python, this scatter subtract operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n tensor = tf.ones([8], dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [1, -10, 1, -9, -8, 1, 1, -11]\n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],\n [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Subtracts sparse `updates` from an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.TensorScatterUpdate", "docs": "Scatter `updates` into an existing tensor according to `indices`.\n\n This operation creates a new tensor by applying sparse `updates` to the passed\n in `tensor`.\n This operation is very similar to `tf.scatter_nd`, except that the updates are\n scattered onto an existing tensor (as opposed to a zero-tensor). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n If `indices` contains duplicates, then we pick the last update for the index.\n\n If an out of bound index is found on CPU, an error is returned.\n\n **WARNING**: There are some GPU specific semantics for this operation.\n - If an out of bound index is found, the index is ignored.\n - The order in which updates are applied is nondeterministic, so the output\n will be nondeterministic if `indices` contains duplicates.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `shape`.\n\n * `indices` must have at least 2 axes: `(num_updates, index_depth)`.\n * The last axis of `indices` is how deep to index into `tensor` so this index\n depth must be less than the rank of `tensor`: `indices.shape[-1] <= tensor.ndim`\n\n if `indices.shape[-1] = tensor.rank` this Op indexes and updates scalar elements.\n if `indices.shape[-1] < tensor.rank` it indexes and updates slices of the input\n `tensor`.\n\n Each `update` has a rank of `tensor.rank - indices.shape[-1]`.\n The overall shape of `updates` is:\n\n ```\n indices.shape[:-1] + tensor.shape[indices.shape[-1]:]\n ```\n\n For usage examples see the python [tf.tensor_scatter_nd_update](\n https://www.tensorflow.org/api_docs/python/tf/tensor_scatter_nd_update) function\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`, `uint16`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Scatter `updates` into an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.raw_ops.TensorSliceDataset", "docs": "Creates a dataset that emits each dim-0 slice of `components` once.\n\n Args:\n components: A list of `Tensor` objects.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n is_files: An optional `bool`. Defaults to `False`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits each dim-0 slice of `components` once.", "type": "API"}, {"name": "tf.raw_ops.TensorStridedSliceUpdate", "docs": "Assign `value` to the sliced l-value reference of `input`.\n\n The values of `value` are assigned to the positions in the tensor `input` that\n are selected by the slice parameters. The slice parameters `begin` `end`\n `strides` etc. work exactly as in `StridedSlice`.\n\n NOTE this op currently does not support broadcasting and so `value`'s shape\n must be exactly the shape produced by the slice of `input`.\n\n Args:\n input: A `Tensor`.\n begin: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n end: A `Tensor`. Must have the same type as `begin`.\n strides: A `Tensor`. Must have the same type as `begin`.\n value: A `Tensor`. Must have the same type as `input`.\n begin_mask: An optional `int`. Defaults to `0`.\n end_mask: An optional `int`. Defaults to `0`.\n ellipsis_mask: An optional `int`. Defaults to `0`.\n new_axis_mask: An optional `int`. Defaults to `0`.\n shrink_axis_mask: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Assign `value` to the sliced l-value reference of `input`.", "type": "API"}, {"name": "tf.raw_ops.TensorSummary", "docs": "Outputs a `Summary` protocol buffer with a tensor.\n\n This op is being phased out in favor of TensorSummaryV2, which lets callers pass\n a tag as well as a serialized SummaryMetadata proto string that contains\n plugin-specific data. We will keep this op to maintain backwards compatibility.\n\n Args:\n tensor: A `Tensor`. A tensor to serialize.\n description: An optional `string`. Defaults to `\"\"`.\n A json-encoded SummaryDescription proto.\n labels: An optional list of `strings`. Defaults to `[]`.\n An unused list of strings.\n display_name: An optional `string`. Defaults to `\"\"`. An unused string.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with a tensor.", "type": "API"}, {"name": "tf.raw_ops.TensorSummaryV2", "docs": "Outputs a `Summary` protocol buffer with a tensor and per-plugin data.\n\n Args:\n tag: A `Tensor` of type `string`.\n A string attached to this summary. Used for organization in TensorBoard.\n tensor: A `Tensor`. A tensor to serialize.\n serialized_summary_metadata: A `Tensor` of type `string`.\n A serialized SummaryMetadata proto. Contains plugin\n data.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Outputs a `Summary` protocol buffer with a tensor and per-plugin data.", "type": "API"}, {"name": "tf.raw_ops.TextLineDataset", "docs": "Creates a dataset that emits the lines of one or more text files.\n\n Args:\n filenames: A `Tensor` of type `string`.\n A scalar or a vector containing the name(s) of the file(s) to be\n read.\n compression_type: A `Tensor` of type `string`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n buffer_size: A `Tensor` of type `int64`.\n A scalar containing the number of bytes to buffer.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits the lines of one or more text files.", "type": "API"}, {"name": "tf.raw_ops.TextLineReader", "docs": "A Reader that outputs the lines of a file delimited by '\\n'.\n\n Args:\n skip_header_lines: An optional `int`. Defaults to `0`.\n Number of lines to skip from the beginning of every file.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs the lines of a file delimited by '\\n'.", "type": "API"}, {"name": "tf.raw_ops.TextLineReaderV2", "docs": "A Reader that outputs the lines of a file delimited by '\\n'.\n\n Args:\n skip_header_lines: An optional `int`. Defaults to `0`.\n Number of lines to skip from the beginning of every file.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A Reader that outputs the lines of a file delimited by '\\n'.", "type": "API"}, {"name": "tf.raw_ops.TFRecordDataset", "docs": "Creates a dataset that emits the records from one or more TFRecord files.\n\n Args:\n filenames: A `Tensor` of type `string`.\n A scalar or vector containing the name(s) of the file(s) to be\n read.\n compression_type: A `Tensor` of type `string`.\n A scalar containing either (i) the empty string (no\n compression), (ii) \"ZLIB\", or (iii) \"GZIP\".\n buffer_size: A `Tensor` of type `int64`.\n A scalar representing the number of bytes to buffer. A value of\n 0 means no buffering will be performed.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that emits the records from one or more TFRecord files.", "type": "API"}, {"name": "tf.raw_ops.TFRecordReader", "docs": "A Reader that outputs the records from a TensorFlow Records file.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n compression_type: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs the records from a TensorFlow Records file.", "type": "API"}, {"name": "tf.raw_ops.TFRecordReaderV2", "docs": "A Reader that outputs the records from a TensorFlow Records file.\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n compression_type: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A Reader that outputs the records from a TensorFlow Records file.", "type": "API"}, {"name": "tf.raw_ops.ThreadPoolDataset", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n thread_pool: A `Tensor` of type `resource`.\n A resource produced by the ThreadPoolHandle op.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ThreadPoolHandle", "docs": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.\n\n Args:\n num_threads: An `int`. The number of threads in the thread pool.\n display_name: A `string`.\n A human-readable name for the threads that may be visible in some\n visualizations.\n threadpool.\n max_intra_op_parallelism: An optional `int`. Defaults to `1`.\n The maximum degree of parallelism to use within operations that execute on this\n threadpool.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a dataset that uses a custom thread pool to compute `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.ThreadUnsafeUnigramCandidateSampler", "docs": "Generates labels for candidate sampling with a learned unigram distribution.\n\n See explanations of candidate sampling and the data formats at\n go/candidate-sampling.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`.\n Number of candidates to randomly sample.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n range_max: An `int` that is `>= 1`.\n The sampler will sample integers from the interval [0, range_max).\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a learned unigram distribution.", "type": "API"}, {"name": "tf.raw_ops.Tile", "docs": "Constructs a tensor by tiling a given tensor.\n\n This operation creates a new tensor by replicating `input` `multiples` times.\n The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,\n and the values of `input` are replicated `multiples[i]` times along the 'i'th\n dimension. For example, tiling `[a b c d]` by `[2]` produces\n `[a b c d a b c d]`.\n\n >>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32)\n >>> b = tf.constant([1,2], tf.int32)\n >>> tf.tile(a, b)\n \n >>> c = tf.constant([2,1], tf.int32)\n >>> tf.tile(a, c)\n \n >>> d = tf.constant([2,2], tf.int32)\n >>> tf.tile(a, d)\n \n\n Args:\n input: A `Tensor`. 1-D or higher.\n multiples: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. Length must be the same as the number of dimensions in `input`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Constructs a tensor by tiling a given tensor.", "type": "API"}, {"name": "tf.raw_ops.TileGrad", "docs": "Returns the gradient of `Tile`.\n\n Since `Tile` takes an input and repeats the input `multiples` times\n along each dimension, `TileGrad` takes in `multiples` and aggregates\n each repeated tile of `input` into `output`.\n\n Args:\n input: A `Tensor`.\n multiples: A `Tensor` of type `int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Returns the gradient of `Tile`.", "type": "API"}, {"name": "tf.raw_ops.Timestamp", "docs": "Provides the time since epoch in seconds.\n\n Returns the timestamp as a `float64` for seconds since the Unix epoch.\n\n Note: the timestamp is computed when the op is executed, not when it is added\n to the graph.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float64`.\n ", "desc": "Provides the time since epoch in seconds.", "type": "API"}, {"name": "tf.raw_ops.ToBool", "docs": "Converts a tensor to a scalar predicate.\n\n Converts a tensor to a scalar predicate with the following rules:\n\n - For 0D tensors, truthiness is determined by comparing against a \"zero\"\n value. For numerical types it is the obvious zero. For strings it is the\n empty string.\n\n - For >0D tensors, truthiness is determined by looking at the number of\n elements. If has zero elements, then the result is false. Otherwise the\n result is true.\n\n This matches the behavior of If and While for determining if a tensor counts\n as true/false for a branch condition.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Converts a tensor to a scalar predicate.", "type": "API"}, {"name": "tf.raw_ops.TopK", "docs": "Finds values and indices of the `k` largest elements for the last dimension.\n\n If the input is a vector (rank-1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n values.shape = indices.shape = input.shape[:-1] + [k]\n\n If two elements are equal, the lower-index element appears first.\n\n If `k` varies dynamically, use `TopKV2` below.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D or higher with last dimension at least `k`.\n k: An `int` that is `>= 0`.\n Number of top elements to look for along the last dimension (along each\n row for matrices).\n sorted: An optional `bool`. Defaults to `True`.\n If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (values, indices).\n\n values: A `Tensor`. Has the same type as `input`.\n indices: A `Tensor` of type `int32`.\n ", "desc": "Finds values and indices of the `k` largest elements for the last dimension.", "type": "API"}, {"name": "tf.raw_ops.TopKV2", "docs": "Finds values and indices of the `k` largest elements for the last dimension.\n\n If the input is a vector (rank-1), finds the `k` largest entries in the vector\n and outputs their values and indices as vectors. Thus `values[j]` is the\n `j`-th largest entry in `input`, and its index is `indices[j]`.\n\n For matrices (resp. higher rank input), computes the top `k` entries in each\n row (resp. vector along the last dimension). Thus,\n\n values.shape = indices.shape = input.shape[:-1] + [k]\n\n If two elements are equal, the lower-index element appears first.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n 1-D or higher with last dimension at least `k`.\n k: A `Tensor` of type `int32`.\n 0-D. Number of top elements to look for along the last dimension (along each\n row for matrices).\n sorted: An optional `bool`. Defaults to `True`.\n If true the resulting `k` elements will be sorted by the values in\n descending order.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (values, indices).\n\n values: A `Tensor`. Has the same type as `input`.\n indices: A `Tensor` of type `int32`.\n ", "desc": "Finds values and indices of the `k` largest elements for the last dimension.", "type": "API"}, {"name": "tf.raw_ops.TPUCompilationResult", "docs": "Returns the result of a TPU compilation.\n\n This operation returns the result of a TPU compilation as a serialized\n CompilationResultProto, which holds a status and an error message if an error\n occurred during compilation.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Returns the result of a TPU compilation.", "type": "API"}, {"name": "tf.raw_ops.TPUEmbeddingActivations", "docs": "An op enabling differentiation of TPU Embeddings.\n\n This op simply returns its first input, which is assumed to have been sliced\n from the Tensors returned by TPUEmbeddingDequeueActivations. The presence of\n this op, and its first argument being a trainable Variable, enables automatic\n differentiation of graphs containing embeddings via the TPU Embedding Python\n libraries.\n\n Args:\n embedding_variable: A `Tensor` of type `float32`.\n A trainable variable, enabling optimizers to find this op.\n sliced_activations: A `Tensor` of type `float32`.\n The embedding activations Tensor to return.\n table_id: An `int` that is `>= 0`.\n The id of the table in the embedding layer configuration from which\n these activations were computed.\n lookup_id: An `int` that is `>= 0`.\n Identifier of the set of embedding indices which produced these\n activations.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float32`.\n ", "desc": "An op enabling differentiation of TPU Embeddings.", "type": "API"}, {"name": "tf.raw_ops.TPUOrdinalSelector", "docs": "A TPU core selector Op.\n\n This Op produces a set of TPU cores (for warm-up) or a single TPU core\n (for regular inference) to execute the TPU program on. The output is\n consumed by TPUPartitionedCall.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "A TPU core selector Op.", "type": "API"}, {"name": "tf.raw_ops.TPUPartitionedCall", "docs": "Calls a function placed on a specified TPU device.\n\n Args:\n args: A list of `Tensor` objects. The arguments to the function.\n device_ordinal: A `Tensor` of type `int32`.\n The TPU device ordinal to run the function on.\n Tout: A list of `tf.DTypes`. The types of the outputs of the function.\n f: A function decorated with @Defun. The function to call.\n autotuner_thresh: An optional `int`. Defaults to `0`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `Tout`.\n ", "desc": "Calls a function placed on a specified TPU device.", "type": "API"}, {"name": "tf.raw_ops.TPUReplicatedInput", "docs": "Connects N inputs to an N-way replicated TPU computation.\n\n This operation holds a replicated input to a `tpu.replicate()` computation subgraph.\n Each replicated input has the same shape and type alongside the output.\n\n For example:\n ```\n %a = \"tf.opA\"()\n %b = \"tf.opB\"()\n %replicated_input = \"tf.TPUReplicatedInput\"(%a, %b)\n %computation = \"tf.Computation\"(%replicated_input)\n ```\n The above computation has a replicated input of two replicas.\n\n Args:\n inputs: A list of at least 1 `Tensor` objects with the same type.\n is_mirrored_variable: An optional `bool`. Defaults to `False`.\n index: An optional `int`. Defaults to `-1`.\n is_packed: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `inputs`.\n ", "desc": "Connects N inputs to an N-way replicated TPU computation.", "type": "API"}, {"name": "tf.raw_ops.TPUReplicatedOutput", "docs": "Connects N outputs from an N-way replicated TPU computation.\n\n This operation holds a replicated output from a `tpu.replicate()` computation subgraph.\n Each replicated output has the same shape and type alongside the input.\n\n For example:\n ```\n %computation = \"tf.Computation\"()\n %replicated_output:2 = \"tf.TPUReplicatedOutput\"(%computation)\n ```\n The above computation has a replicated output of two replicas.\n\n Args:\n input: A `Tensor`.\n num_replicas: An `int` that is `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num_replicas` `Tensor` objects with the same type as `input`.\n ", "desc": "Connects N outputs from an N-way replicated TPU computation.", "type": "API"}, {"name": "tf.raw_ops.TPUReplicateMetadata", "docs": "Metadata indicating how the TPU computation should be replicated.\n\n This operation holds the metadata common to operations of a `tpu.replicate()` computation subgraph.\n\n Args:\n num_replicas: An `int` that is `>= 0`.\n Number of replicas of the computation\n num_cores_per_replica: An optional `int`. Defaults to `1`.\n Number of cores per replica. Used for model parallelism.\n topology: An optional `string`. Defaults to `\"\"`.\n TopologyProto indicating the topology of the TPU pod slice.\n use_tpu: An optional `bool`. Defaults to `True`.\n Whether to place the computation on the TPU.\n device_assignment: An optional list of `ints`. Defaults to `[]`.\n The assignment of devices for the computation.\n computation_shape: An optional list of `ints`. Defaults to `[]`.\n DEPRECATED. Use num_cores_per_replica instead.\n host_compute_core: An optional list of `strings`. Defaults to `[]`.\n padding_map: An optional list of `strings`. Defaults to `[]`.\n step_marker_location: An optional `string`. Defaults to `\"STEP_MARK_AT_ENTRY\"`.\n allow_soft_placement: An optional `bool`. Defaults to `False`.\n use_spmd_for_xla_partitioning: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Metadata indicating how the TPU computation should be replicated.", "type": "API"}, {"name": "tf.raw_ops.Transpose", "docs": "Shuffle dimensions of x according to a permutation.\n\n The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy:\n `y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]`\n\n Args:\n x: A `Tensor`.\n perm: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Shuffle dimensions of x according to a permutation.", "type": "API"}, {"name": "tf.raw_ops.TridiagonalMatMul", "docs": "Calculate product with tridiagonal matrix.\n\n Calculates product of two matrices, where left matrix is a tridiagonal matrix.\n\n Args:\n superdiag: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.\n Tensor of shape `[..., 1, M]`, representing superdiagonals of\n tri-diagonal matrices to the left of multiplication. Last element is ignored.\n maindiag: A `Tensor`. Must have the same type as `superdiag`.\n Tensor of shape `[..., 1, M]`, representing main diagonals of tri-diagonal\n matrices to the left of multiplication.\n subdiag: A `Tensor`. Must have the same type as `superdiag`.\n Tensor of shape `[..., 1, M]`, representing subdiagonals of tri-diagonal\n matrices to the left of multiplication. First element is ignored.\n rhs: A `Tensor`. Must have the same type as `superdiag`.\n Tensor of shape `[..., M, N]`, representing MxN matrices to the right of\n multiplication.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `superdiag`.\n ", "desc": "Calculate product with tridiagonal matrix.", "type": "API"}, {"name": "tf.raw_ops.TridiagonalSolve", "docs": "Solves tridiagonal systems of equations.\n\n Solves tridiagonal systems of equations.\n Supports batch dimensions and multiple right-hand sides per each left-hand\n side.\n On CPU, solution is computed via Gaussian elimination with or without partial\n pivoting, depending on `partial_pivoting` attribute. On GPU, Nvidia's cuSPARSE\n library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv\n Partial pivoting is not yet supported by XLA backends.\n\n Args:\n diagonals: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.\n Tensor of shape `[..., 3, M]` whose innermost 2 dimensions represent the\n tridiagonal matrices with three rows being the superdiagonal, diagonals, and\n subdiagonals, in order. The last element of the superdiagonal and the first\n element of the subdiagonal is ignored.\n rhs: A `Tensor`. Must have the same type as `diagonals`.\n Tensor of shape `[..., M, K]`, representing K right-hand sides per each\n left-hand side.\n partial_pivoting: An optional `bool`. Defaults to `True`.\n Whether to apply partial pivoting. Partial pivoting makes the procedure more\n stable, but slower.\n perturb_singular: An optional `bool`. Defaults to `False`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `diagonals`.\n ", "desc": "Solves tridiagonal systems of equations.", "type": "API"}, {"name": "tf.raw_ops.TruncateDiv", "docs": "Returns x / y element-wise for integer types.\n\n Truncation designates that negative numbers will round fractional quantities\n toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different\n than Python semantics. See `FloorDiv` for a division function that matches\n Python Semantics.\n\n *NOTE*: `truncatediv` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for integer types.", "type": "API"}, {"name": "tf.raw_ops.TruncatedNormal", "docs": "Outputs random values from a truncated normal distribution.\n\n The generated values follow a normal distribution with mean 0 and standard\n deviation 1, except that values whose magnitude is more than 2 standard\n deviations from the mean are dropped and re-picked.\n\n Args:\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n The shape of the output tensor.\n dtype: A `tf.DType` from: `tf.half, tf.bfloat16, tf.float32, tf.float64`.\n The type of the output.\n seed: An optional `int`. Defaults to `0`.\n If either `seed` or `seed2` are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n A second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `dtype`.\n ", "desc": "Outputs random values from a truncated normal distribution.", "type": "API"}, {"name": "tf.raw_ops.TruncateMod", "docs": "Returns element-wise remainder of division. This emulates C semantics in that\n\n the result here is consistent with a truncating divide. E.g. `truncate(x / y) *\n y + truncate_mod(x, y) = x`.\n\n *NOTE*: `truncatemod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. This emulates C semantics in that", "type": "API"}, {"name": "tf.raw_ops.Unbatch", "docs": "Reverses the operation of Batch for a single output Tensor.\n\n An instance of Unbatch either receives an empty batched_tensor, in which case it\n asynchronously waits until the values become available from a concurrently\n running instance of Unbatch with the same container and shared_name, or receives\n a non-empty batched_tensor in which case it finalizes all other concurrently\n running instances and outputs its own element from the batch.\n\n batched_tensor: The possibly transformed output of Batch. The size of the first\n dimension should remain unchanged by the transformations for the operation to\n work.\n batch_index: The matching batch_index obtained from Batch.\n id: The id scalar emitted by Batch.\n unbatched_tensor: The Tensor corresponding to this execution.\n timeout_micros: Maximum amount of time (in microseconds) to wait to receive the\n batched input tensor associated with a given invocation of the op.\n container: Container to control resource sharing.\n shared_name: Instances of Unbatch with the same container and shared_name are\n assumed to possibly belong to the same batch. If left empty, the op name will\n be used as the shared name.\n\n Args:\n batched_tensor: A `Tensor`.\n batch_index: A `Tensor` of type `int64`.\n id: A `Tensor` of type `int64`.\n timeout_micros: An `int`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `batched_tensor`.\n ", "desc": "Reverses the operation of Batch for a single output Tensor.", "type": "API"}, {"name": "tf.raw_ops.UnbatchDataset", "docs": "A dataset that splits the elements of its input into multiple elements.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "A dataset that splits the elements of its input into multiple elements.", "type": "API"}, {"name": "tf.raw_ops.UnbatchGrad", "docs": "Gradient of Unbatch.\n\n Acts like Batch but using the given batch_index index of batching things as they\n become available. This ensures that the gradients are propagated back in the\n same session which did the forward pass.\n\n original_input: The input to the Unbatch operation this is the gradient of.\n batch_index: The batch_index given to the Unbatch operation this is the gradient\n of.\n grad: The downstream gradient.\n id: The id scalar emitted by Batch.\n batched_grad: The return value, either an empty tensor or the batched gradient.\n container: Container to control resource sharing.\n shared_name: Instances of UnbatchGrad with the same container and shared_name\n are assumed to possibly belong to the same batch. If left empty, the op name\n will be used as the shared name.\n\n Args:\n original_input: A `Tensor`.\n batch_index: A `Tensor` of type `int64`.\n grad: A `Tensor`. Must have the same type as `original_input`.\n id: A `Tensor` of type `int64`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `original_input`.\n ", "desc": "Gradient of Unbatch.", "type": "API"}, {"name": "tf.raw_ops.UncompressElement", "docs": "Uncompresses a compressed dataset element.\n\n Args:\n compressed: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `output_types`.\n ", "desc": "Uncompresses a compressed dataset element.", "type": "API"}, {"name": "tf.raw_ops.UnicodeDecode", "docs": "Decodes each string in `input` into a sequence of Unicode code points.\n\n The character codepoints for all strings are returned using a single vector\n `char_values`, with strings expanded to characters in row-major order.\n\n The `row_splits` tensor indicates where the codepoints for\n each input string begin and end within the `char_values` tensor.\n In particular, the values for the `i`th\n string (in row-major order) are stored in the slice\n `[row_splits[i]:row_splits[i+1]]`. Thus:\n\n * `char_values[row_splits[i]+j]` is the Unicode codepoint for the `j`th\n character in the `i`th string (in row-major order).\n * `row_splits[i+1] - row_splits[i]` is the number of characters in the `i`th\n string (in row-major order).\n\n Args:\n input: A `Tensor` of type `string`.\n The text to be decoded. Can have any shape. Note that the output is flattened\n to a vector of char values.\n input_encoding: A `string`.\n Text encoding of the input strings. This is any of the encodings supported\n by ICU ucnv algorithmic converters. Examples: `\"UTF-16\", \"US ASCII\", \"UTF-8\"`.\n errors: An optional `string` from: `\"strict\", \"replace\", \"ignore\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD or U+65533.)\n replace_control_characters: An optional `bool`. Defaults to `False`.\n Whether to replace the C0 control characters (00-1F) with the\n `replacement_char`. Default is false.\n Tsplits: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (row_splits, char_values).\n\n row_splits: A `Tensor` of type `Tsplits`.\n char_values: A `Tensor` of type `int32`.\n ", "desc": "Decodes each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.raw_ops.UnicodeDecodeWithOffsets", "docs": "Decodes each string in `input` into a sequence of Unicode code points.\n\n The character codepoints for all strings are returned using a single vector\n `char_values`, with strings expanded to characters in row-major order.\n Similarly, the character start byte offsets are returned using a single vector\n `char_to_byte_starts`, with strings expanded in row-major order.\n\n The `row_splits` tensor indicates where the codepoints and start offsets for\n each input string begin and end within the `char_values` and\n `char_to_byte_starts` tensors. In particular, the values for the `i`th\n string (in row-major order) are stored in the slice\n `[row_splits[i]:row_splits[i+1]]`. Thus:\n\n * `char_values[row_splits[i]+j]` is the Unicode codepoint for the `j`th\n character in the `i`th string (in row-major order).\n * `char_to_bytes_starts[row_splits[i]+j]` is the start byte offset for the `j`th\n character in the `i`th string (in row-major order).\n * `row_splits[i+1] - row_splits[i]` is the number of characters in the `i`th\n string (in row-major order).\n\n Args:\n input: A `Tensor` of type `string`.\n The text to be decoded. Can have any shape. Note that the output is flattened\n to a vector of char values.\n input_encoding: A `string`.\n Text encoding of the input strings. This is any of the encodings supported\n by ICU ucnv algorithmic converters. Examples: `\"UTF-16\", \"US ASCII\", \"UTF-8\"`.\n errors: An optional `string` from: `\"strict\", \"replace\", \"ignore\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD or U+65533.)\n replace_control_characters: An optional `bool`. Defaults to `False`.\n Whether to replace the C0 control characters (00-1F) with the\n `replacement_char`. Default is false.\n Tsplits: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (row_splits, char_values, char_to_byte_starts).\n\n row_splits: A `Tensor` of type `Tsplits`.\n char_values: A `Tensor` of type `int32`.\n char_to_byte_starts: A `Tensor` of type `int64`.\n ", "desc": "Decodes each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.raw_ops.UnicodeEncode", "docs": "Encode a tensor of ints into unicode strings.\n\n Returns a vector of strings, where `output[i]` is constructed by encoding the\n Unicode codepoints in `input_values[input_splits[i]:input_splits[i+1]]`\n using `output_encoding`.\n\n ---\n\n Example:\n\n ```\n input_values = [72, 101, 108, 108, 111, 87, 111, 114, 108, 100]\n input_splits = [0, 5, 10]\n output_encoding = 'UTF-8'\n\n output = ['Hello', 'World']\n ```\n\n Args:\n input_values: A `Tensor` of type `int32`.\n A 1D tensor containing the unicode codepoints that should be encoded.\n input_splits: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A 1D tensor specifying how the unicode codepoints should be split into strings.\n In particular, `output[i]` is constructed by encoding the codepoints in the\n slice `input_values[input_splits[i]:input_splits[i+1]]`.\n output_encoding: A `string` from: `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`.\n Unicode encoding of the output strings. Valid encodings are: `\"UTF-8\",\n \"UTF-16-BE\", and \"UTF-32-BE\"`.\n errors: An optional `string` from: `\"ignore\", \"replace\", \"strict\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD (U+65533).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Encode a tensor of ints into unicode strings.", "type": "API"}, {"name": "tf.raw_ops.UnicodeScript", "docs": "Determine the script codes of a given tensor of Unicode integer code points.\n\n This operation converts Unicode code points to script codes corresponding to\n each code point. Script codes correspond to International Components for\n Unicode (ICU) UScriptCode values.\n\n See\n [ICU project docs](http://icu-project.org/apiref/icu4c/uscript_8h.html)\n for more details on script codes.\n\n For an example, see the unicode strings guide on [unicode scripts]\n (https://www.tensorflow.org/tutorials/load_data/unicode#representing_unicode).\n\n Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will\n match input shape.\n\n Examples:\n\n >>> tf.strings.unicode_script([1, 31, 38])\n \n\n Args:\n input: A `Tensor` of type `int32`. A Tensor of int32 Unicode code points.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Determine the script codes of a given tensor of Unicode integer code points.", "type": "API"}, {"name": "tf.raw_ops.UnicodeTranscode", "docs": "Transcode the input text from a source encoding to a destination encoding.\n\n The input is a string tensor of any shape. The output is a string tensor of\n the same shape containing the transcoded strings. Output strings are always\n valid unicode. If the input contains invalid encoding positions, the\n `errors` attribute sets the policy for how to deal with them. If the default\n error-handling policy is used, invalid formatting will be substituted in the\n output by the `replacement_char`. If the errors policy is to `ignore`, any\n invalid encoding positions in the input are skipped and not included in the\n output. If it set to `strict` then any invalid formatting will result in an\n InvalidArgument error.\n\n This operation can be used with `output_encoding = input_encoding` to enforce\n correct formatting for inputs even if they are already in the desired encoding.\n\n If the input is prefixed by a Byte Order Mark needed to determine encoding\n (e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that\n BOM will be consumed and not emitted into the output. If the input encoding\n is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is\n interpreted as a non-breaking-space and is preserved in the output (including\n always for UTF-8).\n\n The end result is that if the input is marked as an explicit endianness the\n transcoding is faithful to all codepoints in the source. If it is not marked\n with an explicit endianness, the BOM is not considered part of the string itself\n but as metadata, and so is not preserved in the output.\n\n Examples:\n\n >>> tf.strings.unicode_transcode([\"Hello\", \"TensorFlow\", \"2.x\"], \"UTF-8\", \"UTF-16-BE\")\n \n >>> tf.strings.unicode_transcode([\"A\", \"B\", \"C\"], \"US ASCII\", \"UTF-8\").numpy()\n array([b'A', b'B', b'C'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`.\n The text to be processed. Can have any shape.\n input_encoding: A `string`.\n Text encoding of the input strings. This is any of the encodings supported\n by ICU ucnv algorithmic converters. Examples: `\"UTF-16\", \"US ASCII\", \"UTF-8\"`.\n output_encoding: A `string` from: `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`.\n The unicode encoding to use in the output. Must be one of\n `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`. Multi-byte encodings will be big-endian.\n errors: An optional `string` from: `\"strict\", \"replace\", \"ignore\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD or U+65533.)\n\n Note that for UTF-8, passing a replacement character expressible in 1 byte, such\n as ' ', will preserve string alignment to the source since invalid bytes will be\n replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte\n replacement character will preserve byte alignment to the source.\n replace_control_characters: An optional `bool`. Defaults to `False`.\n Whether to replace the C0 control characters (00-1F) with the\n `replacement_char`. Default is false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Transcode the input text from a source encoding to a destination encoding.", "type": "API"}, {"name": "tf.raw_ops.UniformCandidateSampler", "docs": "Generates labels for candidate sampling with a uniform distribution.\n\n See explanations of candidate sampling and the data formats at\n go/candidate-sampling.\n\n For each batch, this op picks a single set of sampled candidate labels.\n\n The advantages of sampling candidates per-batch are simplicity and the\n possibility of efficient dense matrix multiplication. The disadvantage is that\n the sampled candidates must be chosen independently of the context and of the\n true labels.\n\n Args:\n true_classes: A `Tensor` of type `int64`.\n A batch_size * num_true matrix, in which each row contains the\n IDs of the num_true target_classes in the corresponding original label.\n num_true: An `int` that is `>= 1`. Number of true labels per context.\n num_sampled: An `int` that is `>= 1`.\n Number of candidates to randomly sample.\n unique: A `bool`.\n If unique is true, we sample with rejection, so that all sampled\n candidates in a batch are unique. This requires some approximation to\n estimate the post-rejection sampling probabilities.\n range_max: An `int` that is `>= 1`.\n The sampler will sample integers from the interval [0, range_max).\n seed: An optional `int`. Defaults to `0`.\n If either seed or seed2 are set to be non-zero, the random number\n generator is seeded by the given seed. Otherwise, it is seeded by a\n random seed.\n seed2: An optional `int`. Defaults to `0`.\n An second seed to avoid seed collision.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (sampled_candidates, true_expected_count, sampled_expected_count).\n\n sampled_candidates: A `Tensor` of type `int64`.\n true_expected_count: A `Tensor` of type `float32`.\n sampled_expected_count: A `Tensor` of type `float32`.\n ", "desc": "Generates labels for candidate sampling with a uniform distribution.", "type": "API"}, {"name": "tf.raw_ops.Unique", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`; `x` does not need to be sorted.\n This operation also returns a tensor `idx` the same size as `x` that contains\n the index of each value of `x` in the unique output `y`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n Examples:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx = unique(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n ```\n\n ```\n # tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]\n y, idx = unique(x)\n y ==> [4, 5, 1, 2, 3]\n idx ==> [0, 1, 2, 3, 4, 4, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.raw_ops.UniqueDataset", "docs": "Creates a dataset that contains the unique elements of `input_dataset`.\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that contains the unique elements of `input_dataset`.", "type": "API"}, {"name": "tf.raw_ops.UniqueV2", "docs": "Finds unique elements along an axis of a tensor.\n\n This operation either returns a tensor `y` containing unique elements\n along the `axis` of a tensor. The returned unique elements is sorted\n in the same order as they occur along `axis` in `x`.\n This operation also returns a tensor `idx` that is the same size as\n the number of the elements in `x` along the `axis` dimension. It\n contains the index in the unique output `y`.\n In other words, for an `1-D` tensor `x` with `axis = None:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n For example:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx = unique(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n ```\n\n For an `2-D` tensor `x` with `axis = 0`:\n\n ```\n # tensor 'x' is [[1, 0, 0],\n # [1, 0, 0],\n # [2, 0, 0]]\n y, idx = unique(x, axis=0)\n y ==> [[1, 0, 0],\n [2, 0, 0]]\n idx ==> [0, 0, 1]\n ```\n\n For an `2-D` tensor `x` with `axis = 1`:\n\n ```\n # tensor 'x' is [[1, 0, 0],\n # [1, 0, 0],\n # [2, 0, 0]]\n y, idx = unique(x, axis=1)\n y ==> [[1, 0],\n [1, 0],\n [2, 0]]\n idx ==> [0, 1, 1]\n ```\n\n Args:\n x: A `Tensor`. A `Tensor`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `Tensor` of type `int32` (default: None). The axis of the Tensor to\n find the unique elements.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements along an axis of a tensor.", "type": "API"}, {"name": "tf.raw_ops.UniqueWithCounts", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`. This operation also returns a\n tensor `idx` the same size as `x` that contains the index of each value of `x`\n in the unique output `y`. Finally, it returns a third tensor `count` that\n contains the count of each element of `y` in `x`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n For example:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx, count = unique_with_counts(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n count ==> [2, 1, 3, 1, 2]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx, count).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n count: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.raw_ops.UniqueWithCountsV2", "docs": "Finds unique elements along an axis of a tensor.\n\n This operation either returns a tensor `y` containing unique elements\n along the `axis` of a tensor. The returned unique elements is sorted\n in the same order as they occur along `axis` in `x`.\n This operation also returns a tensor `idx` and a tensor `count`\n that are the same size as the number of the elements in `x` along the\n `axis` dimension. The `idx` contains the index in the unique output `y`\n and the `count` contains the count in the unique output `y`.\n In other words, for an `1-D` tensor `x` with `axis = None:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n For example:\n\n ```\n x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])\n y, idx, count = UniqueWithCountsV2(x, axis = [0])\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n count ==> [2, 1, 3, 1, 2]\n ```\n\n For a `2-D` tensor `x` with `axis = 0`:\n\n ```\n x = tf.constant([[1, 0, 0],\n [1, 0, 0],\n [2, 0, 0]])\n y, idx, count = UniqueWithCountsV2(x, axis=[0])\n y ==> [[1, 0, 0],\n [2, 0, 0]]\n idx ==> [0, 0, 1]\n count ==> [2, 1]\n ```\n\n For a `2-D` tensor `x` with `axis = 1`:\n\n ```\n x = tf.constant([[1, 0, 0],\n [1, 0, 0],\n [2, 0, 0]])\n y, idx, count = UniqueWithCountsV2(x, axis=[1])\n y ==> [[1, 0],\n [1, 0],\n [2, 0]]\n idx ==> [0, 1, 1]\n count ==> [1, 2]\n ```\n\n Args:\n x: A `Tensor`. A `Tensor`.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A `Tensor` of type `int32` (default: None). The axis of the Tensor to\n find the unique elements.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx, count).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n count: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements along an axis of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Unpack", "docs": "Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors.\n\n Unpacks `num` tensors from `value` by chipping it along the `axis` dimension.\n For example, given a tensor of shape `(A, B, C, D)`;\n\n If `axis == 0` then the i'th tensor in `output` is the slice `value[i, :, :, :]`\n and each tensor in `output` will have shape `(B, C, D)`. (Note that the\n dimension unpacked along is gone, unlike `split`).\n\n If `axis == 1` then the i'th tensor in `output` is the slice `value[:, i, :, :]`\n and each tensor in `output` will have shape `(A, C, D)`.\n Etc.\n\n This is the opposite of `pack`.\n\n Args:\n value: A `Tensor`.\n 1-D or higher, with `axis` dimension size equal to `num`.\n num: An `int` that is `>= 0`.\n axis: An optional `int`. Defaults to `0`.\n Dimension along which to unpack. Negative values wrap around, so the\n valid range is `[-R, R)`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `num` `Tensor` objects with the same type as `value`.\n ", "desc": "Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors.", "type": "API"}, {"name": "tf.raw_ops.UnravelIndex", "docs": "Converts an array of flat indices into a tuple of coordinate arrays.\n\n \n Example:\n\n ```\n y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])\n # 'dims' represent a hypothetical (3, 3) tensor of indices:\n # [[0, 1, *2*],\n # [3, 4, *5*],\n # [6, *7*, 8]]\n # For each entry from 'indices', this operation returns\n # its coordinates (marked with '*'), such as\n # 2 ==> (0, 2)\n # 5 ==> (1, 2)\n # 7 ==> (2, 1)\n y ==> [[0, 1, 2], [2, 2, 1]]\n ```\n\n @compatibility(numpy)\n Equivalent to np.unravel_index\n @end_compatibility\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 0-D or 1-D `int` Tensor whose elements are indices into the\n flattened version of an array of dimensions dims.\n dims: A `Tensor`. Must have the same type as `indices`.\n An 1-D `int` Tensor. The shape of the array to use for unraveling\n indices.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `indices`.\n ", "desc": "Converts an array of flat indices into a tuple of coordinate arrays.", "type": "API"}, {"name": "tf.raw_ops.UnsortedSegmentJoin", "docs": "Joins the elements of `inputs` based on `segment_ids`.\n\n Computes the string join along segments of a tensor.\n Given `segment_ids` with rank `N` and `data` with rank `N+M`:\n\n `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM])`\n\n where the join is over all [j1...jN] such that segment_ids[j1...jN] = i.\n Strings are joined in row-major order.\n\n For example:\n\n ```python\n inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']]\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[1, 0, 1],\n num_segments=2,\n separator=':'))\n # output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']]\n\n\n inputs = ['this', 'is', 'a', 'test']\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[0, 0, 0, 0],\n num_segments=1,\n separator=':'))\n # output_array ==> ['this:is:a:test']\n ```\n\n Args:\n inputs: A `Tensor` of type `string`. The input to be joined.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of data.shape. Negative segment ids are not\n supported.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A scalar.\n separator: An optional `string`. Defaults to `\"\"`.\n The separator to use when joining.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Joins the elements of `inputs` based on `segment_ids`.", "type": "API"}, {"name": "tf.raw_ops.UnsortedSegmentMax", "docs": "Computes the maximum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the maximum such that:\n\n \\\\(output_i = \\max_{j...} data[j...]\\\\) where max is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the maximum is empty for a given segment ID `i`, it outputs the smallest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::lowest()`.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 3, 3, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the maximum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.UnsortedSegmentMin", "docs": "Computes the minimum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the minimum such that:\n\n \\\\(output_i = \\min_{j...} data_[j...]\\\\) where min is over tuples `j...` such\n that `segment_ids[j...] == i`.\n\n If the minimum is empty for a given segment ID `i`, it outputs the largest\n possible value for the specific numeric type,\n `output[i] = numeric_limits::max()`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[1, 2, 2, 1],\n [5, 6, 7, 8]], dtype=int32)\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the minimum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.UnsortedSegmentProd", "docs": "Computes the product along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n This operator is similar to `tf.math.unsorted_segment_sum`,\n Instead of computing the sum over segments, it computes the product of all\n entries belonging to a segment such that:\n\n \\\\(output_i = \\prod_{j...} data[j...]\\\\) where the product is over tuples\n `j...` such that `segment_ids[j...] == i`.\n\n For example:\n\n >>> c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])\n >>> tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy()\n array([[4, 6, 6, 4],\n [5, 6, 7, 8]], dtype=int32)\n\n If there is no entry for a given segment ID `i`, it outputs 1.\n\n If the given segment ID `i` is negative, then the corresponding value is\n dropped, and will not be included in the result.\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the product along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.UnsortedSegmentSum", "docs": "Computes the sum along segments of a tensor.\n\n Read\n [the section on segmentation](https://tensorflow.org/api_docs/python/tf/math#Segmentation)\n for an explanation of segments.\n\n Computes a tensor such that\n \\\\(output[i] = \\sum_{j...} data[j...]\\\\) where the sum is over tuples `j...` such\n that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`\n need not be sorted and need not cover all values in the full\n range of valid values.\n\n If the sum is empty for a given segment ID `i`, `output[i] = 0`.\n If the given segment ID `i` is negative, the value is dropped and will not be\n added to the sum of the segment.\n\n `num_segments` should equal the number of distinct segment IDs.\n\n Caution: On CPU, values in `segment_ids` are always validated to be less than\n `num_segments`, and an error is thrown for out-of-bound indices. On GPU, this\n does not throw an error for out-of-bound indices. On Gpu, out-of-bound indices\n result in safe but unspecified behavior, which may include ignoring\n out-of-bound indices or outputting a tensor with a 0 stored in the first\n dimension of its shape if `num_segments` is 0.\n\n
\n \n
\n\n >>> c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]]\n >>> tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy()\n array([[5, 5, 5, 5],\n [5, 6, 7, 8]], dtype=int32)\n\n Args:\n data: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of `data.shape`.\n The values must be less than `num_segments`.\n\n Caution: The values are always validated to be in range on CPU, never validated\n on GPU.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `data`.\n ", "desc": "Computes the sum along segments of a tensor.", "type": "API"}, {"name": "tf.raw_ops.Unstage", "docs": "Op is similar to a lightweight Dequeue.\n\n The basic functionality is similar to dequeue with many fewer\n capabilities and options. This Op is optimized for performance.\n\n Args:\n dtypes: A list of `tf.DTypes` that has length `>= 1`.\n capacity: An optional `int` that is `>= 0`. Defaults to `0`.\n memory_limit: An optional `int` that is `>= 0`. Defaults to `0`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects of type `dtypes`.\n ", "desc": "Op is similar to a lightweight Dequeue.", "type": "API"}, {"name": "tf.raw_ops.UnwrapDatasetVariant", "docs": "TODO: add doc.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.UpperBound", "docs": "Applies upper_bound(sorted_search_values, values) along each row.\n\n Each set of rows with the same index in (sorted_inputs, values) is treated\n independently. The resulting row is the equivalent of calling\n `np.searchsorted(sorted_inputs, values, side='right')`.\n\n The result is not a global index to the entire\n `Tensor`, but rather just the index in the last dimension.\n\n A 2-D example:\n sorted_sequence = [[0, 3, 9, 9, 10],\n [1, 2, 3, 4, 5]]\n values = [[2, 4, 9],\n [0, 2, 6]]\n\n result = UpperBound(sorted_sequence, values)\n\n result == [[1, 2, 4],\n [0, 2, 5]]\n\n Args:\n sorted_inputs: A `Tensor`. 2-D Tensor where each row is ordered.\n values: A `Tensor`. Must have the same type as `sorted_inputs`.\n 2-D Tensor with the same numbers of rows as `sorted_search_values`. Contains\n the values that will be searched for in `sorted_search_values`.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Applies upper_bound(sorted_search_values, values) along each row.", "type": "API"}, {"name": "tf.raw_ops.VarHandleOp", "docs": "Creates a handle to a Variable resource.\n\n Args:\n dtype: A `tf.DType`. the type of this variable. Must agree with the dtypes\n of all ops using this variable.\n shape: A `tf.TensorShape` or list of `ints`.\n The (possibly partially specified) shape of this variable.\n container: An optional `string`. Defaults to `\"\"`.\n the container this variable is placed in.\n shared_name: An optional `string`. Defaults to `\"\"`.\n the name by which this variable is referred to.\n allowed_devices: An optional list of `strings`. Defaults to `[]`.\n DEPRECATED. The allowed devices containing the resource variable. Set when the\n output ResourceHandle represents a per-replica/partitioned resource variable.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "Creates a handle to a Variable resource.", "type": "API"}, {"name": "tf.raw_ops.Variable", "docs": "Use VariableV2 instead.\n\n Args:\n shape: A `tf.TensorShape` or list of `ints`.\n dtype: A `tf.DType`.\n container: An optional `string`. Defaults to `\"\"`.\n shared_name: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor` of type `dtype`.\n ", "desc": "Use VariableV2 instead.", "type": "API"}, {"name": "tf.raw_ops.VariableShape", "docs": "Returns the shape of the variable pointed to by `resource`.\n\n This operation returns a 1-D integer tensor representing the shape of `input`.\n\n For example:\n\n ```\n # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]\n shape(t) ==> [2, 2, 3]\n ```\n\n Args:\n input: A `Tensor` of type `resource`.\n out_type: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Returns the shape of the variable pointed to by `resource`.", "type": "API"}, {"name": "tf.raw_ops.VariableV2", "docs": "Holds state in the form of a tensor that persists across steps.\n\n Outputs a ref to the tensor state so it may be read or modified.\n TODO(zhifengc/mrry): Adds a pointer to a more detail document\n about sharing states in tensorflow.\n\n Args:\n shape: A `tf.TensorShape` or list of `ints`.\n The shape of the variable tensor.\n dtype: A `tf.DType`. The type of elements in the variable tensor.\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this variable is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this variable is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A mutable `Tensor` of type `dtype`.\n ", "desc": "Holds state in the form of a tensor that persists across steps.", "type": "API"}, {"name": "tf.raw_ops.VarIsInitializedOp", "docs": "Checks whether a resource handle-based variable has been initialized.\n\n Args:\n resource: A `Tensor` of type `resource`. the input resource handle.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Checks whether a resource handle-based variable has been initialized.", "type": "API"}, {"name": "tf.raw_ops.Where", "docs": "Returns locations of nonzero / true values in a tensor.\n\n This operation returns the coordinates of true elements in `condition`. The\n coordinates are returned in a 2-D tensor where the first dimension (rows)\n represents the number of true elements, and the second dimension (columns)\n represents the coordinates of the true elements. Keep in mind, the shape of\n the output tensor can vary depending on how many true values there are in\n `condition`. Indices are output in row-major order.\n\n For example:\n\n ```\n # 'input' tensor is [[True, False]\n # [True, False]]\n # 'input' has two true values, so output has two coordinates.\n # 'input' has rank of 2, so coordinates have two indices.\n where(input) ==> [[0, 0],\n [1, 0]]\n\n # `condition` tensor is [[[True, False]\n # [True, False]]\n # [[False, True]\n # [False, True]]\n # [[False, False]\n # [False, True]]]\n # 'input' has 5 true values, so output has 5 coordinates.\n # 'input' has rank of 3, so coordinates have three indices.\n where(input) ==> [[0, 0, 0],\n [0, 1, 0],\n [1, 0, 1],\n [1, 1, 1],\n [2, 1, 1]]\n\n # `condition` tensor is [[[1.5, 0.0]\n # [-0.5, 0.0]]\n # [[0.0, 0.25]\n # [0.0, 0.75]]\n # [[0.0, 0.0]\n # [0.0, 0.01]]]\n # 'input' has 5 nonzero values, so output has 5 coordinates.\n # 'input' has rank of 3, so coordinates have three indices.\n where(input) ==> [[0, 0, 0],\n [0, 1, 0],\n [1, 0, 1],\n [1, 1, 1],\n [2, 1, 1]]\n\n # `condition` tensor is [[[1.5 + 0.0j, 0.0 + 0.0j]\n # [0.0 + 0.5j, 0.0 + 0.0j]]\n # [[0.0 + 0.0j, 0.25 + 1.5j]\n # [0.0 + 0.0j, 0.75 + 0.0j]]\n # [[0.0 + 0.0j, 0.0 + 0.0j]\n # [0.0 + 0.0j, 0.01 + 0.0j]]]\n # 'input' has 5 nonzero magnitude values, so output has 5 coordinates.\n # 'input' has rank of 3, so coordinates have three indices.\n where(input) ==> [[0, 0, 0],\n [0, 1, 0],\n [1, 0, 1],\n [1, 1, 1],\n [2, 1, 1]]\n ```\n\n Args:\n condition: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`, `bool`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Returns locations of nonzero / true values in a tensor.", "type": "API"}, {"name": "tf.raw_ops.While", "docs": "output = input; While (Cond(output)) { output = Body(output) }\n\n Args:\n input: A list of `Tensor` objects.\n A list of input tensors whose types are T.\n cond: A function decorated with @Defun.\n A function takes 'input' and returns a tensor. If the tensor is\n a scalar of non-boolean, the scalar is converted to a boolean\n according to the following rule: if the scalar is a numerical\n value, non-zero means True and zero means False; if the scalar is\n a string, non-empty means True and empty means False. If the\n tensor is not a scalar, non-emptiness means True and False\n otherwise.\n body: A function decorated with @Defun.\n A function that takes a list of tensors and returns another\n list of tensors. Both lists have the same types as specified\n by T.\n output_shapes: An optional list of shapes (each a `tf.TensorShape` or list of `ints`). Defaults to `[]`.\n parallel_iterations: An optional `int`. Defaults to `10`.\n name: A name for the operation (optional).\n\n Returns:\n A list of `Tensor` objects. Has the same type as `input`.\n ", "desc": "output = input; While (Cond(output)) { output = Body(output) }", "type": "API"}, {"name": "tf.raw_ops.WholeFileReader", "docs": "A Reader that outputs the entire contents of a file as a value.\n\n To use, enqueue filenames in a Queue. The output of ReaderRead will\n be a filename (key) and the contents of that file (value).\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type mutable `string`.\n ", "desc": "A Reader that outputs the entire contents of a file as a value.", "type": "API"}, {"name": "tf.raw_ops.WholeFileReaderV2", "docs": "A Reader that outputs the entire contents of a file as a value.\n\n To use, enqueue filenames in a Queue. The output of ReaderRead will\n be a filename (key) and the contents of that file (value).\n\n Args:\n container: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is placed in the given container.\n Otherwise, a default container is used.\n shared_name: An optional `string`. Defaults to `\"\"`.\n If non-empty, this reader is named in the given bucket\n with this shared_name. Otherwise, the node name is used instead.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `resource`.\n ", "desc": "A Reader that outputs the entire contents of a file as a value.", "type": "API"}, {"name": "tf.raw_ops.WindowDataset", "docs": " Combines (nests of) input elements into a dataset of (nests of) windows.\n\n A \"window\" is a finite dataset of flat elements of size `size` (or possibly\n fewer if there are not enough input elements to fill the window and\n `drop_remainder` evaluates to false).\n\n The `shift` argument determines the number of input elements by which\n the window moves on each iteration. The first element in the `k`th window\n will be element\n\n ```\n 1 + (k-1) * shift\n ```\n\n of the input dataset. In particular, the first element of the first window\n will always be the first element of the input dataset. \n\n If the `stride` parameter is greater than 1, then each window will skip\n `(stride - 1)` input elements between each element that appears in the\n window. Output windows will still contain `size` elements regardless of\n the value of `stride`.\n\n The `stride` argument determines the stride of the input elements, and the\n `shift` argument determines the shift of the window.\n\n For example, letting `{...}` to represent a Dataset:\n\n - `tf.data.Dataset.range(7).window(2)` produces\n `{{0, 1}, {2, 3}, {4, 5}, {6}}`\n - `tf.data.Dataset.range(7).window(3, 2, 1, True)` produces\n `{{0, 1, 2}, {2, 3, 4}, {4, 5, 6}}`\n - `tf.data.Dataset.range(7).window(3, 1, 2, True)` produces\n `{{0, 2, 4}, {1, 3, 5}, {2, 4, 6}}`\n\n Note that when the `window` transformation is applied to a dataset of\n nested elements, it produces a dataset of nested windows.\n\n For example:\n\n - `tf.data.Dataset.from_tensor_slices((range(4), range(4))).window(2)`\n produces `{({0, 1}, {0, 1}), ({2, 3}, {2, 3})}`\n - `tf.data.Dataset.from_tensor_slices({\"a\": range(4)}).window(2)`\n produces `{{\"a\": {0, 1}}, {\"a\": {2, 3}}}`\n\n Args:\n input_dataset: A `Tensor` of type `variant`.\n size: A `Tensor` of type `int64`.\n An integer scalar, representing the number of elements\n of the input dataset to combine into a window. Must be positive.\n shift: A `Tensor` of type `int64`.\n An integer scalar, representing the number of input elements\n by which the window moves in each iteration. Defaults to `size`.\n Must be positive.\n stride: A `Tensor` of type `int64`.\n An integer scalar, representing the stride of the input elements\n in the sliding window. Must be positive. The default value of 1 means\n \"retain every input element\".\n drop_remainder: A `Tensor` of type `bool`.\n A Boolean scalar, representing whether the last window should be\n dropped if its size is smaller than `window_size`.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": " Combines (nests of) input elements into a dataset of (nests of) windows.", "type": "API"}, {"name": "tf.raw_ops.WorkerHeartbeat", "docs": "Worker heartbeat op.\n\n Heartbeats may be sent periodically to indicate the coordinator is still active,\n to retrieve the current worker status and to expedite shutdown when necessary.\n\n Args:\n request: A `Tensor` of type `string`.\n A string tensor containing a serialized WorkerHeartbeatRequest\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Worker heartbeat op.", "type": "API"}, {"name": "tf.raw_ops.WrapDatasetVariant", "docs": "TODO: add doc.\n\n Args:\n input_handle: A `Tensor` of type `variant`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.raw_ops.WriteAudioSummary", "docs": "Writes an audio summary.\n\n Writes encoded audio summary `tensor` at `step` with `tag` using summary `writer`.\n `sample_rate` is the audio sample rate is Hz.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tag: A `Tensor` of type `string`.\n tensor: A `Tensor` of type `float32`.\n sample_rate: A `Tensor` of type `float32`.\n max_outputs: An optional `int` that is `>= 1`. Defaults to `3`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes an audio summary.", "type": "API"}, {"name": "tf.raw_ops.WriteFile", "docs": "Writes `contents` to the file at input `filename`.\n\n Creates the file and recursively creates directory if it does not exist.\n\n Args:\n filename: A `Tensor` of type `string`.\n scalar. The name of the file to which we write the contents.\n contents: A `Tensor` of type `string`.\n scalar. The content to be written to the output file.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes `contents` to the file at input `filename`.", "type": "API"}, {"name": "tf.raw_ops.WriteGraphSummary", "docs": "Writes a graph summary.\n\n Writes TensorFlow graph `tensor` at `step` using summary `writer`.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tensor: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes a graph summary.", "type": "API"}, {"name": "tf.raw_ops.WriteHistogramSummary", "docs": "Writes a histogram summary.\n\n Writes histogram `values` at `step` with `tag` using summary `writer`.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tag: A `Tensor` of type `string`.\n values: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `bool`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes a histogram summary.", "type": "API"}, {"name": "tf.raw_ops.WriteImageSummary", "docs": "Writes an image summary.\n\n Writes image `tensor` at `step` with `tag` using summary `writer`.\n `tensor` is image with shape [height, width, channels].\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tag: A `Tensor` of type `string`.\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `float64`, `float32`, `half`.\n bad_color: A `Tensor` of type `uint8`.\n max_images: An optional `int` that is `>= 1`. Defaults to `3`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes an image summary.", "type": "API"}, {"name": "tf.raw_ops.WriteRawProtoSummary", "docs": "Writes a serialized proto summary.\n\n Writes `tensor`, a serialized proto at `step` using summary `writer`.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tensor: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes a serialized proto summary.", "type": "API"}, {"name": "tf.raw_ops.WriteScalarSummary", "docs": "Writes a scalar summary.\n\n Writes scalar `value` at `step` with `tag` using summary `writer`.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tag: A `Tensor` of type `string`.\n value: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes a scalar summary.", "type": "API"}, {"name": "tf.raw_ops.WriteSummary", "docs": "Writes a tensor summary.\n\n Writes `tensor` at `step` with `tag` using summary `writer`.\n\n Args:\n writer: A `Tensor` of type `resource`.\n step: A `Tensor` of type `int64`.\n tensor: A `Tensor`.\n tag: A `Tensor` of type `string`.\n summary_metadata: A `Tensor` of type `string`.\n name: A name for the operation (optional).\n\n Returns:\n The created Operation.\n ", "desc": "Writes a tensor summary.", "type": "API"}, {"name": "tf.raw_ops.Xdivy", "docs": "Returns 0 if x == 0, and x / y otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x / y otherwise, elementwise.", "type": "API"}, {"name": "tf.raw_ops.Xlog1py", "docs": "Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise.", "type": "API"}, {"name": "tf.raw_ops.Xlogy", "docs": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns 0 if x == 0, and x * log(y) otherwise, elementwise.", "type": "API"}, {"name": "tf.raw_ops.ZerosLike", "docs": "Returns a tensor of zeros with the same shape and type as x.\n\n Args:\n x: A `Tensor`. a tensor of type T.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns a tensor of zeros with the same shape and type as x.", "type": "API"}, {"name": "tf.raw_ops.Zeta", "docs": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).\n\n The Hurwitz zeta function is defined as:\n\n\n \\\\(\\zeta(x, q) = \\sum_{n=0}^{\\infty} (q + n)^{-x}\\\\)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n q: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Compute the Hurwitz zeta function \\\\(\\zeta(x, q)\\\\).", "type": "API"}, {"name": "tf.raw_ops.ZipDataset", "docs": "Creates a dataset that zips together `input_datasets`.\n\n The elements of the resulting dataset are created by zipping corresponding\n elements from each of the input datasets.\n\n The size of the resulting dataset will match the size of the smallest input\n dataset, and no error will be raised if input datasets have different sizes.\n\n Args:\n input_datasets: A list of at least 1 `Tensor` objects with type `variant`.\n List of `N` variant Tensors representing datasets to be zipped together.\n output_types: A list of `tf.DTypes` that has length `>= 1`.\n output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.\n metadata: An optional `string`. Defaults to `\"\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `variant`.\n ", "desc": "Creates a dataset that zips together `input_datasets`.", "type": "API"}, {"name": "tf.realdiv", "docs": "Returns x / y element-wise for real types.\n\n If `x` and `y` are reals, this will return the floating-point division.\n\n *NOTE*: `Div` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for real types.", "type": "API"}, {"name": "tf.recompute_grad", "docs": "Defines a function as a recompute-checkpoint for the tape auto-diff.\n\n Tape checkpointing is a technique to reduce the memory consumption of the\n auto-diff tape:\n\n - Without tape checkpointing operations and intermediate values are\n recorded to the tape for use in the backward pass.\n\n - With tape checkpointing, only the function call and its inputs are\n recorded. During back-propagation the `recompute_grad` custom gradient\n (`tf.custom_gradient`) recomputes the function under a localized Tape object.\n This recomputation of the function during backpropagation performs redundant\n calculation, but reduces the overall memory usage of the Tape.\n\n >>> y = tf.Variable(1.0)\n\n >>> def my_function(x):\n ... tf.print('running')\n ... z = x*y\n ... return z\n\n >>> my_function_recompute = tf.recompute_grad(my_function)\n\n >>> with tf.GradientTape() as tape:\n ... r = tf.constant(1.0)\n ... for i in range(4):\n ... r = my_function_recompute(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, [y])\n running\n running\n running\n running\n\n Without `recompute_grad`, the tape contains all intermitate steps, and no\n recomputation is performed.\n\n >>> with tf.GradientTape() as tape:\n ... r = tf.constant(1.0)\n ... for i in range(4):\n ... r = my_function(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, [y])\n\n\n If `f` was a `tf.keras` `Model` or `Layer` object, methods and attributes\n such as `f.variables` are not available on the returned function `g`.\n Either keep a reference of `f` , or use `g.__wrapped__` for accessing\n these variables and methods.\n\n\n >>> def print_running_and_return(x):\n ... tf.print(\"running\")\n ... return x\n\n >>> model = tf.keras.Sequential([\n ... tf.keras.layers.Lambda(print_running_and_return),\n ... tf.keras.layers.Dense(2)\n ... ])\n\n >>> model_recompute = tf.recompute_grad(model)\n\n >>> with tf.GradientTape(persistent=True) as tape:\n ... r = tf.constant([[1,2]])\n ... for i in range(4):\n ... r = model_recompute(r)\n running\n running\n running\n running\n\n >>> grad = tape.gradient(r, model.variables)\n running\n running\n running\n running\n\n Alternatively, use the `__wrapped__` attribute to access the original\n model object.\n\n >>> grad = tape.gradient(r, model_recompute.__wrapped__.variables)\n running\n running\n running\n running\n\n\n Args:\n f: function `f(*x)` that returns a `Tensor` or sequence of `Tensor` outputs.\n\n Returns:\n A function `g` wrapping `f` that defines a custom gradient, which recomputes\n `f` on the backwards pass of a gradient call.\n ", "desc": "Defines a function as a recompute-checkpoint for the tape auto-diff.", "type": "API"}, {"name": "tf.reduce_all", "docs": "Computes `tf.math.logical_and` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.logical_and` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.math.reduce_all(x)\n \n >>> tf.math.reduce_all(x, 0)\n \n >>> tf.math.reduce_all(x, 1)\n \n\n Args:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.all\n @end_compatibility\n ", "desc": "Computes `tf.math.logical_and` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_any", "docs": "Computes `tf.math.logical_or` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.logical_or` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[True, True], [False, False]])\n >>> tf.reduce_any(x)\n \n >>> tf.reduce_any(x, 0)\n \n >>> tf.reduce_any(x, 1)\n \n\n Args:\n input_tensor: The boolean tensor to reduce.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.any\n @end_compatibility\n ", "desc": "Computes `tf.math.logical_or` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_logsumexp", "docs": "Computes log(sum(exp(elements across dimensions of a tensor))).\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` has no entries, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n This function is more numerically stable than log(sum(exp(input))). It avoids\n overflows caused by taking the exp of large inputs and underflows caused by\n taking the log of small inputs.\n\n For example:\n\n ```python\n x = tf.constant([[0., 0., 0.], [0., 0., 0.]])\n tf.reduce_logsumexp(x) # log(6)\n tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)]\n tf.reduce_logsumexp(x, 1) # [log(3), log(3)]\n tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]]\n tf.reduce_logsumexp(x, [0, 1]) # log(6)\n ```\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n ", "desc": "Computes log(sum(exp(elements across dimensions of a tensor))).", "type": "API"}, {"name": "tf.reduce_max", "docs": "Computes `tf.math.maximum` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.maximum` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n Usage example:\n\n >>> x = tf.constant([5, 1, 2, 4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([-5, -1, -2, -4])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([4, float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('nan'), float('nan')])\n >>> tf.reduce_max(x)\n \n >>> x = tf.constant([float('-inf'), float('inf')])\n >>> tf.reduce_max(x)\n \n\n See the numpy docs for `np.amax` and `np.nanmax` behavior.\n\n Args:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n ", "desc": "Computes `tf.math.maximum` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_mean", "docs": "Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis` by computing the\n mean of elements across the dimensions in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a tensor with a single\n element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 1.], [2., 2.]])\n >>> tf.reduce_mean(x)\n \n >>> tf.reduce_mean(x, 0)\n \n >>> tf.reduce_mean(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.mean\n\n Please note that `np.mean` has a `dtype` parameter that could be used to\n specify the output type. By default this is `dtype=float64`. On the other\n hand, `tf.reduce_mean` has an aggressive type inference from `input_tensor`,\n for example:\n\n >>> x = tf.constant([1, 0, 1, 0])\n >>> tf.reduce_mean(x)\n \n >>> y = tf.constant([1., 0., 1., 0.])\n >>> tf.reduce_mean(y)\n \n\n @end_compatibility\n ", "desc": "Computes the mean of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_min", "docs": "Computes the `tf.math.minimum` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.minimum` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> a = tf.constant([\n ... [[1, 2], [3, 4]],\n ... [[1, 2], [3, 4]]\n ... ])\n >>> tf.reduce_min(a)\n \n\n Choosing a specific axis returns minimum element in the given axis:\n\n >>> b = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.reduce_min(b, axis=0)\n \n >>> tf.reduce_min(b, axis=1)\n \n\n Setting `keepdims` to `True` retains the dimension of `input_tensor`:\n\n >>> tf.reduce_min(a, keepdims=True)\n \n >>> tf.math.reduce_min(a, axis=0, keepdims=True)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have real numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.min\n @end_compatibility\n ", "desc": "Computes the `tf.math.minimum` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_prod", "docs": "Computes `tf.math.multiply` of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.multiply` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n entry in `axis`. If `keepdims` is true, the reduced dimensions\n are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> x = tf.constant([[1., 2.], [3., 4.]])\n >>> tf.math.reduce_prod(x)\n \n >>> tf.math.reduce_prod(x, 0)\n \n >>> tf.math.reduce_prod(x, 1)\n \n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor))`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor.\n\n @compatibility(numpy)\n Equivalent to np.prod\n @end_compatibility\n ", "desc": "Computes `tf.math.multiply` of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.reduce_sum", "docs": "Computes the sum of elements across dimensions of a tensor.\n\n This is the reduction operation for the elementwise `tf.math.add` op.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each\n of the entries in `axis`, which must be unique. If `keepdims` is true, the\n reduced dimensions are retained with length 1.\n\n If `axis` is None, all dimensions are reduced, and a\n tensor with a single element is returned.\n\n For example:\n\n >>> # x has a shape of (2, 3) (two rows and three columns):\n >>> x = tf.constant([[1, 1, 1], [1, 1, 1]])\n >>> x.numpy()\n array([[1, 1, 1],\n [1, 1, 1]], dtype=int32)\n >>> # sum all the elements\n >>> # 1 + 1 + 1 + 1 + 1+ 1 = 6\n >>> tf.reduce_sum(x).numpy()\n 6\n >>> # reduce along the first dimension\n >>> # the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> tf.reduce_sum(x, 0).numpy()\n array([2, 2, 2], dtype=int32)\n >>> # reduce along the second dimension\n >>> # the result is [1, 1] + [1, 1] + [1, 1] = [3, 3]\n >>> tf.reduce_sum(x, 1).numpy()\n array([3, 3], dtype=int32)\n >>> # keep the original dimensions\n >>> tf.reduce_sum(x, 1, keepdims=True).numpy()\n array([[3],\n [3]], dtype=int32)\n >>> # reduce along both dimensions\n >>> # the result is 1 + 1 + 1 + 1 + 1 + 1 = 6\n >>> # or, equivalently, reduce along rows, then reduce the resultant array\n >>> # [1, 1, 1] + [1, 1, 1] = [2, 2, 2]\n >>> # 2 + 2 + 2 = 6\n >>> tf.reduce_sum(x, [0, 1]).numpy()\n 6\n\n Args:\n input_tensor: The tensor to reduce. Should have numeric type.\n axis: The dimensions to reduce. If `None` (the default), reduces all\n dimensions. Must be in the range `[-rank(input_tensor),\n rank(input_tensor)]`.\n keepdims: If true, retains reduced dimensions with length 1.\n name: A name for the operation (optional).\n\n Returns:\n The reduced tensor, of the same dtype as the input_tensor.\n\n @compatibility(numpy)\n Equivalent to np.sum apart the fact that numpy upcast uint8 and int32 to\n int64 while tensorflow returns the same dtype as the input.\n @end_compatibility\n ", "desc": "Computes the sum of elements across dimensions of a tensor.", "type": "API"}, {"name": "tf.register_tensor_conversion_function", "docs": "Registers a function for converting objects of `base_type` to `Tensor`.\n\n The conversion function must have the following signature:\n\n ```python\n def conversion_func(value, dtype=None, name=None, as_ref=False):\n # ...\n ```\n\n It must return a `Tensor` with the given `dtype` if specified. If the\n conversion function creates a new `Tensor`, it should use the given\n `name` if specified. All exceptions will be propagated to the caller.\n\n The conversion function may return `NotImplemented` for some\n inputs. In this case, the conversion process will continue to try\n subsequent conversion functions.\n\n If `as_ref` is true, the function must return a `Tensor` reference,\n such as a `Variable`.\n\n NOTE: The conversion functions will execute in order of priority,\n followed by order of registration. To ensure that a conversion function\n `F` runs before another conversion function `G`, ensure that `F` is\n registered with a smaller priority than `G`.\n\n Args:\n base_type: The base type or tuple of base types for all objects that\n `conversion_func` accepts.\n conversion_func: A function that converts instances of `base_type` to\n `Tensor`.\n priority: Optional integer that indicates the priority for applying this\n conversion function. Conversion functions with smaller priority values run\n earlier than conversion functions with larger priority values. Defaults to\n 100.\n\n Raises:\n TypeError: If the arguments do not have the appropriate type.\n ", "desc": "Registers a function for converting objects of `base_type` to `Tensor`.", "type": "API"}, {"name": "tf.RegisterGradient", "docs": "A decorator for registering the gradient function for an op type.\n\n This decorator is only used when defining a new op type. For an op\n with `m` inputs and `n` outputs, the gradient function is a function\n that takes the original `Operation` and `n` `Tensor` objects\n (representing the gradients with respect to each output of the op),\n and returns `m` `Tensor` objects (representing the partial gradients\n with respect to each input of the op).\n\n For example, assuming that operations of type `\"Sub\"` take two\n inputs `x` and `y`, and return a single output `x - y`, the\n following gradient function would be registered:\n\n ```python\n @tf.RegisterGradient(\"Sub\")\n def _sub_grad(unused_op, grad):\n return grad, tf.negative(grad)\n ```\n\n The decorator argument `op_type` is the string type of an\n operation. This corresponds to the `OpDef.name` field for the proto\n that defines the operation.\n ", "desc": "A decorator for registering the gradient function for an op type.", "type": "API"}, {"name": "tf.repeat", "docs": "Repeat elements of `input`.\n\n See also `tf.concat`, `tf.stack`, `tf.tile`.\n\n Args:\n input: An `N`-dimensional Tensor.\n repeats: An 1-D `int` Tensor. The number of repetitions for each element.\n repeats is broadcasted to fit the shape of the given axis. `len(repeats)`\n must equal `input.shape[axis]` if axis is not None.\n axis: An int. The axis along which to repeat values. By default (axis=None),\n use the flattened input array, and return a flat output array.\n name: A name for the operation.\n\n Returns:\n A Tensor which has the same shape as `input`, except along the given axis.\n If axis is None then the output array is flattened to match the flattened\n input array.\n\n Example usage:\n\n >>> repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)\n \n\n >>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)\n \n\n >>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1)\n \n\n >>> repeat(3, repeats=4)\n \n\n >>> repeat([[1,2], [3,4]], repeats=2)\n \n\n ", "desc": "Repeat elements of `input`.", "type": "API"}, {"name": "tf.required_space_to_batch_paddings", "docs": "Calculate padding required to make block_shape divide input_shape.\n\n This function can be used to calculate a suitable paddings argument for use\n with space_to_batch_nd and batch_to_space_nd.\n\n Args:\n input_shape: int32 Tensor of shape [N].\n block_shape: int32 Tensor of shape [N].\n base_paddings: Optional int32 Tensor of shape [N, 2]. Specifies the minimum\n amount of padding to use. All elements must be >= 0. If not specified,\n defaults to 0.\n name: string. Optional name prefix.\n\n Returns:\n (paddings, crops), where:\n\n `paddings` and `crops` are int32 Tensors of rank 2 and shape [N, 2]\n satisfying:\n\n paddings[i, 0] = base_paddings[i, 0].\n 0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]\n (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0\n\n crops[i, 0] = 0\n crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]\n\n Raises: ValueError if called with incompatible shapes.\n ", "desc": "Calculate padding required to make block_shape divide input_shape.", "type": "API"}, {"name": "tf.reshape", "docs": "Reshapes a tensor.\n\n Given `tensor`, this operation returns a new `tf.Tensor` that has the same\n values as `tensor` in the same order, except with a new shape given by\n `shape`.\n\n >>> t1 = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> print(tf.shape(t1).numpy())\n [2 3]\n >>> t2 = tf.reshape(t1, [6])\n >>> t2\n \n >>> tf.reshape(t2, [3, 2])\n \n\n The `tf.reshape` does not change the order of or the total number of elements\n in the tensor, and so it can reuse the underlying data buffer. This makes it\n a fast operation independent of how big of a tensor it is operating on.\n\n >>> tf.reshape([1, 2, 3], [2, 2])\n Traceback (most recent call last):\n ...\n InvalidArgumentError: Input to reshape is a tensor with 3 values, but the\n requested shape has 4\n\n To instead reorder the data to rearrange the dimensions of a tensor, see\n `tf.transpose`.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [3, 2]).numpy()\n array([[1, 2],\n [3, 4],\n [5, 6]], dtype=int32)\n >>> tf.transpose(t, perm=[1, 0]).numpy()\n array([[1, 4],\n [2, 5],\n [3, 6]], dtype=int32)\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total size remains constant. In particular,\n a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can\n be -1.\n\n >>> t = [[1, 2, 3],\n ... [4, 5, 6]]\n >>> tf.reshape(t, [-1])\n \n >>> tf.reshape(t, [3, -1])\n \n >>> tf.reshape(t, [-1, 2])\n \n\n `tf.reshape(t, [])` reshapes a tensor `t` with one element to a scalar.\n\n >>> tf.reshape([7], []).numpy()\n 7\n\n More examples:\n\n >>> t = [1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> print(tf.shape(t).numpy())\n [9]\n >>> tf.reshape(t, [3, 3])\n \n\n >>> t = [[[1, 1], [2, 2]],\n ... [[3, 3], [4, 4]]]\n >>> print(tf.shape(t).numpy())\n [2 2 2]\n >>> tf.reshape(t, [2, 4])\n \n\n >>> t = [[[1, 1, 1],\n ... [2, 2, 2]],\n ... [[3, 3, 3],\n ... [4, 4, 4]],\n ... [[5, 5, 5],\n ... [6, 6, 6]]]\n >>> print(tf.shape(t).numpy())\n [3 2 3]\n >>> # Pass '[-1]' to flatten 't'.\n >>> tf.reshape(t, [-1])\n \n >>> # -- Using -1 to infer the shape --\n >>> # Here -1 is inferred to be 9:\n >>> tf.reshape(t, [2, -1])\n \n >>> # -1 is inferred to be 2:\n >>> tf.reshape(t, [-1, 9])\n \n >>> # -1 is inferred to be 3:\n >>> tf.reshape(t, [ 2, -1, 3])\n \n\n Args:\n tensor: A `Tensor`.\n shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Defines the shape of the output tensor.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reshapes a tensor.", "type": "API"}, {"name": "tf.reverse", "docs": "Reverses specific dimensions of a tensor.\n\n Given a `tensor`, and a `int32` tensor `axis` representing the set of\n dimensions of `tensor` to reverse. This operation reverses each dimension\n `i` for which there exists `j` s.t. `axis[j] == i`.\n\n `tensor` can have up to 8 dimensions. The number of dimensions specified\n in `axis` may be 0 or more entries. If an index is specified more than\n once, a InvalidArgument error is raised.\n\n For example:\n\n ```\n # tensor 't' is [[[[ 0, 1, 2, 3],\n # [ 4, 5, 6, 7],\n # [ 8, 9, 10, 11]],\n # [[12, 13, 14, 15],\n # [16, 17, 18, 19],\n # [20, 21, 22, 23]]]]\n # tensor 't' shape is [1, 2, 3, 4]\n\n # 'dims' is [3] or 'dims' is [-1]\n reverse(t, dims) ==> [[[[ 3, 2, 1, 0],\n [ 7, 6, 5, 4],\n [ 11, 10, 9, 8]],\n [[15, 14, 13, 12],\n [19, 18, 17, 16],\n [23, 22, 21, 20]]]]\n\n # 'dims' is '[1]' (or 'dims' is '[-3]')\n reverse(t, dims) ==> [[[[12, 13, 14, 15],\n [16, 17, 18, 19],\n [20, 21, 22, 23]\n [[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]]]\n\n # 'dims' is '[2]' (or 'dims' is '[-2]')\n reverse(t, dims) ==> [[[[8, 9, 10, 11],\n [4, 5, 6, 7],\n [0, 1, 2, 3]]\n [[20, 21, 22, 23],\n [16, 17, 18, 19],\n [12, 13, 14, 15]]]]\n ```\n\n Args:\n tensor: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `int64`, `uint64`, `bool`, `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`, `string`.\n Up to 8-D.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. The indices of the dimensions to reverse. Must be in the range\n `[-rank(tensor), rank(tensor))`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Reverses specific dimensions of a tensor.", "type": "API"}, {"name": "tf.reverse_sequence", "docs": "Reverses variable length slices.\n\n This op first slices `input` along the dimension `batch_axis`, and for\n each slice `i`, reverses the first `seq_lengths[i]` elements along the\n dimension `seq_axis`.\n\n The elements of `seq_lengths` must obey `seq_lengths[i] <=\n input.dims[seq_axis]`, and `seq_lengths` must be a vector of length\n `input.dims[batch_axis]`.\n\n The output slice `i` along dimension `batch_axis` is then given by\n input slice `i`, with the first `seq_lengths[i]` slices along\n dimension `seq_axis` reversed.\n\n Example usage:\n\n >>> seq_lengths = [7, 2, 3, 5]\n >>> input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0],\n ... [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]]\n >>> output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0)\n >>> output\n \n\n Args:\n input: A `Tensor`. The input to reverse.\n seq_lengths: A `Tensor`. Must be one of the following types: `int32`,\n `int64`. 1-D with length `input.dims(batch_axis)` and `max(seq_lengths) <=\n input.dims(seq_axis)`\n seq_axis: An `int`. The dimension which is partially reversed.\n batch_axis: An optional `int`. Defaults to `0`. The dimension along which\n reversal is performed.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor. Has the same type as input.\n ", "desc": "Reverses variable length slices.", "type": "API"}, {"name": "tf.roll", "docs": "Rolls the elements of a tensor along an axis.\n\n The elements are shifted positively (towards larger indices) by the offset of\n `shift` along the dimension of `axis`. Negative `shift` values will shift\n elements in the opposite direction. Elements that roll passed the last position\n will wrap around to the first and vice versa. Multiple shifts along multiple\n axes may be specified.\n\n For example:\n\n ```\n # 't' is [0, 1, 2, 3, 4]\n roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]\n\n # shifting along multiple dimensions\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]\n\n # shifting along the same axis multiple times\n # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]\n roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]\n ```\n\n Args:\n input: A `Tensor`.\n shift: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `shift[i]` specifies the number of places by which\n elements are shifted positively (towards larger indices) along the dimension\n specified by `axis[i]`. Negative shifts will roll the elements in the opposite\n direction.\n axis: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Dimension must be 0-D or 1-D. `axis[i]` specifies the dimension that the shift\n `shift[i]` should occur. If the same axis is referenced more than once, the\n total shift for that axis will be the sum of all the shifts that belong to that\n axis.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Rolls the elements of a tensor along an axis.", "type": "API"}, {"name": "tf.round", "docs": "Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example:\n\n ```python\n x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5])\n tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ]\n ```\n\n Args:\n x: A `Tensor` of type `float16`, `float32`, `float64`, `int32`, or `int64`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of same shape and type as `x`.\n ", "desc": "Rounds the values of a tensor to the nearest integer, element-wise.", "type": "API"}, {"name": "tf.saturate_cast", "docs": "Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underflow in the cast, this op\n applies the appropriate clamping before the cast.\n\n Args:\n value: A `Tensor`.\n dtype: The desired output `DType`.\n name: A name for the operation (optional).\n\n Returns:\n `value` safely cast to `dtype`.\n ", "desc": "Performs a safe saturating cast of `value` to `dtype`.", "type": "API"}, {"name": "tf.saved_model", "docs": "Public API for tf.saved_model namespace.\n", "desc": "Public API for tf.saved_model namespace.", "type": "API"}, {"name": "tf.saved_model.Asset", "docs": "Represents a file asset to hermetically include in a SavedModel.\n\n A SavedModel can include arbitrary files, called assets, that are needed\n for its use. For example a vocabulary file used initialize a lookup table.\n\n When a trackable object is exported via `tf.saved_model.save()`, all the\n `Asset`s reachable from it are copied into the SavedModel assets directory.\n Upon loading, the assets and the serialized functions that depend on them\n will refer to the correct filepaths inside the SavedModel directory.\n\n Example:\n\n ```\n filename = tf.saved_model.Asset(\"file.txt\")\n\n @tf.function(input_signature=[])\n def func():\n return tf.io.read_file(filename)\n\n trackable_obj = tf.train.Checkpoint()\n trackable_obj.func = func\n trackable_obj.filename = filename\n tf.saved_model.save(trackable_obj, \"/tmp/saved_model\")\n\n # The created SavedModel is hermetic, it does not depend on\n # the original file and can be moved to another path.\n tf.io.gfile.remove(\"file.txt\")\n tf.io.gfile.rename(\"/tmp/saved_model\", \"/tmp/new_location\")\n\n reloaded_obj = tf.saved_model.load(\"/tmp/new_location\")\n print(reloaded_obj.func())\n ```\n\n Attributes:\n asset_path: A path, or a 0-D `tf.string` tensor with path to the asset.\n ", "desc": "Represents a file asset to hermetically include in a SavedModel.", "type": "API"}, {"name": "tf.saved_model.contains_saved_model", "docs": "Checks whether the provided export directory could contain a SavedModel.\n\n Note that the method does not load any data by itself. If the method returns\n `false`, the export directory definitely does not contain a SavedModel. If the\n method returns `true`, the export directory may contain a SavedModel but\n provides no guarantee that it can be loaded.\n\n Args:\n export_dir: Absolute path to possible export location. For example,\n '/my/foo/model'.\n\n Returns:\n True if the export directory contains SavedModel files, False otherwise.\n ", "desc": "Checks whether the provided export directory could contain a SavedModel.", "type": "API"}, {"name": "tf.saved_model.experimental", "docs": "Public API for tf.saved_model.experimental namespace.\n", "desc": "Public API for tf.saved_model.experimental namespace.", "type": "API"}, {"name": "tf.saved_model.experimental.VariablePolicy", "docs": "Enum defining options for variable handling when saving.\n\n NONE\n No policy applied: Distributed variables are saved as one variable, with no\n device attached.\n\n SAVE_VARIABLE_DEVICES\n When saving variables, also save their device assignment.\n This is useful if one wants to hardcode devices in saved models, but it also\n makes them non-portable if soft device placement is disabled (more details\n in `tf.config.set_soft_device_placement`). This is currently not\n fully supported by `saved_model.load`, and is mainly intended to be used\n when one will be reading the saved model at a lower API level. In the\n example below, the graph saved by the call to `saved_model.save` will have\n the variable devices correctly specified:\n ```python\n exported = tf.train.Checkpoint()\n with tf.device('/GPU:0'):\n exported.x_gpu = tf.Variable(1.0)\n with tf.device('/CPU:0'):\n exported.x_cpu = tf.Variable(1.0)\n tf.saved_model.save(exported, export_dir,\n options = tf.saved_model.SaveOptions(\n experimental_variable_policy=\n tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))\n ```\n Distributed variables are still saved as one variable under this policy.\n\n EXPAND_DISTRIBUTED_VARIABLES\n Distributed variables will be saved with information about their components,\n allowing for their restoration on load. Also, the saved graph will contain\n references to those variables. This is useful when one wants to use the\n model for training in environments where the original distribution strategy\n is not available.\n ", "desc": "Enum defining options for variable handling when saving.", "type": "API"}, {"name": "tf.saved_model.load", "docs": "Load a SavedModel from `export_dir`.\n\n Signatures associated with the SavedModel are available as functions:\n\n ```python\n imported = tf.saved_model.load(path)\n f = imported.signatures[\"serving_default\"]\n print(f(x=tf.constant([[1.]])))\n ```\n\n Objects exported with `tf.saved_model.save` additionally have trackable\n objects and functions assigned to attributes:\n\n ```python\n exported = tf.train.Checkpoint(v=tf.Variable(3.))\n exported.f = tf.function(\n lambda x: exported.v * x,\n input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])\n tf.saved_model.save(exported, path)\n imported = tf.saved_model.load(path)\n assert 3. == imported.v.numpy()\n assert 6. == imported.f(x=tf.constant(2.)).numpy()\n ```\n\n _Loading Keras models_\n\n Keras models are trackable, so they can be saved to SavedModel. The object\n returned by `tf.saved_model.load` is not a Keras object (i.e. doesn't have\n `.fit`, `.predict`, etc. methods). A few attributes and functions are still\n available: `.variables`, `.trainable_variables` and `.__call__`.\n\n ```python\n model = tf.keras.Model(...)\n tf.saved_model.save(model, path)\n imported = tf.saved_model.load(path)\n outputs = imported(inputs)\n ```\n\n Use `tf.keras.models.load_model` to restore the Keras model.\n\n _Importing SavedModels from TensorFlow 1.x_\n\n SavedModels from `tf.estimator.Estimator` or 1.x SavedModel APIs have a flat\n graph instead of `tf.function` objects. These SavedModels will be loaded with\n the following attributes:\n\n * `.signatures`: A dictionary mapping signature names to functions.\n * `.prune(feeds, fetches) `: A method which allows you to extract\n functions for new subgraphs. This is equivalent to importing the SavedModel\n and naming feeds and fetches in a Session from TensorFlow 1.x.\n\n ```python\n imported = tf.saved_model.load(path_to_v1_saved_model)\n pruned = imported.prune(\"x:0\", \"out:0\")\n pruned(tf.ones([]))\n ```\n\n See `tf.compat.v1.wrap_function` for details.\n * `.variables`: A list of imported variables.\n * `.graph`: The whole imported graph.\n * `.restore(save_path)`: A function that restores variables from a checkpoint\n saved from `tf.compat.v1.Saver`.\n\n _Consuming SavedModels asynchronously_\n\n When consuming SavedModels asynchronously (the producer is a separate\n process), the SavedModel directory will appear before all files have been\n written, and `tf.saved_model.load` will fail if pointed at an incomplete\n SavedModel. Rather than checking for the directory, check for\n \"saved_model_dir/saved_model.pb\". This file is written atomically as the last\n `tf.saved_model.save` file operation.\n\n Args:\n export_dir: The SavedModel directory to load from.\n tags: A tag or sequence of tags identifying the MetaGraph to load. Optional\n if the SavedModel contains a single MetaGraph, as for those exported from\n `tf.saved_model.save`.\n options: `tf.saved_model.LoadOptions` object that specifies options for\n loading.\n\n Returns:\n A trackable object with a `signatures` attribute mapping from signature\n keys to functions. If the SavedModel was exported by `tf.saved_model.save`,\n it also points to trackable objects, functions, debug info which it has been\n saved.\n\n Raises:\n ValueError: If `tags` don't match a MetaGraph in the SavedModel.\n ", "desc": "Load a SavedModel from `export_dir`.", "type": "API"}, {"name": "tf.saved_model.LoadOptions", "docs": "Options for loading a SavedModel.\n\n This function may be used in the `options` argument in functions that\n load a SavedModel (`tf.saved_model.load`, `tf.keras.models.load_model`).\n ", "desc": "Options for loading a SavedModel.", "type": "API"}, {"name": "tf.saved_model.save", "docs": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).\n\n The `obj` must inherit from the [`Trackable` class](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/tracking/base.py#L591).\n\n Example usage:\n\n >>> class Adder(tf.Module):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n The resulting SavedModel is then servable with an input named \"x\", a scalar\n with dtype float32.\n\n _Signatures_\n\n Signatures define the input and output types for a computation. The optional\n save `signatures` argument controls which methods in `obj` will be\n available to programs which consume `SavedModel`s, for example, serving\n APIs. Python functions may be decorated with\n `@tf.function(input_signature=...)` and passed as signatures directly, or\n lazily with a call to `get_concrete_function` on the method decorated with\n `@tf.function`.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',signatures=model.add.get_concrete_function(\n ... tf.TensorSpec([], tf.float32)))\n\n If a `@tf.function` does not have an input signature and\n `get_concrete_function` is not called on that method, the function will not\n be directly callable in the restored SavedModel.\n\n Example:\n\n >>> class Adder(tf.Module):\n ... @tf.function\n ... def add(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n >>> restored = tf.saved_model.load('/tmp/adder')\n >>> restored.add(1.)\n Traceback (most recent call last):\n ...\n ValueError: Found zero restored functions for caller function.\n\n If the `signatures` argument is omitted, `obj` will be searched for\n `@tf.function`-decorated methods. If exactly one traced `@tf.function` is\n found, that method will be used as the default signature for the SavedModel.\n Else, any `@tf.function` attached to `obj` or its dependencies will be\n exported for use with `tf.saved_model.load`.\n\n When invoking a signature in an exported SavedModel, `Tensor` arguments are\n identified by name. These names will come from the Python function's argument\n names by default. They may be overridden by specifying a `name=...` argument\n in the corresponding `tf.TensorSpec` object. Explicit naming is required if\n multiple `Tensor`s are passed through a single argument to the Python\n function.\n\n The outputs of functions used as `signatures` must either be flat lists, in\n which case outputs will be numbered, or a dictionary mapping string keys to\n `Tensor`, in which case the keys will be used to name outputs.\n\n Signatures are available in objects returned by `tf.saved_model.load` as a\n `.signatures` attribute. This is a reserved attribute: `tf.saved_model.save`\n on an object with a custom `.signatures` attribute will raise an exception.\n\n _Using `tf.saved_model.save` with Keras models_\n\n While Keras has its own [saving and loading API](https://www.tensorflow.org/guide/keras/save_and_serialize),\n this function can be used to export Keras models. For example, exporting with\n a signature specified:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(model, '/tmp/adder')\n\n Exporting from a function without a fixed signature:\n\n >>> class Adder(tf.keras.Model):\n ... @tf.function\n ... def concat(self, x):\n ... return x + x\n\n >>> model = Adder()\n >>> tf.saved_model.save(\n ... model, '/tmp/adder',\n ... signatures=model.concat.get_concrete_function(\n ... tf.TensorSpec(shape=[], dtype=tf.string, name=\"string_input\")))\n\n `tf.keras.Model` instances constructed from inputs and outputs already have a\n signature and so do not require a `@tf.function` decorator or a `signatures`\n argument. If neither are specified, the model's forward pass is exported.\n\n >>> x = tf.keras.layers.Input((4,), name=\"x\")\n >>> y = tf.keras.layers.Dense(5, name=\"out\")(x)\n >>> model = tf.keras.Model(x, y)\n >>> tf.saved_model.save(model, '/tmp/saved_model/')\n\n The exported SavedModel takes \"x\" with shape [None, 4] and returns \"out\"\n with shape [None, 5]\n\n _Variables and Checkpoints_\n\n Variables must be tracked by assigning them to an attribute of a tracked\n object or to an attribute of `obj` directly. TensorFlow objects (e.g. layers\n from `tf.keras.layers`, optimizers from `tf.train`) track their variables\n automatically. This is the same tracking scheme that `tf.train.Checkpoint`\n uses, and an exported `Checkpoint` object may be restored as a training\n checkpoint by pointing `tf.train.Checkpoint.restore` to the SavedModel's\n \"variables/\" subdirectory.\n\n `tf.function` does not hard-code device annotations from outside the function\n body, instead of using the calling context's device. This means for example\n that exporting a model that runs on a GPU and serving it on a CPU will\n generally work, with some exceptions:\n\n * `tf.device` annotations inside the body of the function will be hard-coded\n in the exported model; this type of annotation is discouraged.\n * Device-specific operations, e.g. with \"cuDNN\" in the name or with\n device-specific layouts, may cause issues.\n * For `ConcreteFunctions`, active distribution strategies will cause device\n placements to be hard-coded in the function.\n\n SavedModels exported with `tf.saved_model.save` [strip default-valued\n attributes](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md#stripping-default-valued-attributes)\n automatically, which removes one source of incompatibilities when the consumer\n of a SavedModel is running an older TensorFlow version than the\n producer. There are however other sources of incompatibilities which are not\n handled automatically, such as when the exported model contains operations\n which the consumer does not have definitions for.\n\n Args:\n obj: A trackable object (e.g. tf.Module or tf.train.Checkpoint) to export.\n export_dir: A directory in which to write the SavedModel.\n signatures: Optional, one of three types:\n * a `tf.function` with an input signature specified, which will use the\n default serving signature key,\n * the result of `f.get_concrete_function` on a `@tf.function`-decorated\n function `f`, in which case `f` will be used to generate a signature for\n the SavedModel under the default serving signature key,\n * a dictionary, which maps signature keys to either `tf.function`\n instances with input signatures or concrete functions. Keys of such a\n dictionary may be arbitrary strings, but will typically be from the\n `tf.saved_model.signature_constants` module.\n options: `tf.saved_model.SaveOptions` object for configuring save options.\n\n Raises:\n ValueError: If `obj` is not trackable.\n\n @compatibility(eager)\n Not well supported when graph building. From TensorFlow 1.x,\n `tf.compat.v1.enable_eager_execution()` should run first. Calling\n tf.saved_model.save in a loop when graph building from TensorFlow 1.x will\n add new save operations to the default graph each iteration.\n\n May not be called from within a function body.\n @end_compatibility\n ", "desc": "Exports a [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) (and subclasses) `obj` to [SavedModel format](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).", "type": "API"}, {"name": "tf.saved_model.SaveOptions", "docs": "Options for saving to SavedModel.\n\n This function may be used in the `options` argument in functions that\n save a SavedModel (`tf.saved_model.save`, `tf.keras.models.save_model`).\n ", "desc": "Options for saving to SavedModel.", "type": "API"}, {"name": "tf.scalar_mul", "docs": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.\n\n This is a special case of `tf.math.multiply`, where the first value must be a\n `scalar`. Unlike the general form of `tf.math.multiply`, this is operation is\n guaranteed to be efficient for `tf.IndexedSlices`.\n\n >>> x = tf.reshape(tf.range(30, dtype=tf.float32), [10, 3])\n >>> with tf.GradientTape() as g:\n ... g.watch(x)\n ... y = tf.gather(x, [1, 2]) # IndexedSlices\n ... z = tf.math.scalar_mul(10.0, y)\n\n Args:\n scalar: A 0-D scalar `Tensor`. Must have known shape.\n x: A `Tensor` or `IndexedSlices` to be scaled.\n name: A name for the operation (optional).\n\n Returns:\n `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.\n\n Raises:\n ValueError: if scalar is not a 0-D `scalar`.\n ", "desc": "Multiplies a scalar times a `Tensor` or `IndexedSlices` object.", "type": "API"}, {"name": "tf.scan", "docs": "scan on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version.\nInstructions for updating:\nback_prop=False is deprecated. Consider using tf.stop_gradient instead.\nInstead of:\nresults = tf.scan(fn, elems, back_prop=False)\nUse:\nresults = tf.nest.map_structure(tf.stop_gradient, tf.scan(fn, elems))\n\nThe simplest version of `scan` repeatedly applies the callable `fn` to a\nsequence of elements from first to last. The elements are made of the tensors\nunpacked from `elems` on dimension 0. The callable fn takes two tensors as\narguments. The first argument is the accumulated value computed from the\npreceding invocation of fn, and the second is the value at the current\nposition of `elems`. If `initializer` is None, `elems` must contain at least\none element, and its first element is used as the initializer.\n\nSuppose that `elems` is unpacked into `values`, a list of tensors. The shape\nof the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.\nIf reverse=True, it's fn(initializer, values[-1]).shape.\n\nThis method also allows multi-arity `elems` and accumulator. If `elems`\nis a (possibly nested) list or tuple of tensors, then each of these tensors\nmust have a matching first (unpack) dimension. The second argument of\n`fn` must match the structure of `elems`.\n\nIf no `initializer` is provided, the output structure and dtypes of `fn`\nare assumed to be the same as its input; and in this case, the first\nargument of `fn` must match the structure of `elems`.\n\nIf an `initializer` is provided, then the output of `fn` must have the same\nstructure as `initializer`; and the first argument of `fn` must match\nthis structure.\n\nFor example, if `elems` is `(t1, [t2, t3])` and `initializer` is\n`[i1, i2]` then an appropriate signature for `fn` in `python2` is:\n`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list,\n`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the\n one that works in `python3`, is:\n`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.\n\nArgs:\n fn: The callable to be performed. It accepts two arguments. The first will\n have the same structure as `initializer` if one is provided, otherwise it\n will have the same structure as `elems`. The second will have the same\n (possibly nested) structure as `elems`. Its output must have the same\n structure as `initializer` if one is provided, otherwise it must have the\n same structure as `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be the first argument to `fn`.\n initializer: (optional) A tensor or (possibly nested) sequence of tensors,\n initial value for the accumulator, and the expected output type of `fn`.\n parallel_iterations: (optional) The number of iterations allowed to run in\n parallel.\n back_prop: (optional) Deprecated. False disables support for back\n propagation. Prefer using `tf.stop_gradient` instead.\n swap_memory: (optional) True enables GPU-CPU memory swapping.\n infer_shape: (optional) False disables tests for consistent output shapes.\n reverse: (optional) True scans the tensor last to first (instead of first to\n last).\n name: (optional) Name prefix for the returned tensors.\n\nReturns:\n A tensor or (possibly nested) sequence of tensors. Each tensor packs the\n results of applying `fn` to tensors unpacked from `elems` along the first\n dimension, and the previous accumulator value(s), from first to last (or\n last to first, if `reverse=True`).\n\nRaises:\n TypeError: if `fn` is not callable or the structure of the output of\n `fn` and `initializer` do not match.\n ValueError: if the lengths of the output of `fn` and `initializer`\n do not match.\n\nExamples:\n ```python\n elems = np.array([1, 2, 3, 4, 5, 6])\n sum = scan(lambda a, x: a + x, elems)\n # sum == [1, 3, 6, 10, 15, 21]\n sum = scan(lambda a, x: a + x, elems, reverse=True)\n # sum == [21, 20, 18, 15, 11, 6]\n ```\n\n ```python\n elems = np.array([1, 2, 3, 4, 5, 6])\n initializer = np.array(0)\n sum_one = scan(\n lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)\n # sum_one == [1, 2, 3, 4, 5, 6]\n ```\n\n ```python\n elems = np.array([1, 0, 0, 0, 0, 0])\n initializer = (np.array(0), np.array(1))\n fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)\n # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])\n ```", "desc": "scan on the list of tensors unpacked from `elems` on dimension 0. (deprecated argument values)", "type": "API"}, {"name": "tf.scatter_nd", "docs": "Scatters `updates` into a tensor of shape `shape` according to `indices`.\n\n Update the input tensor by scattering sparse `updates` according to individual values at the specified `indices`.\n This op returns an `output` tensor with the `shape` you specify. This op is the\n inverse of the `tf.gather_nd` operator which extracts values or slices from a\n given tensor.\n\n This operation is similar to `tf.tensor_scatter_nd_add`, except that the tensor\n is zero-initialized. Calling `tf.scatter_nd(indices, values, shape)`\n is identical to calling\n `tf.tensor_scatter_nd_add(tf.zeros(shape, values.dtype), indices, values)`\n\n If `indices` contains duplicates, the duplicate `values` are accumulated\n (summed).\n\n **WARNING**: The order in which updates are applied is nondeterministic, so the\n output will be nondeterministic if `indices` contains duplicates;\n numbers summed in different order may yield different results because of some\n numerical approximation issues.\n\n `indices` is an integer tensor of shape `shape`. The last dimension\n of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices of elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`.\n\n `updates` is a tensor with shape:\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of the scatter op is to insert individual elements in\n a tensor by index. Consider an example where you want to insert 4 scattered\n elements in a rank-1 tensor with 8 elements.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n shape = tf.constant([8])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [0, 11, 0, 10, 9, 0, 0, 12]\n\n You can also insert entire slices of a higher rank tensor all at once. For\n example, you can insert two slices in the first dimension of a rank-3 tensor\n with two matrices of new values.\n\n
\n \n
\n\n In Python, this scatter operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n shape = tf.constant([4, 4, 4])\n scatter = tf.scatter_nd(indices, updates, shape)\n print(scatter)\n ```\n\n The resulting tensor would look like this:\n\n [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],\n [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],\n [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int16`, `int32`, `int64`.\n Tensor of indices.\n updates: A `Tensor`. Values to scatter into the output tensor.\n shape: A `Tensor`. Must have the same type as `indices`.\n 1-D. The shape of the output tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `updates`.\n ", "desc": "Scatters `updates` into a tensor of shape `shape` according to `indices`.", "type": "API"}, {"name": "tf.searchsorted", "docs": "Searches for where a value would go in a sorted sequence.\n\n This is not a method for checking containment (like python `in`).\n\n The typical use case for this operation is \"binning\", \"bucketing\", or\n \"discretizing\". The `values` are assigned to bucket-indices based on the\n **edges** listed in `sorted_sequence`. This operation\n returns the bucket-index for each value.\n\n >>> edges = [-1, 3.3, 9.1, 10.0]\n >>> values = [0.0, 4.1, 12.0]\n >>> tf.searchsorted(edges, values).numpy()\n array([1, 2, 4], dtype=int32)\n\n The `side` argument controls which index is returned if a value lands exactly\n on an edge:\n\n >>> seq = [0, 3, 9, 10, 10]\n >>> values = [0, 4, 10]\n >>> tf.searchsorted(seq, values).numpy()\n array([0, 2, 3], dtype=int32)\n >>> tf.searchsorted(seq, values, side=\"right\").numpy()\n array([1, 2, 5], dtype=int32)\n\n The `axis` is not settable for this operation. It always operates on the\n innermost dimension (`axis=-1`). The operation will accept any number of\n outer dimensions. Here it is applied to the rows of a matrix:\n\n >>> sorted_sequence = [[0., 3., 8., 9., 10.],\n ... [1., 2., 3., 4., 5.]]\n >>> values = [[9.8, 2.1, 4.3],\n ... [0.1, 6.6, 4.5, ]]\n >>> tf.searchsorted(sorted_sequence, values).numpy()\n array([[4, 1, 2],\n [0, 5, 4]], dtype=int32)\n\n Note: This operation assumes that `sorted_sequence` **is sorted** along the\n innermost axis, maybe using `tf.sort(..., axis=-1)`. **If the sequence is not\n sorted no error is raised** and the content of the returned tensor is not well\n defined.\n\n Args:\n sorted_sequence: N-D `Tensor` containing a sorted sequence.\n values: N-D `Tensor` containing the search values.\n side: 'left' or 'right'; 'left' corresponds to lower_bound and 'right' to\n upper_bound.\n out_type: The output type (`int32` or `int64`). Default is `tf.int32`.\n name: Optional name for the operation.\n\n Returns:\n An N-D `Tensor` the size of `values` containing the result of applying\n either lower_bound or upper_bound (depending on side) to each value. The\n result is not a global index to the entire `Tensor`, but the index in the\n last dimension.\n\n Raises:\n ValueError: If the last dimension of `sorted_sequence >= 2^31-1` elements.\n If the total size of `values` exceeds `2^31 - 1` elements.\n If the first `N-1` dimensions of the two tensors don't match.\n ", "desc": "Searches for where a value would go in a sorted sequence.", "type": "API"}, {"name": "tf.sequence_mask", "docs": "Returns a mask tensor representing the first N positions of each cell.\n\n If `lengths` has shape `[d_1, d_2, ..., d_n]` the resulting tensor `mask` has\n dtype `dtype` and shape `[d_1, d_2, ..., d_n, maxlen]`, with\n\n ```\n mask[i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\n ```\n\n Examples:\n\n ```python\n tf.sequence_mask([1, 3, 2], 5) # [[True, False, False, False, False],\n # [True, True, True, False, False],\n # [True, True, False, False, False]]\n\n tf.sequence_mask([[1, 3],[2,0]]) # [[[True, False, False],\n # [True, True, True]],\n # [[True, True, False],\n # [False, False, False]]]\n ```\n\n Args:\n lengths: integer tensor, all its values <= maxlen.\n maxlen: scalar integer tensor, size of last dimension of returned tensor.\n Default is the maximum value in `lengths`.\n dtype: output type of the resulting tensor.\n name: name of the op.\n\n Returns:\n A mask tensor of shape `lengths.shape + (maxlen,)`, cast to specified dtype.\n Raises:\n ValueError: if `maxlen` is not a scalar.\n ", "desc": "Returns a mask tensor representing the first N positions of each cell.", "type": "API"}, {"name": "tf.sets", "docs": "Tensorflow set operations.\n", "desc": "Tensorflow set operations.", "type": "API"}, {"name": "tf.sets.difference", "docs": "Compute set difference of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_difference` is applied to each aligned pair of sets.\n tf.sets.difference(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{2}, {3}], [{}, {}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 2),\n # ((0, 1, 0), 3),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n aminusb: Whether to subtract `b` from `a`, vs vice versa.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n differences.\n\n Raises:\n TypeError: If inputs are invalid types, or if `a` and `b` have\n different types.\n ValueError: If `a` is sparse and `b` is dense.\n errors_impl.InvalidArgumentError: If the shapes of `a` and `b` do not\n match in any dimension other than the last dimension.\n ", "desc": "Compute set difference of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.sets.intersection", "docs": "Compute set intersection of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # Represent the following array of sets as a sparse tensor:\n # a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2,2,2])\n\n # b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]])\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `tf.sets.intersection` is applied to each aligned pair of sets.\n tf.sets.intersection(a, b)\n\n # The result will be equivalent to either of:\n #\n # np.array([[{1}, {}], [{4}, {5, 6}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((1, 0, 0), 4),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n intersections.\n ", "desc": "Compute set intersection of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.sets.size", "docs": "Compute number of unique elements along last dimension of `a`.\n\n Args:\n a: `SparseTensor`, with indices sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a`.\n\n Returns:\n `int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with\n rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the\n number of unique elements in the corresponding `[0...n-1]` dimension of `a`.\n\n Raises:\n TypeError: If `a` is an invalid types.\n ", "desc": "Compute number of unique elements along last dimension of `a`.", "type": "API"}, {"name": "tf.sets.union", "docs": "Compute set union of elements in last dimension of `a` and `b`.\n\n All but the last dimension of `a` and `b` must match.\n\n Example:\n\n ```python\n import tensorflow as tf\n import collections\n\n # [[{1, 2}, {3}], [{4}, {5, 6}]]\n a = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 2),\n ((0, 1, 0), 3),\n ((1, 0, 0), 4),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ])\n a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),\n dense_shape=[2, 2, 2])\n\n # [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]\n b = collections.OrderedDict([\n ((0, 0, 0), 1),\n ((0, 0, 1), 3),\n ((0, 1, 0), 2),\n ((1, 0, 0), 4),\n ((1, 0, 1), 5),\n ((1, 1, 0), 5),\n ((1, 1, 1), 6),\n ((1, 1, 2), 7),\n ((1, 1, 3), 8),\n ])\n b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),\n dense_shape=[2, 2, 4])\n\n # `set_union` is applied to each aligned pair of sets.\n tf.sets.union(a, b)\n\n # The result will be a equivalent to either of:\n #\n # np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]])\n #\n # collections.OrderedDict([\n # ((0, 0, 0), 1),\n # ((0, 0, 1), 2),\n # ((0, 0, 2), 3),\n # ((0, 1, 0), 2),\n # ((0, 1, 1), 3),\n # ((1, 0, 0), 4),\n # ((1, 0, 1), 5),\n # ((1, 1, 0), 5),\n # ((1, 1, 1), 6),\n # ((1, 1, 2), 7),\n # ((1, 1, 3), 8),\n # ])\n ```\n\n Args:\n a: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices\n must be sorted in row-major order.\n b: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices\n must be sorted in row-major order.\n validate_indices: Whether to validate the order and range of sparse indices\n in `a` and `b`.\n\n Returns:\n A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but\n the last dimension the same. Elements along the last dimension contain the\n unions.\n ", "desc": "Compute set union of elements in last dimension of `a` and `b`.", "type": "API"}, {"name": "tf.shape", "docs": "Returns a tensor containing the shape of the input tensor.\n\n See also `tf.size`, `tf.rank`.\n\n `tf.shape` returns a 1-D integer tensor representing the shape of `input`.\n For a scalar input, the tensor returned has a shape of (0,) and its value is\n the empty vector (i.e. []).\n\n For example:\n\n >>> tf.shape(1.)\n \n\n >>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n >>> tf.shape(t)\n \n\n Note: When using symbolic tensors, such as when using the Keras API,\n tf.shape() will return the shape of the symbolic tensor.\n\n >>> a = tf.keras.layers.Input((None, 10))\n >>> tf.shape(a)\n <... shape=(3,) dtype=int32...>\n\n In these cases, using `tf.Tensor.shape` will return more informative results.\n\n >>> a.shape\n TensorShape([None, None, 10])\n\n (The first `None` represents the as yet unknown batch size.)\n\n `tf.shape` and `Tensor.shape` should be identical in eager mode. Within\n `tf.function` or within a `compat.v1` context, not all dimensions may be\n known until execution time. Hence when defining custom layers and models\n for graph mode, prefer the dynamic `tf.shape(x)` over the static `x.shape`.\n\n Args:\n input: A `Tensor` or `SparseTensor`.\n out_type: (Optional) The specified output type of the operation (`int32` or\n `int64`). Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Returns a tensor containing the shape of the input tensor.", "type": "API"}, {"name": "tf.shape_n", "docs": "Returns shape of tensors.\n\n Args:\n input: A list of at least 1 `Tensor` object with the same type.\n out_type: The specified output type of the operation (`int32` or `int64`).\n Defaults to `tf.int32`(optional).\n name: A name for the operation (optional).\n\n Returns:\n A list with the same length as `input` of `Tensor` objects with\n type `out_type`.\n ", "desc": "Returns shape of tensors.", "type": "API"}, {"name": "tf.sigmoid", "docs": "Computes sigmoid of `x` element-wise.\n\n Formula for calculating $\\mathrm{sigmoid}(x) = y = 1 / (1 + \\exp(-x))$.\n\n For $x \\in (-\\infty, \\infty)$, $\\mathrm{sigmoid}(x) \\in (0, 1)$.\n\n Example Usage:\n\n If a positive number is large, then its sigmoid will approach to 1 since the\n formula will be `y = / (1 + )`\n\n >>> x = tf.constant([0.0, 1.0, 50.0, 100.0])\n >>> tf.math.sigmoid(x)\n \n\n If a negative number is large, its sigmoid will approach to 0 since the\n formula will be `y = 1 / (1 + )`\n\n >>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])\n >>> tf.math.sigmoid(x)\n \n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or\n `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor with the same type as `x`.\n\n Usage Example:\n\n >>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)\n >>> tf.sigmoid(x)\n \n\n @compatibility(scipy)\n Equivalent to scipy.special.expit\n @end_compatibility\n ", "desc": "Computes sigmoid of `x` element-wise.", "type": "API"}, {"name": "tf.sign", "docs": "Returns an element-wise indication of the sign of a number.\n\n `y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0`.\n\n For complex numbers, `y = sign(x) = x / |x| if x != 0, otherwise y = 0`.\n\n Example usage:\n\n >>> # real number\n >>> tf.math.sign([0., 2., -3.])\n \n\n >>> # complex number\n >>> tf.math.sign([1 + 1j, 0 + 0j])\n \n\n Args:\n x: A Tensor. Must be one of the following types: bfloat16, half, float32,\n float64, int32, int64, complex64, complex128.\n name: A name for the operation (optional).\n\n Returns:\n A Tensor. Has the same type as x.\n\n If x is a SparseTensor, returns SparseTensor(x.indices,\n tf.math.sign(x.values, ...), x.dense_shape).\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sign(x.values, ...), x.dense_shape)`", "desc": "Returns an element-wise indication of the sign of a number.", "type": "API"}, {"name": "tf.signal", "docs": "Signal processing operations.\n\nSee the [tf.signal](https://tensorflow.org/api_guides/python/contrib.signal)\nguide.\n\n@@frame\n@@hamming_window\n@@hann_window\n@@inverse_stft\n@@inverse_stft_window_fn\n@@mfccs_from_log_mel_spectrograms\n@@linear_to_mel_weight_matrix\n@@overlap_and_add\n@@stft\n\n[hamming]: https://en.wikipedia.org/wiki/Window_function#Hamming_window\n[hann]: https://en.wikipedia.org/wiki/Window_function#Hann_window\n[mel]: https://en.wikipedia.org/wiki/Mel_scale\n[mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum\n[stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n\n", "desc": "Signal processing operations.", "type": "API"}, {"name": "tf.signal.dct", "docs": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.\n\n Types I, II, III and IV are supported.\n Type I is implemented using a length `2N` padded `tf.signal.rfft`.\n Type II is implemented using a length `2N` padded `tf.signal.rfft`, as\n described here: [Type 2 DCT using 2N FFT padded (Makhoul)]\n (https://dsp.stackexchange.com/a/10606).\n Type III is a fairly straightforward inverse of Type II\n (i.e. using a length `2N` padded `tf.signal.irfft`).\n Type IV is calculated through 2N length DCT2 of padded signal and\n picking the odd indices.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.dct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.dct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The DCT type to perform. Must be 1, 2, 3 or 4.\n n: The length of the transform. If length is less than sequence length,\n only the first n elements of the sequence are considered for the DCT.\n If n is greater than the sequence length, zeros are padded and then\n the DCT is computed as usual.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the DCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2`, `3` or `4`, `axis` is\n not `-1`, `n` is not `None` or greater than 0,\n or `norm` is not `None` or `'ortho'`.\n ValueError: If `type` is `1` and `norm` is `ortho`.\n\n [dct]: https://en.wikipedia.org/wiki/Discrete_cosine_transform\n ", "desc": "Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.", "type": "API"}, {"name": "tf.signal.fft", "docs": "Fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform over the inner-most\n dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Fast Fourier transform.", "type": "API"}, {"name": "tf.signal.fft2d", "docs": "2D fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform over the inner-most\n 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "2D fast Fourier transform.", "type": "API"}, {"name": "tf.signal.fft3d", "docs": "3D fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform over the inner-most 3\n dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "3D fast Fourier transform.", "type": "API"}, {"name": "tf.signal.fftshift", "docs": "Shift the zero-frequency component to the center of the spectrum.\n\n This function swaps half-spaces for all axes listed (defaults to all).\n Note that ``y[0]`` is the Nyquist component only if ``len(x)`` is even.\n\n @compatibility(numpy)\n Equivalent to numpy.fft.fftshift.\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftshift.html\n @end_compatibility\n\n For example:\n\n ```python\n x = tf.signal.fftshift([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.])\n x.numpy() # array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.])\n ```\n\n Args:\n x: `Tensor`, input tensor.\n axes: `int` or shape `tuple`, optional Axes over which to shift. Default is\n None, which shifts all axes.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor`, The shifted tensor.\n ", "desc": "Shift the zero-frequency component to the center of the spectrum.", "type": "API"}, {"name": "tf.signal.frame", "docs": "Expands `signal`'s `axis` dimension into frames of `frame_length`.\n\n Slides a window of size `frame_length` over `signal`'s `axis` dimension\n with a stride of `frame_step`, replacing the `axis` dimension with\n `[frames, frame_length]` frames.\n\n If `pad_end` is True, window positions that are past the end of the `axis`\n dimension are padded with `pad_value` until the window moves fully past the\n end of the dimension. Otherwise, only window positions that fully overlap the\n `axis` dimension are produced.\n\n For example:\n\n >>> # A batch size 3 tensor of 9152 audio samples.\n >>> audio = tf.random.normal([3, 9152])\n >>>\n >>> # Compute overlapping frames of length 512 with a step of 180 (frames overlap\n >>> # by 332 samples). By default, only 49 frames are generated since a frame\n >>> # with start position j*180 for j > 48 would overhang the end.\n >>> frames = tf.signal.frame(audio, 512, 180)\n >>> frames.shape.assert_is_compatible_with([3, 49, 512])\n >>>\n >>> # When pad_end is enabled, the final two frames are kept (padded with zeros).\n >>> frames = tf.signal.frame(audio, 512, 180, pad_end=True)\n >>> frames.shape.assert_is_compatible_with([3, 51, 512])\n\n If the dimension along `axis` is N, and `pad_end=False`, the number of frames\n can be computed by:\n ```python\n num_frames = 1 + (N - frame_size) // frame_step\n ```\n If `pad_end=True`, the number of frames can be computed by:\n ```python\n num_frames = -(-N // frame_step) # ceiling division\n ```\n\n Args:\n signal: A `[..., samples, ...]` `Tensor`. The rank and dimensions\n may be unknown. Rank must be at least 1.\n frame_length: The frame length in samples. An integer or scalar `Tensor`.\n frame_step: The frame hop size in samples. An integer or scalar `Tensor`.\n pad_end: Whether to pad the end of `signal` with `pad_value`.\n pad_value: An optional scalar `Tensor` to use where the input signal\n does not exist when `pad_end` is True.\n axis: A scalar integer `Tensor` indicating the axis to frame. Defaults to\n the last axis. Supports negative values for indexing from the end.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of frames with shape `[..., num_frames, frame_length, ...]`.\n\n Raises:\n ValueError: If `frame_length`, `frame_step`, `pad_value`, or `axis` are not\n scalar.\n ", "desc": "Expands `signal`'s `axis` dimension into frames of `frame_length`.", "type": "API"}, {"name": "tf.signal.hamming_window", "docs": "Generate a [Hamming][hamming] window.\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n periodic: A bool `Tensor` indicating whether to generate a periodic or\n symmetric window. Periodic windows are typically used for spectral\n analysis while symmetric windows are typically used for digital\n filter design.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n Raises:\n ValueError: If `dtype` is not a floating point type.\n\n [hamming]:\n https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows\n ", "desc": "Generate a [Hamming][hamming] window.", "type": "API"}, {"name": "tf.signal.hann_window", "docs": "Generate a [Hann window][hann].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n periodic: A bool `Tensor` indicating whether to generate a periodic or\n symmetric window. Periodic windows are typically used for spectral\n analysis while symmetric windows are typically used for digital\n filter design.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n Raises:\n ValueError: If `dtype` is not a floating point type.\n\n [hann]: https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows\n ", "desc": "Generate a [Hann window][hann].", "type": "API"}, {"name": "tf.signal.idct", "docs": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.\n\n Currently Types I, II, III, IV are supported. Type III is the inverse of\n Type II, and vice versa.\n\n Note that you must re-normalize by 1/(2n) to obtain an inverse if `norm` is\n not `'ortho'`. That is:\n `signal == idct(dct(signal)) * 0.5 / signal.shape[-1]`.\n When `norm='ortho'`, we have:\n `signal == idct(dct(signal, norm='ortho'), norm='ortho')`.\n\n @compatibility(scipy)\n Equivalent to [scipy.fftpack.idct]\n (https://docs.scipy.org/doc/scipy-1.4.0/reference/generated/scipy.fftpack.idct.html)\n for Type-I, Type-II, Type-III and Type-IV DCT.\n @end_compatibility\n\n Args:\n input: A `[..., samples]` `float32`/`float64` `Tensor` containing the\n signals to take the DCT of.\n type: The IDCT type to perform. Must be 1, 2, 3 or 4.\n n: For future expansion. The length of the transform. Must be `None`.\n axis: For future expansion. The axis to compute the DCT along. Must be `-1`.\n norm: The normalization to apply. `None` for no normalization or `'ortho'`\n for orthonormal normalization.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `float32`/`float64` `Tensor` containing the IDCT of\n `input`.\n\n Raises:\n ValueError: If `type` is not `1`, `2` or `3`, `n` is not `None, `axis` is\n not `-1`, or `norm` is not `None` or `'ortho'`.\n\n [idct]:\n https://en.wikipedia.org/wiki/Discrete_cosine_transform#Inverse_transforms\n ", "desc": "Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of `input`.", "type": "API"}, {"name": "tf.signal.ifft", "docs": "Inverse fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform over the\n inner-most dimension of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse fast Fourier transform.", "type": "API"}, {"name": "tf.signal.ifft2d", "docs": "Inverse 2D fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform over the\n inner-most 2 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 2D fast Fourier transform.", "type": "API"}, {"name": "tf.signal.ifft3d", "docs": "Inverse 3D fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform over the\n inner-most 3 dimensions of `input`.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Inverse 3D fast Fourier transform.", "type": "API"}, {"name": "tf.signal.ifftshift", "docs": "The inverse of fftshift.\n\n Although identical for even-length x,\n the functions differ by one sample for odd-length x.\n\n @compatibility(numpy)\n Equivalent to numpy.fft.ifftshift.\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.ifftshift.html\n @end_compatibility\n\n For example:\n\n ```python\n x = tf.signal.ifftshift([[ 0., 1., 2.],[ 3., 4., -4.],[-3., -2., -1.]])\n x.numpy() # array([[ 4., -4., 3.],[-2., -1., -3.],[ 1., 2., 0.]])\n ```\n\n Args:\n x: `Tensor`, input tensor.\n axes: `int` or shape `tuple` Axes over which to calculate. Defaults to None,\n which shifts all axes.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor`, The shifted tensor.\n ", "desc": "The inverse of fftshift.", "type": "API"}, {"name": "tf.signal.inverse_mdct", "docs": "Computes the inverse modified DCT of `mdcts`.\n\n To reconstruct an original waveform, the same window function should\n be used with `mdct` and `inverse_mdct`.\n\n Example usage:\n\n >>> @tf.function\n ... def compare_round_trip():\n ... samples = 1000\n ... frame_length = 400\n ... halflen = frame_length // 2\n ... waveform = tf.random.normal(dtype=tf.float32, shape=[samples])\n ... waveform_pad = tf.pad(waveform, [[halflen, 0],])\n ... mdct = tf.signal.mdct(waveform_pad, frame_length, pad_end=True,\n ... window_fn=tf.signal.vorbis_window)\n ... inverse_mdct = tf.signal.inverse_mdct(mdct,\n ... window_fn=tf.signal.vorbis_window)\n ... inverse_mdct = inverse_mdct[halflen: halflen + samples]\n ... return waveform, inverse_mdct\n >>> waveform, inverse_mdct = compare_round_trip()\n >>> np.allclose(waveform.numpy(), inverse_mdct.numpy(), rtol=1e-3, atol=1e-4)\n True\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n mdcts: A `float32`/`float64` `[..., frames, frame_length // 2]`\n `Tensor` of MDCT bins representing a batch of `frame_length // 2`-point\n MDCTs.\n window_fn: A callable that takes a frame_length and a `dtype` keyword\n argument and returns a `[frame_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, a rectangular window with a scale of\n 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct`\n followed by `inverse_mdct`, please use `tf.signal.vorbis_window`,\n `tf.signal.kaiser_bessel_derived_window` or `None`. If using another\n window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1\n and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to\n achieve perfect reconstruction.\n norm: If \"ortho\", orthonormal inverse DCT4 is performed, if it is None,\n a regular dct4 followed by scaling of `1/frame_length` is performed.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `Tensor` of `float32`/`float64` signals representing\n the inverse MDCT for each input MDCT in `mdcts` where `samples` is\n `(frames - 1) * (frame_length // 2) + frame_length`.\n\n Raises:\n ValueError: If `mdcts` is not at least rank 2.\n\n [mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform\n ", "desc": "Computes the inverse modified DCT of `mdcts`.", "type": "API"}, {"name": "tf.signal.inverse_stft", "docs": "Computes the inverse [Short-time Fourier Transform][stft] of `stfts`.\n\n To reconstruct an original waveform, a complementary window function should\n be used with `inverse_stft`. Such a window function can be constructed with\n `tf.signal.inverse_stft_window_fn`.\n Example:\n\n ```python\n frame_length = 400\n frame_step = 160\n waveform = tf.random.normal(dtype=tf.float32, shape=[1000])\n stft = tf.signal.stft(waveform, frame_length, frame_step)\n inverse_stft = tf.signal.inverse_stft(\n stft, frame_length, frame_step,\n window_fn=tf.signal.inverse_stft_window_fn(frame_step))\n ```\n\n If a custom `window_fn` is used with `tf.signal.stft`, it must be passed to\n `tf.signal.inverse_stft_window_fn`:\n\n ```python\n frame_length = 400\n frame_step = 160\n window_fn = tf.signal.hamming_window\n waveform = tf.random.normal(dtype=tf.float32, shape=[1000])\n stft = tf.signal.stft(\n waveform, frame_length, frame_step, window_fn=window_fn)\n inverse_stft = tf.signal.inverse_stft(\n stft, frame_length, frame_step,\n window_fn=tf.signal.inverse_stft_window_fn(\n frame_step, forward_window_fn=window_fn))\n ```\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n stfts: A `complex64`/`complex128` `[..., frames, fft_unique_bins]`\n `Tensor` of STFT bins representing a batch of `fft_length`-point STFTs\n where `fft_unique_bins` is `fft_length // 2 + 1`\n frame_length: An integer scalar `Tensor`. The window length in samples.\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n fft_length: An integer scalar `Tensor`. The size of the FFT that produced\n `stfts`. If not provided, uses the smallest power of 2 enclosing\n `frame_length`.\n window_fn: A callable that takes a window length and a `dtype` keyword\n argument and returns a `[window_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, no windowing is used.\n name: An optional name for the operation.\n\n Returns:\n A `[..., samples]` `Tensor` of `float32`/`float64` signals representing\n the inverse STFT for each input STFT in `stfts`.\n\n Raises:\n ValueError: If `stfts` is not at least rank 2, `frame_length` is not scalar,\n `frame_step` is not scalar, or `fft_length` is not scalar.\n\n [stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n ", "desc": "Computes the inverse [Short-time Fourier Transform][stft] of `stfts`.", "type": "API"}, {"name": "tf.signal.inverse_stft_window_fn", "docs": "Generates a window function that can be used in `inverse_stft`.\n\n Constructs a window that is equal to the forward window with a further\n pointwise amplitude correction. `inverse_stft_window_fn` is equivalent to\n `forward_window_fn` in the case where it would produce an exact inverse.\n\n See examples in `inverse_stft` documentation for usage.\n\n Args:\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n forward_window_fn: window_fn used in the forward transform, `stft`.\n name: An optional name for the operation.\n\n Returns:\n A callable that takes a window length and a `dtype` keyword argument and\n returns a `[window_length]` `Tensor` of samples in the provided datatype.\n The returned window is suitable for reconstructing original waveform in\n inverse_stft.\n ", "desc": "Generates a window function that can be used in `inverse_stft`.", "type": "API"}, {"name": "tf.signal.irfft", "docs": "Inverse real-valued fast Fourier transform.\n\n Computes the inverse 1-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most dimension of `input`.\n\n The inner-most dimension of `input` is assumed to be the result of `RFFT`: the\n `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If\n `fft_length` is not provided, it is computed from the size of the inner-most\n dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to\n compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller\n than the corresponding dimension of `input`, the dimension is cropped. If it is\n larger, the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.irfft2d", "docs": "Inverse 2D real-valued fast Fourier transform.\n\n Computes the inverse 2-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 2 dimensions of `input`.\n\n The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 2 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT2D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.irfft3d", "docs": "Inverse 3D real-valued fast Fourier transform.\n\n Computes the inverse 3-dimensional discrete Fourier transform of a real-valued\n signal over the inner-most 3 dimensions of `input`.\n\n The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`:\n The inner-most dimension contains the `fft_length / 2 + 1` unique components of\n the DFT of a real-valued signal. If `fft_length` is not provided, it is computed\n from the size of the inner-most 3 dimensions of `input`. If the FFT length used\n to compute `input` is odd, it should be provided since it cannot be inferred\n properly.\n\n Along each axis `IRFFT3D` is computed on, if `fft_length` (or\n `fft_length / 2 + 1` for the inner-most dimension) is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `complex64`, `complex128`.\n A complex tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Treal`.\n ", "desc": "Inverse 3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.kaiser_bessel_derived_window", "docs": "Generate a [Kaiser Bessel derived window][kbd].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n beta: Beta parameter for Kaiser window.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [kbd]:\n https://en.wikipedia.org/wiki/Kaiser_window#Kaiser%E2%80%93Bessel-derived_(KBD)_window\n ", "desc": "Generate a [Kaiser Bessel derived window][kbd].", "type": "API"}, {"name": "tf.signal.kaiser_window", "docs": "Generate a [Kaiser window][kaiser].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n beta: Beta parameter for Kaiser window, see reference below.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [kaiser]:\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.kaiser.html\n ", "desc": "Generate a [Kaiser window][kaiser].", "type": "API"}, {"name": "tf.signal.linear_to_mel_weight_matrix", "docs": "Returns a matrix to warp linear scale spectrograms to the [mel scale][mel].\n\n Returns a weight matrix that can be used to re-weight a `Tensor` containing\n `num_spectrogram_bins` linearly sampled frequency information from\n `[0, sample_rate / 2]` into `num_mel_bins` frequency information from\n `[lower_edge_hertz, upper_edge_hertz]` on the [mel scale][mel].\n\n This function follows the [Hidden Markov Model Toolkit\n (HTK)](http://htk.eng.cam.ac.uk/) convention, defining the mel scale in\n terms of a frequency in hertz according to the following formula:\n\n $$\\textrm{mel}(f) = 2595 * \\textrm{log}_{10}(1 + \\frac{f}{700})$$\n\n In the returned matrix, all the triangles (filterbanks) have a peak value\n of 1.0.\n\n For example, the returned matrix `A` can be used to right-multiply a\n spectrogram `S` of shape `[frames, num_spectrogram_bins]` of linear\n scale spectrum values (e.g. STFT magnitudes) to generate a \"mel spectrogram\"\n `M` of shape `[frames, num_mel_bins]`.\n\n # `S` has shape [frames, num_spectrogram_bins]\n # `M` has shape [frames, num_mel_bins]\n M = tf.matmul(S, A)\n\n The matrix can be used with `tf.tensordot` to convert an arbitrary rank\n `Tensor` of linear-scale spectral bins into the mel scale.\n\n # S has shape [..., num_spectrogram_bins].\n # M has shape [..., num_mel_bins].\n M = tf.tensordot(S, A, 1)\n\n Args:\n num_mel_bins: Python int. How many bands in the resulting mel spectrum.\n num_spectrogram_bins: An integer `Tensor`. How many bins there are in the\n source spectrogram data, which is understood to be `fft_size // 2 + 1`,\n i.e. the spectrogram only contains the nonredundant FFT bins.\n sample_rate: An integer or float `Tensor`. Samples per second of the input\n signal used to create the spectrogram. Used to figure out the frequencies\n corresponding to each spectrogram bin, which dictates how they are mapped\n into the mel scale.\n lower_edge_hertz: Python float. Lower bound on the frequencies to be\n included in the mel spectrum. This corresponds to the lower edge of the\n lowest triangular band.\n upper_edge_hertz: Python float. The desired top edge of the highest\n frequency band.\n dtype: The `DType` of the result matrix. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[num_spectrogram_bins, num_mel_bins]`.\n\n Raises:\n ValueError: If `num_mel_bins`/`num_spectrogram_bins`/`sample_rate` are not\n positive, `lower_edge_hertz` is negative, frequency edges are incorrectly\n ordered, `upper_edge_hertz` is larger than the Nyquist frequency.\n\n [mel]: https://en.wikipedia.org/wiki/Mel_scale\n ", "desc": "Returns a matrix to warp linear scale spectrograms to the [mel scale][mel].", "type": "API"}, {"name": "tf.signal.mdct", "docs": "Computes the [Modified Discrete Cosine Transform][mdct] of `signals`.\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n signals: A `[..., samples]` `float32`/`float64` `Tensor` of real-valued\n signals.\n frame_length: An integer scalar `Tensor`. The window length in samples\n which must be divisible by 4.\n window_fn: A callable that takes a frame_length and a `dtype` keyword\n argument and returns a `[frame_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, a rectangular window with a scale of\n 1/sqrt(2) is used. For perfect reconstruction of a signal from `mdct`\n followed by `inverse_mdct`, please use `tf.signal.vorbis_window`,\n `tf.signal.kaiser_bessel_derived_window` or `None`. If using another\n window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1\n and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to\n achieve perfect reconstruction.\n pad_end: Whether to pad the end of `signals` with zeros when the provided\n frame length and step produces a frame that lies partially past its end.\n norm: If it is None, unnormalized dct4 is used, if it is \"ortho\"\n orthonormal dct4 is used.\n name: An optional name for the operation.\n\n Returns:\n A `[..., frames, frame_length // 2]` `Tensor` of `float32`/`float64`\n MDCT values where `frames` is roughly `samples // (frame_length // 2)`\n when `pad_end=False`.\n\n Raises:\n ValueError: If `signals` is not at least rank 1, `frame_length` is\n not scalar, or `frame_length` is not a multiple of `4`.\n\n [mdct]: https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform\n ", "desc": "Computes the [Modified Discrete Cosine Transform][mdct] of `signals`.", "type": "API"}, {"name": "tf.signal.mfccs_from_log_mel_spectrograms", "docs": "Computes [MFCCs][mfcc] of `log_mel_spectrograms`.\n\n Implemented with GPU-compatible ops and supports gradients.\n\n [Mel-Frequency Cepstral Coefficient (MFCC)][mfcc] calculation consists of\n taking the DCT-II of a log-magnitude mel-scale spectrogram. [HTK][htk]'s MFCCs\n use a particular scaling of the DCT-II which is almost orthogonal\n normalization. We follow this convention.\n\n All `num_mel_bins` MFCCs are returned and it is up to the caller to select\n a subset of the MFCCs based on their application. For example, it is typical\n to only use the first few for speech recognition, as this results in\n an approximately pitch-invariant representation of the signal.\n\n For example:\n\n ```python\n batch_size, num_samples, sample_rate = 32, 32000, 16000.0\n # A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1].\n pcm = tf.random.normal([batch_size, num_samples], dtype=tf.float32)\n\n # A 1024-point STFT with frames of 64 ms and 75% overlap.\n stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256,\n fft_length=1024)\n spectrograms = tf.abs(stfts)\n\n # Warp the linear scale spectrograms into the mel-scale.\n num_spectrogram_bins = stfts.shape[-1].value\n lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80\n linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(\n num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,\n upper_edge_hertz)\n mel_spectrograms = tf.tensordot(\n spectrograms, linear_to_mel_weight_matrix, 1)\n mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(\n linear_to_mel_weight_matrix.shape[-1:]))\n\n # Compute a stabilized log to get log-magnitude mel-scale spectrograms.\n log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)\n\n # Compute MFCCs from log_mel_spectrograms and take the first 13.\n mfccs = tf.signal.mfccs_from_log_mel_spectrograms(\n log_mel_spectrograms)[..., :13]\n ```\n\n Args:\n log_mel_spectrograms: A `[..., num_mel_bins]` `float32`/`float64` `Tensor`\n of log-magnitude mel-scale spectrograms.\n name: An optional name for the operation.\n Returns:\n A `[..., num_mel_bins]` `float32`/`float64` `Tensor` of the MFCCs of\n `log_mel_spectrograms`.\n\n Raises:\n ValueError: If `num_mel_bins` is not positive.\n\n [mfcc]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum\n [htk]: https://en.wikipedia.org/wiki/HTK_(software)\n ", "desc": "Computes [MFCCs][mfcc] of `log_mel_spectrograms`.", "type": "API"}, {"name": "tf.signal.overlap_and_add", "docs": "Reconstructs a signal from a framed representation.\n\n Adds potentially overlapping frames of a signal with shape\n `[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`.\n The resulting tensor has shape `[..., output_size]` where\n\n output_size = (frames - 1) * frame_step + frame_length\n\n Args:\n signal: A [..., frames, frame_length] `Tensor`. All dimensions may be\n unknown, and rank must be at least 2.\n frame_step: An integer or scalar `Tensor` denoting overlap offsets. Must be\n less than or equal to `frame_length`.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` with shape `[..., output_size]` containing the overlap-added\n frames of `signal`'s inner-most two dimensions.\n\n Raises:\n ValueError: If `signal`'s rank is less than 2, or `frame_step` is not a\n scalar integer.\n ", "desc": "Reconstructs a signal from a framed representation.", "type": "API"}, {"name": "tf.signal.rfft", "docs": "Real-valued fast Fourier transform.\n\n Computes the 1-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most dimension of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the\n `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term,\n followed by the `fft_length / 2` positive-frequency terms.\n\n Along the axis `RFFT` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [1]. The FFT length.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "Real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.rfft2d", "docs": "2D real-valued fast Fourier transform.\n\n Computes the 2-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 2 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [2]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "2D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.rfft3d", "docs": "3D real-valued fast Fourier transform.\n\n Computes the 3-dimensional discrete Fourier transform of a real-valued signal\n over the inner-most 3 dimensions of `input`.\n\n Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the\n `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension\n of `output`: the zero-frequency term, followed by the `fft_length / 2`\n positive-frequency terms.\n\n Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the\n corresponding dimension of `input`, the dimension is cropped. If it is larger,\n the dimension is padded with zeros.\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n A float32 tensor.\n fft_length: A `Tensor` of type `int32`.\n An int32 tensor of shape [3]. The FFT length for each dimension.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `Tcomplex`.\n ", "desc": "3D real-valued fast Fourier transform.", "type": "API"}, {"name": "tf.signal.stft", "docs": "Computes the [Short-time Fourier Transform][stft] of `signals`.\n\n Implemented with TPU/GPU-compatible ops and supports gradients.\n\n Args:\n signals: A `[..., samples]` `float32`/`float64` `Tensor` of real-valued\n signals.\n frame_length: An integer scalar `Tensor`. The window length in samples.\n frame_step: An integer scalar `Tensor`. The number of samples to step.\n fft_length: An integer scalar `Tensor`. The size of the FFT to apply.\n If not provided, uses the smallest power of 2 enclosing `frame_length`.\n window_fn: A callable that takes a window length and a `dtype` keyword\n argument and returns a `[window_length]` `Tensor` of samples in the\n provided datatype. If set to `None`, no windowing is used.\n pad_end: Whether to pad the end of `signals` with zeros when the provided\n frame length and step produces a frame that lies partially past its end.\n name: An optional name for the operation.\n\n Returns:\n A `[..., frames, fft_unique_bins]` `Tensor` of `complex64`/`complex128`\n STFT values where `fft_unique_bins` is `fft_length // 2 + 1` (the unique\n components of the FFT).\n\n Raises:\n ValueError: If `signals` is not at least rank 1, `frame_length` is\n not scalar, or `frame_step` is not scalar.\n\n [stft]: https://en.wikipedia.org/wiki/Short-time_Fourier_transform\n ", "desc": "Computes the [Short-time Fourier Transform][stft] of `signals`.", "type": "API"}, {"name": "tf.signal.vorbis_window", "docs": "Generate a [Vorbis power complementary window][vorbis].\n\n Args:\n window_length: A scalar `Tensor` indicating the window length to generate.\n dtype: The data type to produce. Must be a floating point type.\n name: An optional name for the operation.\n\n Returns:\n A `Tensor` of shape `[window_length]` of type `dtype`.\n\n [vorbis]:\n https://en.wikipedia.org/wiki/Modified_discrete_cosine_transform#Window_functions\n ", "desc": "Generate a [Vorbis power complementary window][vorbis].", "type": "API"}, {"name": "tf.sin", "docs": "Computes sine of x element-wise.\n\n Given an input tensor, this function computes sine of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `[-1,1]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10, float(\"inf\")])\n tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes sine of x element-wise.", "type": "API"}, {"name": "tf.sinh", "docs": "Computes hyperbolic sine of x element-wise.\n\n Given an input tensor, this function computes hyperbolic sine of every\n element in the tensor. Input range is `[-inf,inf]` and output range\n is `[-inf,inf]`.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 2, 10, float(\"inf\")])\n tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes hyperbolic sine of x element-wise.", "type": "API"}, {"name": "tf.size", "docs": "Returns the size of a tensor.\n\n See also `tf.shape`.\n\n Returns a 0-D `Tensor` representing the number of elements in `input`\n of type `out_type`. Defaults to tf.int32.\n\n For example:\n\n >>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])\n >>> tf.size(t)\n \n\n Args:\n input: A `Tensor` or `SparseTensor`.\n name: A name for the operation (optional).\n out_type: (Optional) The specified non-quantized numeric output type of the\n operation. Defaults to `tf.int32`.\n\n Returns:\n A `Tensor` of type `out_type`. Defaults to `tf.int32`.\n\n @compatibility(numpy)\n Equivalent to np.size()\n @end_compatibility\n ", "desc": "Returns the size of a tensor.", "type": "API"}, {"name": "tf.slice", "docs": "Extracts a slice from a tensor.\n\n See also `tf.strided_slice`.\n\n This operation extracts a slice of size `size` from a tensor `input_` starting\n at the location specified by `begin`. The slice `size` is represented as a\n tensor shape, where `size[i]` is the number of elements of the 'i'th dimension\n of `input_` that you want to slice. The starting location (`begin`) for the\n slice is represented as an offset in each dimension of `input_`. In other\n words, `begin[i]` is the offset into the i'th dimension of `input_` that you\n want to slice from.\n\n Note that `tf.Tensor.__getitem__` is typically a more pythonic way to\n perform slices, as it allows you to write `foo[3:7, :-2]` instead of\n `tf.slice(foo, [3, 0], [4, foo.get_shape()[1]-2])`.\n\n `begin` is zero-based; `size` is one-based. If `size[i]` is -1,\n all remaining elements in dimension i are included in the\n slice. In other words, this is equivalent to setting:\n\n `size[i] = input_.dim_size(i) - begin[i]`\n\n This operation requires that:\n\n `0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`\n\n For example:\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]],\n [[3, 3, 3], [4, 4, 4]],\n [[5, 5, 5], [6, 6, 6]]])\n tf.slice(t, [1, 0, 0], [1, 1, 3]) # [[[3, 3, 3]]]\n tf.slice(t, [1, 0, 0], [1, 2, 3]) # [[[3, 3, 3],\n # [4, 4, 4]]]\n tf.slice(t, [1, 0, 0], [2, 1, 3]) # [[[3, 3, 3]],\n # [[5, 5, 5]]]\n ```\n\n Args:\n input_: A `Tensor`.\n begin: An `int32` or `int64` `Tensor`.\n size: An `int32` or `int64` `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` the same type as `input_`.\n ", "desc": "Extracts a slice from a tensor.", "type": "API"}, {"name": "tf.sort", "docs": "Sorts a tensor.\n\n Usage:\n\n >>> a = [1, 10, 26.9, 2.8, 166.32, 62.3]\n >>> tf.sort(a).numpy()\n array([ 1. , 2.8 , 10. , 26.9 , 62.3 , 166.32], dtype=float32)\n\n >>> tf.sort(a, direction='DESCENDING').numpy()\n array([166.32, 62.3 , 26.9 , 10. , 2.8 , 1. ], dtype=float32)\n\n For multidimensional inputs you can control which axis the sort is applied\n along. The default `axis=-1` sorts the innermost axis.\n\n >>> mat = [[3,2,1],\n ... [2,1,3],\n ... [1,3,2]]\n >>> tf.sort(mat, axis=-1).numpy()\n array([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]], dtype=int32)\n >>> tf.sort(mat, axis=0).numpy()\n array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]], dtype=int32)\n\n See also:\n\n * `tf.argsort`: Like sort, but it returns the sort indices.\n * `tf.math.top_k`: A partial sort that returns a fixed number of top values\n and corresponding indices.\n\n\n Args:\n values: 1-D or higher **numeric** `Tensor`.\n axis: The axis along which to sort. The default is -1, which sorts the last\n axis.\n direction: The direction in which to sort the values (`'ASCENDING'` or\n `'DESCENDING'`).\n name: Optional name for the operation.\n\n Returns:\n A `Tensor` with the same dtype and shape as `values`, with the elements\n sorted along the given `axis`.\n\n Raises:\n tf.errors.InvalidArgumentError: If the `values.dtype` is not a `float` or\n `int` type.\n ValueError: If axis is not a constant scalar, or the direction is invalid.\n ", "desc": "Sorts a tensor.", "type": "API"}, {"name": "tf.space_to_batch", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.space_to_batch_nd", "docs": "SpaceToBatch for N-D tensors of type T.\n\n This operation divides \"spatial\" dimensions `[1, ..., M]` of the input into a\n grid of blocks of shape `block_shape`, and interleaves these blocks with the\n \"batch\" dimension (0) such that in the output, the spatial dimensions\n `[1, ..., M]` correspond to the position within the grid, and the batch\n dimension combines both the position within a spatial block and the original\n batch position. Prior to division into blocks, the spatial dimensions of the\n input are optionally zero padded according to `paddings`. See below for a\n precise description.\n\n This operation is equivalent to the following steps:\n\n 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the\n input according to `paddings` to produce `padded` of shape `padded_shape`.\n\n 2. Reshape `padded` to `reshaped_padded` of shape:\n\n [batch] +\n [padded_shape[1] / block_shape[0],\n block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1],\n block_shape[M-1]] +\n remaining_shape\n\n 3. Permute dimensions of `reshaped_padded` to produce\n `permuted_reshaped_padded` of shape:\n\n block_shape +\n [batch] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch\n dimension, producing an output tensor of shape:\n\n [batch * prod(block_shape)] +\n [padded_shape[1] / block_shape[0],\n ...,\n padded_shape[M] / block_shape[M-1]] +\n remaining_shape\n\n Some examples:\n\n (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2]], [[3], [4]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 1]` and value:\n\n ```\n [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]\n ```\n\n (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1, 2, 3], [4, 5, 6]],\n [[7, 8, 9], [10, 11, 12]]]]\n ```\n\n The output tensor has shape `[4, 1, 1, 3]` and value:\n\n ```\n [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]\n ```\n\n (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and\n `paddings = [[0, 0], [0, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[4, 2, 2, 1]` and value:\n\n ```\n x = [[[[1], [3]], [[9], [11]]],\n [[[2], [4]], [[10], [12]]],\n [[[5], [7]], [[13], [15]]],\n [[[6], [8]], [[14], [16]]]]\n ```\n\n (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and\n paddings = `[[0, 0], [2, 0]]`:\n\n ```\n x = [[[[1], [2], [3], [4]],\n [[5], [6], [7], [8]]],\n [[[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]]\n ```\n\n The output tensor has shape `[8, 1, 3, 1]` and value:\n\n ```\n x = [[[[0], [1], [3]]], [[[0], [9], [11]]],\n [[[0], [2], [4]]], [[[0], [10], [12]]],\n [[[0], [5], [7]]], [[[0], [13], [15]]],\n [[[0], [6], [8]]], [[[0], [14], [16]]]]\n ```\n\n Among others, this operation is useful for reducing atrous convolution into\n regular convolution.\n\n Args:\n input: A `Tensor`.\n N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,\n where spatial_shape has `M` dimensions.\n block_shape: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D with shape `[M]`, all values must be >= 1.\n paddings: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 2-D with shape `[M, 2]`, all values must be >= 0.\n `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension\n `i + 1`, which corresponds to spatial dimension `i`. It is required that\n `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "SpaceToBatch for N-D tensors of type T.", "type": "API"}, {"name": "tf.sparse", "docs": "Sparse Tensor Representation.\n\nSee also `tf.sparse.SparseTensor`.\n\n", "desc": "Sparse Tensor Representation.", "type": "API"}, {"name": "tf.sparse.add", "docs": "Adds two tensors, at least one of each is a `SparseTensor`.\n\n If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If\n both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order\n of arguments does not matter. Use vanilla `tf.add()` for adding two dense\n `Tensor`s.\n\n The shapes of the two operands must match: broadcasting is not supported.\n\n The indices of any input `SparseTensor` are assumed ordered in standard\n lexicographic order. If this is not the case, before this step run\n `SparseReorder` to restore index ordering.\n\n If both arguments are sparse, we perform \"clipping\" as follows. By default,\n if two values sum to zero at some index, the output `SparseTensor` would still\n include that particular location in its index, storing a zero in the\n corresponding value slot. To override this, callers can specify `threshold`,\n indicating that if the sum has a magnitude strictly smaller than `threshold`,\n its corresponding value and index would then not be included. In particular,\n `threshold == 0.0` (default) means everything is kept and actual thresholding\n happens only for a positive value.\n\n For example, suppose the logical sum of two sparse operands is (densified):\n\n [ 2]\n [.1 0]\n [ 6 -.2]\n\n Then,\n\n * `threshold == 0` (the default): all 5 index/value pairs will be\n returned.\n * `threshold == 0.11`: only .1 and 0 will vanish, and the remaining three\n index/value pairs will be returned.\n * `threshold == 0.21`: .1, 0, and -.2 will vanish.\n\n Args:\n a: The first operand; `SparseTensor` or `Tensor`.\n b: The second operand; `SparseTensor` or `Tensor`. At least one operand\n must be sparse.\n threshold: A 0-D `Tensor`. The magnitude threshold that determines if an\n output value/index pair takes space. Its dtype should match that of the\n values if they are real; if the latter are complex64/complex128, then the\n dtype should be float32/float64, correspondingly.\n\n Returns:\n A `SparseTensor` or a `Tensor`, representing the sum.\n\n Raises:\n TypeError: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.\n ", "desc": "Adds two tensors, at least one of each is a `SparseTensor`.", "type": "API"}, {"name": "tf.sparse.bincount", "docs": "Count the number of times an integer value appears in a tensor.\n\n This op takes an N-dimensional `Tensor`, `RaggedTensor`, or `SparseTensor`,\n and returns an N-dimensional int64 SparseTensor where element\n `[i0...i[axis], j]` contains the number of times the value `j` appears in\n slice `[i0...i[axis], :]` of the input tensor. Currently, only N=0 and\n N=-1 are supported.\n\n Args:\n values: A Tensor, RaggedTensor, or SparseTensor whose values should be\n counted. These tensors must have a rank of 2 if `axis=-1`.\n weights: If non-None, must be the same shape as arr. For each value in\n `value`, the bin will be incremented by the corresponding weight instead\n of 1.\n axis: The axis to slice over. Axes at and below `axis` will be flattened\n before bin counting. Currently, only `0`, and `-1` are supported. If None,\n all axes will be flattened (identical to passing `0`).\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `values` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n binary_output: If True, this op will output 1 instead of the number of times\n a token appears (equivalent to one_hot + reduce_any instead of one_hot +\n reduce_add). Defaults to False.\n name: A name for this op.\n\n Returns:\n A SparseTensor with `output.shape = values.shape[:axis] + [N]`, where `N` is\n * `maxlength` (if set);\n * `minlength` (if set, and `minlength > reduce_max(values)`);\n * `0` (if `values` is empty);\n * `reduce_max(values) + 1` otherwise.\n\n\n Examples:\n\n **Bin-counting every item in individual batches**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where the value of (i,j) is the\n number of times value j appears in batch i.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(data, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([1 2 1 2 1 1], shape=(6,), dtype=int64),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n **Bin-counting with defined output shape**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where the value of (i,j) is the\n number of times value j appears in batch i. However, all values of j\n above 'maxlength' are ignored. The dense_shape of the output sparse tensor\n is set to 'minlength'. Note that, while the input is identical to the\n example above, the value '10001' in batch item 2 is dropped, and the\n dense shape is [2, 500] instead of [2,10002] or [2, 102].\n\n >>> minlength = maxlength = 500\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(\n ... data, axis=-1, minlength=minlength, maxlength=maxlength)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]], shape=(5, 2), dtype=int64),\n values=tf.Tensor([1 2 1 2 1], shape=(5,), dtype=int64),\n dense_shape=tf.Tensor([ 2 500], shape=(2,), dtype=int64))\n\n **Binary bin-counting**\n\n This example takes an input (which could be a Tensor, RaggedTensor, or\n SparseTensor) and returns a SparseTensor where (i,j) is 1 if the value j\n appears in batch i at least once and is 0 otherwise. Note that, even though\n some values (like 20 in batch 1 and 11 in batch 2) appear more than once,\n the 'values' tensor is all 1s.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> output = tf.sparse.bincount(data, binary_output=True, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([1 1 1 1 1 1], shape=(6,), dtype=int64),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n **Weighted bin-counting**\n\n This example takes two inputs - a values tensor and a weights tensor. These\n tensors must be identically shaped, and have the same row splits or indices\n in the case of RaggedTensors or SparseTensors. When performing a weighted\n count, the op will output a SparseTensor where the value of (i, j) is the\n sum of the values in the weight tensor's batch i in the locations where\n the values tensor has the value j. In this case, the output dtype is the\n same as the dtype of the weights tensor.\n\n >>> data = np.array([[10, 20, 30, 20], [11, 101, 11, 10001]], dtype=np.int64)\n >>> weights = [[2, 0.25, 15, 0.5], [2, 17, 3, 0.9]]\n >>> output = tf.sparse.bincount(data, weights=weights, axis=-1)\n >>> print(output)\n SparseTensor(indices=tf.Tensor(\n [[ 0 10]\n [ 0 20]\n [ 0 30]\n [ 1 11]\n [ 1 101]\n [ 1 10001]], shape=(6, 2), dtype=int64),\n values=tf.Tensor([2. 0.75 15. 5. 17. 0.9], shape=(6,), dtype=float32),\n dense_shape=tf.Tensor([ 2 10002], shape=(2,), dtype=int64))\n\n ", "desc": "Count the number of times an integer value appears in a tensor.", "type": "API"}, {"name": "tf.sparse.concat", "docs": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)\n\nDeprecated: SOME ARGUMENTS ARE DEPRECATED: `(concat_dim)`. They will be removed in a future version.\nInstructions for updating:\nconcat_dim is deprecated, use axis instead\n\nConcatenation is with respect to the dense versions of each sparse input.\nIt is assumed that each inputs is a `SparseTensor` whose elements are ordered\nalong increasing dimension number.\n\nIf expand_nonconcat_dim is False, all inputs' shapes must match, except for\nthe concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are\nallowed to vary among all inputs.\n\nThe `indices`, `values`, and `shapes` lists must have the same length.\n\nIf expand_nonconcat_dim is False, then the output shape is identical to the\ninputs', except along the concat dimension, where it is the sum of the inputs'\nsizes along that dimension.\n\nIf expand_nonconcat_dim is True, then the output shape along the non-concat\ndimensions will be expand to be the largest among all inputs, and it is the\nsum of the inputs sizes along the concat dimension.\n\nThe output elements will be resorted to preserve the sort order along\nincreasing dimension number.\n\nThis op runs in `O(M log M)` time, where `M` is the total number of non-empty\nvalues across all inputs. This is due to the need for an internal sort in\norder to concatenate efficiently across an arbitrary dimension.\n\nFor example, if `axis = 1` and the inputs are\n\n sp_inputs[0]: shape = [2, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nthen the output will be\n\n shape = [2, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b c ] [ ] [b c ]\n\nAnother example, if 'axis = 1' and the inputs are\n\n sp_inputs[0]: shape = [3, 3]\n [0, 2]: \"a\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\n sp_inputs[1]: shape = [2, 4]\n [0, 1]: \"d\"\n [0, 2]: \"e\"\n\nif expand_nonconcat_dim = False, this will result in an error. But if\nexpand_nonconcat_dim = True, this will result in:\n\n shape = [3, 7]\n [0, 2]: \"a\"\n [0, 4]: \"d\"\n [0, 5]: \"e\"\n [1, 0]: \"b\"\n [2, 1]: \"c\"\n\nGraphically this is equivalent to doing\n\n [ a] concat [ d e ] = [ a d e ]\n [b ] [ ] [b ]\n [ c ] [ c ]\n\n\nArgs:\n axis: Dimension to concatenate along. Must be in range [-rank, rank),\n where rank is the number of dimensions in each input `SparseTensor`.\n sp_inputs: List of `SparseTensor` to concatenate.\n name: A name prefix for the returned tensors (optional).\n expand_nonconcat_dim: Whether to allow the expansion in the non-concat\n dimensions. Defaulted to False.\n concat_dim: The old (deprecated) name for axis.\n expand_nonconcat_dims: alias for expand_nonconcat_dim\n\nReturns:\n A `SparseTensor` with the concatenated output.\n\nRaises:\n TypeError: If `sp_inputs` is not a list of `SparseTensor`.", "desc": "Concatenates a list of `SparseTensor` along the specified dimension. (deprecated arguments)", "type": "API"}, {"name": "tf.sparse.cross", "docs": "Generates sparse cross from a list of sparse and dense tensors.\n\n For example, if the inputs are\n\n * inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n * inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n * inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be:\n\n shape = [2, 2]\n [0, 0]: \"a_X_d_X_f\"\n [1, 0]: \"b_X_e_X_g\"\n [1, 1]: \"c_X_e_X_g\"\n\n Customized separator \"_Y_\":\n\n >>> inp_0 = tf.constant([['a'], ['b']])\n >>> inp_1 = tf.constant([['c'], ['d']])\n >>> output = tf.sparse.cross([inp_0, inp_1], separator='_Y_')\n >>> output.values\n \n\n\n Args:\n inputs: An iterable of `Tensor` or `SparseTensor`.\n name: Optional name for the op.\n separator: A string added between each string being joined. Defaults to\n '_X_'.\n\n Returns:\n A `SparseTensor` of type `string`.\n ", "desc": "Generates sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.sparse.cross_hashed", "docs": "Generates hashed sparse cross from a list of sparse and dense tensors.\n\n For example, if the inputs are\n\n * inputs[0]: SparseTensor with shape = [2, 2]\n [0, 0]: \"a\"\n [1, 0]: \"b\"\n [1, 1]: \"c\"\n * inputs[1]: SparseTensor with shape = [2, 1]\n [0, 0]: \"d\"\n [1, 0]: \"e\"\n * inputs[2]: Tensor [[\"f\"], [\"g\"]]\n\n then the output will be:\n\n shape = [2, 2]\n [0, 0]: FingerprintCat64(\n Fingerprint64(\"f\"), FingerprintCat64(\n Fingerprint64(\"d\"), Fingerprint64(\"a\")))\n [1, 0]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"b\")))\n [1, 1]: FingerprintCat64(\n Fingerprint64(\"g\"), FingerprintCat64(\n Fingerprint64(\"e\"), Fingerprint64(\"c\")))\n\n Args:\n inputs: An iterable of `Tensor` or `SparseTensor`.\n num_buckets: An `int` that is `>= 0`.\n output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.\n hash_key: Integer hash_key that will be used by the `FingerprintCat64`\n function. If not given, will use a default key.\n name: Optional name for the op.\n\n Returns:\n A `SparseTensor` of type `int64`.\n ", "desc": "Generates hashed sparse cross from a list of sparse and dense tensors.", "type": "API"}, {"name": "tf.sparse.expand_dims", "docs": "Returns a tensor with an length 1 axis inserted at index `axis`.\n\n Given a tensor `input`, this operation inserts a dimension of length 1 at the\n dimension index `axis` of `input`'s shape. The dimension index follows python\n indexing rules: It's zero-based, a negative index it is counted backward\n from the end.\n\n This operation is useful to:\n\n * Add an outer \"batch\" dimension to a single element.\n * Align axes for broadcasting.\n * To add an inner vector length axis to a tensor of scalars.\n\n For example:\n\n If you have a sparse tensor with shape `[height, width, depth]`:\n\n >>> sp = tf.sparse.SparseTensor(indices=[[3,4,1]], values=[7,],\n ... dense_shape=[10,10,3])\n\n You can add an outer `batch` axis by passing `axis=0`:\n\n >>> tf.sparse.expand_dims(sp, axis=0).shape.as_list()\n [1, 10, 10, 3]\n\n The new axis location matches Python `list.insert(axis, 1)`:\n\n >>> tf.sparse.expand_dims(sp, axis=1).shape.as_list()\n [10, 1, 10, 3]\n\n Following standard python indexing rules, a negative `axis` counts from the\n end so `axis=-1` adds an inner most dimension:\n\n >>> tf.sparse.expand_dims(sp, axis=-1).shape.as_list()\n [10, 10, 3, 1]\n\n Note: Unlike `tf.expand_dims` this function includes a default value for the\n `axis`: `-1`. So if `axis is not specified, an inner dimension is added.\n\n >>> sp.shape.as_list()\n [10, 10, 3]\n >>> tf.sparse.expand_dims(sp).shape.as_list()\n [10, 10, 3, 1]\n\n This operation requires that `axis` is a valid index for `input.shape`,\n following python indexing rules:\n\n ```\n -1-tf.rank(input) <= axis <= tf.rank(input)\n ```\n\n This operation is related to:\n\n * `tf.expand_dims`, which provides this functionality for dense tensors.\n * `tf.squeeze`, which removes dimensions of size 1, from dense tensors.\n * `tf.sparse.reshape`, which provides more flexible reshaping capability.\n\n Args:\n sp_input: A `SparseTensor`.\n axis: 0-D (scalar). Specifies the dimension index at which to expand the\n shape of `input`. Must be in the range `[-rank(sp_input) - 1,\n rank(sp_input)]`. Defaults to `-1`.\n name: The name of the output `SparseTensor`.\n\n Returns:\n A `SparseTensor` with the same data as `sp_input`, but its shape has an\n additional dimension of size 1 added.\n ", "desc": "Returns a tensor with an length 1 axis inserted at index `axis`.", "type": "API"}, {"name": "tf.sparse.eye", "docs": "Creates a two-dimensional sparse tensor with ones along the diagonal.\n\n Args:\n num_rows: Non-negative integer or `int32` scalar `tensor` giving the number\n of rows in the resulting matrix.\n num_columns: Optional non-negative integer or `int32` scalar `tensor` giving\n the number of columns in the resulting matrix. Defaults to `num_rows`.\n dtype: The type of element in the resulting `Tensor`.\n name: A name for this `Op`. Defaults to \"eye\".\n\n Returns:\n A `SparseTensor` of shape [num_rows, num_columns] with ones along the\n diagonal.\n ", "desc": "Creates a two-dimensional sparse tensor with ones along the diagonal.", "type": "API"}, {"name": "tf.sparse.fill_empty_rows", "docs": "Fills empty rows in the input 2-D `SparseTensor` with a default value.\n\n This op adds entries with the specified `default_value` at index\n `[row, 0]` for any row in the input that does not already have a value.\n\n For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:\n\n [0, 1]: a\n [0, 3]: b\n [1, 0]: default_value\n [2, 0]: c\n [3, 1]: d\n [4, 0]: default_value\n\n Note that the input may have empty columns at the end, with no effect on\n this op.\n\n The output `SparseTensor` will be in row-major order and will have the\n same shape as the input.\n\n This op also returns an indicator vector such that\n\n empty_row_indicator[i] = True iff row i was an empty row.\n\n Args:\n sp_input: A `SparseTensor` with shape `[N, M]`.\n default_value: The value to fill for empty rows, with the same type as\n `sp_input.`\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n sp_ordered_output: A `SparseTensor` with shape `[N, M]`, and with all empty\n rows filled in with `default_value`.\n empty_row_indicator: A bool vector of length `N` indicating whether each\n input row was empty.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Fills empty rows in the input 2-D `SparseTensor` with a default value.", "type": "API"}, {"name": "tf.sparse.from_dense", "docs": "Converts a dense tensor into a sparse tensor.\n\n Only elements not equal to zero will be present in the result. The resulting\n `SparseTensor` has the same dtype and shape as the input.\n\n >>> sp = tf.sparse.from_dense([0, 0, 3, 0, 1])\n >>> sp.shape.as_list()\n [5]\n >>> sp.values.numpy()\n array([3, 1], dtype=int32)\n >>> sp.indices.numpy()\n array([[2],\n [4]])\n\n Args:\n tensor: A dense `Tensor` to be converted to a `SparseTensor`.\n name: Optional name for the op.\n\n Returns:\n The `SparseTensor`.\n ", "desc": "Converts a dense tensor into a sparse tensor.", "type": "API"}, {"name": "tf.sparse.map_values", "docs": "Applies `op` to the `.values` tensor of one or more `SparseTensor`s.\n\n Replaces any `SparseTensor` in `args` or `kwargs` with its `values`\n tensor (which contains the non-default values for the SparseTensor),\n and then calls `op`. Returns a `SparseTensor` that is constructed\n from the input `SparseTensor`s' `indices`, `dense_shape`, and the\n value returned by the `op`.\n\n If the input arguments contain multiple `SparseTensor`s, then they must have\n equal `indices` and dense shapes.\n\n Examples:\n\n >>> s = tf.sparse.from_dense([[1, 2, 0],\n ... [0, 4, 0],\n ... [1, 0, 0]])\n >>> tf.sparse.to_dense(tf.sparse.map_values(tf.ones_like, s)).numpy()\n array([[1, 1, 0],\n [0, 1, 0],\n [1, 0, 0]], dtype=int32)\n\n >>> tf.sparse.to_dense(tf.sparse.map_values(tf.multiply, s, s)).numpy()\n array([[ 1, 4, 0],\n [ 0, 16, 0],\n [ 1, 0, 0]], dtype=int32)\n\n >>> tf.sparse.to_dense(tf.sparse.map_values(tf.add, s, 5)).numpy()\n array([[6, 7, 0],\n [0, 9, 0],\n [6, 0, 0]], dtype=int32)\n\n Note: even though `tf.add(0, 5) != 0`, implicit zeros\n will remain unchanged. However, if the sparse tensor contains any explict\n zeros, these will be affected by the mapping!\n\n Args:\n op: The operation that should be applied to the SparseTensor `values`. `op`\n is typically an element-wise operation (such as math_ops.add), but any\n operation that preserves the shape can be used.\n *args: Arguments for `op`.\n **kwargs: Keyword arguments for `op`.\n\n Returns:\n A `SparseTensor` whose `indices` and `dense_shape` matches the `indices`\n and `dense_shape` of all input `SparseTensor`s.\n Raises:\n ValueError: If args contains no `SparseTensor`, or if the `indices`\n or `dense_shape`s of the input `SparseTensor`s are not equal.\n ", "desc": "Applies `op` to the `.values` tensor of one or more `SparseTensor`s.", "type": "API"}, {"name": "tf.sparse.mask", "docs": "Masks elements of `IndexedSlices`.\n\n Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that\n contains a subset of the slices of `a`. Only the slices at indices not\n specified in `mask_indices` are returned.\n\n This is useful when you need to extract a subset of slices in an\n `IndexedSlices` object.\n\n For example:\n\n ```python\n # `a` contains slices at indices [12, 26, 37, 45] from a large tensor\n # with shape [1000, 10]\n a.indices # [12, 26, 37, 45]\n tf.shape(a.values) # [4, 10]\n\n # `b` will be the subset of `a` slices at its second and third indices, so\n # we want to mask its first and last indices (which are at absolute\n # indices 12, 45)\n b = tf.sparse.mask(a, [12, 45])\n\n b.indices # [26, 37]\n tf.shape(b.values) # [2, 10]\n ```\n\n Args:\n a: An `IndexedSlices` instance.\n mask_indices: Indices of elements to mask.\n name: A name for the operation (optional).\n\n Returns:\n The masked `IndexedSlices` instance.\n ", "desc": "Masks elements of `IndexedSlices`.", "type": "API"}, {"name": "tf.sparse.maximum", "docs": "Returns the element-wise max of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.maximum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n The reduction version of this elementwise operation is `tf.sparse.reduce_max`\n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise max of two SparseTensors.", "type": "API"}, {"name": "tf.sparse.minimum", "docs": "Returns the element-wise min of two SparseTensors.\n\n Assumes the two SparseTensors have the same shape, i.e., no broadcasting.\n\n Example:\n\n >>> sp_zero = tf.sparse.SparseTensor([[0]], [0], [7])\n >>> sp_one = tf.sparse.SparseTensor([[1]], [1], [7])\n >>> res = tf.sparse.minimum(sp_zero, sp_one)\n >>> res.indices\n \n >>> res.values\n \n >>> res.dense_shape\n \n\n Args:\n sp_a: a `SparseTensor` operand whose dtype is real, and indices\n lexicographically ordered.\n sp_b: the other `SparseTensor` operand with the same requirements (and the\n same shape).\n name: optional name of the operation.\n Returns:\n output: the output SparseTensor.\n ", "desc": "Returns the element-wise min of two SparseTensors.", "type": "API"}, {"name": "tf.sparse.reduce_max", "docs": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor.\n\n This is the reduction operation for the elementwise `tf.sparse.maximum` op.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_max()`. In particular, this Op also returns a dense `Tensor`\n if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse`\n is `True`.\n\n Note: A gradient is not defined for this function, so it can't be used\n in training models that need gradient descent.\n\n Reduces `sp_input` along the dimensions given in `axis`. Unless\n `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in\n `axis`. If `keepdims` is true, the reduced dimensions are retained\n with length 1.\n\n If `axis` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n similar to the indexing rules in Python.\n\n The values not defined in `sp_input` don't participate in the reduce max,\n as opposed to be implicitly assumed 0 -- hence it can return negative values\n for sparse `axis`. But, in case there are no values in\n `axis`, it will reduce to 0. See second example below.\n\n For example:\n\n # 'x' represents [[1, ?, 2]\n # [?, 3, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 2, 3], [2, 3])\n >>> tf.sparse.reduce_max(x)\n \n >>> tf.sparse.reduce_max(x, 0)\n \n >>> tf.sparse.reduce_max(x, 1)\n \n >>> tf.sparse.reduce_max(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_max(x, [0, 1])\n \n\n # 'y' represents [[-7, ?]\n # [ 4, 3]\n # [ ?, ?]\n\n >>> y = tf.sparse.SparseTensor([[0, 0,], [1, 0], [1, 1]], [-7, 4, 3],\n ... [3, 2])\n >>> tf.sparse.reduce_max(y, 1)\n \n\n Args:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n output_is_sparse: If true, returns a `SparseTensor` instead of a dense\n `Tensor` (the default).\n name: A name for the operation (optional).\n\n Returns:\n The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is\n True.\n ", "desc": "Computes `tf.sparse.maximum` of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.sparse.reduce_sum", "docs": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor.\n\n This is the reduction operation for the elementwise `tf.sparse.add` op.\n\n This Op takes a SparseTensor and is the sparse counterpart to\n `tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`\n if `output_is_sparse` is `False`, or a `SparseTensor` if `output_is_sparse`\n is `True`.\n\n Note: if `output_is_sparse` is True, a gradient is not defined for this\n function, so it can't be used in training models that need gradient descent.\n\n Reduces `sp_input` along the dimensions given in `axis`. Unless `keepdims` is\n true, the rank of the tensor is reduced by 1 for each entry in `axis`. If\n `keepdims` is true, the reduced dimensions are retained with length 1.\n\n If `axis` has no entries, all dimensions are reduced, and a tensor\n with a single element is returned. Additionally, the axes can be negative,\n similar to the indexing rules in Python.\n\n For example:\n\n # 'x' represents [[1, ?, 1]\n # [?, 1, ?]]\n # where ? is implicitly-zero.\n\n >>> x = tf.sparse.SparseTensor([[0, 0], [0, 2], [1, 1]], [1, 1, 1], [2, 3])\n >>> tf.sparse.reduce_sum(x)\n \n >>> tf.sparse.reduce_sum(x, 0)\n \n >>> tf.sparse.reduce_sum(x, 1) # Can also use -1 as the axis\n \n >>> tf.sparse.reduce_sum(x, 1, keepdims=True)\n \n >>> tf.sparse.reduce_sum(x, [0, 1])\n \n\n Args:\n sp_input: The SparseTensor to reduce. Should have numeric type.\n axis: The dimensions to reduce; list or scalar. If `None` (the\n default), reduces all dimensions.\n keepdims: If true, retain reduced dimensions with length 1.\n output_is_sparse: If true, returns a `SparseTensor` instead of a dense\n `Tensor` (the default).\n name: A name for the operation (optional).\n\n Returns:\n The reduced Tensor or the reduced SparseTensor if `output_is_sparse` is\n True.\n ", "desc": "Computes `tf.sparse.add` of elements across dimensions of a SparseTensor.", "type": "API"}, {"name": "tf.sparse.reorder", "docs": "Reorders a `SparseTensor` into the canonical, row-major ordering.\n\n Note that by convention, all sparse ops preserve the canonical ordering\n along increasing dimension number. The only time ordering can be violated\n is during manual manipulation of the indices and values to add entries.\n\n Reordering does not affect the shape of the `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[4, 5]` and\n `indices` / `values`:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same shape and non-empty values, but in\n canonical ordering.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Reorders a `SparseTensor` into the canonical, row-major ordering.", "type": "API"}, {"name": "tf.sparse.reset_shape", "docs": "Resets the shape of a `SparseTensor` with indices and values unchanged.\n\n If `new_shape` is None, returns a copy of `sp_input` with its shape reset\n to the tight bounding box of `sp_input`. This will be a shape consisting of\n all zeros if sp_input has no values.\n\n If `new_shape` is provided, then it must be larger or equal in all dimensions\n compared to the shape of `sp_input`. When this condition is met, the returned\n SparseTensor will have its shape reset to `new_shape` and its indices and\n values unchanged from that of `sp_input.`\n\n For example:\n\n Consider a `sp_input` with shape [2, 3, 5]:\n\n [0, 0, 1]: a\n [0, 1, 0]: b\n [0, 2, 2]: c\n [1, 0, 3]: d\n\n - It is an error to set `new_shape` as [3, 7] since this represents a\n rank-2 tensor while `sp_input` is rank-3. This is either a ValueError\n during graph construction (if both shapes are known) or an OpError during\n run time.\n\n - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or\n equal in every dimension compared to the original shape [2, 3, 5].\n\n - On the other hand, setting new_shape as [2, 3, 4] is also an error: The\n third dimension is smaller than the original shape [2, 3, 5] (and an\n `InvalidArgumentError` will be raised).\n\n - If `new_shape` is None, the returned SparseTensor will have a shape\n [2, 3, 4], which is the tight bounding box of `sp_input`.\n\n Args:\n sp_input: The input `SparseTensor`.\n new_shape: None or a vector representing the new shape for the returned\n `SparseTensor`.\n\n Returns:\n A `SparseTensor` indices and values unchanged from `sp_input`. Its shape is\n `new_shape` if that is set. Otherwise it is the tight bounding box of\n `sp_input`\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If `new_shape` represents a tensor with a different rank from\n that of `sp_input` (if shapes are known when graph is constructed).\n ValueError: If `new_shape` is determined during graph build to have\n dimension sizes that are too small.\n OpError:\n - If `new_shape` has dimension sizes that are too small.\n - If shapes are not known during graph construction time, and during run\n time it is found out that the ranks do not match.\n ", "desc": "Resets the shape of a `SparseTensor` with indices and values unchanged.", "type": "API"}, {"name": "tf.sparse.reshape", "docs": "Reshapes a `SparseTensor` to represent values in a new dense shape.\n\n This operation has the same semantics as `reshape` on the represented dense\n tensor. The indices of non-empty values in `sp_input` are recomputed based\n on the new dense shape, and a new `SparseTensor` is returned containing the\n new indices and new shape. The order of non-empty values in `sp_input` is\n unchanged.\n\n If one component of `shape` is the special value -1, the size of that\n dimension is computed so that the total dense size remains constant. At\n most one component of `shape` can be -1. The number of dense elements\n implied by `shape` must be the same as the number of dense elements\n originally represented by `sp_input`.\n\n For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:\n\n [0, 0, 0]: a\n [0, 0, 1]: b\n [0, 1, 0]: c\n [1, 0, 0]: d\n [1, 2, 3]: e\n\n and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of\n shape `[9, 4]` and `indices` / `values`:\n\n [0, 0]: a\n [0, 1]: b\n [1, 2]: c\n [4, 2]: d\n [8, 1]: e\n\n Args:\n sp_input: The input `SparseTensor`.\n shape: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the\n represented `SparseTensor`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A `SparseTensor` with the same non-empty values but with indices calculated\n by the new dense shape.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ValueError: If argument `shape` requests a `SparseTensor` with a different\n number of elements than `sp_input`.\n ValueError: If `shape` has more than one inferred (== -1) dimension.\n ", "desc": "Reshapes a `SparseTensor` to represent values in a new dense shape.", "type": "API"}, {"name": "tf.sparse.retain", "docs": "Retains specified non-empty values within a `SparseTensor`.\n\n For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:\n\n [0, 1]: a\n [0, 3]: b\n [2, 0]: c\n [3, 1]: d\n\n and `to_retain = [True, False, False, True]`, then the output will\n be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:\n\n [0, 1]: a\n [3, 1]: d\n\n Args:\n sp_input: The input `SparseTensor` with `N` non-empty elements.\n to_retain: A bool vector of length `N` with `M` true values.\n\n Returns:\n A `SparseTensor` with the same shape as the input and `M` non-empty\n elements corresponding to the true positions in `to_retain`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Retains specified non-empty values within a `SparseTensor`.", "type": "API"}, {"name": "tf.sparse.segment_mean", "docs": "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_mean`, but `segment_ids` can have rank less than\n `data`'s first dimension, selecting a subset of dimension 0, specified by\n `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the mean along sparse segments of a tensor.", "type": "API"}, {"name": "tf.sparse.segment_sqrt_n", "docs": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.sparse.segment_mean`, but instead of dividing by the size of the\n segment, `N`, divide by `sqrt(N)` instead.\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor divided by the sqrt(N).", "type": "API"}, {"name": "tf.sparse.segment_sum", "docs": "Computes the sum along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/math#about_segmentation)\n for an explanation of segments.\n\n Like `tf.math.segment_sum`, but `segment_ids` can have rank less than `data`'s\n first dimension, selecting a subset of dimension 0, specified by `indices`.\n `segment_ids` is allowed to have missing ids, in which case the output will\n be zeros at those indices. In those cases `num_segments` is used to determine\n the size of the output.\n\n For example:\n\n ```python\n c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])\n\n # Select two rows, one segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))\n # => [[0 0 0 0]]\n\n # Select two rows, two segment.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))\n # => [[ 1 2 3 4]\n # [-1 -2 -3 -4]]\n\n # With missing segment ids.\n tf.sparse.segment_sum(c, tf.constant([0, 1]), tf.constant([0, 2]),\n num_segments=4)\n # => [[ 1 2 3 4]\n # [ 0 0 0 0]\n # [-1 -2 -3 -4]\n # [ 0 0 0 0]]\n\n # Select all rows, two segments.\n tf.sparse.segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))\n # => [[0 0 0 0]\n # [5 6 7 8]]\n\n # Which is equivalent to:\n tf.math.segment_sum(c, tf.constant([0, 0, 1]))\n ```\n\n Args:\n data: A `Tensor` with data that will be assembled in the output.\n indices: A 1-D `Tensor` with indices into `data`. Has same rank as\n `segment_ids`.\n segment_ids: A 1-D `Tensor` with indices into the output `Tensor`. Values\n should be sorted and can be repeated.\n num_segments: An optional int32 scalar. Indicates the size of the output\n `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `tensor` of the shape as data, except for dimension 0 which\n has size `k`, the number of segments specified via `num_segments` or\n inferred for the last element in `segments_ids`.\n ", "desc": "Computes the sum along sparse segments of a tensor.", "type": "API"}, {"name": "tf.sparse.slice", "docs": "Slice a `SparseTensor` based on the `start` and `size`.\n\n For example, if the input is\n\n input_tensor = shape = [2, 7]\n [ a d e ]\n [b c ]\n\n Graphically the output tensors are:\n\n sparse.slice([0, 0], [2, 4]) = shape = [2, 4]\n [ a ]\n [b c ]\n\n sparse.slice([0, 4], [2, 3]) = shape = [2, 3]\n [ d e ]\n [ ]\n\n Args:\n sp_input: The `SparseTensor` to split.\n start: 1-D. tensor represents the start of the slice.\n size: 1-D. tensor represents the size of the slice.\n name: A name for the operation (optional).\n\n Returns:\n A `SparseTensor` objects resulting from splicing.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Slice a `SparseTensor` based on the `start` and `size`.", "type": "API"}, {"name": "tf.sparse.softmax", "docs": "Applies softmax to a batched N-D `SparseTensor`.\n\n The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`\n (where `N >= 2`), and with indices sorted in the canonical lexicographic\n order.\n\n This op is equivalent to applying the normal `tf.nn.softmax()` to each\n innermost logical submatrix with shape `[B, C]`, but with the catch that *the\n implicitly zero elements do not participate*. Specifically, the algorithm is\n equivalent to:\n\n (1) Applies `tf.nn.softmax()` to a densified view of each innermost\n submatrix with shape `[B, C]`, along the size-C dimension;\n (2) Masks out the original implicitly-zero locations;\n (3) Renormalizes the remaining elements.\n\n Hence, the `SparseTensor` result has exactly the same non-zero indices and\n shape.\n\n Example:\n\n ```python\n # First batch:\n # [? e.]\n # [1. ? ]\n # Second batch:\n # [e ? ]\n # [e e ]\n shape = [2, 2, 2] # 3-D SparseTensor\n values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])\n indices = np.vstack(np.where(values)).astype(np.int64).T\n\n result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))\n # ...returning a 3-D SparseTensor, equivalent to:\n # [? 1.] [1 ?]\n # [1. ? ] and [.5 .5]\n # where ? means implicitly zero.\n ```\n\n Args:\n sp_input: N-D `SparseTensor`, where `N >= 2`.\n name: optional name of the operation.\n Returns:\n output: N-D `SparseTensor` representing the results.\n ", "desc": "Applies softmax to a batched N-D `SparseTensor`.", "type": "API"}, {"name": "tf.sparse.sparse_dense_matmul", "docs": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix\n\n (or SparseTensor) \"B\". Please note that one and only one of the inputs MUST\n be a SparseTensor and the other MUST be a dense matrix.\n\n The following input format is recommended (but not required) for optimal\n performance:\n\n * If `adjoint_a == false`: `A` should be sorted in lexicographically\n increasing order. Use `sparse.reorder` if you're not sure.\n * If `adjoint_a == true`: `A` should be sorted in order of increasing\n dimension 1 (i.e., \"column major\" order instead of \"row major\" order).\n\n Args:\n sp_a: SparseTensor (or dense Matrix) A, of rank 2.\n b: dense Matrix (or SparseTensor) B, with the same dtype as sp_a.\n adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex,\n this is transpose(conj(A)). Otherwise it's transpose(A).\n adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex,\n this is transpose(conj(B)). Otherwise it's transpose(B).\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense matrix (pseudo-code in dense np.matrix notation):\n `A = A.H if adjoint_a else A`\n `B = B.H if adjoint_b else B`\n `return A*B`\n\n Notes:\n\n Using `tf.nn.embedding_lookup_sparse` for sparse multiplication:\n\n It's not obvious but you can consider `embedding_lookup_sparse` as another\n sparse and dense multiplication. In some situations, you may prefer to use\n `embedding_lookup_sparse` even though you're not dealing with embeddings.\n\n There are two questions to ask in the decision process: Do you need gradients\n computed as sparse too? Is your sparse data represented as two\n `SparseTensor`s: ids and values? There is more explanation about data format\n below. If you answer any of these questions as yes, consider using\n `tf.nn.embedding_lookup_sparse`.\n\n Following explains differences between the expected SparseTensors:\n For example if dense form of your sparse data has shape `[3, 5]` and values:\n\n [[ a ]\n [b c]\n [ d ]]\n\n\n `SparseTensor` format expected by `sparse_tensor_dense_matmul`:\n `sp_a` (indices, values):\n\n [0, 1]: a\n [1, 0]: b\n [1, 4]: c\n [2, 2]: d\n\n `SparseTensor` format expected by `embedding_lookup_sparse`:\n `sp_ids` `sp_weights`\n\n [0, 0]: 1 [0, 0]: a\n [1, 0]: 0 [1, 0]: b\n [1, 1]: 4 [1, 1]: c\n [2, 0]: 2 [2, 0]: d\n\n\n Deciding when to use `sparse_tensor_dense_matmul` vs.\n `matmul`(a_is_sparse=True):\n\n There are a number of questions to ask in the decision process, including:\n\n * Will the SparseTensor `A` fit in memory if densified?\n * Is the column count of the product large (>> 1)?\n * Is the density of `A` larger than approximately 15%?\n\n If the answer to several of these questions is yes, consider\n converting the `SparseTensor` to a dense one and using `tf.matmul` with\n `a_is_sparse=True`.\n\n This operation tends to perform well when `A` is more sparse, if the column\n size of the product is small (e.g. matrix-vector multiplication), if\n `sp_a.dense_shape` takes on large values.\n\n Below is a rough speed comparison between `sparse_tensor_dense_matmul`,\n labeled 'sparse', and `matmul`(a_is_sparse=True), labeled 'dense'. For\n purposes of the comparison, the time spent converting from a `SparseTensor` to\n a dense `Tensor` is not included, so it is overly conservative with respect to\n the time ratio.\n\n Benchmark system:\n CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB\n GPU: NVidia Tesla k40c\n\n Compiled with:\n `-c opt --config=cuda --copt=-mavx`\n\n ```\n tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks\n A sparse [m, k] with % nonzero values between 1% and 80%\n B dense [k, n]\n\n % nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)\n 0.01 1 True 100 100 0.000221166 0.00010154 0.459112\n 0.01 1 True 100 1000 0.00033858 0.000109275 0.322745\n 0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385\n 0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669\n 0.01 1 False 100 100 0.000208085 0.000107603 0.51711\n 0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762\n 0.01 1 False 1000 100 0.000308222 0.00010345 0.335635\n 0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124\n 0.01 10 True 100 100 0.000218522 0.000105537 0.482958\n 0.01 10 True 100 1000 0.000340882 0.000111641 0.327506\n 0.01 10 True 1000 100 0.000315472 0.000117376 0.372064\n 0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128\n 0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354\n 0.01 10 False 100 1000 0.000330552 0.000112615 0.340687\n 0.01 10 False 1000 100 0.000341277 0.000114097 0.334324\n 0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549\n 0.01 25 True 100 100 0.000207806 0.000105977 0.509981\n 0.01 25 True 100 1000 0.000322879 0.00012921 0.400181\n 0.01 25 True 1000 100 0.00038262 0.00014158 0.370035\n 0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504\n 0.01 25 False 100 100 0.000209401 0.000104696 0.499979\n 0.01 25 False 100 1000 0.000321161 0.000130737 0.407076\n 0.01 25 False 1000 100 0.000377012 0.000136801 0.362856\n 0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413\n 0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833\n 0.2 1 True 100 1000 0.000348674 0.000147475 0.422959\n 0.2 1 True 1000 100 0.000336908 0.00010122 0.300439\n 0.2 1 True 1000 1000 0.001022 0.000203274 0.198898\n 0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746\n 0.2 1 False 100 1000 0.000356127 0.000146824 0.41228\n 0.2 1 False 1000 100 0.000322664 0.000100918 0.312764\n 0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648\n 0.2 10 True 100 100 0.000211692 0.000109903 0.519165\n 0.2 10 True 100 1000 0.000372819 0.000164321 0.440753\n 0.2 10 True 1000 100 0.000338651 0.000144806 0.427596\n 0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064\n 0.2 10 False 100 100 0.000215727 0.000110502 0.512231\n 0.2 10 False 100 1000 0.000375419 0.0001613 0.429653\n 0.2 10 False 1000 100 0.000336999 0.000145628 0.432132\n 0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618\n 0.2 25 True 100 100 0.000218705 0.000129913 0.594009\n 0.2 25 True 100 1000 0.000394794 0.00029428 0.745402\n 0.2 25 True 1000 100 0.000404483 0.0002693 0.665788\n 0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052\n 0.2 25 False 100 100 0.000221494 0.0001306 0.589632\n 0.2 25 False 100 1000 0.000396436 0.000297204 0.74969\n 0.2 25 False 1000 100 0.000409346 0.000270068 0.659754\n 0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046\n 0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836\n 0.5 1 True 100 1000 0.000415328 0.000223073 0.537101\n 0.5 1 True 1000 100 0.000358324 0.00011269 0.314492\n 0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851\n 0.5 1 False 100 100 0.000224196 0.000101423 0.452386\n 0.5 1 False 100 1000 0.000400987 0.000223286 0.556841\n 0.5 1 False 1000 100 0.000368825 0.00011224 0.304318\n 0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563\n 0.5 10 True 100 100 0.000222125 0.000112308 0.505608\n 0.5 10 True 100 1000 0.000461088 0.00032357 0.701753\n 0.5 10 True 1000 100 0.000394624 0.000225497 0.571422\n 0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801\n 0.5 10 False 100 100 0.000232083 0.000114978 0.495418\n 0.5 10 False 100 1000 0.000454574 0.000324632 0.714146\n 0.5 10 False 1000 100 0.000379097 0.000227768 0.600817\n 0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638\n 0.5 25 True 100 100 0.00023429 0.000151703 0.647501\n 0.5 25 True 100 1000 0.000497462 0.000598873 1.20386\n 0.5 25 True 1000 100 0.000460778 0.000557038 1.20891\n 0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845\n 0.5 25 False 100 100 0.000228981 0.000155334 0.678371\n 0.5 25 False 100 1000 0.000496139 0.000620789 1.25124\n 0.5 25 False 1000 100 0.00045473 0.000551528 1.21287\n 0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927\n 0.8 1 True 100 100 0.000222037 0.000105301 0.47425\n 0.8 1 True 100 1000 0.000410804 0.000329327 0.801664\n 0.8 1 True 1000 100 0.000349735 0.000131225 0.375212\n 0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633\n 0.8 1 False 100 100 0.000214079 0.000107486 0.502085\n 0.8 1 False 100 1000 0.000413746 0.000323244 0.781261\n 0.8 1 False 1000 100 0.000348983 0.000131983 0.378193\n 0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282\n 0.8 10 True 100 100 0.000229159 0.00011825 0.516017\n 0.8 10 True 100 1000 0.000498845 0.000532618 1.0677\n 0.8 10 True 1000 100 0.000383126 0.00029935 0.781336\n 0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689\n 0.8 10 False 100 100 0.000230783 0.000124958 0.541452\n 0.8 10 False 100 1000 0.000493393 0.000550654 1.11606\n 0.8 10 False 1000 100 0.000377167 0.000298581 0.791642\n 0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024\n 0.8 25 True 100 100 0.000233496 0.000175241 0.75051\n 0.8 25 True 100 1000 0.00055654 0.00102658 1.84458\n 0.8 25 True 1000 100 0.000463814 0.000783267 1.68875\n 0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132\n 0.8 25 False 100 100 0.000240243 0.000175047 0.728625\n 0.8 25 False 100 1000 0.000578102 0.00104499 1.80763\n 0.8 25 False 1000 100 0.000485113 0.000776849 1.60138\n 0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992\n ```\n\n ", "desc": "Multiply SparseTensor (or dense Matrix) (of rank 2) \"A\" by dense matrix", "type": "API"}, {"name": "tf.sparse.SparseTensor", "docs": "Represents a sparse tensor.\n\n TensorFlow represents a sparse tensor as three separate dense tensors:\n `indices`, `values`, and `dense_shape`. In Python, the three tensors are\n collected into a `SparseTensor` class for ease of use. If you have separate\n `indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`\n object before passing to the ops below.\n\n Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`\n comprises the following components, where `N` and `ndims` are the number\n of values and number of dimensions in the `SparseTensor`, respectively:\n\n * `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the\n indices of the elements in the sparse tensor that contain nonzero values\n (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies\n that the elements with indexes of [1,3] and [2,4] have nonzero values.\n\n * `values`: A 1-D tensor of any type and shape `[N]`, which supplies the\n values for each element in `indices`. For example, given `indices=[[1,3],\n [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of\n the sparse tensor has a value of 18, and element [2,4] of the tensor has a\n value of 3.6.\n\n * `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the\n dense_shape of the sparse tensor. Takes a list indicating the number of\n elements in each dimension. For example, `dense_shape=[3,6]` specifies a\n two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a\n three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a\n one-dimensional tensor with 9 elements.\n\n The corresponding dense tensor satisfies:\n\n ```python\n dense.shape = dense_shape\n dense[tuple(indices[i])] = values[i]\n ```\n\n By convention, `indices` should be sorted in row-major order (or equivalently\n lexicographic order on the tuples `indices[i]`). This is not enforced when\n `SparseTensor` objects are constructed, but most ops assume correct ordering.\n If the ordering of sparse tensor `st` is wrong, a fixed version can be\n obtained by calling `tf.sparse.reorder(st)`.\n\n Example: The sparse tensor\n\n ```python\n SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])\n ```\n\n represents the dense tensor\n\n ```python\n [[1, 0, 0, 0]\n [0, 0, 2, 0]\n [0, 0, 0, 0]]\n ```\n ", "desc": "Represents a sparse tensor.", "type": "API"}, {"name": "tf.sparse.split", "docs": "Split a `SparseTensor` into `num_split` tensors along `axis`.\n\n If the `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`\n each slice starting from 0:`shape[axis] % num_split` gets extra one\n dimension. For example:\n\n >>> indices = [[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]]\n >>> values = [1, 2, 3, 4, 5]\n >>> t = tf.SparseTensor(indices=indices, values=values, dense_shape=[2, 7])\n >>> tf.sparse.to_dense(t)\n \n\n >>> output = tf.sparse.split(sp_input=t, num_split=2, axis=1)\n >>> tf.sparse.to_dense(output[0])\n \n >>> tf.sparse.to_dense(output[1])\n \n\n >>> output = tf.sparse.split(sp_input=t, num_split=2, axis=0)\n >>> tf.sparse.to_dense(output[0])\n \n >>> tf.sparse.to_dense(output[1])\n \n\n >>> output = tf.sparse.split(sp_input=t, num_split=2, axis=-1)\n >>> tf.sparse.to_dense(output[0])\n \n >>> tf.sparse.to_dense(output[1])\n \n\n Args:\n sp_input: The `SparseTensor` to split.\n num_split: A Python integer. The number of ways to split.\n axis: A 0-D `int32` `Tensor`. The dimension along which to split. Must be in\n range [-rank, rank), where rank is the number of dimensions in the input\n `SparseTensor`.\n name: A name for the operation (optional).\n\n Returns:\n `num_split` `SparseTensor` objects resulting from splitting `value`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Split a `SparseTensor` into `num_split` tensors along `axis`.", "type": "API"}, {"name": "tf.sparse.to_dense", "docs": "Converts a `SparseTensor` into a dense tensor.\n\n For this sparse tensor with three non-empty values:\n\n >>> sp_input = tf.SparseTensor(\n ... dense_shape=[3, 5],\n ... values=[7, 8, 9],\n ... indices =[[0, 1],\n ... [0, 3],\n ... [2, 0]])\n\n The output will be a dense `[3, 5]` tensor with values:\n\n >>> tf.sparse.to_dense(sp_input).numpy()\n array([[0, 7, 0, 8, 0],\n [0, 0, 0, 0, 0],\n [9, 0, 0, 0, 0]], dtype=int32)\n\n Note: Indices must be without repeats. This is only tested if\n `validate_indices` is `True`.\n\n Args:\n sp_input: The input `SparseTensor`.\n default_value: Scalar value to set for indices not specified in\n `sp_input`. Defaults to zero.\n validate_indices: A boolean value. If `True`, indices are checked to make\n sure they are sorted in lexicographic order and that there are no repeats.\n name: A name prefix for the returned tensors (optional).\n\n Returns:\n A dense tensor with shape `sp_input.dense_shape` and values specified by\n the non-empty values in `sp_input`. Indices not in `sp_input` are assigned\n `default_value`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` into a dense tensor.", "type": "API"}, {"name": "tf.sparse.to_indicator", "docs": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.\n\n The last dimension of `sp_input.indices` is discarded and replaced with\n the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`,\n then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where\n\n output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True\n\n and False elsewhere in `output`.\n\n For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values:\n\n [0, 0, 0]: 0\n [0, 1, 0]: 10\n [1, 0, 3]: 103\n [1, 1, 1]: 150\n [1, 1, 2]: 149\n [1, 1, 3]: 150\n [1, 2, 1]: 121\n\n and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool\n tensor with False everywhere except at positions\n\n (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),\n (1, 2, 121).\n\n Note that repeats are allowed in the input SparseTensor.\n This op is useful for converting `SparseTensor`s into dense formats for\n compatibility with ops that expect dense tensors.\n\n The input `SparseTensor` must be in row-major order.\n\n Args:\n sp_input: A `SparseTensor` with `values` property of type `int32` or\n `int64`.\n vocab_size: A scalar int64 Tensor (or Python int) containing the new size\n of the last dimension, `all(0 <= sp_input.values < vocab_size)`.\n name: A name prefix for the returned tensors (optional)\n\n Returns:\n A dense bool indicator tensor representing the indices with specified value.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Converts a `SparseTensor` of ids into a dense bool indicator tensor.", "type": "API"}, {"name": "tf.sparse.transpose", "docs": "Transposes a `SparseTensor`\n\n The returned tensor's dimension i will correspond to the input dimension\n `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is\n the rank of the input tensor. Hence by default, this operation performs a\n regular matrix transpose on 2-D input Tensors.\n\n For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:\n\n [0, 3]: b\n [0, 1]: a\n [3, 1]: d\n [2, 0]: c\n\n then the output will be a `SparseTensor` of shape `[5, 4]` and\n `indices` / `values`:\n\n [0, 2]: c\n [1, 0]: a\n [1, 3]: d\n [3, 0]: b\n\n Args:\n sp_input: The input `SparseTensor`.\n perm: A permutation of the dimensions of `sp_input`.\n name: A name prefix for the returned tensors (optional)\n Returns:\n A transposed `SparseTensor`.\n\n Raises:\n TypeError: If `sp_input` is not a `SparseTensor`.\n ", "desc": "Transposes a `SparseTensor`", "type": "API"}, {"name": "tf.SparseTensor", "docs": "Represents a sparse tensor.\n\n TensorFlow represents a sparse tensor as three separate dense tensors:\n `indices`, `values`, and `dense_shape`. In Python, the three tensors are\n collected into a `SparseTensor` class for ease of use. If you have separate\n `indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`\n object before passing to the ops below.\n\n Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`\n comprises the following components, where `N` and `ndims` are the number\n of values and number of dimensions in the `SparseTensor`, respectively:\n\n * `indices`: A 2-D int64 tensor of shape `[N, ndims]`, which specifies the\n indices of the elements in the sparse tensor that contain nonzero values\n (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]` specifies\n that the elements with indexes of [1,3] and [2,4] have nonzero values.\n\n * `values`: A 1-D tensor of any type and shape `[N]`, which supplies the\n values for each element in `indices`. For example, given `indices=[[1,3],\n [2,4]]`, the parameter `values=[18, 3.6]` specifies that element [1,3] of\n the sparse tensor has a value of 18, and element [2,4] of the tensor has a\n value of 3.6.\n\n * `dense_shape`: A 1-D int64 tensor of shape `[ndims]`, which specifies the\n dense_shape of the sparse tensor. Takes a list indicating the number of\n elements in each dimension. For example, `dense_shape=[3,6]` specifies a\n two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a\n three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a\n one-dimensional tensor with 9 elements.\n\n The corresponding dense tensor satisfies:\n\n ```python\n dense.shape = dense_shape\n dense[tuple(indices[i])] = values[i]\n ```\n\n By convention, `indices` should be sorted in row-major order (or equivalently\n lexicographic order on the tuples `indices[i]`). This is not enforced when\n `SparseTensor` objects are constructed, but most ops assume correct ordering.\n If the ordering of sparse tensor `st` is wrong, a fixed version can be\n obtained by calling `tf.sparse.reorder(st)`.\n\n Example: The sparse tensor\n\n ```python\n SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])\n ```\n\n represents the dense tensor\n\n ```python\n [[1, 0, 0, 0]\n [0, 0, 2, 0]\n [0, 0, 0, 0]]\n ```\n ", "desc": "Represents a sparse tensor.", "type": "API"}, {"name": "tf.SparseTensorSpec", "docs": "Type specification for a `tf.sparse.SparseTensor`.", "desc": "Type specification for a `tf.sparse.SparseTensor`.", "type": "API"}, {"name": "tf.split", "docs": "Splits a tensor `value` into a list of sub tensors.\n\n See also `tf.unstack`.\n\n If `num_or_size_splits` is an `int`, then it splits `value` along the\n dimension `axis` into `num_or_size_splits` smaller tensors. This requires that\n `value.shape[axis]` is divisible by `num_or_size_splits`.\n\n If `num_or_size_splits` is a 1-D Tensor (or list), then `value` is split into\n `len(num_or_size_splits)` elements. The shape of the `i`-th\n element has the same size as the `value` except along dimension `axis` where\n the size is `num_or_size_splits[i]`.\n\n For example:\n\n >>> x = tf.Variable(tf.random.uniform([5, 30], -1, 1))\n >>>\n >>> # Split `x` into 3 tensors along dimension 1\n >>> s0, s1, s2 = tf.split(x, num_or_size_splits=3, axis=1)\n >>> tf.shape(s0).numpy()\n array([ 5, 10], dtype=int32)\n >>>\n >>> # Split `x` into 3 tensors with sizes [4, 15, 11] along dimension 1\n >>> split0, split1, split2 = tf.split(x, [4, 15, 11], 1)\n >>> tf.shape(split0).numpy()\n array([5, 4], dtype=int32)\n >>> tf.shape(split1).numpy()\n array([ 5, 15], dtype=int32)\n >>> tf.shape(split2).numpy()\n array([ 5, 11], dtype=int32)\n\n Args:\n value: The `Tensor` to split.\n num_or_size_splits: Either an `int` indicating the number of splits\n along `axis` or a 1-D integer `Tensor` or Python list containing the sizes\n of each output tensor along `axis`. If an `int`, then it must evenly\n divide `value.shape[axis]`; otherwise the sum of sizes along the split\n axis must match that of the `value`.\n axis: An `int` or scalar `int32` `Tensor`. The dimension along which\n to split. Must be in the range `[-rank(value), rank(value))`. Defaults to\n 0.\n num: Optional, an `int`, used to specify the number of outputs when it\n cannot be inferred from the shape of `size_splits`.\n name: A name for the operation (optional).\n\n Returns:\n if `num_or_size_splits` is an `int` returns a list of\n `num_or_size_splits` `Tensor` objects; if `num_or_size_splits` is a 1-D\n list or 1-D `Tensor` returns `num_or_size_splits.get_shape[0]`\n `Tensor` objects resulting from splitting `value`.\n\n Raises:\n ValueError: If `num` is unspecified and cannot be inferred.\n ValueError: If `num_or_size_splits` is a scalar `Tensor`.\n ", "desc": "Splits a tensor `value` into a list of sub tensors.", "type": "API"}, {"name": "tf.sqrt", "docs": "Computes element-wise square root of the input tensor.\n\n Note: This operation does not support integer types.\n\n >>> x = tf.constant([[4.0], [16.0]])\n >>> tf.sqrt(x)\n \n >>> y = tf.constant([[-4.0], [16.0]])\n >>> tf.sqrt(y)\n \n >>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)\n >>> tf.sqrt(z)\n \n\n Note: In order to support complex type, please provide an input tensor\n of `complex64` or `complex128`.\n\n Args:\n x: A `tf.Tensor` of type `bfloat16`, `half`, `float32`, `float64`,\n `complex64`, `complex128`\n name: A name for the operation (optional).\n\n Returns:\n A `tf.Tensor` of same size, type and sparsity as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape)`", "desc": "Computes element-wise square root of the input tensor.", "type": "API"}, {"name": "tf.square", "docs": "Computes square of x element-wise.\n\n I.e., \\\\(y = x * x = x^2\\\\).\n\n >>> tf.math.square([-2., 0., 3.])\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape)`", "desc": "Computes square of x element-wise.", "type": "API"}, {"name": "tf.squeeze", "docs": "Removes dimensions of size 1 from the shape of a tensor.\n\n Given a tensor `input`, this operation returns a tensor of the same type with\n all dimensions of size 1 removed. If you don't want to remove all size 1\n dimensions, you can remove specific size 1 dimensions by specifying\n `axis`.\n\n For example:\n\n ```python\n # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n tf.shape(tf.squeeze(t)) # [2, 3]\n ```\n\n Or, to remove specific size 1 dimensions:\n\n ```python\n # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]\n tf.shape(tf.squeeze(t, [2, 4])) # [1, 2, 3, 1]\n ```\n\n Unlike the older op `tf.compat.v1.squeeze`, this op does not accept a\n deprecated `squeeze_dims` argument.\n\n Note: if `input` is a `tf.RaggedTensor`, then this operation takes `O(N)`\n time, where `N` is the number of elements in the squeezed dimensions.\n\n Args:\n input: A `Tensor`. The `input` to squeeze.\n axis: An optional list of `ints`. Defaults to `[]`. If specified, only\n squeezes the dimensions listed. The dimension index starts at 0. It is an\n error to squeeze a dimension that is not 1. Must be in the range\n `[-rank(input), rank(input))`. Must be specified if `input` is a\n `RaggedTensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n Contains the same data as `input`, but has one or more dimensions of\n size 1 removed.\n\n Raises:\n ValueError: The input cannot be converted to a tensor, or the specified\n axis cannot be squeezed.\n ", "desc": "Removes dimensions of size 1 from the shape of a tensor.", "type": "API"}, {"name": "tf.stack", "docs": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.\n\n See also `tf.concat`, `tf.tile`, `tf.repeat`.\n\n Packs the list of tensors in `values` into a tensor with rank one higher than\n each tensor in `values`, by packing them along the `axis` dimension.\n Given a list of length `N` of tensors of shape `(A, B, C)`;\n\n if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.\n if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.\n Etc.\n\n For example:\n\n >>> x = tf.constant([1, 4])\n >>> y = tf.constant([2, 5])\n >>> z = tf.constant([3, 6])\n >>> tf.stack([x, y, z])\n \n >>> tf.stack([x, y, z], axis=1)\n \n\n This is the opposite of unstack. The numpy equivalent is `np.stack`\n\n >>> np.array_equal(np.stack([x, y, z]), tf.stack([x, y, z]))\n True\n\n Args:\n values: A list of `Tensor` objects with the same shape and type.\n axis: An `int`. The axis to stack along. Defaults to the first dimension.\n Negative values wrap around, so the valid range is `[-(R+1), R+1)`.\n name: A name for this operation (optional).\n\n Returns:\n output: A stacked `Tensor` with the same type as `values`.\n\n Raises:\n ValueError: If `axis` is out of the range [-(R+1), R+1).\n ", "desc": "Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.", "type": "API"}, {"name": "tf.stop_gradient", "docs": "Stops gradient computation.\n\n When executed in a graph, this op outputs its input tensor as-is.\n\n When building ops to compute gradients, this op prevents the contribution of\n its inputs to be taken into account. Normally, the gradient generator adds ops\n to a graph to compute the derivatives of a specified 'loss' by recursively\n finding out inputs that contributed to its computation. If you insert this op\n in the graph it inputs are masked from the gradient generator. They are not\n taken into account for computing gradients.\n\n This is useful any time you want to compute a value with TensorFlow but need\n to pretend that the value was a constant. For example, the softmax function\n for a vector x can be written as\n\n ```python\n\n def softmax(x):\n numerator = tf.exp(x)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n This however is susceptible to overflow if the values in x are large. An\n alternative more stable way is to subtract the maximum of x from each of the\n values.\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.reduce_max(x)\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n However, when we backprop through the softmax to x, we dont want to backprop\n through the `tf.reduce_max(x)` (if the max values are not unique then the\n gradient could flow to the wrong input) calculation and treat that as a\n constant. Therefore, we should write this out as\n\n ```python\n\n def stable_softmax(x):\n z = x - tf.stop_gradient(tf.reduce_max(x))\n numerator = tf.exp(z)\n denominator = tf.reduce_sum(numerator)\n return numerator / denominator\n ```\n\n Some other examples include:\n\n * The *EM* algorithm where the *M-step* should not involve backpropagation\n through the output of the *E-step*.\n * Contrastive divergence training of Boltzmann machines where, when\n differentiating the energy function, the training must not backpropagate\n through the graph that generated the samples from the model.\n * Adversarial training, where no backprop should happen through the adversarial\n example generation process.\n\n Args:\n input: A `Tensor`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Stops gradient computation.", "type": "API"}, {"name": "tf.strided_slice", "docs": "Extracts a strided slice of a tensor (generalized Python array indexing).\n\n See also `tf.slice`.\n\n **Instead of calling this op directly most users will want to use the\n NumPy-style slicing syntax (e.g. `tensor[..., 3:4:-1, tf.newaxis, 3]`), which\n is supported via `tf.Tensor.__getitem__` and `tf.Variable.__getitem__`.**\n The interface of this op is a low-level encoding of the slicing syntax.\n\n Roughly speaking, this op extracts a slice of size `(end-begin)/stride`\n from the given `input_` tensor. Starting at the location specified by `begin`\n the slice continues by adding `stride` to the index until all dimensions are\n not less than `end`.\n Note that a stride can be negative, which causes a reverse slice.\n\n Given a Python slice `input[spec0, spec1, ..., specn]`,\n this function will be called as follows.\n\n `begin`, `end`, and `strides` will be vectors of length n.\n n in general is not equal to the rank of the `input_` tensor.\n\n In each mask field (`begin_mask`, `end_mask`, `ellipsis_mask`,\n `new_axis_mask`, `shrink_axis_mask`) the ith bit will correspond to\n the ith spec.\n\n If the ith bit of `begin_mask` is set, `begin[i]` is ignored and\n the fullest possible range in that dimension is used instead.\n `end_mask` works analogously, except with the end range.\n\n `foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`.\n `foo[::-1]` reverses a tensor with shape 8.\n\n If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions\n as needed will be inserted between other dimensions. Only one\n non-zero bit is allowed in `ellipsis_mask`.\n\n For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is\n equivalent to `foo[3:5,:,:,4:5]` and\n `foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`.\n\n If the ith bit of `new_axis_mask` is set, then `begin`,\n `end`, and `stride` are ignored and a new length 1 dimension is\n added at this point in the output tensor.\n\n For example,\n `foo[:4, tf.newaxis, :2]` would produce a shape `(4, 1, 2)` tensor.\n\n If the ith bit of `shrink_axis_mask` is set, it implies that the ith\n specification shrinks the dimensionality by 1, taking on the value at index\n `begin[i]`. `end[i]` and `strides[i]` are ignored in this case. For example in\n Python one might do `foo[:, 3, :]` which would result in `shrink_axis_mask`\n equal to 2.\n\n\n NOTE: `begin` and `end` are zero-indexed.\n `strides` entries must be non-zero.\n\n\n ```python\n t = tf.constant([[[1, 1, 1], [2, 2, 2]],\n [[3, 3, 3], [4, 4, 4]],\n [[5, 5, 5], [6, 6, 6]]])\n tf.strided_slice(t, [1, 0, 0], [2, 1, 3], [1, 1, 1]) # [[[3, 3, 3]]]\n tf.strided_slice(t, [1, 0, 0], [2, 2, 3], [1, 1, 1]) # [[[3, 3, 3],\n # [4, 4, 4]]]\n tf.strided_slice(t, [1, -1, 0], [2, -3, 3], [1, -1, 1]) # [[[4, 4, 4],\n # [3, 3, 3]]]\n ```\n\n Args:\n input_: A `Tensor`.\n begin: An `int32` or `int64` `Tensor`.\n end: An `int32` or `int64` `Tensor`.\n strides: An `int32` or `int64` `Tensor`.\n begin_mask: An `int32` mask.\n end_mask: An `int32` mask.\n ellipsis_mask: An `int32` mask.\n new_axis_mask: An `int32` mask.\n shrink_axis_mask: An `int32` mask.\n var: The variable corresponding to `input_` or None\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` the same type as `input`.\n ", "desc": "Extracts a strided slice of a tensor (generalized Python array indexing).", "type": "API"}, {"name": "tf.strings", "docs": "Operations for working with string Tensors.\n", "desc": "Operations for working with string Tensors.", "type": "API"}, {"name": "tf.strings.as_string", "docs": "Converts each entry in the given tensor to strings.\n\n Supports many numeric types and boolean.\n\n For Unicode, see the\n [https://www.tensorflow.org/tutorials/representation/unicode](Working with Unicode text)\n tutorial.\n\n Examples:\n\n >>> tf.strings.as_string([3, 2])\n \n >>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()\n array([b'3.14', b'2.72'], dtype=object)\n\n Args:\n input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`, `complex64`, `complex128`, `bool`, `variant`.\n precision: An optional `int`. Defaults to `-1`.\n The post-decimal precision to use for floating point numbers.\n Only used if precision > -1.\n scientific: An optional `bool`. Defaults to `False`.\n Use scientific notation for floating point numbers.\n shortest: An optional `bool`. Defaults to `False`.\n Use shortest representation (either scientific or standard) for\n floating point numbers.\n width: An optional `int`. Defaults to `-1`.\n Pad pre-decimal numbers to this width.\n Applies to both floating point and integer numbers.\n Only used if width > -1.\n fill: An optional `string`. Defaults to `\"\"`.\n The value to pad if width > -1. If empty, pads with spaces.\n Another typical value is '0'. String cannot be longer than 1 character.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts each entry in the given tensor to strings.", "type": "API"}, {"name": "tf.strings.bytes_split", "docs": "Split string elements of `input` into bytes.\n\n Examples:\n\n >>> tf.strings.bytes_split('hello').numpy()\n array([b'h', b'e', b'l', b'l', b'o'], dtype=object)\n >>> tf.strings.bytes_split(['hello', '123'])\n \n\n Note that this op splits strings into bytes, not unicode characters. To\n split strings into unicode characters, use `tf.strings.unicode_split`.\n\n See also: `tf.io.decode_raw`, `tf.strings.split`, `tf.strings.unicode_split`.\n\n Args:\n input: A string `Tensor` or `RaggedTensor`: the strings to split. Must\n have a statically known rank (`N`).\n name: A name for the operation (optional).\n\n Returns:\n A `RaggedTensor` of rank `N+1`: the bytes that make up the source strings.\n ", "desc": "Split string elements of `input` into bytes.", "type": "API"}, {"name": "tf.strings.format", "docs": "Formats a string template using a list of tensors.\n\n Formats a string template using a list of tensors, abbreviating tensors by\n only printing the first and last `summarize` elements of each dimension\n (recursively). If formatting only one tensor into a template, the tensor does\n not have to be wrapped in a list.\n\n Example:\n Formatting a single-tensor template:\n\n >>> tensor = tf.range(5)\n >>> tf.strings.format(\"tensor: {}, suffix\", tensor)\n \n\n Formatting a multi-tensor template:\n\n >>> tensor_a = tf.range(2)\n >>> tensor_b = tf.range(1, 4, 2)\n >>> tf.strings.format(\"a: {}, b: {}, suffix\", (tensor_a, tensor_b))\n \n\n\n Args:\n template: A string template to format tensor values into.\n inputs: A list of `Tensor` objects, or a single Tensor.\n The list of tensors to format into the template string. If a solitary\n tensor is passed in, the input tensor will automatically be wrapped as a\n list.\n placeholder: An optional `string`. Defaults to `{}`.\n At each placeholder occurring in the template, a subsequent tensor\n will be inserted.\n summarize: An optional `int`. Defaults to `3`.\n When formatting the tensors, show the first and last `summarize`\n entries of each tensor dimension (recursively). If set to -1, all\n elements of the tensor will be shown.\n name: A name for the operation (optional).\n\n Returns:\n A scalar `Tensor` of type `string`.\n\n Raises:\n ValueError: if the number of placeholders does not match the number of\n inputs.\n ", "desc": "Formats a string template using a list of tensors.", "type": "API"}, {"name": "tf.strings.join", "docs": "Perform element-wise concatenation of a list of string tensors.\n\n Given a list of string tensors of same shape, performs element-wise\n concatenation of the strings of the same index in all tensors.\n\n\n >>> tf.strings.join(['abc','def']).numpy()\n b'abcdef'\n >>> tf.strings.join([['abc','123'],\n ... ['def','456'],\n ... ['ghi','789']]).numpy()\n array([b'abcdefghi', b'123456789'], dtype=object)\n >>> tf.strings.join([['abc','123'],\n ... ['def','456']],\n ... separator=\" \").numpy()\n array([b'abc def', b'123 456'], dtype=object)\n\n The reduction version of this elementwise operation is\n `tf.strings.reduce_join`\n\n Args:\n inputs: A list of `tf.Tensor` objects of same size and `tf.string` dtype.\n separator: A string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Perform element-wise concatenation of a list of string tensors.", "type": "API"}, {"name": "tf.strings.length", "docs": "String lengths of `input`.\n\n Computes the length of each string given in the input tensor.\n\n >>> strings = tf.constant(['Hello','TensorFlow', '\\U0001F642'])\n >>> tf.strings.length(strings).numpy() # default counts bytes\n array([ 5, 10, 4], dtype=int32)\n >>> tf.strings.length(strings, unit=\"UTF8_CHAR\").numpy()\n array([ 5, 10, 1], dtype=int32)\n\n Args:\n input: A `Tensor` of type `string`.\n The strings for which to compute the length for each element.\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is counted to compute string length. One of: `\"BYTE\"` (for\n the number of bytes in each string) or `\"UTF8_CHAR\"` (for the number of UTF-8\n encoded Unicode code points in each string). Results are undefined\n if `unit=UTF8_CHAR` and the `input` strings do not contain structurally\n valid UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "String lengths of `input`.", "type": "API"}, {"name": "tf.strings.lower", "docs": "Converts all uppercase characters into their respective lowercase replacements.\n\n Example:\n\n >>> tf.strings.lower(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be lower-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all uppercase characters into their respective lowercase replacements.", "type": "API"}, {"name": "tf.strings.ngrams", "docs": "Create a tensor of n-grams based on `data`.\n\n Creates a tensor of n-grams based on `data`. The n-grams are created by\n joining windows of `width` adjacent strings from the inner axis of `data`\n using `separator`.\n\n The input data can be padded on both the start and end of the sequence, if\n desired, using the `pad_values` argument. If set, `pad_values` should contain\n either a tuple of strings or a single string; the 0th element of the tuple\n will be used to pad the left side of the sequence and the 1st element of the\n tuple will be used to pad the right side of the sequence. The `padding_width`\n arg controls how many padding values are added to each side; it defaults to\n `ngram_width-1`.\n\n If this op is configured to not have padding, or if it is configured to add\n padding with `padding_width` set to less than ngram_width-1, it is possible\n that a sequence, or a sequence plus padding, is smaller than the ngram\n width. In that case, no ngrams will be generated for that sequence. This can\n be prevented by setting `preserve_short_sequences`, which will cause the op\n to always generate at least one ngram per non-empty sequence.\n\n Examples:\n\n >>> tf.strings.ngrams([\"A\", \"B\", \"C\", \"D\"], 2).numpy()\n array([b'A B', b'B C', b'C D'], dtype=object)\n >>> tf.strings.ngrams([\"TF\", \"and\", \"keras\"], 1).numpy()\n array([b'TF', b'and', b'keras'], dtype=object)\n\n Args:\n data: A Tensor or RaggedTensor containing the source data for the ngrams.\n ngram_width: The width(s) of the ngrams to create. If this is a list or\n tuple, the op will return ngrams of all specified arities in list order.\n Values must be non-Tensor integers greater than 0.\n separator: The separator string used between ngram elements. Must be a\n string constant, not a Tensor.\n pad_values: A tuple of (left_pad_value, right_pad_value), a single string,\n or None. If None, no padding will be added; if a single string, then that\n string will be used for both left and right padding. Values must be Python\n strings.\n padding_width: If set, `padding_width` pad values will be added to both\n sides of each sequence. Defaults to `ngram_width`-1. Must be greater than\n 0. (Note that 1-grams are never padded, regardless of this value.)\n preserve_short_sequences: If true, then ensure that at least one ngram is\n generated for each input sequence. In particular, if an input sequence is\n shorter than `min(ngram_width) + 2*pad_width`, then generate a single\n ngram containing the entire sequence. If false, then no ngrams are\n generated for these short input sequences.\n name: The op name.\n\n Returns:\n A RaggedTensor of ngrams. If `data.shape=[D1...DN, S]`, then\n `output.shape=[D1...DN, NUM_NGRAMS]`, where\n `NUM_NGRAMS=S-ngram_width+1+2*padding_width`.\n\n Raises:\n TypeError: if `pad_values` is set to an invalid type.\n ValueError: if `pad_values`, `padding_width`, or `ngram_width` is set to an\n invalid value.\n ", "desc": "Create a tensor of n-grams based on `data`.", "type": "API"}, {"name": "tf.strings.reduce_join", "docs": "Joins all strings into a single string, or joins along an axis.\n\n This is the reduction operation for the elementwise `tf.strings.join` op.\n\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']]).numpy()\n b'abc123def456'\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']], axis=-1).numpy()\n array([b'abc123', b'def456'], dtype=object)\n >>> tf.strings.reduce_join([['abc','123'],\n ... ['def','456']],\n ... axis=-1,\n ... separator=\" \").numpy()\n array([b'abc 123', b'def 456'], dtype=object)\n\n Args:\n inputs: A `tf.string` tensor.\n axis: Which axis to join along. The default behavior is to join all\n elements, producing a scalar.\n keepdims: If true, retains reduced dimensions with length 1.\n separator: a string added between each string being joined.\n name: A name for the operation (optional).\n\n Returns:\n A `tf.string` tensor.\n ", "desc": "Joins all strings into a single string, or joins along an axis.", "type": "API"}, {"name": "tf.strings.regex_full_match", "docs": "Check if the input matches the regex pattern.\n\n The input is a string tensor of any shape. The pattern is a scalar\n string tensor which is applied to every element of the input tensor.\n The boolean values (True or False) of the output tensor indicate\n if the input matches the regex pattern provided.\n\n The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)\n\n Examples:\n\n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*lib$\")\n \n >>> tf.strings.regex_full_match([\"TF lib\", \"lib TF\"], \".*TF$\")\n \n\n Args:\n input: A `Tensor` of type `string`.\n A string tensor of the text to be processed.\n pattern: A `Tensor` of type `string`.\n A scalar string tensor containing the regular expression to match the input.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `bool`.\n ", "desc": "Check if the input matches the regex pattern.", "type": "API"}, {"name": "tf.strings.regex_replace", "docs": "Replace elements of `input` matching regex `pattern` with `rewrite`.\n\n >>> tf.strings.regex_replace(\"Text with tags.
contains html\",\n ... \"<[^>]+>\", \" \")\n \n\n Args:\n input: string `Tensor`, the source strings to process.\n pattern: string or scalar string `Tensor`, regular expression to use,\n see more details at https://github.com/google/re2/wiki/Syntax\n rewrite: string or scalar string `Tensor`, value to use in match\n replacement, supports backslash-escaped digits (\\1 to \\9) can be to insert\n text matching corresponding parenthesized group.\n replace_global: `bool`, if `True` replace all non-overlapping matches,\n else replace only the first match.\n name: A name for the operation (optional).\n\n Returns:\n string `Tensor` of the same shape as `input` with specified replacements.\n ", "desc": "Replace elements of `input` matching regex `pattern` with `rewrite`.", "type": "API"}, {"name": "tf.strings.split", "docs": "Split elements of `input` based on `sep` into a `RaggedTensor`.\n\n Let N be the size of `input` (typically N will be the batch size). Split each\n element of `input` based on `sep` and return a `RaggedTensor` containing the\n split tokens. Empty tokens are ignored.\n\n Example:\n\n >>> tf.strings.split('hello world').numpy()\n array([b'hello', b'world'], dtype=object)\n >>> tf.strings.split(['hello world', 'a b c'])\n \n\n If `sep` is given, consecutive delimiters are not grouped together and are\n deemed to delimit empty strings. For example, `input` of `\"1<>2<><>3\"` and\n `sep` of `\"<>\"` returns `[\"1\", \"2\", \"\", \"3\"]`. If `sep` is None or an empty\n string, consecutive whitespace are regarded as a single separator, and the\n result will contain no empty strings at the start or end if the string has\n leading or trailing whitespace.\n\n Note that the above mentioned behavior matches python's str.split.\n\n Args:\n input: A string `Tensor` of rank `N`, the strings to split. If\n `rank(input)` is not known statically, then it is assumed to be `1`.\n sep: `0-D` string `Tensor`, the delimiter string.\n maxsplit: An `int`. If `maxsplit > 0`, limit of the split of the result.\n name: A name for the operation (optional).\n\n Raises:\n ValueError: If sep is not a string.\n\n Returns:\n A `RaggedTensor` of rank `N+1`, the strings split according to the\n delimiter.\n ", "desc": "Split elements of `input` based on `sep` into a `RaggedTensor`.", "type": "API"}, {"name": "tf.strings.strip", "docs": "Strip leading and trailing whitespaces from the Tensor.\n\n Examples:\n\n >>> tf.strings.strip([\"\\nTensorFlow\", \" The python library \"]).numpy()\n array([b'TensorFlow', b'The python library'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`. A string `Tensor` of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Strip leading and trailing whitespaces from the Tensor.", "type": "API"}, {"name": "tf.strings.substr", "docs": "Return substrings from `Tensor` of strings.\n\n For each string in the input `Tensor`, creates a substring starting at index\n `pos` with a total length of `len`.\n\n If `len` defines a substring that would extend beyond the length of the input\n string, or if `len` is negative, then as many characters as possible are used.\n\n A negative `pos` indicates distance within the string backwards from the end.\n\n If `pos` specifies an index which is out of range for any of the input strings,\n then an `InvalidArgumentError` is thrown.\n\n `pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on\n Op creation.\n\n *NOTE*: `Substr` supports broadcasting up to two dimensions. More about\n broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n ---\n\n Examples\n\n Using scalar `pos` and `len`:\n\n ```python\n input = [b'Hello', b'World']\n position = 1\n length = 3\n\n output = [b'ell', b'orl']\n ```\n\n Using `pos` and `len` with same shape as `input`:\n\n ```python\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen']]\n position = [[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]\n length = [[2, 3, 4],\n [4, 3, 2],\n [5, 5, 5]]\n\n output = [[b'en', b'eve', b'lve'],\n [b'hirt', b'urt', b'te'],\n [b'ixtee', b'vente', b'hteen']]\n ```\n\n Broadcasting `pos` and `len` onto `input`:\n\n ```\n input = [[b'ten', b'eleven', b'twelve'],\n [b'thirteen', b'fourteen', b'fifteen'],\n [b'sixteen', b'seventeen', b'eighteen'],\n [b'nineteen', b'twenty', b'twentyone']]\n position = [1, 2, 3]\n length = [1, 2, 3]\n\n output = [[b'e', b'ev', b'lve'],\n [b'h', b'ur', b'tee'],\n [b'i', b've', b'hte'],\n [b'i', b'en', b'nty']]\n ```\n\n Broadcasting `input` onto `pos` and `len`:\n\n ```\n input = b'thirteen'\n position = [1, 5, 7]\n length = [3, 2, 1]\n\n output = [b'hir', b'ee', b'n']\n ```\n\n Raises:\n\n * `ValueError`: If the first argument cannot be converted to a\n Tensor of `dtype string`.\n * `InvalidArgumentError`: If indices are out of range.\n * `ValueError`: If `pos` and `len` are not the same shape.\n\n Args:\n input: A `Tensor` of type `string`. Tensor of strings\n pos: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Scalar defining the position of first character in each substring\n len: A `Tensor`. Must have the same type as `pos`.\n Scalar defining the number of characters to include in each substring\n unit: An optional `string` from: `\"BYTE\", \"UTF8_CHAR\"`. Defaults to `\"BYTE\"`.\n The unit that is used to create the substring. One of: `\"BYTE\"` (for\n defining position and length by bytes) or `\"UTF8_CHAR\"` (for the UTF-8\n encoded Unicode code points). The default is `\"BYTE\"`. Results are undefined if\n `unit=UTF8_CHAR` and the `input` strings do not contain structurally valid\n UTF-8.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Return substrings from `Tensor` of strings.", "type": "API"}, {"name": "tf.strings.to_hash_bucket", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process.\n\n Note that the hash function may change from time to time.\n This functionality will be deprecated and it's recommended to use\n `tf.strings.to_hash_bucket_fast()` or `tf.strings.to_hash_bucket_strong()`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket([\"Hello\", \"TensorFlow\", \"2.x\"], 3)\n \n\n Args:\n input: A `Tensor` of type `string`.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.strings.to_hash_bucket_fast", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process and will never change. However, it is not suitable for cryptography.\n This function may be used when CPU time is scarce and inputs are trusted or\n unimportant. There is a risk of adversaries constructing inputs that all hash\n to the same bucket. To prevent this problem, use a strong hash function with\n `tf.string_to_hash_bucket_strong`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_fast([\"Hello\", \"TensorFlow\", \"2.x\"], 3).numpy()\n array([0, 2, 2])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.strings.to_hash_bucket_strong", "docs": "Converts each string in the input Tensor to its hash mod by a number of buckets.\n\n The hash function is deterministic on the content of the string within the\n process. The hash function is a keyed hash function, where attribute `key`\n defines the key of the hash function. `key` is an array of 2 elements.\n\n A strong hash is important when inputs may be malicious, e.g. URLs with\n additional components. Adversaries could try to make their inputs hash to the\n same bucket for a denial-of-service attack or to skew the results. A strong\n hash can be used to make it difficult to find inputs with a skewed hash value\n distribution over buckets. This requires that the hash function is\n seeded by a high-entropy (random) \"key\" unknown to the adversary.\n\n The additional robustness comes at a cost of roughly 4x higher compute\n time than `tf.string_to_hash_bucket_fast`.\n\n Examples:\n\n >>> tf.strings.to_hash_bucket_strong([\"Hello\", \"TF\"], 3, [1, 2]).numpy()\n array([2, 0])\n\n Args:\n input: A `Tensor` of type `string`. The strings to assign a hash bucket.\n num_buckets: An `int` that is `>= 1`. The number of buckets.\n key: A list of `ints`.\n The key used to seed the hash function, passed as a list of two uint64\n elements.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int64`.\n ", "desc": "Converts each string in the input Tensor to its hash mod by a number of buckets.", "type": "API"}, {"name": "tf.strings.to_number", "docs": "Converts each string in the input Tensor to the specified numeric type.\n\n (Note that int32 overflow results in an error while float overflow\n results in a rounded value.)\n\n Examples:\n\n >>> tf.strings.to_number(\"1.55\")\n \n >>> tf.strings.to_number(\"3\", tf.int32)\n \n\n Args:\n input: A `Tensor` of type `string`.\n out_type: An optional `tf.DType` from: `tf.float32, tf.float64, tf.int32,\n tf.int64`. Defaults to `tf.float32`.\n The numeric type to interpret each string in `string_tensor` as.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `out_type`.\n ", "desc": "Converts each string in the input Tensor to the specified numeric type.", "type": "API"}, {"name": "tf.strings.unicode_decode", "docs": "Decodes each string in `input` into a sequence of Unicode code points.\n\n `result[i1...iN, j]` is the Unicode codepoint for the `j`th character in\n `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`; and in place of C0 control\n characters in `input` when `replace_control_characters=True`.\n replace_control_characters: Whether to replace the C0 control characters\n `(U+0000 - U+001F)` with the `replacement_char`.\n name: A name for the operation (optional).\n\n Returns:\n A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`.\n The returned tensor is a `tf.Tensor` if `input` is a scalar, or a\n `tf.RaggedTensor` otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> tf.strings.unicode_decode(input, 'UTF-8').to_list()\n [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]]\n ", "desc": "Decodes each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.strings.unicode_decode_with_offsets", "docs": "Decodes each string into a sequence of code points with start offsets.\n\n This op is similar to `tf.strings.decode(...)`, but it also returns the\n start offset for each character in its respective string. This information\n can be used to align the characters with the original byte sequence.\n\n Returns a tuple `(codepoints, start_offsets)` where:\n\n * `codepoints[i1...iN, j]` is the Unicode codepoint for the `j`th character\n in `input[i1...iN]`, when decoded using `input_encoding`.\n * `start_offsets[i1...iN, j]` is the start byte offset for the `j`th\n character in `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`; and in place of C0 control\n characters in `input` when `replace_control_characters=True`.\n replace_control_characters: Whether to replace the C0 control characters\n `(U+0000 - U+001F)` with the `replacement_char`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`.\n\n * `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`.\n * `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`.\n\n The returned tensors are `tf.Tensor`s if `input` is a scalar, or\n `tf.RaggedTensor`s otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> result = tf.strings.unicode_decode_with_offsets(input, 'UTF-8')\n >>> result[0].to_list() # codepoints\n [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]]\n >>> result[1].to_list() # offsets\n [[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]]\n\n ", "desc": "Decodes each string into a sequence of code points with start offsets.", "type": "API"}, {"name": "tf.strings.unicode_encode", "docs": "Encodes each sequence of Unicode code points in `input` into a string.\n\n `result[i1...iN]` is the string formed by concatenating the Unicode\n codepoints `input[1...iN, :]`, encoded using `output_encoding`.\n\n Args:\n input: An `N+1` dimensional potentially ragged integer tensor with shape\n `[D1...DN, num_chars]`.\n output_encoding: Unicode encoding that should be used to encode each\n codepoint sequence. Can be `\"UTF-8\"`, `\"UTF-16-BE\"`, or `\"UTF-32-BE\"`.\n errors: Specifies the response when an invalid codepoint is encountered\n (optional). One of:\n * `'replace'`: Replace invalid codepoint with the\n `replacement_char`. (default)\n * `'ignore'`: Skip invalid codepoints.\n * `'strict'`: Raise an exception for any invalid codepoint.\n replacement_char: The replacement character codepoint to be used in place of\n any invalid input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character\n which is 0xFFFD (U+65533).\n name: A name for the operation (optional).\n\n Returns:\n A `N` dimensional `string` tensor with shape `[D1...DN]`.\n\n #### Example:\n\n >>> input = tf.ragged.constant(\n ... [[71, 246, 246, 100, 110, 105, 103, 104, 116], [128522]])\n >>> print(unicode_encode(input, 'UTF-8'))\n tf.Tensor([b'G\\xc3\\xb6\\xc3\\xb6dnight' b'\\xf0\\x9f\\x98\\x8a'],\n shape=(2,), dtype=string)\n ", "desc": "Encodes each sequence of Unicode code points in `input` into a string.", "type": "API"}, {"name": "tf.strings.unicode_script", "docs": "Determine the script codes of a given tensor of Unicode integer code points.\n\n This operation converts Unicode code points to script codes corresponding to\n each code point. Script codes correspond to International Components for\n Unicode (ICU) UScriptCode values.\n\n See\n [ICU project docs](http://icu-project.org/apiref/icu4c/uscript_8h.html)\n for more details on script codes.\n\n For an example, see the unicode strings guide on [unicode scripts]\n (https://www.tensorflow.org/tutorials/load_data/unicode#representing_unicode).\n\n Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will\n match input shape.\n\n Examples:\n\n >>> tf.strings.unicode_script([1, 31, 38])\n \n\n Args:\n input: A `Tensor` of type `int32`. A Tensor of int32 Unicode code points.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `int32`.\n ", "desc": "Determine the script codes of a given tensor of Unicode integer code points.", "type": "API"}, {"name": "tf.strings.unicode_split", "docs": "Splits each string in `input` into a sequence of Unicode code points.\n\n `result[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its\n `j`th character, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`.\n name: A name for the operation (optional).\n\n Returns:\n A `N+1` dimensional `int32` tensor with shape `[D1...DN, (num_chars)]`.\n The returned tensor is a `tf.Tensor` if `input` is a scalar, or a\n `tf.RaggedTensor` otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> tf.strings.unicode_split(input, 'UTF-8').to_list()\n [[b'G', b'\\xc3\\xb6', b'\\xc3\\xb6', b'd', b'n', b'i', b'g', b'h', b't'],\n [b'\\xf0\\x9f\\x98\\x8a']]\n ", "desc": "Splits each string in `input` into a sequence of Unicode code points.", "type": "API"}, {"name": "tf.strings.unicode_split_with_offsets", "docs": "Splits each string into a sequence of code points with start offsets.\n\n This op is similar to `tf.strings.decode(...)`, but it also returns the\n start offset for each character in its respective string. This information\n can be used to align the characters with the original byte sequence.\n\n Returns a tuple `(chars, start_offsets)` where:\n\n * `chars[i1...iN, j]` is the substring of `input[i1...iN]` that encodes its\n `j`th character, when decoded using `input_encoding`.\n * `start_offsets[i1...iN, j]` is the start byte offset for the `j`th\n character in `input[i1...iN]`, when decoded using `input_encoding`.\n\n Args:\n input: An `N` dimensional potentially ragged `string` tensor with shape\n `[D1...DN]`. `N` must be statically known.\n input_encoding: String name for the unicode encoding that should be used to\n decode each string.\n errors: Specifies the response when an input string can't be converted\n using the indicated encoding. One of:\n * `'strict'`: Raise an exception for any illegal substrings.\n * `'replace'`: Replace illegal substrings with `replacement_char`.\n * `'ignore'`: Skip illegal substrings.\n replacement_char: The replacement codepoint to be used in place of invalid\n substrings in `input` when `errors='replace'`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `N+1` dimensional tensors `(codepoints, start_offsets)`.\n\n * `codepoints` is an `int32` tensor with shape `[D1...DN, (num_chars)]`.\n * `offsets` is an `int64` tensor with shape `[D1...DN, (num_chars)]`.\n\n The returned tensors are `tf.Tensor`s if `input` is a scalar, or\n `tf.RaggedTensor`s otherwise.\n\n #### Example:\n\n >>> input = [s.encode('utf8') for s in (u'G\\xf6\\xf6dnight', u'\\U0001f60a')]\n >>> result = tf.strings.unicode_split_with_offsets(input, 'UTF-8')\n >>> result[0].to_list() # character substrings\n [[b'G', b'\\xc3\\xb6', b'\\xc3\\xb6', b'd', b'n', b'i', b'g', b'h', b't'],\n [b'\\xf0\\x9f\\x98\\x8a']]\n >>> result[1].to_list() # offsets\n [[0, 1, 3, 5, 6, 7, 8, 9, 10], [0]]\n\n ", "desc": "Splits each string into a sequence of code points with start offsets.", "type": "API"}, {"name": "tf.strings.unicode_transcode", "docs": "Transcode the input text from a source encoding to a destination encoding.\n\n The input is a string tensor of any shape. The output is a string tensor of\n the same shape containing the transcoded strings. Output strings are always\n valid unicode. If the input contains invalid encoding positions, the\n `errors` attribute sets the policy for how to deal with them. If the default\n error-handling policy is used, invalid formatting will be substituted in the\n output by the `replacement_char`. If the errors policy is to `ignore`, any\n invalid encoding positions in the input are skipped and not included in the\n output. If it set to `strict` then any invalid formatting will result in an\n InvalidArgument error.\n\n This operation can be used with `output_encoding = input_encoding` to enforce\n correct formatting for inputs even if they are already in the desired encoding.\n\n If the input is prefixed by a Byte Order Mark needed to determine encoding\n (e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that\n BOM will be consumed and not emitted into the output. If the input encoding\n is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is\n interpreted as a non-breaking-space and is preserved in the output (including\n always for UTF-8).\n\n The end result is that if the input is marked as an explicit endianness the\n transcoding is faithful to all codepoints in the source. If it is not marked\n with an explicit endianness, the BOM is not considered part of the string itself\n but as metadata, and so is not preserved in the output.\n\n Examples:\n\n >>> tf.strings.unicode_transcode([\"Hello\", \"TensorFlow\", \"2.x\"], \"UTF-8\", \"UTF-16-BE\")\n \n >>> tf.strings.unicode_transcode([\"A\", \"B\", \"C\"], \"US ASCII\", \"UTF-8\").numpy()\n array([b'A', b'B', b'C'], dtype=object)\n\n Args:\n input: A `Tensor` of type `string`.\n The text to be processed. Can have any shape.\n input_encoding: A `string`.\n Text encoding of the input strings. This is any of the encodings supported\n by ICU ucnv algorithmic converters. Examples: `\"UTF-16\", \"US ASCII\", \"UTF-8\"`.\n output_encoding: A `string` from: `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`.\n The unicode encoding to use in the output. Must be one of\n `\"UTF-8\", \"UTF-16-BE\", \"UTF-32-BE\"`. Multi-byte encodings will be big-endian.\n errors: An optional `string` from: `\"strict\", \"replace\", \"ignore\"`. Defaults to `\"replace\"`.\n Error handling policy when there is invalid formatting found in the input.\n The value of 'strict' will cause the operation to produce a InvalidArgument\n error on any invalid input formatting. A value of 'replace' (the default) will\n cause the operation to replace any invalid formatting in the input with the\n `replacement_char` codepoint. A value of 'ignore' will cause the operation to\n skip any invalid formatting in the input and produce no corresponding output\n character.\n replacement_char: An optional `int`. Defaults to `65533`.\n The replacement character codepoint to be used in place of any invalid\n formatting in the input when `errors='replace'`. Any valid unicode codepoint may\n be used. The default value is the default unicode replacement character is\n 0xFFFD or U+65533.)\n\n Note that for UTF-8, passing a replacement character expressible in 1 byte, such\n as ' ', will preserve string alignment to the source since invalid bytes will be\n replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte\n replacement character will preserve byte alignment to the source.\n replace_control_characters: An optional `bool`. Defaults to `False`.\n Whether to replace the C0 control characters (00-1F) with the\n `replacement_char`. Default is false.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Transcode the input text from a source encoding to a destination encoding.", "type": "API"}, {"name": "tf.strings.unsorted_segment_join", "docs": "Joins the elements of `inputs` based on `segment_ids`.\n\n Computes the string join along segments of a tensor.\n Given `segment_ids` with rank `N` and `data` with rank `N+M`:\n\n `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM])`\n\n where the join is over all [j1...jN] such that segment_ids[j1...jN] = i.\n Strings are joined in row-major order.\n\n For example:\n\n ```python\n inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']]\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[1, 0, 1],\n num_segments=2,\n separator=':'))\n # output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']]\n\n\n inputs = ['this', 'is', 'a', 'test']\n output_array = string_ops.unsorted_segment_join(inputs=inputs,\n segment_ids=[0, 0, 0, 0],\n num_segments=1,\n separator=':'))\n # output_array ==> ['this:is:a:test']\n ```\n\n Args:\n inputs: A `Tensor` of type `string`. The input to be joined.\n segment_ids: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A tensor whose shape is a prefix of data.shape. Negative segment ids are not\n supported.\n num_segments: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n A scalar.\n separator: An optional `string`. Defaults to `\"\"`.\n The separator to use when joining.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Joins the elements of `inputs` based on `segment_ids`.", "type": "API"}, {"name": "tf.strings.upper", "docs": "Converts all lowercase characters into their respective uppercase replacements.\n\n Example:\n\n >>> tf.strings.upper(\"CamelCase string and ALL CAPS\")\n \n\n Args:\n input: A `Tensor` of type `string`. The input to be upper-cased.\n encoding: An optional `string`. Defaults to `\"\"`.\n Character encoding of `input`. Allowed values are '' and 'utf-8'.\n Value '' is interpreted as ASCII.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `string`.\n ", "desc": "Converts all lowercase characters into their respective uppercase replacements.", "type": "API"}, {"name": "tf.subtract", "docs": "Returns x - y element-wise.\n\n *NOTE*: `tf.subtract` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Both input and output have a range `(-inf, inf)`.\n\n Example usages below.\n\n Subtract operation between an array and a scalar:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = 1\n >>> tf.subtract(x, y)\n \n >>> tf.subtract(y, x)\n \n\n Note that binary `-` operator can be used instead:\n\n >>> x = tf.convert_to_tensor([1, 2, 3, 4, 5])\n >>> y = tf.convert_to_tensor(1)\n >>> x - y\n \n\n Subtract operation between an array and a tensor of same shape:\n\n >>> x = [1, 2, 3, 4, 5]\n >>> y = tf.constant([5, 4, 3, 2, 1])\n >>> tf.subtract(y, x)\n \n\n **Warning**: If one of the inputs (`x` or `y`) is a tensor and the other is a\n non-tensor, the non-tensor input will adopt (or get casted to) the data type\n of the tensor input. This can potentially cause unwanted overflow or underflow\n conversion.\n\n For example,\n\n >>> x = tf.constant([1, 2], dtype=tf.int8)\n >>> y = [2**8 + 1, 2**8 + 2]\n >>> tf.subtract(x, y)\n \n\n When subtracting two input values of different shapes, `tf.subtract` follows the\n [general broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules)\n . The two input array shapes are compared element-wise. Starting with the\n trailing dimensions, the two dimensions either have to be equal or one of them\n needs to be `1`.\n\n For example,\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(2, 1, 3)\n >>> tf.subtract(x, y)\n \n\n Example with inputs of different dimensions:\n\n >>> x = np.ones(6).reshape(2, 3, 1)\n >>> y = np.ones(6).reshape(1, 6)\n >>> tf.subtract(x, y)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `uint32`, `uint64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x - y element-wise.", "type": "API"}, {"name": "tf.summary", "docs": "Operations for writing summary data, for use in analysis and visualization.\n\nThe `tf.summary` module provides APIs for writing summary data. This data can be\nvisualized in TensorBoard, the visualization toolkit that comes with TensorFlow.\nSee the [TensorBoard website](https://www.tensorflow.org/tensorboard) for more\ndetailed tutorials about how to use these APIs, or some quick examples below.\n\nExample usage with eager execution, the default in TF 2.0:\n\n```python\nwriter = tf.summary.create_file_writer(\"/tmp/mylogs\")\nwith writer.as_default():\n for step in range(100):\n # other model code would go here\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n writer.flush()\n```\n\nExample usage with `tf.function` graph execution:\n\n```python\nwriter = tf.summary.create_file_writer(\"/tmp/mylogs\")\n\n@tf.function\ndef my_func(step):\n # other model code would go here\n with writer.as_default():\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n\nfor step in range(100):\n my_func(step)\n writer.flush()\n```\n\nExample usage with legacy TF 1.x graph execution:\n\n```python\nwith tf.compat.v1.Graph().as_default():\n step = tf.Variable(0, dtype=tf.int64)\n step_update = step.assign_add(1)\n writer = tf.summary.create_file_writer(\"/tmp/mylogs\")\n with writer.as_default():\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()\n writer_flush = writer.flush()\n\n sess = tf.compat.v1.Session()\n sess.run([writer.init(), step.initializer])\n for i in range(100):\n sess.run(all_summary_ops)\n sess.run(step_update)\n sess.run(writer_flush)\n```\n", "desc": "Operations for writing summary data, for use in analysis and visualization.", "type": "API"}, {"name": "tf.summary.audio", "docs": "Write an audio summary.\n\n Arguments:\n name: A name for this summary. The summary tag used for TensorBoard will\n be this name prefixed by any active name scopes.\n data: A `Tensor` representing audio data with shape `[k, t, c]`,\n where `k` is the number of audio clips, `t` is the number of\n frames, and `c` is the number of channels. Elements should be\n floating-point values in `[-1.0, 1.0]`. Any of the dimensions may\n be statically unknown (i.e., `None`).\n sample_rate: An `int` or rank-0 `int32` `Tensor` that represents the\n sample rate, in Hz. Must be positive.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n max_outputs: Optional `int` or rank-0 integer `Tensor`. At most this\n many audio clips will be emitted at each step. When more than\n `max_outputs` many clips are provided, the first `max_outputs`\n many clips will be used and the rest silently discarded.\n encoding: Optional constant `str` for the desired encoding. Only \"wav\"\n is currently supported, but this is not guaranteed to remain the\n default, so if you want \"wav\" in particular, set this explicitly.\n description: Optional long-form description for this summary, as a\n constant `str`. Markdown is supported. Defaults to empty.\n\n Returns:\n True on success, or false if no summary was emitted because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Write an audio summary.", "type": "API"}, {"name": "tf.summary.create_file_writer", "docs": "Creates a summary file writer for the given log directory.\n\n Args:\n logdir: a string specifying the directory in which to write an event file.\n max_queue: the largest number of summaries to keep in a queue; will\n flush once the queue gets bigger than this. Defaults to 10.\n flush_millis: the largest interval between flushes. Defaults to 120,000.\n filename_suffix: optional suffix for the event file name. Defaults to `.v2`.\n name: a name for the op that creates the writer.\n experimental_trackable: a boolean that controls whether the returned writer\n will be a `TrackableResource`, which makes it compatible with SavedModel\n when used as a `tf.Module` property.\n\n Returns:\n A SummaryWriter object.\n ", "desc": "Creates a summary file writer for the given log directory.", "type": "API"}, {"name": "tf.summary.create_noop_writer", "docs": "Returns a summary writer that does nothing.\n\n This is useful as a placeholder in code that expects a context manager.\n ", "desc": "Returns a summary writer that does nothing.", "type": "API"}, {"name": "tf.summary.experimental", "docs": "Public API for tf.summary.experimental namespace.\n", "desc": "Public API for tf.summary.experimental namespace.", "type": "API"}, {"name": "tf.summary.experimental.get_step", "docs": "Returns the default summary step for the current thread.\n\n Returns:\n The step set by `tf.summary.experimental.set_step()` if one has been set,\n otherwise None.\n ", "desc": "Returns the default summary step for the current thread.", "type": "API"}, {"name": "tf.summary.experimental.set_step", "docs": "Sets the default summary step for the current thread.\n\n For convenience, this function sets a default value for the `step` parameter\n used in summary-writing functions elsewhere in the API so that it need not\n be explicitly passed in every such invocation. The value can be a constant\n or a variable, and can be retrieved via `tf.summary.experimental.get_step()`.\n\n Note: when using this with @tf.functions, the step value will be captured at\n the time the function is traced, so changes to the step outside the function\n will not be reflected inside the function unless using a `tf.Variable` step.\n\n Args:\n step: An `int64`-castable default step value, or None to unset.\n ", "desc": "Sets the default summary step for the current thread.", "type": "API"}, {"name": "tf.summary.experimental.summary_scope", "docs": "Experimental context manager for use when defining a custom summary op.\n\n This behaves similarly to `tf.name_scope`, except that it returns a generated\n summary tag in addition to the scope name. The tag is structurally similar to\n the scope name - derived from the user-provided name, prefixed with enclosing\n name scopes if any - but we relax the constraint that it be uniquified, as\n well as the character set limitation (so the user-provided name can contain\n characters not legal for scope names; in the scope name these are removed).\n\n This makes the summary tag more predictable and consistent for the user.\n\n For example, to define a new summary op called `my_op`:\n\n ```python\n def my_op(name, my_value, step):\n with tf.summary.summary_scope(name, \"MyOp\", [my_value]) as (tag, scope):\n my_value = tf.convert_to_tensor(my_value)\n return tf.summary.write(tag, my_value, step=step)\n ```\n\n Args:\n name: string name for the summary.\n default_name: Optional; if provided, used as default name of the summary.\n values: Optional; passed as `values` parameter to name_scope.\n\n Yields:\n A tuple `(tag, scope)` as described above.\n ", "desc": "Experimental context manager for use when defining a custom summary op.", "type": "API"}, {"name": "tf.summary.experimental.write_raw_pb", "docs": "Writes a summary using raw `tf.compat.v1.Summary` protocol buffers.\n\n Experimental: this exists to support the usage of V1-style manual summary\n writing (via the construction of a `tf.compat.v1.Summary` protocol buffer)\n with the V2 summary writing API.\n\n Args:\n tensor: the string Tensor holding one or more serialized `Summary` protobufs\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n name: Optional string name for this op.\n\n Returns:\n True on success, or false if no summary was written because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Writes a summary using raw `tf.compat.v1.Summary` protocol buffers.", "type": "API"}, {"name": "tf.summary.flush", "docs": "Forces summary writer to send any buffered data to storage.\n\n This operation blocks until that finishes.\n\n Args:\n writer: The `tf.summary.SummaryWriter` to flush. If None, the current\n default writer will be used instead; if there is no current writer, this\n returns `tf.no_op`.\n name: Ignored legacy argument for a name for the operation.\n\n Returns:\n The created `tf.Operation`.\n ", "desc": "Forces summary writer to send any buffered data to storage.", "type": "API"}, {"name": "tf.summary.graph", "docs": "Writes a TensorFlow graph summary.\n\n Write an instance of `tf.Graph` or `tf.compat.v1.GraphDef` as summary only\n in an eager mode. Please prefer to use the trace APIs (`tf.summary.trace_on`,\n `tf.summary.trace_off`, and `tf.summary.trace_export`) when using\n `tf.function` which can automatically collect and record graphs from\n executions.\n\n Usage Example:\n ```py\n writer = tf.summary.create_file_writer(\"/tmp/mylogs\")\n\n @tf.function\n def f():\n x = constant_op.constant(2)\n y = constant_op.constant(3)\n return x**y\n\n with writer.as_default():\n tf.summary.graph(f.get_concrete_function().graph)\n\n # Another example: in a very rare use case, when you are dealing with a TF v1\n # graph.\n graph = tf.Graph()\n with graph.as_default():\n c = tf.constant(30.0)\n with writer.as_default():\n tf.summary.graph(graph)\n ```\n\n Args:\n graph_data: The TensorFlow graph to write, as a `tf.Graph` or a\n `tf.compat.v1.GraphDef`.\n\n Returns:\n True on success, or False if no summary was written because no default\n summary writer was available.\n\n Raises:\n ValueError: `graph` summary API is invoked in a graph mode.\n ", "desc": "Writes a TensorFlow graph summary.", "type": "API"}, {"name": "tf.summary.histogram", "docs": "Write a histogram summary.\n\n See also `tf.summary.scalar`, `tf.summary.SummaryWriter`.\n\n Writes a histogram to the current default summary writer, for later analysis\n in TensorBoard's 'Histograms' and 'Distributions' dashboards (data written\n using this API will appear in both places). Like `tf.summary.scalar` points,\n each histogram is associated with a `step` and a `name`. All the histograms\n with the same `name` constitute a time series of histograms.\n\n The histogram is calculated over all the elements of the given `Tensor`\n without regard to its shape or rank.\n\n This example writes 2 histograms:\n\n ```python\n w = tf.summary.create_file_writer('test/logs')\n with w.as_default():\n tf.summary.histogram(\"activations\", tf.random.uniform([100, 50]), step=0)\n tf.summary.histogram(\"initial_weights\", tf.random.normal([1000]), step=0)\n ```\n\n A common use case is to examine the changing activation patterns (or lack\n thereof) at specific layers in a neural network, over time.\n\n ```python\n w = tf.summary.create_file_writer('test/logs')\n with w.as_default():\n for step in range(100):\n # Generate fake \"activations\".\n activations = [\n tf.random.normal([1000], mean=step, stddev=1),\n tf.random.normal([1000], mean=step, stddev=10),\n tf.random.normal([1000], mean=step, stddev=100),\n ]\n\n tf.summary.histogram(\"layer1/activate\", activations[0], step=step)\n tf.summary.histogram(\"layer2/activate\", activations[1], step=step)\n tf.summary.histogram(\"layer3/activate\", activations[2], step=step)\n ```\n\n Arguments:\n name: A name for this summary. The summary tag used for TensorBoard will\n be this name prefixed by any active name scopes.\n data: A `Tensor` of any shape. The histogram is computed over its elements,\n which must be castable to `float64`.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n buckets: Optional positive `int`. The output will have this\n many buckets, except in two edge cases. If there is no data, then\n there are no buckets. If there is data but all points have the\n same value, then all buckets' left and right endpoints are the same\n and only the last bucket has nonzero count.\n description: Optional long-form description for this summary, as a\n constant `str`. Markdown is supported. Defaults to empty.\n\n Returns:\n True on success, or false if no summary was emitted because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Write a histogram summary.", "type": "API"}, {"name": "tf.summary.image", "docs": "Write an image summary.\n\n See also `tf.summary.scalar`, `tf.summary.SummaryWriter`.\n\n Writes a collection of images to the current default summary writer. Data\n appears in TensorBoard's 'Images' dashboard. Like `tf.summary.scalar` points,\n each collection of images is associated with a `step` and a `name`. All the\n image collections with the same `name` constitute a time series of image\n collections.\n\n This example writes 2 random grayscale images:\n\n ```python\n w = tf.summary.create_file_writer('test/logs')\n with w.as_default():\n image1 = tf.random.uniform(shape=[8, 8, 1])\n image2 = tf.random.uniform(shape=[8, 8, 1])\n tf.summary.image(\"grayscale_noise\", [image1, image2], step=0)\n ```\n\n To avoid clipping, data should be converted to one of the following:\n\n - floating point values in the range [0,1], or\n - uint8 values in the range [0,255]\n\n ```python\n # Convert the original dtype=int32 `Tensor` into `dtype=float64`.\n rgb_image_float = tf.constant([\n [[1000, 0, 0], [0, 500, 1000]],\n ]) / 1000\n tf.summary.image(\"picture\", [rgb_image_float], step=0)\n\n # Convert original dtype=uint8 `Tensor` into proper range.\n rgb_image_uint8 = tf.constant([\n [[1, 1, 0], [0, 0, 1]],\n ], dtype=tf.uint8) * 255\n tf.summary.image(\"picture\", [rgb_image_uint8], step=1)\n ```\n\n Arguments:\n name: A name for this summary. The summary tag used for TensorBoard will\n be this name prefixed by any active name scopes.\n data: A `Tensor` representing pixel data with shape `[k, h, w, c]`,\n where `k` is the number of images, `h` and `w` are the height and\n width of the images, and `c` is the number of channels, which\n should be 1, 2, 3, or 4 (grayscale, grayscale with alpha, RGB, RGBA).\n Any of the dimensions may be statically unknown (i.e., `None`).\n Floating point data will be clipped to the range [0,1]. Other data types\n will be clipped into an allowed range for safe casting to uint8, using\n `tf.image.convert_image_dtype`.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n max_outputs: Optional `int` or rank-0 integer `Tensor`. At most this\n many images will be emitted at each step. When more than\n `max_outputs` many images are provided, the first `max_outputs` many\n images will be used and the rest silently discarded.\n description: Optional long-form description for this summary, as a\n constant `str`. Markdown is supported. Defaults to empty.\n\n Returns:\n True on success, or false if no summary was emitted because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Write an image summary.", "type": "API"}, {"name": "tf.summary.record_if", "docs": "Sets summary recording on or off per the provided boolean value.\n\n The provided value can be a python boolean, a scalar boolean Tensor, or\n or a callable providing such a value; if a callable is passed it will be\n invoked on-demand to determine whether summary writing will occur. Note that\n when calling record_if() in an eager mode context, if you intend to provide a\n varying condition like `step % 100 == 0`, you must wrap this in a\n callable to avoid immediate eager evaluation of the condition. In particular,\n using a callable is the only way to have your condition evaluated as part of\n the traced body of an @tf.function that is invoked from within the\n `record_if()` context.\n\n Args:\n condition: can be True, False, a bool Tensor, or a callable providing such.\n\n Yields:\n Returns a context manager that sets this value on enter and restores the\n previous value on exit.\n ", "desc": "Sets summary recording on or off per the provided boolean value.", "type": "API"}, {"name": "tf.summary.scalar", "docs": "Write a scalar summary.\n\n See also `tf.summary.image`, `tf.summary.histogram`, `tf.summary.SummaryWriter`.\n\n Writes simple numeric values for later analysis in TensorBoard. Writes go to\n the current default summary writer. Each summary point is associated with an\n integral `step` value. This enables the incremental logging of time series\n data. A common usage of this API is to log loss during training to produce\n a loss curve.\n\n For example:\n\n ```python\n test_summary_writer = tf.summary.create_file_writer('test/logdir')\n with test_summary_writer.as_default():\n tf.summary.scalar('loss', 0.345, step=1)\n tf.summary.scalar('loss', 0.234, step=2)\n tf.summary.scalar('loss', 0.123, step=3)\n ```\n\n Multiple independent time series may be logged by giving each series a unique\n `name` value.\n\n See [Get started with TensorBoard](https://www.tensorflow.org/tensorboard/get_started)\n for more examples of effective usage of `tf.summary.scalar`.\n\n In general, this API expects that data points are logged iwth a monotonically\n increasing step value. Duplicate points for a single step or points logged out\n of order by step are not guaranteed to display as desired in TensorBoard.\n\n Arguments:\n name: A name for this summary. The summary tag used for TensorBoard will\n be this name prefixed by any active name scopes.\n data: A real numeric scalar value, convertible to a `float32` Tensor.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n description: Optional long-form description for this summary, as a\n constant `str`. Markdown is supported. Defaults to empty.\n\n Returns:\n True on success, or false if no summary was written because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Write a scalar summary.", "type": "API"}, {"name": "tf.summary.should_record_summaries", "docs": "Returns boolean Tensor which is True if summaries will be recorded.\n\n If no default summary writer is currently registered, this always returns\n False. Otherwise, this reflects the recording condition has been set via\n `tf.summary.record_if()` (except that it may return False for some replicas\n when using `tf.distribute.Strategy`). If no recording condition is active,\n it defaults to True.\n ", "desc": "Returns boolean Tensor which is True if summaries will be recorded.", "type": "API"}, {"name": "tf.summary.SummaryWriter", "docs": "Interface representing a stateful summary writer object.", "desc": "Interface representing a stateful summary writer object.", "type": "API"}, {"name": "tf.summary.text", "docs": "Write a text summary.\n\n See also `tf.summary.scalar`, `tf.summary.SummaryWriter`, `tf.summary.image`.\n\n Writes text Tensor values for later visualization and analysis in TensorBoard.\n Writes go to the current default summary writer. Like `tf.summary.scalar`\n points, text points are each associated with a `step` and a `name`.\n All the points with the same `name` constitute a time series of text values.\n\n For Example:\n ```python\n test_summary_writer = tf.summary.create_file_writer('test/logdir')\n with test_summary_writer.as_default():\n tf.summary.text('first_text', 'hello world!', step=0)\n tf.summary.text('first_text', 'nice to meet you!', step=1)\n ```\n\n The text summary can also contain Markdown, and TensorBoard will render the text\n as such.\n\n ```python\n with test_summary_writer.as_default():\n text_data = '''\n | *hello* | *there* |\n |---------|---------|\n | this | is |\n | a | table |\n '''\n text_data = '\\n'.join(l.strip() for l in text_data.splitlines())\n tf.summary.text('markdown_text', text_data, step=0)\n ```\n\n Since text is Tensor valued, each text point may be a Tensor of string values.\n rank-1 and rank-2 Tensors are rendered as tables in TensorBoard. For higher ranked\n Tensors, you'll see just a 2D slice of the data. To avoid this, reshape the Tensor\n to at most rank-2 prior to passing it to this function.\n\n Demo notebook at\n [\"Displaying text data in TensorBoard\"](https://www.tensorflow.org/tensorboard/text_summaries).\n\n Arguments:\n name: A name for this summary. The summary tag used for TensorBoard will\n be this name prefixed by any active name scopes.\n data: A UTF-8 string Tensor value.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n description: Optional long-form description for this summary, as a\n constant `str`. Markdown is supported. Defaults to empty.\n\n Returns:\n True on success, or false if no summary was emitted because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Write a text summary.", "type": "API"}, {"name": "tf.summary.trace_export", "docs": "Stops and exports the active trace as a Summary and/or profile file.\n\n Stops the trace and exports all metadata collected during the trace to the\n default SummaryWriter, if one has been set.\n\n Args:\n name: A name for the summary to be written.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n profiler_outdir: Output directory for profiler. It is required when profiler\n is enabled when trace was started. Otherwise, it is ignored.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Stops and exports the active trace as a Summary and/or profile file.", "type": "API"}, {"name": "tf.summary.trace_off", "docs": "Stops the current trace and discards any collected information.", "desc": "Stops the current trace and discards any collected information.", "type": "API"}, {"name": "tf.summary.trace_on", "docs": "Starts a trace to record computation graphs and profiling information.\n\n Must be invoked in eager mode.\n\n When enabled, TensorFlow runtime will collect information that can later be\n exported and consumed by TensorBoard. The trace is activated across the entire\n TensorFlow runtime and affects all threads of execution.\n\n To stop the trace and export the collected information, use\n `tf.summary.trace_export`. To stop the trace without exporting, use\n `tf.summary.trace_off`.\n\n Args:\n graph: If True, enables collection of executed graphs. It includes ones from\n tf.function invocation and ones from the legacy graph mode. The default\n is True.\n profiler: If True, enables the advanced profiler. Enabling profiler\n implicitly enables the graph collection. The profiler may incur a high\n memory overhead. The default is False.\n\n ", "desc": "Starts a trace to record computation graphs and profiling information.", "type": "API"}, {"name": "tf.summary.write", "docs": "Writes a generic summary to the default SummaryWriter if one exists.\n\n This exists primarily to support the definition of type-specific summary ops\n like scalar() and image(), and is not intended for direct use unless defining\n a new type-specific summary op.\n\n Args:\n tag: string tag used to identify the summary (e.g. in TensorBoard), usually\n generated with `tf.summary.summary_scope`\n tensor: the Tensor holding the summary data to write or a callable that\n returns this Tensor. If a callable is passed, it will only be called when\n a default SummaryWriter exists and the recording condition specified by\n `record_if()` is met.\n step: Explicit `int64`-castable monotonic step value for this summary. If\n omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n not be None.\n metadata: Optional SummaryMetadata, as a proto or serialized bytes\n name: Optional string name for this op.\n\n Returns:\n True on success, or false if no summary was written because no default\n summary writer was available.\n\n Raises:\n ValueError: if a default writer exists, but no step was provided and\n `tf.summary.experimental.get_step()` is None.\n ", "desc": "Writes a generic summary to the default SummaryWriter if one exists.", "type": "API"}, {"name": "tf.switch_case", "docs": "Create a switch/case operation, i.e. an integer-indexed conditional.\n\n See also `tf.case`.\n\n This op can be substantially more efficient than `tf.case` when exactly one\n branch will be selected. `tf.switch_case` is more like a C++ switch/case\n statement than `tf.case`, which is more like an if/elif/elif/else chain.\n\n The `branch_fns` parameter is either a dict from `int` to callables, or list\n of (`int`, callable) pairs, or simply a list of callables (in which case the\n index is implicitly the key). The `branch_index` `Tensor` is used to select an\n element in `branch_fns` with matching `int` key, falling back to `default`\n if none match, or `max(keys)` if no `default` is provided. The keys must form\n a contiguous set from `0` to `len(branch_fns) - 1`.\n\n `tf.switch_case` supports nested structures as implemented in `tf.nest`. All\n callables must return the same (possibly nested) value structure of lists,\n tuples, and/or named tuples.\n\n **Example:**\n\n Pseudocode:\n\n ```c++\n switch (branch_index) { // c-style switch\n case 0: return 17;\n case 1: return 31;\n default: return -1;\n }\n ```\n or\n ```python\n branches = {0: lambda: 17, 1: lambda: 31}\n branches.get(branch_index, lambda: -1)()\n ```\n\n Expressions:\n\n ```python\n def f1(): return tf.constant(17)\n def f2(): return tf.constant(31)\n def f3(): return tf.constant(-1)\n r = tf.switch_case(branch_index, branch_fns={0: f1, 1: f2}, default=f3)\n # Equivalent: tf.switch_case(branch_index, branch_fns={0: f1, 1: f2, 2: f3})\n ```\n\n Args:\n branch_index: An int Tensor specifying which of `branch_fns` should be\n executed.\n branch_fns: A `dict` mapping `int`s to callables, or a `list` of\n (`int`, callable) pairs, or simply a list of callables (in which case the\n index serves as the key). Each callable must return a matching structure\n of tensors.\n default: Optional callable that returns a structure of tensors.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the callable identified by `branch_index`, or those\n returned by `default` if no key matches and `default` was provided, or those\n returned by the max-keyed `branch_fn` if no `default` is provided.\n\n Raises:\n TypeError: If `branch_fns` is not a list/dictionary.\n TypeError: If `branch_fns` is a list but does not contain 2-tuples or\n callables.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable.\n ", "desc": "Create a switch/case operation, i.e. an integer-indexed conditional.", "type": "API"}, {"name": "tf.sysconfig", "docs": "System configuration library.\n", "desc": "System configuration library.", "type": "API"}, {"name": "tf.sysconfig.get_build_info", "docs": "Get a dictionary describing TensorFlow's build environment.\n\n Values are generated when TensorFlow is compiled, and are static for each\n TensorFlow package. The return value is a dictionary with string keys such as:\n\n - cuda_version\n - cudnn_version\n - is_cuda_build\n - is_rocm_build\n - msvcp_dll_names\n - nvcuda_dll_name\n - cudart_dll_name\n - cudnn_dll_name\n\n Note that the actual keys and values returned by this function is subject to\n change across different versions of TensorFlow or across platforms.\n\n Returns:\n A Dictionary describing TensorFlow's build environment.\n ", "desc": "Get a dictionary describing TensorFlow's build environment.", "type": "API"}, {"name": "tf.sysconfig.get_compile_flags", "docs": "Get the compilation flags for custom operators.\n\n Returns:\n The compilation flags.\n ", "desc": "Get the compilation flags for custom operators.", "type": "API"}, {"name": "tf.sysconfig.get_include", "docs": "Get the directory containing the TensorFlow C++ header files.\n\n Returns:\n The directory as string.\n ", "desc": "Get the directory containing the TensorFlow C++ header files.", "type": "API"}, {"name": "tf.sysconfig.get_lib", "docs": "Get the directory containing the TensorFlow framework library.\n\n Returns:\n The directory as string.\n ", "desc": "Get the directory containing the TensorFlow framework library.", "type": "API"}, {"name": "tf.sysconfig.get_link_flags", "docs": "Get the link flags for custom operators.\n\n Returns:\n The link flags.\n ", "desc": "Get the link flags for custom operators.", "type": "API"}, {"name": "tf.tan", "docs": "Computes tan of x element-wise.\n\n Given an input tensor, this function computes tangent of every\n element in the tensor. Input range is `(-inf, inf)` and\n output range is `(-inf, inf)`. If input lies outside the boundary, `nan`\n is returned.\n\n ```python\n x = tf.constant([-float(\"inf\"), -9, -0.5, 1, 1.2, 200, 10000, float(\"inf\")])\n tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]\n ```\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Computes tan of x element-wise.", "type": "API"}, {"name": "tf.tanh", "docs": "Computes hyperbolic tangent of `x` element-wise.\n\n Given an input tensor, this function computes hyperbolic tangent of every\n element in the tensor. Input range is `[-inf, inf]` and\n output range is `[-1,1]`.\n\n >>> x = tf.constant([-float(\"inf\"), -5, -0.5, 1, 1.2, 2, 3, float(\"inf\")])\n >>> tf.math.tanh(x)\n \n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `complex64`, `complex128`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n\n If `x` is a `SparseTensor`, returns\n `SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape)`", "desc": "Computes hyperbolic tangent of `x` element-wise.", "type": "API"}, {"name": "tf.Tensor", "docs": "A `tf.Tensor` represents a multidimensional array of elements.\n\n All elements are of a single known data type.\n\n When writing a TensorFlow program, the main object that is\n manipulated and passed around is the `tf.Tensor`.\n\n A `tf.Tensor` has the following properties:\n\n * a single data type (float32, int32, or string, for example)\n * a shape\n\n TensorFlow supports eager execution and graph execution. In eager\n execution, operations are evaluated immediately. In graph\n execution, a computational graph is constructed for later\n evaluation.\n\n TensorFlow defaults to eager execution. In the example below, the\n matrix multiplication results are calculated immediately.\n\n >>> # Compute some values using a Tensor\n >>> c = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n >>> d = tf.constant([[1.0, 1.0], [0.0, 1.0]])\n >>> e = tf.matmul(c, d)\n >>> print(e)\n tf.Tensor(\n [[1. 3.]\n [3. 7.]], shape=(2, 2), dtype=float32)\n\n Note that during eager execution, you may discover your `Tensors` are actually\n of type `EagerTensor`. This is an internal detail, but it does give you\n access to a useful function, `numpy`:\n\n >>> type(e)\n \n >>> print(e.numpy())\n [[1. 3.]\n [3. 7.]]\n\n In TensorFlow, `tf.function`s are a common way to define graph execution.\n\n A Tensor's shape (that is, the rank of the Tensor and the size of\n each dimension) may not always be fully known. In `tf.function`\n definitions, the shape may only be partially known.\n\n Most operations produce tensors of fully-known shapes if the shapes of their\n inputs are also fully known, but in some cases it's only possible to find the\n shape of a tensor at execution time.\n\n A number of specialized tensors are available: see `tf.Variable`,\n `tf.constant`, `tf.placeholder`, `tf.sparse.SparseTensor`, and\n `tf.RaggedTensor`.\n\n Caution: when constructing a tensor from a numpy array or pandas dataframe\n the underlying buffer may be re-used:\n\n ```python\n a = np.array([1, 2, 3])\n b = tf.constant(a)\n a[0] = 4\n print(b) # tf.Tensor([4 2 3], shape=(3,), dtype=int64)\n ```\n\n Note: this is an implementation detail that is subject to change and users\n should not rely on this behaviour.\n\n For more on Tensors, see the [guide](https://tensorflow.org/guide/tensor).\n\n ", "desc": "A `tf.Tensor` represents a multidimensional array of elements.", "type": "API"}, {"name": "tf.tensor_scatter_nd_add", "docs": "Adds sparse `updates` to an existing tensor according to `indices`.\n\n This operation creates a new tensor by adding sparse `updates` to the passed\n in `tensor`.\n This operation is very similar to `tf.compat.v1.scatter_nd_add`, except that the\n updates are added onto an existing tensor (as opposed to a variable). If the\n memory for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `tensor.shape`. The last dimension of `indices` can be at most the rank of\n `tensor.shape`:\n\n ```\n indices.shape[-1] <= tensor.shape.rank\n ```\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = tensor.shape.rank`) or slices\n (if `indices.shape[-1] < tensor.shape.rank`) along dimension\n `indices.shape[-1]` of `tensor.shape`. `updates` is a tensor with shape\n\n ```\n indices.shape[:-1] + tensor.shape[indices.shape[-1]:]\n ```\n\n The simplest form of `tensor_scatter_nd_add` is to add individual elements to a\n tensor by index. For example, say we want to add 4 elements in a rank-1\n tensor with 8 elements.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[4], [3], [1], [7]])\n >>> updates = tf.constant([9, 10, 11, 12])\n >>> tensor = tf.ones([8], dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n >>> indices = tf.constant([[0], [2]])\n >>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]],\n ... [[5, 5, 5, 5], [6, 6, 6, 6],\n ... [7, 7, 7, 7], [8, 8, 8, 8]]])\n >>> tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n >>> updated = tf.tensor_scatter_nd_add(tensor, indices, updates)\n >>> updated\n \n\n Note: on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Adds sparse `updates` to an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.tensor_scatter_nd_max", "docs": "Apply a sparse update to a tensor taking the element-wise maximum.\n\n Returns a new tensor copied from `tensor` whose values are element-wise maximum between\n tensor and updates according to the indices.\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] \n >>> indices = [[1], [4], [5]]\n >>> updates = [1, -1, 1]\n >>> tf.tensor_scatter_nd_max(tensor, indices, updates).numpy()\n array([0, 1, 0, 0, 0, 1, 0, 0], dtype=int32)\n\n Refer to `tf.tensor_scatter_nd_update` for more details.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Apply a sparse update to a tensor taking the element-wise maximum.", "type": "API"}, {"name": "tf.tensor_scatter_nd_min", "docs": "TODO: add doc.\n\n Args:\n tensor: A `Tensor`. Tensor to update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "TODO: add doc.", "type": "API"}, {"name": "tf.tensor_scatter_nd_sub", "docs": "Subtracts sparse `updates` from an existing tensor according to `indices`.\n\n This operation creates a new tensor by subtracting sparse `updates` from the\n passed in `tensor`.\n This operation is very similar to `tf.scatter_nd_sub`, except that the updates\n are subtracted from an existing tensor (as opposed to a variable). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n `indices` is an integer tensor containing indices into a new tensor of shape\n `shape`. The last dimension of `indices` can be at most the rank of `shape`:\n\n indices.shape[-1] <= shape.rank\n\n The last dimension of `indices` corresponds to indices into elements\n (if `indices.shape[-1] = shape.rank`) or slices\n (if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of\n `shape`. `updates` is a tensor with shape\n\n indices.shape[:-1] + shape[indices.shape[-1]:]\n\n The simplest form of tensor_scatter_sub is to subtract individual elements\n from a tensor by index. For example, say we want to insert 4 scattered elements\n in a rank-1 tensor with 8 elements.\n\n In Python, this scatter subtract operation would look like this:\n\n ```python\n indices = tf.constant([[4], [3], [1], [7]])\n updates = tf.constant([9, 10, 11, 12])\n tensor = tf.ones([8], dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [1, -10, 1, -9, -8, 1, 1, -11]\n\n We can also, insert entire slices of a higher rank tensor all at once. For\n example, if we wanted to insert two slices in the first dimension of a\n rank-3 tensor with two matrices of new values.\n\n In Python, this scatter add operation would look like this:\n\n ```python\n indices = tf.constant([[0], [2]])\n updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]],\n [[5, 5, 5, 5], [6, 6, 6, 6],\n [7, 7, 7, 7], [8, 8, 8, 8]]])\n tensor = tf.ones([4, 4, 4],dtype=tf.int32)\n updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)\n print(updated)\n ```\n\n The resulting tensor would look like this:\n\n [[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],\n [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],\n [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]\n\n Note that on CPU, if an out of bound index is found, an error is returned.\n On GPU, if an out of bound index is found, the index is ignored.\n\n Args:\n tensor: A `Tensor`. Tensor to copy/update.\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n Index tensor.\n updates: A `Tensor`. Must have the same type as `tensor`.\n Updates to scatter into output.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `tensor`.\n ", "desc": "Subtracts sparse `updates` from an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.tensor_scatter_nd_update", "docs": "Scatter `updates` into an existing tensor according to `indices`.\n\n This operation creates a new tensor by applying sparse `updates` to the\n input `tensor`. This is similar to an index assignment.\n\n ```\n # Not implemented: tensors cannot be updated inplace.\n tensor[indices] = updates\n ```\n\n If an out of bound index is found on CPU, an error is returned.\n\n > **WARNING**: There are some GPU specific semantics for this operation.\n >\n > - If an out of bound index is found, the index is ignored.\n > - The order in which updates are applied is nondeterministic, so the output\n > will be nondeterministic if `indices` contains duplicates.\n\n This operation is very similar to `tf.scatter_nd`, except that the updates are\n scattered onto an existing tensor (as opposed to a zero-tensor). If the memory\n for the existing tensor cannot be re-used, a copy is made and updated.\n\n In general:\n\n * `indices` is an integer tensor - the indices to update in `tensor`.\n * `indices` has **at least two** axes, the last axis is the depth of the\n index vectors.\n * For each index vector in `indices` there is a corresponding entry in\n `updates`.\n * If the length of the index vectors matches the rank of the `tensor`, then\n the index vectors each point to scalars in `tensor` and each update is a\n scalar.\n * If the length of the index vectors is less than the rank of `tensor`, then\n the index vectors each point to slices of `tensor` and shape of the updates\n must match that slice.\n\n Overall this leads to the following shape constraints:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Typical usage is often much simpler than this general form, and it\n can be better understood starting with simple examples:\n\n ### Scalar updates\n\n The simplest usage inserts scalar elements into a tensor by index.\n In this case, the `index_depth` must equal the rank of the\n input `tensor`, slice each column of `indices` is an index into an axis of the\n input `tensor`.\n\n In this simplest case the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n assert updates.shape == [num_updates]\n assert index_depth == tf.rank(tensor)`\n ```\n\n For example, to insert 4 scattered elements in a rank-1 tensor with\n 8 elements.\n\n
\n \n
\n\n This scatter operation would look like this:\n\n >>> tensor = [0, 0, 0, 0, 0, 0, 0, 0] # tf.rank(tensor) == 1\n >>> indices = [[1], [3], [4], [7]] # num_updates == 4, index_depth == 1\n >>> updates = [9, 10, 11, 12] # num_updates == 4\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor([ 0 9 0 10 11 0 0 12], shape=(8,), dtype=int32)\n\n The length (first axis) of `updates` must equal the length of the `indices`:\n `num_updates`. This is the number of updates being inserted. Each scalar\n update is inserted into `tensor` at the indexed location.\n\n For a higher rank input `tensor` scalar updates can be inserted by using an\n `index_depth` that matches `tf.rank(tensor)`:\n\n >>> tensor = [[1, 1], [1, 1], [1, 1]] # tf.rank(tensor) == 2\n >>> indices = [[0, 1], [2, 0]] # num_updates == 2, index_depth == 2\n >>> updates = [5, 10] # num_updates == 2\n >>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))\n tf.Tensor(\n [[ 1 5]\n [ 1 1]\n [10 1]], shape=(3, 2), dtype=int32)\n\n ### Slice updates\n\n When the input `tensor` has more than one axis scatter can be used to update\n entire slices.\n\n In this case it's helpful to think of the input `tensor` as being a two level\n array-of-arrays. The shape of this two level array is split into the\n `outer_shape` and the `inner_shape`.\n\n `indices` indexes into the outer level of the input tensor (`outer_shape`).\n and replaces the sub-array at that location with the corresponding item from\n the `updates` list. The shape of each update is `inner_shape`.\n\n When updating a list of slices the shape constraints are:\n\n ```\n num_updates, index_depth = indices.shape.as_list()\n inner_shape = tensor.shape[:index_depth]\n outer_shape = tensor.shape[index_depth:]\n assert updates.shape == [num_updates, inner_shape]\n ```\n\n For example, to update rows of a `(6, 3)` `tensor`:\n\n >>> tensor = tf.zeros([6, 3], dtype=tf.int32)\n\n Use an index depth of one.\n\n >>> indices = tf.constant([[2], [4]]) # num_updates == 2, index_depth == 1\n >>> num_updates, index_depth = indices.shape.as_list()\n\n The `outer_shape` is `6`, the inner shape is `3`:\n\n >>> outer_shape = tensor.shape[:index_depth]\n >>> inner_shape = tensor.shape[index_depth:]\n\n 2 rows are being indexed so 2 `updates` must be supplied.\n Each update must be shaped to match the `inner_shape`.\n\n >>> # num_updates == 2, inner_shape==3\n >>> updates = tf.constant([[1, 2, 3],\n ... [4, 5, 6]])\n\n Altogether this gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[0, 0, 0],\n [0, 0, 0],\n [1, 2, 3],\n [0, 0, 0],\n [4, 5, 6],\n [0, 0, 0]], dtype=int32)\n\n #### More slice update examples\n\n A tensor representing a batch of uniformly sized video clips naturally has 5\n axes: `[batch_size, time, width, height, channels]`.\n\n For example:\n\n >>> batch_size, time, width, height, channels = 13,11,7,5,3\n >>> video_batch = tf.zeros([batch_size, time, width, height, channels])\n\n To replace a selection of video clips:\n * Use an `index_depth` of 1 (indexing the `outer_shape`: `[batch_size]`)\n * Provide updates each with a shape matching the `inner_shape`:\n `[time, width, height, channels]`.\n\n To replace the first two clips with ones:\n\n >>> indices = [[0],[1]]\n >>> new_clips = tf.ones([2, time, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_clips)\n\n To replace a selection of frames in the videos:\n\n * `indices` must have an `index_depth` of 2 for the `outer_shape`:\n `[batch_size, time]`.\n * `updates` must be shaped like a list of images. Each update must have a\n shape, matching the `inner_shape`: `[width, height, channels]`.\n\n To replace the first frame of the first three video clips:\n\n >>> indices = [[0, 0], [1, 0], [2, 0]] # num_updates=3, index_depth=2\n >>> new_images = tf.ones([\n ... # num_updates=3, inner_shape=(width, height, channels)\n ... 3, width, height, channels])\n >>> tf.tensor_scatter_nd_update(video_batch, indices, new_images)\n\n ### Folded indices\n\n In simple cases it's convenient to think of `indices` and `updates` as\n lists, but this is not a strict requirement. Instead of a flat `num_updates`,\n the `indices` and `updates` can be folded into a `batch_shape`. This\n `batch_shape` is all axes of the `indices`, except for the innermost\n `index_depth` axis.\n\n ```\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n ```\n\n Note: The one exception is that the `batch_shape` cannot be `[]`. You can't\n update a single index by passing indices with shape `[index_depth]`.\n\n `updates` must have a matching `batch_shape` (the axes before `inner_shape`).\n\n ```\n assert updates.shape == batch_shape + inner_shape\n ```\n\n Note: The result is equivalent to flattening the `batch_shape` axes of\n `indices` and `updates`. This generalization just avoids the need\n for reshapes when it is more natural to construct \"folded\" indices and\n updates.\n\n With this generalization the full shape constraints are:\n\n ```\n assert tf.rank(indices) >= 2\n index_depth = indices.shape[-1]\n batch_shape = indices.shape[:-1]\n assert index_depth <= tf.rank(tensor)\n outer_shape = tensor.shape[:index_depth]\n inner_shape = tensor.shape[index_depth:]\n assert updates.shape == batch_shape + inner_shape\n ```\n\n For example, to draw an `X` on a `(5,5)` matrix start with these indices:\n\n >>> tensor = tf.zeros([5,5])\n >>> indices = tf.constant([\n ... [[0,0],\n ... [1,1],\n ... [2,2],\n ... [3,3],\n ... [4,4]],\n ... [[0,4],\n ... [1,3],\n ... [2,2],\n ... [3,1],\n ... [4,0]],\n ... ])\n >>> indices.shape.as_list() # batch_shape == [2, 5], index_depth == 2\n [2, 5, 2]\n\n Here the `indices` do not have a shape of `[num_updates, index_depth]`, but a\n shape of `batch_shape+[index_depth]`.\n\n Since the `index_depth` is equal to the rank of `tensor`:\n\n * `outer_shape` is `(5,5)`\n * `inner_shape` is `()` - each update is scalar\n * `updates.shape` is `batch_shape + inner_shape == (5,2) + ()`\n\n >>> updates = [\n ... [1,1,1,1,1],\n ... [1,1,1,1,1],\n ... ]\n\n Putting this together gives:\n\n >>> tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()\n array([[1., 0., 0., 0., 1.],\n [0., 1., 0., 1., 0.],\n [0., 0., 1., 0., 0.],\n [0., 1., 0., 1., 0.],\n [1., 0., 0., 0., 1.]], dtype=float32)\n\n Args:\n tensor: Tensor to copy/update.\n indices: Indices to update.\n updates: Updates to apply at the indices.\n name: Optional name for the operation.\n\n Returns:\n A new tensor with the given shape and updates applied according to the\n indices.\n ", "desc": "Scatter `updates` into an existing tensor according to `indices`.", "type": "API"}, {"name": "tf.TensorArray", "docs": "Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.\n\n This class is meant to be used with dynamic iteration primitives such as\n `while_loop` and `map_fn`. It supports gradient back-propagation via special\n \"flow\" control flow dependencies.\n\n Example 1: Plain reading and writing.\n\n >>> ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)\n >>> ta = ta.write(0, 10)\n >>> ta = ta.write(1, 20)\n >>> ta = ta.write(2, 30)\n >>>\n >>> ta.read(0)\n \n >>> ta.read(1)\n \n >>> ta.read(2)\n \n >>> ta.stack()\n \n\n Example 2: Fibonacci sequence algorithm that writes in a loop then returns.\n\n >>> @tf.function\n ... def fibonacci(n):\n ... ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)\n ... ta = ta.unstack([0., 1.])\n ...\n ... for i in range(2, n):\n ... ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2))\n ...\n ... return ta.stack()\n >>>\n >>> fibonacci(7)\n \n\n Example 3: A simple loop interacting with a `tf.Variable`.\n\n >>> v = tf.Variable(1)\n >>> @tf.function\n ... def f(x):\n ... ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)\n ... for i in tf.range(x):\n ... v.assign_add(i)\n ... ta = ta.write(i, v)\n ... return ta.stack()\n >>> f(5)\n \n ", "desc": "Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.", "type": "API"}, {"name": "tf.TensorArraySpec", "docs": "Type specification for a `tf.TensorArray`.", "desc": "Type specification for a `tf.TensorArray`.", "type": "API"}, {"name": "tf.tensordot", "docs": "Tensor contraction of a and b along specified axes and outer product.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `axes`.\n\n This operation corresponds to `numpy.tensordot(a, b, axes)`.\n\n Example 1: When `a` and `b` are matrices (order 2), the case `axes=1`\n is equivalent to matrix multiplication.\n\n Example 2: When `a` and `b` are matrices (order 2), the case\n `axes = [[1], [0]]` is equivalent to matrix multiplication.\n\n Example 3: When `a` and `b` are matrices (order 2), the case `axes=0` gives\n the outer product, a tensor of order 4.\n\n Example 4: Suppose that \\\\(a_{ijk}\\\\) and \\\\(b_{lmn}\\\\) represent two\n tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor\n \\\\(c_{jklm}\\\\) whose entry\n corresponding to the indices \\\\((j,k,l,m)\\\\) is given by:\n\n \\\\( c_{jklm} = \\sum_i a_{ijk} b_{lmi} \\\\).\n\n In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.\n\n Args:\n a: `Tensor` of type `float32` or `float64`.\n b: `Tensor` with the same type as `a`.\n axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].\n If axes is a scalar, sum over the last N axes of a and the first N axes of\n b in order. If axes is a list or `Tensor` the first and second row contain\n the set of unique integers specifying axes along which the contraction is\n computed, for `a` and `b`, respectively. The number of axes for `a` and\n `b` must be equal. If `axes=0`, computes the outer product between `a` and\n `b`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with the same type as `a`.\n\n Raises:\n ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.\n IndexError: If the values in axes exceed the rank of the corresponding\n tensor.\n ", "desc": "Tensor contraction of a and b along specified axes and outer product.", "type": "API"}, {"name": "tf.TensorShape", "docs": "Represents the shape of a `Tensor`.\n\n A `TensorShape` represents a possibly-partial shape specification for a\n `Tensor`. It may be one of the following:\n\n * *Fully-known shape:* has a known number of dimensions and a known size\n for each dimension. e.g. `TensorShape([16, 256])`\n * *Partially-known shape:* has a known number of dimensions, and an unknown\n size for one or more dimension. e.g. `TensorShape([None, 256])`\n * *Unknown shape:* has an unknown number of dimensions, and an unknown\n size in all dimensions. e.g. `TensorShape(None)`\n\n If a tensor is produced by an operation of type `\"Foo\"`, its shape\n may be inferred if there is a registered shape function for\n `\"Foo\"`. See [Shape\n functions](https://www.tensorflow.org/guide/create_op#shape_functions_in_c)\n for details of shape functions and how to register them. Alternatively,\n you may set the shape explicitly using `tf.Tensor.set_shape`.\n ", "desc": "Represents the shape of a `Tensor`.", "type": "API"}, {"name": "tf.TensorSpec", "docs": "Describes a tf.Tensor.\n\n Metadata for describing the `tf.Tensor` objects accepted or returned\n by some TensorFlow APIs.\n ", "desc": "Describes a tf.Tensor.", "type": "API"}, {"name": "tf.test", "docs": "Testing.\n", "desc": "Testing.", "type": "API"}, {"name": "tf.test.assert_equal_graph_def", "docs": "Asserts that two `GraphDef`s are (mostly) the same.\n\n Compares two `GraphDef` protos for equality, ignoring versions and ordering of\n nodes, attrs, and control inputs. Node names are used to match up nodes\n between the graphs, so the naming of nodes must be consistent. This function\n ignores randomized attribute values that may appear in V2 checkpoints.\n\n Args:\n expected: The `GraphDef` we expected.\n actual: The `GraphDef` we have.\n\n Raises:\n AssertionError: If the `GraphDef`s do not match.\n TypeError: If either argument is not a `GraphDef`.\n ", "desc": "Asserts that two `GraphDef`s are (mostly) the same.", "type": "API"}, {"name": "tf.test.Benchmark", "docs": "Abstract class that provides helpers for TensorFlow benchmarks.", "desc": "Abstract class that provides helpers for TensorFlow benchmarks.", "type": "API"}, {"name": "tf.test.benchmark_config", "docs": "Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.\n\n Returns:\n A TensorFlow ConfigProto object.\n ", "desc": "Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.", "type": "API"}, {"name": "tf.test.compute_gradient", "docs": "Computes the theoretical and numeric Jacobian of `f`.\n\n With y = f(x), computes the theoretical and numeric Jacobian dy/dx.\n\n Args:\n f: the function.\n x: the arguments for the function as a list or tuple of values convertible\n to a Tensor.\n delta: (optional) perturbation used to compute numeric Jacobian.\n\n Returns:\n A pair of lists, where the first is a list of 2-d numpy arrays representing\n the theoretical Jacobians for each argument, and the second list is the\n numerical ones. Each 2-d array has \"y_size\" rows\n and \"x_size\" columns where \"x_size\" is the number of elements in the\n corresponding argument and \"y_size\" is the number of elements in f(x).\n\n Raises:\n ValueError: If result is empty but the gradient is nonzero.\n ValueError: If x is not list, but any other type.\n\n Example:\n\n >>> @tf.function\n ... def test_func(x):\n ... return x*x\n ...\n >>>\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_gradient_of_test_func(self):\n ... theoretical, numerical = tf.test.compute_gradient(test_func, [1.0])\n ... # ((array([[2.]], dtype=float32),),\n ... # (array([[2.000004]], dtype=float32),))\n ... self.assertAllClose(theoretical, numerical)\n\n ", "desc": "Computes the theoretical and numeric Jacobian of `f`.", "type": "API"}, {"name": "tf.test.create_local_cluster", "docs": "Create and start local servers and return the associated `Server` objects.\n\n \"PS\" stands for \"parameter server\": a task responsible for storing and\n updating the model's parameters. Other tasks send updates to these parameters\n as they work on optimizing the parameters. This particular division of labor\n between tasks is not required, but is common for distributed training.\n\n Read more at https://www.tensorflow.org/guide/extend/architecture\n\n ![components](https://www.tensorflow.org/images/diag1.svg \"components\")\n\n\n Figure illustrates the interaction of these components.\n \"/job:worker/task:0\" and \"/job:ps/task:0\" are both tasks with worker services.\n\n\n Example:\n ```python\n workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2)\n\n worker_sessions = [tf.compat.v1.Session(w.target) for w in workers]\n\n with tf.device(\"/job:ps/task:0\"):\n ...\n with tf.device(\"/job:ps/task:1\"):\n ...\n with tf.device(\"/job:worker/task:0\"):\n ...\n with tf.device(\"/job:worker/task:1\"):\n ...\n\n worker_sessions[0].run(...)\n ```\n\n Args:\n num_workers: Number of worker servers to start.\n num_ps: Number of PS servers to start.\n protocol: Communication protocol. Allowed values are documented in the\n documentation of `tf.distribute.Server`.\n worker_config: (optional) `tf.ConfigProto` to initialize workers. Can be\n used to instantiate multiple devices etc.\n ps_config: (optional) `tf.ConfigProto` to initialize PS servers.\n\n Returns:\n A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list\n of `num_workers` objects of type `tf.distribute.Server` (all running\n locally);\n and `ps_servers` is a list of `num_ps` objects of similar type.\n\n Raises:\n ImportError: if portpicker module was not found at load time\n ", "desc": "Create and start local servers and return the associated `Server` objects.", "type": "API"}, {"name": "tf.test.disable_with_predicate", "docs": "Disables the test if pred is true.", "desc": "Disables the test if pred is true.", "type": "API"}, {"name": "tf.test.gpu_device_name", "docs": "Returns the name of a GPU device if available or a empty string.\n\n This method should only be used in tests written with `tf.test.TestCase`.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_gpu_support():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(tf.test.gpu_device_name()):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n ", "desc": "Returns the name of a GPU device if available or a empty string.", "type": "API"}, {"name": "tf.test.is_built_with_cuda", "docs": "Returns whether TensorFlow was built with CUDA (GPU) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with CUDA (GPU).\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_cuda():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is built with CUDA.\n ", "desc": "Returns whether TensorFlow was built with CUDA (GPU) support.", "type": "API"}, {"name": "tf.test.is_built_with_gpu_support", "docs": "Returns whether TensorFlow was built with GPU (CUDA or ROCm) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with GPU.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_gpu_support():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is built with CUDA GPU support.\n ", "desc": "Returns whether TensorFlow was built with GPU (CUDA or ROCm) support.", "type": "API"}, {"name": "tf.test.is_built_with_rocm", "docs": "Returns whether TensorFlow was built with ROCm (GPU) support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with ROCm (GPU).\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_gpu(self):\n ... if not tf.test.is_built_with_rocm():\n ... self.skipTest(\"test is only applicable on GPU\")\n ...\n ... with tf.device(\"GPU:0\"):\n ... self.assertEqual(tf.math.add(1.0, 2.0), 3.0)\n\n TensorFlow official binary is NOT built with ROCm.\n ", "desc": "Returns whether TensorFlow was built with ROCm (GPU) support.", "type": "API"}, {"name": "tf.test.is_built_with_xla", "docs": "Returns whether TensorFlow was built with XLA support.\n\n This method should only be used in tests written with `tf.test.TestCase`. A\n typical usage is to skip tests that should only run with XLA.\n\n >>> class MyTest(tf.test.TestCase):\n ...\n ... def test_add_on_xla(self):\n ... if not tf.test.is_built_with_xla():\n ... self.skipTest(\"test is only applicable on XLA\")\n\n ... @tf.function(jit_compile=True)\n ... def add(x, y):\n ... return tf.math.add(x, y)\n ...\n ... self.assertEqual(add(tf.ones(()), tf.ones(())), 2.0)\n\n TensorFlow official binary is built with XLA.\n ", "desc": "Returns whether TensorFlow was built with XLA support.", "type": "API"}, {"name": "tf.test.is_gpu_available", "docs": "Returns whether TensorFlow can access a GPU. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nUse `tf.config.list_physical_devices('GPU')` instead.\n\nWarning: if a non-GPU version of the package is installed, the function would\nalso return False. Use `tf.test.is_built_with_cuda` to validate if TensorFlow\nwas build with CUDA support.\n\nFor example,\n>>> gpu_available = tf.test.is_gpu_available()\n>>> is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)\n>>> is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3,0))\n\nArgs:\n cuda_only: limit the search to CUDA GPUs.\n min_cuda_compute_capability: a (major,minor) pair that indicates the minimum\n CUDA compute capability required, or None if no requirement.\n\nNote that the keyword arg name \"cuda_only\" is misleading (since routine will\nreturn true when a GPU device is available irrespective of whether TF was\nbuilt with CUDA support or ROCm support. However no changes here because\n\n++ Changing the name \"cuda_only\" to something more generic would break\n backward compatibility\n\n++ Adding an equivalent \"rocm_only\" would require the implementation check\n the build type. This in turn would require doing the same for CUDA and thus\n potentially break backward compatibility\n\n++ Adding a new \"cuda_or_rocm_only\" would not break backward compatibility,\n but would require most (if not all) callers to update the call to use\n \"cuda_or_rocm_only\" instead of \"cuda_only\"\n\nReturns:\n True if a GPU device of the requested kind is available.", "desc": "Returns whether TensorFlow can access a GPU. (deprecated)", "type": "API"}, {"name": "tf.test.main", "docs": "Runs all unit tests.", "desc": "Runs all unit tests.", "type": "API"}, {"name": "tf.test.TestCase", "docs": "Base class for tests that need to test TensorFlow.", "desc": "Base class for tests that need to test TensorFlow.", "type": "API"}, {"name": "tf.test.TestCase.failureException", "docs": "Assertion failed.", "desc": "Assertion failed.", "type": "API"}, {"name": "tf.tile", "docs": "Constructs a tensor by tiling a given tensor.\n\n This operation creates a new tensor by replicating `input` `multiples` times.\n The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,\n and the values of `input` are replicated `multiples[i]` times along the 'i'th\n dimension. For example, tiling `[a b c d]` by `[2]` produces\n `[a b c d a b c d]`.\n\n >>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32)\n >>> b = tf.constant([1,2], tf.int32)\n >>> tf.tile(a, b)\n \n >>> c = tf.constant([2,1], tf.int32)\n >>> tf.tile(a, c)\n \n >>> d = tf.constant([2,2], tf.int32)\n >>> tf.tile(a, d)\n \n\n Args:\n input: A `Tensor`. 1-D or higher.\n multiples: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n 1-D. Length must be the same as the number of dimensions in `input`\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `input`.\n ", "desc": "Constructs a tensor by tiling a given tensor.", "type": "API"}, {"name": "tf.timestamp", "docs": "Provides the time since epoch in seconds.\n\n Returns the timestamp as a `float64` for seconds since the Unix epoch.\n\n Note: the timestamp is computed when the op is executed, not when it is added\n to the graph.\n\n Args:\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of type `float64`.\n ", "desc": "Provides the time since epoch in seconds.", "type": "API"}, {"name": "tf.tpu", "docs": "Ops related to Tensor Processing Units.\n", "desc": "Ops related to Tensor Processing Units.", "type": "API"}, {"name": "tf.tpu.experimental", "docs": "Public API for tf.tpu.experimental namespace.\n", "desc": "Public API for tf.tpu.experimental namespace.", "type": "API"}, {"name": "tf.tpu.experimental.DeviceAssignment", "docs": "Mapping from logical cores in a computation to the physical TPU topology.\n\n Prefer to use the `DeviceAssignment.build()` helper to construct a\n `DeviceAssignment`; it is easier if less flexible than constructing a\n `DeviceAssignment` directly.\n ", "desc": "Mapping from logical cores in a computation to the physical TPU topology.", "type": "API"}, {"name": "tf.tpu.experimental.embedding", "docs": "Public API for tf.tpu.experimental.embedding namespace.\n", "desc": "Public API for tf.tpu.experimental.embedding namespace.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.Adagrad", "docs": "Optimization parameters for Adagrad with TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adagrad(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for Adagrad with TPU embeddings.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.Adam", "docs": "Optimization parameters for Adam with TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n NOTE: By default this optimizer is lazy, i.e. it will not apply the gradient\n update of zero to rows that were not looked up. You can change this behavior\n by setting `lazy_adam` to `False`.\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.Adam(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for Adam with TPU embeddings.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.FeatureConfig", "docs": "Configuration data for one embedding feature.\n\n This class holds the configuration data for a single embedding feature. The\n main use is to assign features to `tf.tpu.experimental.embedding.TableConfig`s\n via the table parameter:\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n The above configuration has 2 tables, and three features. The first two\n features will be looked up in the first table and the third feature will be\n looked up in the second table.\n\n You can also specify the output shape for each feature. The output shape\n should be the expected activation shape excluding the table dimension. For\n dense and sparse tensor, the output shape should be the same as the input\n shape excluding the last dimension. For ragged tensor, the output shape can\n mismatch the input shape.\n\n NOTE: The `max_sequence_length` will be only used when the input tensor has\n rank 2 and the `output_shape` is not set in the feature config.\n\n When feeding features into `embedding.enqueue` they can be `tf.Tensor`s,\n `tf.SparseTensor`s or `tf.RaggedTensor`s. When the argument\n `max_sequence_length` is 0, the default, you should expect a output of\n `embedding.dequeue` for this feature of shape `(batch_size, dim)`. If\n `max_sequence_length` is greater than 0, the feature is embedded as a sequence\n and padded up to the given length. The shape of the output for this feature\n will be `(batch_size, max_sequence_length, dim)`.\n ", "desc": "Configuration data for one embedding feature.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.FTRL", "docs": "Optimization parameters for FTRL with TPU embeddings.\n\n See Algorithm 1 of this\n [paper](https://research.google.com/pubs/archive/41159.pdf).\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```python\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.FTRL(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```python\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.FTRL(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.FTRL(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for FTRL with TPU embeddings.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.serving_embedding_lookup", "docs": "Apply standard lookup ops with `tf.tpu.experimental.embedding` configs.\n\n This function is a utility which allows using the\n `tf.tpu.experimental.embedding` config objects with standard lookup functions.\n This can be used when exporting a model which uses\n `tf.tpu.experimental.embedding.TPUEmbedding` for serving on CPU. In particular\n `tf.tpu.experimental.embedding.TPUEmbedding` only supports lookups on TPUs and\n should not be part of your serving graph.\n\n Note that TPU specific options (such as `max_sequence_length`) in the\n configuration objects will be ignored.\n\n In the following example we take a trained model (see the documentation for\n `tf.tpu.experimental.embedding.TPUEmbedding` for the context) and create a\n saved model with a serving function that will perform the embedding lookup and\n pass the results to your model:\n\n ```python\n model = model_fn(...)\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=1024,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.restore(...)\n\n @tf.function(input_signature=[{'feature_one': tf.TensorSpec(...),\n 'feature_two': tf.TensorSpec(...),\n 'feature_three': tf.TensorSpec(...)}])\n def serve_tensors(embedding_features):\n embedded_features = tf.tpu.experimental.embedding.serving_embedding_lookup(\n embedding_features, None, embedding.embedding_tables,\n feature_config)\n return model(embedded_features)\n\n model.embedding_api = embedding\n tf.saved_model.save(model,\n export_dir=...,\n signatures={'serving_default': serve_tensors})\n\n ```\n\n NOTE: It's important to assign the embedding API object to a member of your\n model as `tf.saved_model.save` only supports saving variables as one\n `Trackable` object. Since the model's weights are in `model` and the\n embedding table are managed by `embedding`, we assign `embedding` to an\n attribute of `model` so that tf.saved_model.save can find the embedding\n variables.\n\n NOTE: The same `serve_tensors` function and `tf.saved_model.save` call will\n work directly from training.\n\n Args:\n inputs: a nested structure of Tensors, SparseTensors or RaggedTensors.\n weights: a nested structure of Tensors, SparseTensors or RaggedTensors or\n None for no weights. If not None, structure must match that of inputs, but\n entries are allowed to be None.\n tables: a dict of mapping TableConfig objects to Variables.\n feature_config: a nested structure of FeatureConfig objects with the same\n structure as inputs.\n\n Returns:\n A nested structure of Tensors with the same structure as inputs.\n ", "desc": "Apply standard lookup ops with `tf.tpu.experimental.embedding` configs.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.SGD", "docs": "Optimization parameters for stochastic gradient descent for TPU embeddings.\n\n Pass this to `tf.tpu.experimental.embedding.TPUEmbedding` via the `optimizer`\n argument to set the global optimizer and its parameters:\n\n ```\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n ...\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n This can also be used in a `tf.tpu.experimental.embedding.TableConfig` as the\n optimizer parameter to set a table specific optimizer. This will override the\n optimizer and parameters for global embedding optimizer defined above:\n\n ```\n table_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...,\n optimizer=tf.tpu.experimental.embedding.SGD(0.2))\n table_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n\n feature_config = (\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_one),\n tf.tpu.experimental.embedding.FeatureConfig(\n table=table_two))\n\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n In the above example, the first feature will be looked up in a table that has\n a learning rate of 0.2 while the second feature will be looked up in a table\n that has a learning rate of 0.1.\n\n See 'tensorflow/core/protobuf/tpu/optimization_parameters.proto' for a\n complete description of these parameters and their impacts on the optimizer\n algorithm.\n ", "desc": "Optimization parameters for stochastic gradient descent for TPU embeddings.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.TableConfig", "docs": "Configuration data for one embedding table.\n\n This class holds the configuration data for a single embedding table. It is\n used as the `table` parameter of a\n `tf.tpu.experimental.embedding.FeatureConfig`. Multiple\n `tf.tpu.experimental.embedding.FeatureConfig` objects can use the same\n `tf.tpu.experimental.embedding.TableConfig` object. In this case a shared\n table will be created for those feature lookups.\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n batch_size=...\n optimizer=tf.tpu.experimental.embedding.Adam(0.1))\n ```\n\n The above configuration has 2 tables, and three features. The first two\n features will be looked up in the first table and the third feature will be\n looked up in the second table.\n\n ", "desc": "Configuration data for one embedding table.", "type": "API"}, {"name": "tf.tpu.experimental.embedding.TPUEmbedding", "docs": "The TPUEmbedding mid level API.\n\n NOTE: When instantiated under a TPUStrategy, this class can only be created\n once per call to `tf.tpu.experimental.initialize_tpu_system`. If you wish to\n re-initialize the embedding engine you must re-initialize the tpu as well.\n Doing this will clear any variables from TPU, so ensure you have checkpointed\n before you do this. If a further instances of the class are needed,\n set the `initialize_tpu_embedding` argument to `False`.\n\n This class can be used to support training large embeddings on TPU. When\n creating an instance of this class, you must specify the complete set of\n tables and features you expect to lookup in those tables. See the\n documentation of `tf.tpu.experimental.embedding.TableConfig` and\n `tf.tpu.experimental.embedding.FeatureConfig` for more details on the complete\n set of options. We will cover the basic usage here.\n\n NOTE: multiple `FeatureConfig` objects can use the same `TableConfig` object,\n allowing different features to share the same table:\n\n ```python\n table_config_one = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n table_config_two = tf.tpu.experimental.embedding.TableConfig(\n vocabulary_size=...,\n dim=...)\n feature_config = {\n 'feature_one': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_two': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_one),\n 'feature_three': tf.tpu.experimental.embedding.FeatureConfig(\n table=table_config_two)}\n ```\n\n There are two modes under which the `TPUEmbedding` class can used. This\n depends on if the class was created under a `TPUStrategy` scope or not.\n\n Under `TPUStrategy`, we allow access to the method `enqueue`, `dequeue` and\n `apply_gradients`. We will show examples below of how to use these to train\n and evaluate your model. Under CPU, we only access to the `embedding_tables`\n property which allow access to the embedding tables so that you can use them\n to run model evaluation/prediction on CPU.\n\n First lets look at the `TPUStrategy` mode. Initial setup looks like:\n\n ```python\n strategy = tf.distribute.TPUStrategy(...)\n with strategy.scope():\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n ```\n\n When creating a distributed dataset that is to be passed to the enqueue\n operation a special input option must be specified:\n\n ```python\n distributed_dataset = (\n strategy.distribute_datasets_from_function(\n dataset_fn=...,\n options=tf.distribute.InputOptions(\n experimental_fetch_to_device=False))\n dataset_iterator = iter(distributed_dataset)\n ```\n\n Different feature inputs can have different shapes. For dense and sparse\n tensor, rank 2 and above is supported. For ragged tensor, although only rank 2\n is supported, you can specify the output shape to be rank 2 and above. The\n output shape specified in the FeatureConfig has the first priority. The input\n shape passed in build method has second priority and the input shapes\n auto detected from input feature has the lowest priority. The latter two will\n be converted to output shapes by omitting the last dimension. If the lower\n priority one has output shapes which don't match the former one. A ValueError\n will be raised. Only when the former one has undefined output shapes, the\n latter one can override.\n\n NOTE: All batches passed to the layer can have different input shapes. But\n these input shapes need to match with the output shapes set by either\n `FeatureConfig` or build method except for ragged tensor. Only 2D\n ragged tensor with output shape set to higher dimensions is allowed as\n long as the total number of elements matches. All subsequent calls must have\n the same input shapes. In the event that the input shapes cannot be\n automatically determined by the enqueue method, you must call\n the build method with the input shapes or provide output shapes in the\n `FeatureConfig` to initialize the layer.\n\n To use this API on TPU you should use a custom training loop. Below is an\n example of a training and evaluation step:\n\n ```python\n @tf.function\n def training_step(dataset_iterator, num_steps):\n def tpu_step(tpu_features):\n with tf.GradientTape() as tape:\n activations = embedding.dequeue()\n tape.watch(activations)\n model_output = model(activations)\n loss = ... # some function of labels and model_output\n\n embedding_gradients = tape.gradient(loss, activations)\n embedding.apply_gradients(embedding_gradients)\n # Insert your model gradient and optimizer application here\n\n for _ in tf.range(num_steps):\n embedding_features, tpu_features = next(dataset_iterator)\n embedding.enqueue(embedding_features, training=True)\n strategy.run(tpu_step, args=(tpu_features, ))\n\n @tf.function\n def evalution_step(dataset_iterator, num_steps):\n def tpu_step(tpu_features):\n activations = embedding.dequeue()\n model_output = model(activations)\n # Insert your evaluation code here.\n\n for _ in tf.range(num_steps):\n embedding_features, tpu_features = next(dataset_iterator)\n embedding.enqueue(embedding_features, training=False)\n strategy.run(tpu_step, args=(tpu_features, ))\n ```\n\n NOTE: The calls to `enqueue` have `training` set to `True` when\n `embedding.apply_gradients` is used and set to `False` when\n `embedding.apply_gradients` is not present in the function. If you don't\n follow this pattern you may cause an error to be raised or the tpu may\n deadlock.\n\n In the above examples, we assume that the user has a dataset which returns\n a tuple where the first element of the tuple matches the structure of what\n was passed as the `feature_config` argument to the object initializer. Also we\n utilize `tf.range` to get a `tf.while_loop` in order to increase performance.\n\n When checkpointing your model, you should include your\n `tf.tpu.experimental.embedding.TPUEmbedding` object in the checkpoint. It is a\n trackable object and saving it will save the embedding tables and their\n optimizer slot variables:\n\n ```python\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.save(...)\n ```\n\n On CPU, only the `embedding_table` property is usable. This will allow you to\n restore a checkpoint to the object and have access to the table variables:\n\n ```python\n model = model_fn(...)\n embedding = tf.tpu.experimental.embedding.TPUEmbedding(\n feature_config=feature_config,\n optimizer=tf.tpu.experimental.embedding.SGD(0.1))\n checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)\n checkpoint.restore(...)\n\n tables = embedding.embedding_tables\n ```\n\n You can now use table in functions like `tf.nn.embedding_lookup` to perform\n your embedding lookup and pass to your model.\n\n ", "desc": "The TPUEmbedding mid level API.", "type": "API"}, {"name": "tf.tpu.experimental.initialize_tpu_system", "docs": "Initialize the TPU devices.\n\n Args:\n cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver,\n which provides information about the TPU cluster.\n Returns:\n The tf.tpu.Topology object for the topology of the TPU cluster. If called\n inside tf.function, it returns the serialized topology object instead.\n\n Raises:\n RuntimeError: If running inside a tf.function.\n NotFoundError: If no TPU devices found in eager mode.\n ", "desc": "Initialize the TPU devices.", "type": "API"}, {"name": "tf.tpu.experimental.shutdown_tpu_system", "docs": "Shuts down the TPU devices.\n\n This will clear all caches, even those that are maintained through sequential\n calls to tf.tpu.experimental.initialize_tpu_system, such as the compilation\n cache.\n\n Args:\n cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver,\n which provides information about the TPU cluster.\n\n Raises:\n RuntimeError: If no TPU devices found for eager execution or if run in a\n tf.function.\n ", "desc": "Shuts down the TPU devices.", "type": "API"}, {"name": "tf.tpu.experimental.Topology", "docs": "Describes a set of TPU devices.\n\n Represents both the shape of the physical mesh, and the mapping between\n TensorFlow TPU devices to physical mesh coordinates.\n ", "desc": "Describes a set of TPU devices.", "type": "API"}, {"name": "tf.tpu.experimental.TPUSystemMetadata", "docs": "Describes some metadata about the TPU system.\n\n Attributes:\n num_cores: interger. Total number of TPU cores in the TPU system.\n num_hosts: interger. Total number of hosts (TPU workers) in the TPU system.\n num_of_cores_per_host: interger. Number of TPU cores per host (TPU worker).\n topology: an instance of `tf.tpu.experimental.Topology`, which describes the\n physical topology of TPU system.\n devices: a tuple of strings, which describes all the TPU devices in the\n system.\n ", "desc": "Describes some metadata about the TPU system.", "type": "API"}, {"name": "tf.tpu.XLAOptions", "docs": "XLA compilation options.\n\n Attributes:\n use_spmd_for_xla_partitioning: Boolean. Whether to use XLA's SPMD\n partitioner instead of MPMD partitioner when compiler partitioning is\n requested.\n enable_xla_dynamic_padder: Boolean. Whether to enable XLA dynamic padder\n infrastructure to handle dynamic shapes inputs inside XLA. True by\n default. Disabling this may cause correctness issues with dynamic shapes\n inputs, as XLA will just assume the inputs are with padded shapes. However\n users can optionally set it to False to improve device time if masking is\n already handled in the user side.\n ", "desc": "XLA compilation options.", "type": "API"}, {"name": "tf.train", "docs": "Support for training models.\n\nSee the [Training](https://tensorflow.org/api_guides/python/train) guide.\n\n", "desc": "Support for training models.", "type": "API"}, {"name": "tf.train.BytesList", "docs": "Used in `tf.train.Example` protos. Holds a list of byte-strings.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[bytes]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {bytes_list {value: ['abc', '12345' ]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].bytes_list.value\n[\"abc\", \"12345\"]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_feature': }\n\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of byte-strings.", "type": "API"}, {"name": "tf.train.Checkpoint", "docs": "Manages saving/restoring trackable values to disk.\n\n TensorFlow objects may contain trackable state, such as `tf.Variable`s,\n `tf.keras.optimizers.Optimizer` implementations, `tf.data.Dataset` iterators,\n `tf.keras.Layer` implementations, or `tf.keras.Model` implementations.\n These are called **trackable objects**.\n\n A `Checkpoint` object can be constructed to save either a single or group of\n trackable objects to a checkpoint file. It maintains a `save_counter` for\n numbering checkpoints.\n\n Example:\n\n ```python\n model = tf.keras.Model(...)\n checkpoint = tf.train.Checkpoint(model)\n\n # Save a checkpoint to /tmp/training_checkpoints-{save_counter}. Every time\n # checkpoint.save is called, the save counter is increased.\n save_path = checkpoint.save('/tmp/training_checkpoints')\n\n # Restore the checkpointed values to the `model` object.\n checkpoint.restore(save_path)\n ```\n\n Example 2:\n\n ```python\n import tensorflow as tf\n import os\n\n checkpoint_directory = \"/tmp/training_checkpoints\"\n checkpoint_prefix = os.path.join(checkpoint_directory, \"ckpt\")\n\n # Create a Checkpoint that will manage two objects with trackable state,\n # one we name \"optimizer\" and the other we name \"model\".\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))\n for _ in range(num_training_steps):\n optimizer.minimize( ... ) # Variables will be restored on creation.\n status.assert_consumed() # Optional sanity checks.\n checkpoint.save(file_prefix=checkpoint_prefix)\n ```\n\n `Checkpoint.save()` and `Checkpoint.restore()` write and read object-based\n checkpoints, in contrast to TensorFlow 1.x's `tf.compat.v1.train.Saver` which\n writes and\n reads `variable.name` based checkpoints. Object-based checkpointing saves a\n graph of dependencies between Python objects (`Layer`s, `Optimizer`s,\n `Variable`s, etc.) with named edges, and this graph is used to match variables\n when restoring a checkpoint. It can be more robust to changes in the Python\n program, and helps to support restore-on-create for variables.\n\n `Checkpoint` objects have dependencies on the objects passed as keyword\n arguments to their constructors, and each dependency is given a name that is\n identical to the name of the keyword argument for which it was created.\n TensorFlow classes like `Layer`s and `Optimizer`s will automatically add\n dependencies on their own variables (e.g. \"kernel\" and \"bias\" for\n `tf.keras.layers.Dense`). Inheriting from `tf.keras.Model` makes managing\n dependencies easy in user-defined classes, since `Model` hooks into attribute\n assignment. For example:\n\n ```python\n class Regress(tf.keras.Model):\n\n def __init__(self):\n super(Regress, self).__init__()\n self.input_transform = tf.keras.layers.Dense(10)\n # ...\n\n def call(self, inputs):\n x = self.input_transform(inputs)\n # ...\n ```\n\n This `Model` has a dependency named \"input_transform\" on its `Dense` layer,\n which in turn depends on its variables. As a result, saving an instance of\n `Regress` using `tf.train.Checkpoint` will also save all the variables created\n by the `Dense` layer.\n\n When variables are assigned to multiple workers, each worker writes its own\n section of the checkpoint. These sections are then merged/re-indexed to behave\n as a single checkpoint. This avoids copying all variables to one worker, but\n does require that all workers see a common filesystem.\n\n This function differs slightly from the Keras Model `save_weights` function.\n `tf.keras.Model.save_weights` creates a checkpoint file with the name\n specified in `filepath`, while `tf.train.Checkpoint` numbers the checkpoints,\n using `filepath` as the prefix for the checkpoint file names. Aside from this,\n `model.save_weights()` and `tf.train.Checkpoint(model).save()` are equivalent.\n\n See the [guide to training\n checkpoints](https://www.tensorflow.org/guide/checkpoint) for\n details.\n\n Attributes:\n save_counter: Incremented when `save()` is called. Used to number\n checkpoints.\n ", "desc": "Manages saving/restoring trackable values to disk.", "type": "API"}, {"name": "tf.train.CheckpointManager", "docs": "Manages multiple checkpoints by keeping some and deleting unneeded ones.\n\n Example usage:\n\n ```python\n import tensorflow as tf\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n manager = tf.train.CheckpointManager(\n checkpoint, directory=\"/tmp/model\", max_to_keep=5)\n status = checkpoint.restore(manager.latest_checkpoint)\n while True:\n # train\n manager.save()\n ```\n\n `CheckpointManager` preserves its own state across instantiations (see the\n `__init__` documentation for details). Only one should be active in a\n particular directory at a time.\n ", "desc": "Manages multiple checkpoints by keeping some and deleting unneeded ones.", "type": "API"}, {"name": "tf.train.CheckpointOptions", "docs": "Options for constructing a Checkpoint.\n\n Used as the `options` argument to either `tf.train.Checkpoint.save()` or\n `tf.train.Checkpoint.restore()` methods to adjust how variables are\n saved/restored.\n\n Example: Run IO ops on \"localhost\" while saving a checkpoint:\n\n ```\n step = tf.Variable(0, name=\"step\")\n checkpoint = tf.train.Checkpoint(step=step)\n options = tf.train.CheckpointOptions(experimental_io_device=\"/job:localhost\")\n checkpoint.save(\"/tmp/ckpt\", options=options)\n ```\n ", "desc": "Options for constructing a Checkpoint.", "type": "API"}, {"name": "tf.train.checkpoints_iterator", "docs": "Continuously yield new checkpoint files as they appear.\n\n The iterator only checks for new checkpoints when control flow has been\n reverted to it. This means it can miss checkpoints if your code takes longer\n to run between iterations than `min_interval_secs` or the interval at which\n new checkpoints are written.\n\n The `timeout` argument is the maximum number of seconds to block waiting for\n a new checkpoint. It is used in combination with the `timeout_fn` as\n follows:\n\n * If the timeout expires and no `timeout_fn` was specified, the iterator\n stops yielding.\n * If a `timeout_fn` was specified, that function is called and if it returns\n a true boolean value the iterator stops yielding.\n * If the function returns a false boolean value then the iterator resumes the\n wait for new checkpoints. At this point the timeout logic applies again.\n\n This behavior gives control to callers on what to do if checkpoints do not\n come fast enough or stop being generated. For example, if callers have a way\n to detect that the training has stopped and know that no new checkpoints\n will be generated, they can provide a `timeout_fn` that returns `True` when\n the training has stopped. If they know that the training is still going on\n they return `False` instead.\n\n Args:\n checkpoint_dir: The directory in which checkpoints are saved.\n min_interval_secs: The minimum number of seconds between yielding\n checkpoints.\n timeout: The maximum number of seconds to wait between checkpoints. If left\n as `None`, then the process will wait indefinitely.\n timeout_fn: Optional function to call after a timeout. If the function\n returns True, then it means that no new checkpoints will be generated and\n the iterator will exit. The function is called with no arguments.\n\n Yields:\n String paths to latest checkpoint files as they arrive.\n ", "desc": "Continuously yield new checkpoint files as they appear.", "type": "API"}, {"name": "tf.train.ClusterDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.train.ClusterSpec", "docs": "Represents a cluster as a set of \"tasks\", organized into \"jobs\".\n\n A `tf.train.ClusterSpec` represents the set of processes that\n participate in a distributed TensorFlow computation. Every\n `tf.distribute.Server` is constructed in a particular cluster.\n\n To create a cluster with two jobs and five tasks, you specify the\n mapping from job names to lists of network addresses (typically\n hostname-port pairs).\n\n ```python\n cluster = tf.train.ClusterSpec({\"worker\": [\"worker0.example.com:2222\",\n \"worker1.example.com:2222\",\n \"worker2.example.com:2222\"],\n \"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n ```\n\n Each job may also be specified as a sparse mapping from task indices\n to network addresses. This enables a server to be configured without\n needing to know the identity of (for example) all other worker\n tasks:\n\n ```python\n cluster = tf.train.ClusterSpec({\"worker\": {1: \"worker1.example.com:2222\"},\n \"ps\": [\"ps0.example.com:2222\",\n \"ps1.example.com:2222\"]})\n ```\n ", "desc": "Represents a cluster as a set of \"tasks\", organized into \"jobs\".", "type": "API"}, {"name": "tf.train.Coordinator", "docs": "A coordinator for threads.\n\n This class implements a simple mechanism to coordinate the termination of a\n set of threads.\n\n #### Usage:\n\n ```python\n # Create a coordinator.\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate.\n coord.join(threads)\n ```\n\n Any of the threads can call `coord.request_stop()` to ask for all the threads\n to stop. To cooperate with the requests, each thread must check for\n `coord.should_stop()` on a regular basis. `coord.should_stop()` returns\n `True` as soon as `coord.request_stop()` has been called.\n\n A typical thread running with a coordinator will do something like:\n\n ```python\n while not coord.should_stop():\n ...do some work...\n ```\n\n #### Exception handling:\n\n A thread can report an exception to the coordinator as part of the\n `request_stop()` call. The exception will be re-raised from the\n `coord.join()` call.\n\n Thread code:\n\n ```python\n try:\n while not coord.should_stop():\n ...do some work...\n except Exception as e:\n coord.request_stop(e)\n ```\n\n Main code:\n\n ```python\n try:\n ...\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate.\n coord.join(threads)\n except Exception as e:\n ...exception that was passed to coord.request_stop()\n ```\n\n To simplify the thread implementation, the Coordinator provides a\n context handler `stop_on_exception()` that automatically requests a stop if\n an exception is raised. Using the context handler the thread code above\n can be written as:\n\n ```python\n with coord.stop_on_exception():\n while not coord.should_stop():\n ...do some work...\n ```\n\n #### Grace period for stopping:\n\n After a thread has called `coord.request_stop()` the other threads have a\n fixed time to stop, this is called the 'stop grace period' and defaults to 2\n minutes. If any of the threads is still alive after the grace period expires\n `coord.join()` raises a RuntimeError reporting the laggards.\n\n ```python\n try:\n ...\n coord = Coordinator()\n # Start a number of threads, passing the coordinator to each of them.\n ...start thread 1...(coord, ...)\n ...start thread N...(coord, ...)\n # Wait for all the threads to terminate, give them 10s grace period\n coord.join(threads, stop_grace_period_secs=10)\n except RuntimeError:\n ...one of the threads took more than 10s to stop after request_stop()\n ...was called.\n except Exception:\n ...exception that was passed to coord.request_stop()\n ```\n ", "desc": "A coordinator for threads.", "type": "API"}, {"name": "tf.train.Example", "docs": "An `Example` is a standard proto storing data for training and inference.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nIt contains a key-value store `Example.features` where each key (string) maps\nto a `tf.train.Feature` message which contains a fixed-type list. This flexible\nand compact format allows the storage of large amounts of typed data, but\nrequires that the data shape and use be determined by the configuration files\nand parsers that are used to read and write this format (refer to\n`tf.io.parse_example` for details).\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {int64_list {value: [1, 2, 3, 4]}}}\n... }''',\n... tf.train.Example())\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)})\n{'my_feature': }\n\nWhile the list of keys, and the contents of each key _could_ be different for\nevery `Example`, TensorFlow expects a fixed list of keys, each with a fixed\n`tf.dtype`. A conformant `Example` dataset obeys the following conventions:\n\n - If a Feature `K` exists in one example with data type `T`, it must be of\n type `T` in all other examples when present. It may be omitted.\n - The number of instances of Feature `K` list data may vary across examples,\n depending on the requirements of the model.\n - If a Feature `K` doesn't exist in an example, a `K`-specific default will be\n used, if configured.\n - If a Feature `K` exists in an example but contains no items, the intent\n is considered to be an empty tensor and no default will be used.\n\n", "desc": "An `Example` is a standard proto storing data for training and inference.", "type": "API"}, {"name": "tf.train.experimental", "docs": "Public API for tf.train.experimental namespace.\n", "desc": "Public API for tf.train.experimental namespace.", "type": "API"}, {"name": "tf.train.experimental.PythonState", "docs": "A mixin for putting Python state in an object-based checkpoint.\n\n This is an abstract class which allows extensions to TensorFlow's object-based\n checkpointing (see `tf.train.Checkpoint`). For example a wrapper for NumPy\n arrays:\n\n ```python\n import io\n import numpy\n\n class NumpyWrapper(tf.train.experimental.PythonState):\n\n def __init__(self, array):\n self.array = array\n\n def serialize(self):\n string_file = io.BytesIO()\n try:\n numpy.save(string_file, self.array, allow_pickle=False)\n serialized = string_file.getvalue()\n finally:\n string_file.close()\n return serialized\n\n def deserialize(self, string_value):\n string_file = io.BytesIO(string_value)\n try:\n self.array = numpy.load(string_file, allow_pickle=False)\n finally:\n string_file.close()\n ```\n\n Instances of `NumpyWrapper` are checkpointable objects, and will be saved and\n restored from checkpoints along with TensorFlow state like variables.\n\n ```python\n root = tf.train.Checkpoint(numpy=NumpyWrapper(numpy.array([1.])))\n save_path = root.save(prefix)\n root.numpy.array *= 2.\n assert [2.] == root.numpy.array\n root.restore(save_path)\n assert [1.] == root.numpy.array\n ```\n ", "desc": "A mixin for putting Python state in an object-based checkpoint.", "type": "API"}, {"name": "tf.train.ExponentialMovingAverage", "docs": "Maintains moving averages of variables by employing an exponential decay.\n\n When training a model, it is often beneficial to maintain moving averages of\n the trained parameters. Evaluations that use averaged parameters sometimes\n produce significantly better results than the final trained values.\n\n The `apply()` method adds shadow copies of trained variables the first time\n it is called, and maintains a moving average of the trained variables in\n their shadow copies at every additional invocation.\n It should generally be called immediately after creating the model weights,\n and then after each training step.\n\n The `average()` method gives access to the shadow variables.\n It allows you to use the moving averages in place of the last trained values\n for evaluations, by loading the moving averages into your model via\n `var.assign(ema.average(var))`.\n Additionally, although `ExponentialMovingAverage`\n objects are not directly trackable by checkpoints,\n `average()` returns the moving average variables for your model weights,\n which you can then checkpoint. (There is an example\n of this near the bottom of this docstring).\n So, `average()` is useful when\n building an evaluation model, or when restoring a model from a checkpoint\n file.\n\n The moving averages are computed using exponential decay. You specify the\n decay value (as a scalar float value, `Tensor`, or `Variable`) when creating\n the `ExponentialMovingAverage` object. The shadow variables are initialized\n with the same initial values as the trained variables. When you run `apply`\n to update the moving averages, each shadow variable is updated with the\n formula:\n\n `shadow_variable -= (1 - decay) * (shadow_variable - variable)`\n\n This is mathematically equivalent to the classic formula below, but the use\n of an `assign_sub` op (the `\"-=\"` in the formula) allows concurrent lockless\n updates to the variables:\n\n `shadow_variable = decay * shadow_variable + (1 - decay) * variable`\n\n Reasonable values for `decay` are close to 1.0, typically in the\n multiple-nines range: 0.999, 0.9999, etc.\n\n To have fine-grained control over the value of the decay parameter during\n training, pass a scalar `tf.Variable` as the `decay` value to the constructor,\n and update the variable as needed.\n\n Example usage when creating a training model:\n\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n\n # The first `apply` creates the shadow variables that hold the moving averages\n ema.apply([var0, var1])\n\n # grab the moving averages for checkpointing purposes or to be able to\n # load the moving averages into the model weights\n averages = [ema.average(var0), ema.average(var1)]\n\n ...\n def train_step(...):\n ...\n # Apply the optimizer.\n opt.minimize(my_loss, [var0, var1])\n\n # Update the moving averages\n # of var0 and var1 with additional calls to `apply`\n ema.apply([var0, var1])\n\n ...train the model by running train_step multiple times...\n ```\n\n There are several ways to use the moving averages for evaluations:\n\n 1. Assign the values of the shadow variables to your model variables with\n `Variable.assign(...)` before evaluating your\n model. You can use the `average()`\n method to get the shadow variable for a given variable. To continue\n training after using this approach, make sure to record the unaveraged\n weights and restore them before continuing to train. You can see the\n tensorflow-addons' MovingAverage optimizer's `swap_weights` method for\n one example of how to swap variables efficiently in distributed settings:\n https://github.com/tensorflow/addons/blob/v0.13.0/tensorflow_addons/optimizers/moving_average.py#L151\n 2. Make sure to checkpoint out your moving average variables in your\n `tf.train.Checkpoint`. At evaluation time, create your shadow variables and\n use `tf.train.Checkpoint` to restore the moving averages into the shadow\n variables. Then, load the moving averages into the actual model weights via\n `var.assign(moving_avg)`.\n 3. Checkpoint out your moving average variables in your `tf.train.Checkpoint`.\n For evaluation, restore your model weights directly from the moving\n averages instead of from the non-averaged weights.\n Caution: If you choose this approach, include only the object-graph paths\n to the averaged path in your checkpoint restore.\n If you point both the unaveraged and averaged paths in a checkpoint\n restore to the same variables, it is hard to reason about whether your\n model will restore the averaged or non-averaged variables.\n\n Example of saving out then restoring the shadow variable values:\n\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object, create the shadow variables,\n # and grab the moving averages for checkpointing purposes.\n # (The ExponentialMovingAverage object itself is not checkpointable)\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n ema.apply([var0, var1])\n avg_var0 = ema.average(var0)\n avg_var1 = ema.average(var1)\n\n # Create a Checkpoint that will manage the model weights and the averages,\n checkpoint = tf.train.Checkpoint(model_weights=[var0, var1],\n averaged_weights=[avg_var0, avg_var1])\n ... # Do training\n\n # Save out the checkpoint including the model weights and the moving averages\n checkpoint.save(...)\n ```\n\n Restore option: restore all averaged & non-averaged weights, then load\n moving averages into the model via `var.assign()`\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create an ExponentialMovingAverage object, create the shadow variables,\n # and grab the moving averages for checkpoint restore purposes.\n # (The ExponentialMovingAverage object itself is not checkpointable)\n ema = tf.train.ExponentialMovingAverage(decay=0.9999)\n ema.apply([var0, var1])\n avg_var0 = ema.average(var0)\n avg_var1 = ema.average(var1)\n\n # Create a Checkpoint that will manage the model weights and the averages,\n checkpoint = tf.train.Checkpoint(model_weights=[var0, var1],\n averaged_weights=[avg_var0, avg_var1])\n checkpoint.restore(...)\n var0.assign(avg_var0)\n var1.assign(avg_var1)\n # var0 and var1 now hold the moving average values\n ```\n\n Restore option: Directly restore the moving averages into the model weights.\n ```python\n # Create variables.\n var0 = tf.Variable(...)\n var1 = tf.Variable(...)\n # ... use the variables to build a training model...\n\n # Create a Checkpoint that will manage two objects with trackable state,\n checkpoint = tf.train.Checkpoint(averaged_weights=[var0, var1])\n checkpoint.restore(...)\n # var0 and var1 now hold the moving average values\n ```\n ", "desc": "Maintains moving averages of variables by employing an exponential decay.", "type": "API"}, {"name": "tf.train.Feature", "docs": "Used in `tf.train.Example` protos. Contains a list of values.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `Union`.\n\nThe contained list can be one of three types:\n\n - `tf.train.BytesList`\n - `tf.train.FloatList`\n - `tf.train.Int64List`\n\n>>> int_feature = tf.train.Feature(\n... int64_list=tf.train.Int64List(value=[1, 2, 3, 4]))\n>>> float_feature = tf.train.Feature(\n... float_list=tf.train.FloatList(value=[1., 2., 3., 4.]))\n>>> bytes_feature = tf.train.Feature(\n... bytes_list=tf.train.BytesList(value=[b\"abc\", b\"1234\"]))\n>>>\n>>> example = tf.train.Example(\n... features=tf.train.Features(feature={\n... 'my_ints': int_feature,\n... 'my_floats': float_feature,\n... 'my_bytes': bytes_feature,\n... }))\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {\n... 'my_ints': tf.io.RaggedFeature(dtype=tf.int64),\n... 'my_floats': tf.io.RaggedFeature(dtype=tf.float32),\n... 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_bytes': ,\n 'my_floats': ,\n 'my_ints': }\n\n", "desc": "Used in `tf.train.Example` protos. Contains a list of values.", "type": "API"}, {"name": "tf.train.FeatureList", "docs": "Mainly used as part of a `tf.train.SequenceExample`.\n\nContains a list of `tf.train.Feature`s.\n\nThe `tf.train.SequenceExample` proto can be thought of as a\nproto implementation of the following python type:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nThis proto implements the `List[Feature]` portion.\n\n", "desc": "Mainly used as part of a `tf.train.SequenceExample`.", "type": "API"}, {"name": "tf.train.FeatureLists", "docs": "Mainly used as part of a `tf.train.SequenceExample`.\n\nContains a list of `tf.train.Feature`s.\n\nThe `tf.train.SequenceExample` proto can be thought of as a\nproto implementation of the following python type:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nThis proto implements the `Dict[str, FeatureList]` portion.\n", "desc": "Mainly used as part of a `tf.train.SequenceExample`.", "type": "API"}, {"name": "tf.train.FeatureLists.FeatureListEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.train.Features", "docs": "Used in `tf.train.Example` protos. Contains the mapping from keys to `Feature`.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `Dict`.\n\n>>> int_feature = tf.train.Feature(\n... int64_list=tf.train.Int64List(value=[1, 2, 3, 4]))\n>>> float_feature = tf.train.Feature(\n... float_list=tf.train.FloatList(value=[1., 2., 3., 4.]))\n>>> bytes_feature = tf.train.Feature(\n... bytes_list=tf.train.BytesList(value=[b\"abc\", b\"1234\"]))\n>>>\n>>> example = tf.train.Example(\n... features=tf.train.Features(feature={\n... 'my_ints': int_feature,\n... 'my_floats': float_feature,\n... 'my_bytes': bytes_feature,\n... }))\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {\n... 'my_ints': tf.io.RaggedFeature(dtype=tf.int64),\n... 'my_floats': tf.io.RaggedFeature(dtype=tf.float32),\n... 'my_bytes': tf.io.RaggedFeature(dtype=tf.string)})\n{'my_bytes': ,\n 'my_floats': ,\n 'my_ints': }\n\n", "desc": "Used in `tf.train.Example` protos. Contains the mapping from keys to `Feature`.", "type": "API"}, {"name": "tf.train.Features.FeatureEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.train.FloatList", "docs": "Used in `tf.train.Example` protos. Holds a list of floats.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[float]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {float_list {value: [1., 2., 3., 4. ]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].float_list.value\n[1.0, 2.0, 3.0, 4.0]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.float32)})\n{'my_feature': }\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of floats.", "type": "API"}, {"name": "tf.train.get_checkpoint_state", "docs": "Returns CheckpointState proto from the \"checkpoint\" file.\n\n If the \"checkpoint\" file contains a valid CheckpointState\n proto, returns it.\n\n Args:\n checkpoint_dir: The directory of checkpoints.\n latest_filename: Optional name of the checkpoint file. Default to\n 'checkpoint'.\n\n Returns:\n A CheckpointState if the state was available, None\n otherwise.\n\n Raises:\n ValueError: if the checkpoint read doesn't have model_checkpoint_path set.\n ", "desc": "Returns CheckpointState proto from the \"checkpoint\" file.", "type": "API"}, {"name": "tf.train.Int64List", "docs": "Used in `tf.train.Example` protos. Holds a list of Int64s.\n\nAn `Example` proto is a representation of the following python type:\n\n```\nDict[str,\n Union[List[bytes],\n List[int64],\n List[float]]]\n```\n\nThis proto implements the `List[int64]` portion.\n\n>>> from google.protobuf import text_format\n>>> example = text_format.Parse('''\n... features {\n... feature {key: \"my_feature\"\n... value {int64_list {value: [1, 2, 3, 4]}}}\n... }''',\n... tf.train.Example())\n>>>\n>>> example.features.feature['my_feature'].int64_list.value\n[1, 2, 3, 4]\n\nUse `tf.io.parse_example` to extract tensors from a serialized `Example` proto:\n\n>>> tf.io.parse_example(\n... example.SerializeToString(),\n... features = {'my_feature': tf.io.RaggedFeature(dtype=tf.int64)})\n{'my_feature': }\n\nSee the [`tf.train.Example`](https://www.tensorflow.org/tutorials/load_data/tfrecord#tftrainexample)\nguide for usage details.\n", "desc": "Used in `tf.train.Example` protos. Holds a list of Int64s.", "type": "API"}, {"name": "tf.train.JobDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.train.JobDef.TasksEntry", "docs": "", "desc": "", "type": "API"}, {"name": "tf.train.latest_checkpoint", "docs": "Finds the filename of latest saved checkpoint file.\n\n Gets the checkpoint state given the provided checkpoint_dir and looks for a\n corresponding TensorFlow 2 (preferred) or TensorFlow 1.x checkpoint path.\n The latest_filename argument is only applicable if you are saving checkpoint\n using `v1.train.Saver.save`\n\n\n See the [Training Checkpoints\n Guide](https://www.tensorflow.org/guide/checkpoint) for more details and\n examples.`\n\n Args:\n checkpoint_dir: Directory where the variables were saved.\n latest_filename: Optional name for the protocol buffer file that\n contains the list of most recent checkpoint filenames.\n See the corresponding argument to `v1.train.Saver.save`.\n\n Returns:\n The full path to the latest checkpoint or `None` if no checkpoint was found.\n ", "desc": "Finds the filename of latest saved checkpoint file.", "type": "API"}, {"name": "tf.train.list_variables", "docs": "Lists the checkpoint keys and shapes of variables in a checkpoint.\n\n Checkpoint keys are paths in a checkpoint graph.\n\n Example usage:\n\n ```python\n import tensorflow as tf\n import os\n ckpt_directory = \"/tmp/training_checkpoints/ckpt\"\n ckpt = tf.train.Checkpoint(optimizer=optimizer, model=model)\n manager = tf.train.CheckpointManager(ckpt, ckpt_directory, max_to_keep=3)\n train_and_checkpoint(model, manager)\n tf.train.list_variables(manager.latest_checkpoint)\n ```\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint.\n\n Returns:\n List of tuples `(key, shape)`.\n ", "desc": "Lists the checkpoint keys and shapes of variables in a checkpoint.", "type": "API"}, {"name": "tf.train.load_checkpoint", "docs": "Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`.\n\n If `ckpt_dir_or_file` resolves to a directory with multiple checkpoints,\n reader for the latest checkpoint is returned.\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint\n file.\n\n Returns:\n `CheckpointReader` object.\n\n Raises:\n ValueError: If `ckpt_dir_or_file` resolves to a directory with no\n checkpoints.\n ", "desc": "Returns `CheckpointReader` for checkpoint found in `ckpt_dir_or_file`.", "type": "API"}, {"name": "tf.train.load_variable", "docs": "Returns the tensor value of the given variable in the checkpoint.\n\n Args:\n ckpt_dir_or_file: Directory with checkpoints file or path to checkpoint.\n name: Name of the variable to return.\n\n Returns:\n A numpy `ndarray` with a copy of the value of this variable.\n ", "desc": "Returns the tensor value of the given variable in the checkpoint.", "type": "API"}, {"name": "tf.train.SequenceExample", "docs": "A `SequenceExample` is a format a sequences and some context.\n\nIt can be thought of as a proto-implementation of the following python type:\n\n```\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: Dict[str, List[Feature]]\n```\n\nTo implement this as protos it's broken up into sub-messages as follows:\n\n```\n# tf.train.Feature\nFeature = Union[List[bytes],\n List[int64],\n List[float]]\n\n# tf.train.FeatureList\nFeatureList = List[Feature]\n\n# tf.train.FeatureLists\nFeatureLists = Dict[str, FeatureList]\n\n# tf.train.SequenceExample\nclass SequenceExample(typing.NamedTuple):\n context: Dict[str, Feature]\n feature_lists: FeatureLists\n```\n\nTo parse a `SequenceExample` in TensorFlow refer to the\n`tf.io.parse_sequence_example` function.\n\nThe `context` contains features which apply to the entire\nexample. The `feature_lists` contain a key, value map where each key is\nassociated with a repeated set of `tf.train.Features` (a `tf.train.FeatureList`).\nA `FeatureList` represents the values of a feature identified by its key\nover time / frames.\n\nBelow is a `SequenceExample` for a movie recommendation application recording a\nsequence of ratings by a user. The time-independent features (\"locale\",\n\"age\", \"favorites\") describing the user are part of the context. The sequence\nof movies the user rated are part of the feature_lists. For each movie in the\nsequence we have information on its name and actors and the user's rating.\nThis information is recorded in three separate `feature_list`s.\nIn the example below there are only two movies. All three `feature_list`s,\nnamely \"movie_ratings\", \"movie_names\", and \"actors\" have a feature value for\nboth movies. Note, that \"actors\" is itself a `bytes_list` with multiple\nstrings per movie.\n\n```\n context: {\n feature: {\n key : \"locale\"\n value: {\n bytes_list: {\n value: [ \"pt_BR\" ]\n }\n }\n }\n feature: {\n key : \"age\"\n value: {\n float_list: {\n value: [ 19.0 ]\n }\n }\n }\n feature: {\n key : \"favorites\"\n value: {\n bytes_list: {\n value: [ \"Majesty Rose\", \"Savannah Outen\", \"One Direction\" ]\n }\n }\n }\n }\n feature_lists: {\n feature_list: {\n key : \"movie_ratings\"\n value: {\n feature: {\n float_list: {\n value: [ 4.5 ]\n }\n }\n feature: {\n float_list: {\n value: [ 5.0 ]\n }\n }\n }\n }\n feature_list: {\n key : \"movie_names\"\n value: {\n feature: {\n bytes_list: {\n value: [ \"The Shawshank Redemption\" ]\n }\n }\n feature: {\n bytes_list: {\n value: [ \"Fight Club\" ]\n }\n }\n }\n }\n feature_list: {\n key : \"actors\"\n value: {\n feature: {\n bytes_list: {\n value: [ \"Tim Robbins\", \"Morgan Freeman\" ]\n }\n }\n feature: {\n bytes_list: {\n value: [ \"Brad Pitt\", \"Edward Norton\", \"Helena Bonham Carter\" ]\n }\n }\n }\n }\n }\n```\n\nA conformant `SequenceExample` data set obeys the following conventions:\n\n`context`:\n\n - All conformant context features `K` must obey the same conventions as\n a conformant Example's features (see above).\n\n`feature_lists`:\n\n - A `FeatureList L` may be missing in an example; it is up to the\n parser configuration to determine if this is allowed or considered\n an empty list (zero length).\n - If a `FeatureList L` exists, it may be empty (zero length).\n - If a `FeatureList L` is non-empty, all features within the `FeatureList`\n must have the same data type `T`. Even across `SequenceExample`s, the type `T`\n of the `FeatureList` identified by the same key must be the same. An entry\n without any values may serve as an empty feature.\n - If a `FeatureList L` is non-empty, it is up to the parser configuration\n to determine if all features within the `FeatureList` must\n have the same size. The same holds for this `FeatureList` across multiple\n examples.\n - For sequence modeling ([example](https://github.com/tensorflow/nmt)), the\n feature lists represent a sequence of frames. In this scenario, all\n `FeatureList`s in a `SequenceExample` have the same number of `Feature`\n messages, so that the i-th element in each `FeatureList` is part of the\n i-th frame (or time step).\n\n**Examples of conformant and non-conformant examples' `FeatureLists`:**\n\nConformant `FeatureLists`:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n```\n\nNon-conformant `FeatureLists` (mismatched types):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { int64_list: { value: [ 5 ] } } }\n } }\n```\n\nConditionally conformant `FeatureLists`, the parser configuration determines\nif the feature sizes must match:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0, 6.0 ] } } }\n } }\n```\n\n**Examples of conformant and non-conformant `SequenceExample`s:**\n\nConformant pair of SequenceExample:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } }\n feature: { float_list: { value: [ 2.0 ] } } }\n } }\n```\n\nConformant pair of `SequenceExample`s:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { }\n } }\n```\n\nConditionally conformant pair of `SequenceExample`s, the parser configuration\ndetermines if the second `feature_lists` is consistent (zero-length) or\ninvalid (missing \"movie_ratings\"):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { }\n```\n\nNon-conformant pair of `SequenceExample`s (mismatched types):\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { int64_list: { value: [ 4 ] } }\n feature: { int64_list: { value: [ 5 ] } }\n feature: { int64_list: { value: [ 2 ] } } }\n } }\n```\n\nConditionally conformant pair of `SequenceExample`s; the parser configuration\ndetermines if the feature sizes must match:\n\n```\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.5 ] } }\n feature: { float_list: { value: [ 5.0 ] } } }\n } }\n\n feature_lists: { feature_list: {\n key: \"movie_ratings\"\n value: { feature: { float_list: { value: [ 4.0 ] } }\n feature: { float_list: { value: [ 5.0, 3.0 ] } }\n } }\n```\n", "desc": "A `SequenceExample` is a format a sequences and some context.", "type": "API"}, {"name": "tf.train.ServerDef", "docs": "", "desc": "", "type": "API"}, {"name": "tf.transpose", "docs": "Transposes `a`, where `a` is a Tensor.\n\n Permutes the dimensions according to the value of `perm`.\n\n The returned tensor's dimension `i` will correspond to the input dimension\n `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is the rank\n of the input tensor. Hence by default, this operation performs a regular\n matrix transpose on 2-D input Tensors.\n\n If conjugate is `True` and `a.dtype` is either `complex64` or `complex128`\n then the values of `a` are conjugated and transposed.\n\n @compatibility(numpy)\n In `numpy` transposes are memory-efficient constant time operations as they\n simply return a new view of the same data with adjusted `strides`.\n\n TensorFlow does not support strides, so `transpose` returns a new tensor with\n the items permuted.\n @end_compatibility\n\n For example:\n\n >>> x = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.transpose(x)\n \n\n Equivalently, you could call `tf.transpose(x, perm=[1, 0])`.\n\n If `x` is complex, setting conjugate=True gives the conjugate transpose:\n\n >>> x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],\n ... [4 + 4j, 5 + 5j, 6 + 6j]])\n >>> tf.transpose(x, conjugate=True)\n \n\n 'perm' is more useful for n-dimensional tensors where n > 2:\n\n >>> x = tf.constant([[[ 1, 2, 3],\n ... [ 4, 5, 6]],\n ... [[ 7, 8, 9],\n ... [10, 11, 12]]])\n\n As above, simply calling `tf.transpose` will default to `perm=[2,1,0]`.\n\n To take the transpose of the matrices in dimension-0 (such as when you are\n transposing matrices where 0 is the batch dimension), you would set\n `perm=[0,2,1]`.\n\n >>> tf.transpose(x, perm=[0, 2, 1])\n \n\n Note: This has a shorthand `linalg.matrix_transpose`):\n\n Args:\n a: A `Tensor`.\n perm: A permutation of the dimensions of `a`. This should be a vector.\n conjugate: Optional bool. Setting it to `True` is mathematically equivalent\n to tf.math.conj(tf.transpose(input)).\n name: A name for the operation (optional).\n\n Returns:\n A transposed `Tensor`.\n ", "desc": "Transposes `a`, where `a` is a Tensor.", "type": "API"}, {"name": "tf.truediv", "docs": "Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 division operator semantics where all integer\n arguments are cast to floating types first. This op is generated by normal\n `x / y` division in Python 3 and in Python 2.7 with\n `from __future__ import division`. If you want integer division that rounds\n down, use `x // y` or `tf.math.floordiv`.\n\n `x` and `y` must have the same numeric type. If the inputs are floating\n point, the output will have the same type. If the inputs are integral, the\n inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`\n and `int64` (matching the behavior of Numpy).\n\n Args:\n x: `Tensor` numerator of numeric type.\n y: `Tensor` denominator of numeric type.\n name: A name for the operation (optional).\n\n Returns:\n `x / y` evaluated in floating point.\n\n Raises:\n TypeError: If `x` and `y` have different dtypes.\n ", "desc": "Divides x / y elementwise (using Python 3 division operator semantics).", "type": "API"}, {"name": "tf.truncatediv", "docs": "Returns x / y element-wise for integer types.\n\n Truncation designates that negative numbers will round fractional quantities\n toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different\n than Python semantics. See `FloorDiv` for a division function that matches\n Python Semantics.\n\n *NOTE*: `truncatediv` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `uint32`, `uint64`, `int64`, `complex64`, `complex128`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns x / y element-wise for integer types.", "type": "API"}, {"name": "tf.truncatemod", "docs": "Returns element-wise remainder of division. This emulates C semantics in that\n\n the result here is consistent with a truncating divide. E.g. `truncate(x / y) *\n y + truncate_mod(x, y) = x`.\n\n *NOTE*: `truncatemod` supports broadcasting. More about broadcasting\n [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)\n\n Args:\n x: A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `half`, `float32`, `float64`.\n y: A `Tensor`. Must have the same type as `x`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `x`.\n ", "desc": "Returns element-wise remainder of division. This emulates C semantics in that", "type": "API"}, {"name": "tf.tuple", "docs": "Groups tensors together.\n\n The returned tensors have the same value as the input tensors, but they\n are computed only after all the input tensors have been computed.\n\n Note: *In TensorFlow 2 with eager and/or Autograph, you should not require\n this method, as ops execute in the expected order thanks to automatic control\n dependencies.* Only use `tf.tuple` when working with v1 `tf.Graph` code.\n\n See also `tf.group` and `tf.control_dependencies`.\n\n Args:\n tensors: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.\n control_inputs: List of additional ops to finish before returning.\n name: (optional) A name to use as a `name_scope` for the operation.\n\n Returns:\n Same as `tensors`.\n\n Raises:\n ValueError: If `tensors` does not contain any `Tensor` or `IndexedSlices`.\n TypeError: If `control_inputs` is not a list of `Operation` or `Tensor`\n objects.\n\n ", "desc": "Groups tensors together.", "type": "API"}, {"name": "tf.type_spec_from_value", "docs": "Returns a `tf.TypeSpec` that represents the given `value`.\n\n Examples:\n\n >>> tf.type_spec_from_value(tf.constant([1, 2, 3]))\n TensorSpec(shape=(3,), dtype=tf.int32, name=None)\n >>> tf.type_spec_from_value(np.array([4.0, 5.0], np.float64))\n TensorSpec(shape=(2,), dtype=tf.float64, name=None)\n >>> tf.type_spec_from_value(tf.ragged.constant([[1, 2], [3, 4, 5]]))\n RaggedTensorSpec(TensorShape([2, None]), tf.int32, 1, tf.int64)\n\n >>> example_input = tf.ragged.constant([[1, 2], [3]])\n >>> @tf.function(input_signature=[tf.type_spec_from_value(example_input)])\n ... def f(x):\n ... return tf.reduce_sum(x, axis=1)\n\n Args:\n value: A value that can be accepted or returned by TensorFlow APIs. Accepted\n types for `value` include `tf.Tensor`, any value that can be converted to\n `tf.Tensor` using `tf.convert_to_tensor`, and any subclass of\n `CompositeTensor` (such as `tf.RaggedTensor`).\n\n Returns:\n A `TypeSpec` that is compatible with `value`.\n\n Raises:\n TypeError: If a TypeSpec cannot be built for `value`, because its type\n is not supported.\n ", "desc": "Returns a `tf.TypeSpec` that represents the given `value`.", "type": "API"}, {"name": "tf.types", "docs": "Public TensorFlow type definitions.\n\nFor details, see\nhttps://github.com/tensorflow/community/blob/master/rfcs/20200211-tf-types.md.\n\n", "desc": "Public TensorFlow type definitions.", "type": "API"}, {"name": "tf.types.experimental", "docs": "Public API for tf.types.experimental namespace.\n", "desc": "Public API for tf.types.experimental namespace.", "type": "API"}, {"name": "tf.types.experimental.TensorLike", "docs": "Union of all types that can be converted to a `tf.Tensor` by `tf.convert_to_tensor`.\n\nThis definition may be used in user code. Additional types may be added\nin the future as more input types are supported.\n\nExample:\n\n```\ndef foo(x: TensorLike):\n pass\n```\n\nThis definition passes static type verification for:\n\n```\nfoo(tf.constant([1, 2, 3]))\nfoo([1, 2, 3])\nfoo(np.array([1, 2, 3]))\n```\n", "desc": "Union of all types that can be converted to a `tf.Tensor` by `tf.convert_to_tensor`.", "type": "API"}, {"name": "tf.TypeSpec", "docs": "Specifies a TensorFlow value type.\n\n A `tf.TypeSpec` provides metadata describing an object accepted or returned\n by TensorFlow APIs. Concrete subclasses, such as `tf.TensorSpec` and\n `tf.RaggedTensorSpec`, are used to describe different value types.\n\n For example, `tf.function`'s `input_signature` argument accepts a list\n (or nested structure) of `TypeSpec`s.\n\n Creating new subclasses of `TypeSpec` (outside of TensorFlow core) is not\n currently supported. In particular, we may make breaking changes to the\n private methods and properties defined by this base class.\n\n Example:\n\n >>> spec = tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)\n >>> @tf.function(input_signature=[spec])\n ... def double(x):\n ... return x * 2\n >>> print(double(tf.ragged.constant([[1, 2], [3]])))\n \n ", "desc": "Specifies a TensorFlow value type.", "type": "API"}, {"name": "tf.UnconnectedGradients", "docs": "Controls how gradient computation behaves when y does not depend on x.\n\n The gradient of y with respect to x can be zero in two different ways: there\n could be no differentiable path in the graph connecting x to y (and so we can\n statically prove that the gradient is zero) or it could be that runtime values\n of tensors in a particular execution lead to a gradient of zero (say, if a\n relu unit happens to not be activated). To allow you to distinguish between\n these two cases you can choose what value gets returned for the gradient when\n there is no path in the graph from x to y:\n\n * `NONE`: Indicates that [None] will be returned if there is no path from x\n to y\n * `ZERO`: Indicates that a zero tensor will be returned in the shape of x.\n ", "desc": "Controls how gradient computation behaves when y does not depend on x.", "type": "API"}, {"name": "tf.unique", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`; `x` does not need to be sorted.\n This operation also returns a tensor `idx` the same size as `x` that contains\n the index of each value of `x` in the unique output `y`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n Examples:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx = unique(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n ```\n\n ```\n # tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]\n y, idx = unique(x)\n y ==> [4, 5, 1, 2, 3]\n idx ==> [0, 1, 2, 3, 4, 4, 0, 1]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.unique_with_counts", "docs": "Finds unique elements in a 1-D tensor.\n\n This operation returns a tensor `y` containing all of the unique elements of `x`\n sorted in the same order that they occur in `x`. This operation also returns a\n tensor `idx` the same size as `x` that contains the index of each value of `x`\n in the unique output `y`. Finally, it returns a third tensor `count` that\n contains the count of each element of `y` in `x`. In other words:\n\n `y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`\n\n For example:\n\n ```\n # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]\n y, idx, count = unique_with_counts(x)\n y ==> [1, 2, 4, 7, 8]\n idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]\n count ==> [2, 1, 3, 1, 2]\n ```\n\n Args:\n x: A `Tensor`. 1-D.\n out_idx: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.\n name: A name for the operation (optional).\n\n Returns:\n A tuple of `Tensor` objects (y, idx, count).\n\n y: A `Tensor`. Has the same type as `x`.\n idx: A `Tensor` of type `out_idx`.\n count: A `Tensor` of type `out_idx`.\n ", "desc": "Finds unique elements in a 1-D tensor.", "type": "API"}, {"name": "tf.unravel_index", "docs": "Converts an array of flat indices into a tuple of coordinate arrays.\n\n \n Example:\n\n ```\n y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])\n # 'dims' represent a hypothetical (3, 3) tensor of indices:\n # [[0, 1, *2*],\n # [3, 4, *5*],\n # [6, *7*, 8]]\n # For each entry from 'indices', this operation returns\n # its coordinates (marked with '*'), such as\n # 2 ==> (0, 2)\n # 5 ==> (1, 2)\n # 7 ==> (2, 1)\n y ==> [[0, 1, 2], [2, 2, 1]]\n ```\n\n @compatibility(numpy)\n Equivalent to np.unravel_index\n @end_compatibility\n\n Args:\n indices: A `Tensor`. Must be one of the following types: `int32`, `int64`.\n An 0-D or 1-D `int` Tensor whose elements are indices into the\n flattened version of an array of dimensions dims.\n dims: A `Tensor`. Must have the same type as `indices`.\n An 1-D `int` Tensor. The shape of the array to use for unraveling\n indices.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`. Has the same type as `indices`.\n ", "desc": "Converts an array of flat indices into a tuple of coordinate arrays.", "type": "API"}, {"name": "tf.unstack", "docs": "Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.\n\n Unpacks tensors from `value` by chipping it along the `axis` dimension.\n\n >>> x = tf.reshape(tf.range(12), (3,4))\n >>>\n >>> p, q, r = tf.unstack(x)\n >>> p.shape.as_list()\n [4]\n\n >>> i, j, k, l = tf.unstack(x, axis=1)\n >>> i.shape.as_list()\n [3]\n\n This is the opposite of stack.\n\n >>> x = tf.stack([i, j, k, l], axis=1)\n\n More generally if you have a tensor of shape `(A, B, C, D)`:\n\n >>> A, B, C, D = [2, 3, 4, 5]\n >>> t = tf.random.normal(shape=[A, B, C, D])\n\n The number of tensor returned is equal to the length of the target `axis`:\n\n >>> axis = 2\n >>> items = tf.unstack(t, axis=axis)\n >>> len(items) == t.shape[axis]\n True\n\n The shape of each result tensor is equal to the shape of the input tensor,\n with the target `axis` removed.\n\n >>> items[0].shape.as_list() # [A, B, D]\n [2, 3, 5]\n\n The value of each tensor `items[i]` is equal to the slice of `input` across\n `axis` at index `i`:\n\n >>> for i in range(len(items)):\n ... slice = t[:,:,i,:]\n ... assert tf.reduce_all(slice == items[i])\n\n #### Python iterable unpacking\n\n With eager execution you _can_ unstack the 0th axis of a tensor using python's\n iterable unpacking:\n\n >>> t = tf.constant([1,2,3])\n >>> a,b,c = t\n\n `unstack` is still necessary because Iterable unpacking doesn't work in\n a `@tf.function`: Symbolic tensors are not iterable.\n\n You need to use `tf.unstack` here:\n\n >>> @tf.function\n ... def bad(t):\n ... a,b,c = t\n ... return a\n >>>\n >>> bad(t)\n Traceback (most recent call last):\n ...\n OperatorNotAllowedInGraphError: ...\n\n >>> @tf.function\n ... def good(t):\n ... a,b,c = tf.unstack(t)\n ... return a\n >>>\n >>> good(t).numpy()\n 1\n\n #### Unknown shapes\n\n Eager tensors have concrete values, so their shape is always known.\n Inside a `tf.function` the symbolic tensors may have unknown shapes.\n If the length of `axis` is unknown `tf.unstack` will fail because it cannot\n handle an unknown number of tensors:\n\n >>> @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])\n ... def bad(t):\n ... tensors = tf.unstack(t)\n ... return tensors[0]\n >>>\n >>> bad(tf.constant([1,2,3]))\n Traceback (most recent call last):\n ...\n ValueError: Cannot infer argument `num` from shape (None,)\n\n If you know the `axis` length you can pass it as the `num` argument. But this\n must be a constant value.\n\n If you actually need a variable number of tensors in a single `tf.function`\n trace, you will need to use exlicit loops and a `tf.TensorArray` instead.\n\n Args:\n value: A rank `R > 0` `Tensor` to be unstacked.\n num: An `int`. The length of the dimension `axis`. Automatically inferred if\n `None` (the default).\n axis: An `int`. The axis to unstack along. Defaults to the first dimension.\n Negative values wrap around, so the valid range is `[-R, R)`.\n name: A name for the operation (optional).\n\n Returns:\n The list of `Tensor` objects unstacked from `value`.\n\n Raises:\n ValueError: If `axis` is out of the range `[-R, R)`.\n ValueError: If `num` is unspecified and cannot be inferred.\n InvalidArgumentError: If `num` does not match the shape of `value`.\n ", "desc": "Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.", "type": "API"}, {"name": "tf.Variable", "docs": "See the [variable guide](https://tensorflow.org/guide/variable).\n\n A variable maintains shared, persistent state manipulated by a program.\n\n The `Variable()` constructor requires an initial value for the variable, which\n can be a `Tensor` of any type and shape. This initial value defines the type\n and shape of the variable. After construction, the type and shape of the\n variable are fixed. The value can be changed using one of the assign methods.\n\n >>> v = tf.Variable(1.)\n >>> v.assign(2.)\n \n >>> v.assign_add(0.5)\n \n\n The `shape` argument to `Variable`'s constructor allows you to construct a\n variable with a less defined shape than its `initial_value`:\n\n >>> v = tf.Variable(1., shape=tf.TensorShape(None))\n >>> v.assign([[1.]])\n dtype=float32, numpy=array([[1.]], ...)>\n\n Just like any `Tensor`, variables created with `Variable()` can be used as\n inputs to operations. Additionally, all the operators overloaded for the\n `Tensor` class are carried over to variables.\n\n >>> w = tf.Variable([[1.], [2.]])\n >>> x = tf.constant([[3., 4.]])\n >>> tf.matmul(w, x)\n \n >>> tf.sigmoid(w + x)\n \n\n When building a machine learning model it is often convenient to distinguish\n between variables holding trainable model parameters and other variables such\n as a `step` variable used to count training steps. To make this easier, the\n variable constructor supports a `trainable=`\n parameter. `tf.GradientTape` watches trainable variables by default:\n\n >>> with tf.GradientTape(persistent=True) as tape:\n ... trainable = tf.Variable(1.)\n ... non_trainable = tf.Variable(2., trainable=False)\n ... x1 = trainable * 2.\n ... x2 = non_trainable * 3.\n >>> tape.gradient(x1, trainable)\n \n >>> assert tape.gradient(x2, non_trainable) is None # Unwatched\n\n Variables are automatically tracked when assigned to attributes of types\n inheriting from `tf.Module`.\n\n >>> m = tf.Module()\n >>> m.v = tf.Variable([1.])\n >>> m.trainable_variables\n (,)\n\n This tracking then allows saving variable values to\n [training checkpoints](https://www.tensorflow.org/guide/checkpoint), or to\n [SavedModels](https://www.tensorflow.org/guide/saved_model) which include\n serialized TensorFlow graphs.\n\n Variables are often captured and manipulated by `tf.function`s. This works the\n same way the un-decorated function would have:\n\n >>> v = tf.Variable(0.)\n >>> read_and_decrement = tf.function(lambda: v.assign_sub(0.1))\n >>> read_and_decrement()\n \n >>> read_and_decrement()\n \n\n Variables created inside a `tf.function` must be owned outside the function\n and be created only once:\n\n >>> class M(tf.Module):\n ... @tf.function\n ... def __call__(self, x):\n ... if not hasattr(self, \"v\"): # Or set self.v to None in __init__\n ... self.v = tf.Variable(x)\n ... return self.v * x\n >>> m = M()\n >>> m(2.)\n \n >>> m(3.)\n \n >>> m.v\n \n\n See the `tf.function` documentation for details.\n ", "desc": "See the [variable guide](https://tensorflow.org/guide/variable).", "type": "API"}, {"name": "tf.Variable.SaveSliceInfo", "docs": "Information on how to save this Variable as a slice.\n\n Provides internal support for saving variables as slices of a larger\n variable. This API is not public and is subject to change.\n\n Available properties:\n\n * full_name\n * full_shape\n * var_offset\n * var_shape\n ", "desc": "Information on how to save this Variable as a slice.", "type": "API"}, {"name": "tf.variable_creator_scope", "docs": "Scope which defines a variable creation function to be used by variable().\n\n variable_creator is expected to be a function with the following signature:\n\n ```\n def variable_creator(next_creator, **kwargs)\n ```\n\n The creator is supposed to eventually call the next_creator to create a\n variable if it does want to create a variable and not call Variable or\n ResourceVariable directly. This helps make creators composable. A creator may\n choose to create multiple variables, return already existing variables, or\n simply register that a variable was created and defer to the next creators in\n line. Creators can also modify the keyword arguments seen by the next\n creators.\n\n Custom getters in the variable scope will eventually resolve down to these\n custom creators when they do create variables.\n\n The valid keyword arguments in kwds are:\n\n * initial_value: A `Tensor`, or Python object convertible to a `Tensor`,\n which is the initial value for the Variable. The initial value must have\n a shape specified unless `validate_shape` is set to False. Can also be a\n callable with no argument that returns the initial value when called. In\n that case, `dtype` must be specified. (Note that initializer functions\n from init_ops.py must first be bound to a shape before being used here.)\n * trainable: If `True`, the default, GradientTapes automatically watch\n uses of this Variable.\n * validate_shape: If `False`, allows the variable to be initialized with a\n value of unknown shape. If `True`, the default, the shape of\n `initial_value` must be known.\n * caching_device: Optional device string describing where the Variable\n should be cached for reading. Defaults to the Variable's device.\n If not `None`, caches on another device. Typical use is to cache\n on the device where the Ops using the Variable reside, to deduplicate\n copying through `Switch` and other conditional statements.\n * name: Optional name for the variable. Defaults to `'Variable'` and gets\n uniquified automatically.\n dtype: If set, initial_value will be converted to the given type.\n If `None`, either the datatype will be kept (if `initial_value` is\n a Tensor), or `convert_to_tensor` will decide.\n * constraint: A constraint function to be applied to the variable after\n updates by some algorithms.\n * synchronization: Indicates when a distributed a variable will be\n aggregated. Accepted values are constants defined in the class\n `tf.VariableSynchronization`. By default the synchronization is set to\n `AUTO` and the current `DistributionStrategy` chooses\n when to synchronize.\n * aggregation: Indicates how a distributed variable will be aggregated.\n Accepted values are constants defined in the class\n `tf.VariableAggregation`.\n\n This set may grow over time, so it's important the signature of creators is as\n mentioned above.\n\n Args:\n variable_creator: the passed creator\n\n Yields:\n A scope in which the creator is active\n ", "desc": "Scope which defines a variable creation function to be used by variable().", "type": "API"}, {"name": "tf.VariableAggregation", "docs": "Indicates how a distributed variable will be aggregated.\n\n `tf.distribute.Strategy` distributes a model by making multiple copies\n (called \"replicas\") acting data-parallel on different elements of the input\n batch. When performing some variable-update operation, say\n `var.assign_add(x)`, in a model, we need to resolve how to combine the\n different values for `x` computed in the different replicas.\n\n * `NONE`: This is the default, giving an error if you use a\n variable-update operation with multiple replicas.\n * `SUM`: Add the updates across replicas.\n * `MEAN`: Take the arithmetic mean (\"average\") of the updates across replicas.\n * `ONLY_FIRST_REPLICA`: This is for when every replica is performing the same\n update, but we only want to perform the update once. Used, e.g., for the\n global step counter.\n ", "desc": "Indicates how a distributed variable will be aggregated.", "type": "API"}, {"name": "tf.VariableSynchronization", "docs": "Indicates when a distributed variable will be synced.\n\n * `AUTO`: Indicates that the synchronization will be determined by the current\n `DistributionStrategy` (eg. With `MirroredStrategy` this would be\n `ON_WRITE`).\n * `NONE`: Indicates that there will only be one copy of the variable, so\n there is no need to sync.\n * `ON_WRITE`: Indicates that the variable will be updated across devices\n every time it is written.\n * `ON_READ`: Indicates that the variable will be aggregated across devices\n when it is read (eg. when checkpointing or when evaluating an op that uses\n the variable).\n\n Example:\n >>> temp_grad=[tf.Variable([0.], trainable=False,\n ... synchronization=tf.VariableSynchronization.ON_READ,\n ... aggregation=tf.VariableAggregation.MEAN\n ... )]\n ", "desc": "Indicates when a distributed variable will be synced.", "type": "API"}, {"name": "tf.vectorized_map", "docs": "Parallel map on the list of tensors unpacked from `elems` on dimension 0.\n\n This method works similar to `tf.map_fn` but is optimized to run much faster,\n possibly with a much larger memory footprint. The speedups are obtained by\n vectorization (see [Auto-Vectorizing TensorFlow Graphs: Jacobians,\n Auto-Batching and Beyond](https://arxiv.org/pdf/1903.04243.pdf)). The idea\n behind vectorization is to semantically launch all the invocations of `fn` in\n parallel and fuse corresponding operations across all these invocations. This\n fusion is done statically at graph generation time and the generated code is\n often similar in performance to a manually fused version.\n\n Because `tf.vectorized_map` fully parallelizes the batch, this method will\n generally be significantly faster than using `tf.map_fn`, especially in eager\n mode. However this is an experimental feature and currently has a lot of\n limitations:\n - There should be no data dependency between the different semantic\n invocations of `fn`, i.e. it should be safe to map the elements of the\n inputs in any order.\n - Stateful kernels may mostly not be supported since these often imply a\n data dependency. We do support a limited set of such stateful kernels\n though (like RandomFoo, Variable operations like reads, etc).\n - `fn` has limited support for control flow operations.\n - `fn` should return nested structure of Tensors or Operations. However\n if an Operation is returned, it should have zero outputs.\n - The shape and dtype of any intermediate or output tensors in the\n computation of `fn` should not depend on the input to `fn`.\n\n Examples:\n ```python\n def outer_product(a):\n return tf.tensordot(a, a, 0)\n\n batch_size = 100\n a = tf.ones((batch_size, 32, 32))\n c = tf.vectorized_map(outer_product, a)\n assert c.shape == (batch_size, 32, 32, 32, 32)\n ```\n\n ```python\n # Computing per-example gradients\n\n batch_size = 10\n num_features = 32\n layer = tf.keras.layers.Dense(1)\n\n def model_fn(arg):\n with tf.GradientTape() as g:\n inp, label = arg\n inp = tf.expand_dims(inp, 0)\n label = tf.expand_dims(label, 0)\n prediction = layer(inp)\n loss = tf.nn.l2_loss(label - prediction)\n return g.gradient(loss, (layer.kernel, layer.bias))\n\n inputs = tf.random.uniform([batch_size, num_features])\n labels = tf.random.uniform([batch_size, 1])\n per_example_gradients = tf.vectorized_map(model_fn, (inputs, labels))\n assert per_example_gradients[0].shape == (batch_size, num_features, 1)\n assert per_example_gradients[1].shape == (batch_size, 1)\n ```\n\n Args:\n fn: The callable to be performed. It accepts one argument, which will have\n the same (possibly nested) structure as `elems`, and returns a possibly\n nested structure of Tensors and Operations, which may be different than\n the structure of `elems`.\n elems: A tensor or (possibly nested) sequence of tensors, each of which will\n be unpacked along their first dimension. The nested sequence of the\n resulting slices will be mapped over by `fn`. The first dimensions of all\n elements must broadcast to a consistent value; equivalently, each\n element tensor must have first dimension of either `B` or `1`, for some\n common batch size `B >= 1`.\n fallback_to_while_loop: If true, on failing to vectorize an operation,\n the unsupported op is wrapped in a tf.while_loop to execute the map\n iterations. Note that this fallback only happens for unsupported ops and\n other parts of `fn` are still vectorized. If false, on encountering an\n unsupported op, a ValueError is thrown. Note that the fallbacks can result\n in slowdowns since vectorization often yields speedup of one to two orders\n of magnitude.\n\n Returns:\n A tensor or (possibly nested) sequence of tensors. Each tensor packs the\n results of applying fn to tensors unpacked from elems along the first\n dimension, from first to last.\n\n Although they are less common as user-visible inputs and outputs, note that\n tensors of type `tf.variant` which represent tensor lists (for example from\n `tf.raw_ops.TensorListFromTensor`) are vectorized by stacking the list\n contents rather than the variant itself, and so the container tensor will\n have a scalar shape when returned rather than the usual stacked shape. This\n improves the performance of control flow gradient vectorization.\n\n Raises:\n ValueError: If vectorization fails and fallback_to_while_loop is False.\n ", "desc": "Parallel map on the list of tensors unpacked from `elems` on dimension 0.", "type": "API"}, {"name": "tf.version", "docs": "Public API for tf.version namespace.\n", "desc": "Public API for tf.version namespace.", "type": "API"}, {"name": "tf.where", "docs": "Returns the indices of non-zero elements, or multiplexes `x` and `y`.\n\n This operation has two modes:\n\n 1. **Return the indices of non-zero elements** - When only\n `condition` is provided the result is an `int64` tensor where each row is\n the index of a non-zero element of `condition`. The result's shape\n is `[tf.math.count_nonzero(condition), tf.rank(condition)]`.\n 2. **Multiplex `x` and `y`** - When both `x` and `y` are provided the\n result has the shape of `x`, `y`, and `condition` broadcast together. The\n result is taken from `x` where `condition` is non-zero\n or `y` where `condition` is zero.\n\n #### 1. Return the indices of non-zero elements\n\n Note: In this mode `condition` can have a dtype of `bool` or any numeric\n dtype.\n\n If `x` and `y` are not provided (both are None):\n\n `tf.where` will return the indices of `condition` that are non-zero,\n in the form of a 2-D tensor with shape `[n, d]`, where `n` is the number of\n non-zero elements in `condition` (`tf.count_nonzero(condition)`), and `d` is\n the number of axes of `condition` (`tf.rank(condition)`).\n\n Indices are output in row-major order. The `condition` can have a `dtype` of\n `tf.bool`, or any numeric `dtype`.\n\n Here `condition` is a 1-axis `bool` tensor with 2 `True` values. The result\n has a shape of `[2,1]`\n\n >>> tf.where([True, False, False, True]).numpy()\n array([[0],\n [3]])\n\n Here `condition` is a 2-axis integer tensor, with 3 non-zero values. The\n result has a shape of `[3, 2]`.\n\n >>> tf.where([[1, 0, 0], [1, 0, 1]]).numpy()\n array([[0, 0],\n [1, 0],\n [1, 2]])\n\n Here `condition` is a 3-axis float tensor, with 5 non-zero values. The output\n shape is `[5, 3]`.\n\n >>> float_tensor = [[[0.1, 0], [0, 2.2], [3.5, 1e6]],\n ... [[0, 0], [0, 0], [99, 0]]]\n >>> tf.where(float_tensor).numpy()\n array([[0, 0, 0],\n [0, 1, 1],\n [0, 2, 0],\n [0, 2, 1],\n [1, 2, 0]])\n\n These indices are the same that `tf.sparse.SparseTensor` would use to\n represent the condition tensor:\n\n >>> sparse = tf.sparse.from_dense(float_tensor)\n >>> sparse.indices.numpy()\n array([[0, 0, 0],\n [0, 1, 1],\n [0, 2, 0],\n [0, 2, 1],\n [1, 2, 0]])\n\n A complex number is considered non-zero if either the real or imaginary\n component is non-zero:\n\n >>> tf.where([complex(0.), complex(1.), 0+1j, 1+1j]).numpy()\n array([[1],\n [2],\n [3]])\n\n #### 2. Multiplex `x` and `y`\n\n Note: In this mode `condition` must have a dtype of `bool`.\n\n If `x` and `y` are also provided (both have non-None values) the `condition`\n tensor acts as a mask that chooses whether the corresponding\n element / row in the output should be taken from `x` (if the element in\n `condition` is `True`) or `y` (if it is `False`).\n\n The shape of the result is formed by\n [broadcasting](https://docs.scipy.org/doc/numpy/reference/ufuncs.html)\n together the shapes of `condition`, `x`, and `y`.\n\n When all three inputs have the same size, each is handled element-wise.\n\n >>> tf.where([True, False, False, True],\n ... [1, 2, 3, 4],\n ... [100, 200, 300, 400]).numpy()\n array([ 1, 200, 300, 4], dtype=int32)\n\n There are two main rules for broadcasting:\n\n 1. If a tensor has fewer axes than the others, length-1 axes are added to the\n left of the shape.\n 2. Axes with length-1 are streched to match the coresponding axes of the other\n tensors.\n\n A length-1 vector is streched to match the other vectors:\n\n >>> tf.where([True, False, False, True], [1, 2, 3, 4], [100]).numpy()\n array([ 1, 100, 100, 4], dtype=int32)\n\n A scalar is expanded to match the other arguments:\n\n >>> tf.where([[True, False], [False, True]], [[1, 2], [3, 4]], 100).numpy()\n array([[ 1, 100], [100, 4]], dtype=int32)\n >>> tf.where([[True, False], [False, True]], 1, 100).numpy()\n array([[ 1, 100], [100, 1]], dtype=int32)\n\n A scalar `condition` returns the complete `x` or `y` tensor, with\n broadcasting applied.\n\n >>> tf.where(True, [1, 2, 3, 4], 100).numpy()\n array([1, 2, 3, 4], dtype=int32)\n >>> tf.where(False, [1, 2, 3, 4], 100).numpy()\n array([100, 100, 100, 100], dtype=int32)\n\n For a non-trivial example of broadcasting, here `condition` has a shape of\n `[3]`, `x` has a shape of `[3,3]`, and `y` has a shape of `[3,1]`.\n Broadcasting first expands the shape of `condition` to `[1,3]`. The final\n broadcast shape is `[3,3]`. `condition` will select columns from `x` and `y`.\n Since `y` only has one column, all columns from `y` will be identical.\n\n >>> tf.where([True, False, True],\n ... x=[[1, 2, 3],\n ... [4, 5, 6],\n ... [7, 8, 9]],\n ... y=[[100],\n ... [200],\n ... [300]]\n ... ).numpy()\n array([[ 1, 100, 3],\n [ 4, 200, 6],\n [ 7, 300, 9]], dtype=int32)\n\n Note that if the gradient of either branch of the `tf.where` generates\n a `NaN`, then the gradient of the entire `tf.where` will be `NaN`. This is\n because the gradient calculation for `tf.where` combines the two branches, for\n performance reasons.\n\n A workaround is to use an inner `tf.where` to ensure the function has\n no asymptote, and to avoid computing a value whose gradient is `NaN` by\n replacing dangerous inputs with safe inputs.\n\n Instead of this,\n\n >>> x = tf.constant(0., dtype=tf.float32)\n >>> with tf.GradientTape() as tape:\n ... tape.watch(x)\n ... y = tf.where(x < 1., 0., 1. / x)\n >>> print(tape.gradient(y, x))\n tf.Tensor(nan, shape=(), dtype=float32)\n\n Although, the `1. / x` values are never used, its gradient is a `NaN` when\n `x = 0`. Instead, we should guard that with another `tf.where`\n\n >>> x = tf.constant(0., dtype=tf.float32)\n >>> with tf.GradientTape() as tape:\n ... tape.watch(x)\n ... safe_x = tf.where(tf.equal(x, 0.), 1., x)\n ... y = tf.where(x < 1., 0., 1. / safe_x)\n >>> print(tape.gradient(y, x))\n tf.Tensor(0.0, shape=(), dtype=float32)\n\n See also:\n\n * `tf.sparse` - The indices returned by the first form of `tf.where` can be\n useful in `tf.sparse.SparseTensor` objects.\n * `tf.gather_nd`, `tf.scatter_nd`, and related ops - Given the\n list of indices returned from `tf.where` the `scatter` and `gather` family\n of ops can be used fetch values or insert values at those indices.\n * `tf.strings.length` - `tf.string` is not an allowed dtype for the\n `condition`. Use the string length instead.\n\n Args:\n condition: A `tf.Tensor` of dtype bool, or any numeric dtype. `condition`\n must have dtype `bool` when `x` and `y` are provided.\n x: If provided, a Tensor which is of the same type as `y`, and has a shape\n broadcastable with `condition` and `y`.\n y: If provided, a Tensor which is of the same type as `x`, and has a shape\n broadcastable with `condition` and `x`.\n name: A name of the operation (optional).\n\n Returns:\n If `x` and `y` are provided:\n A `Tensor` with the same type as `x` and `y`, and shape that\n is broadcast from `condition`, `x`, and `y`.\n Otherwise, a `Tensor` with shape `[tf.math.count_nonzero(condition),\n tf.rank(condition)]`.\n\n Raises:\n ValueError: When exactly one of `x` or `y` is non-None, or the shapes\n are not all broadcastable.\n ", "desc": "Returns the indices of non-zero elements, or multiplexes `x` and `y`.", "type": "API"}, {"name": "tf.while_loop", "docs": "Repeat `body` while the condition `cond` is true. (deprecated argument values)\n\nDeprecated: SOME ARGUMENT VALUES ARE DEPRECATED: `(back_prop=False)`. They will be removed in a future version.\nInstructions for updating:\nback_prop=False is deprecated. Consider using tf.stop_gradient instead.\nInstead of:\nresults = tf.while_loop(c, b, vars, back_prop=False)\nUse:\nresults = tf.nest.map_structure(tf.stop_gradient, tf.while_loop(c, b, vars))\n\n`cond` is a callable returning a boolean scalar tensor. `body` is a callable\nreturning a (possibly nested) tuple, namedtuple or list of tensors of the same\narity (length and structure) and types as `loop_vars`. `loop_vars` is a\n(possibly nested) tuple, namedtuple or list of tensors that is passed to both\n`cond` and `body`. `cond` and `body` both take as many arguments as there are\n`loop_vars`.\n\nIn addition to regular Tensors or IndexedSlices, the body may accept and\nreturn TensorArray objects. The flows of the TensorArray objects will\nbe appropriately forwarded between loops and during gradient calculations.\n\nNote that `while_loop` calls `cond` and `body` *exactly once* (inside the\ncall to `while_loop`, and not at all during `Session.run()`). `while_loop`\nstitches together the graph fragments created during the `cond` and `body`\ncalls with some additional graph nodes to create the graph flow that\nrepeats `body` until `cond` returns false.\n\nFor correctness, `tf.while_loop()` strictly enforces shape invariants for\nthe loop variables. A shape invariant is a (possibly partial) shape that\nis unchanged across the iterations of the loop. An error will be raised\nif the shape of a loop variable after an iteration is determined to be more\ngeneral than or incompatible with its shape invariant. For example, a shape\nof [11, None] is more general than a shape of [11, 17], and [11, 21] is not\ncompatible with [11, 17]. By default (if the argument `shape_invariants` is\nnot specified), it is assumed that the initial shape of each tensor in\n`loop_vars` is the same in every iteration. The `shape_invariants` argument\nallows the caller to specify a less specific shape invariant for each loop\nvariable, which is needed if the shape varies between iterations. The\n`tf.Tensor.set_shape`\nfunction may also be used in the `body` function to indicate that\nthe output loop variable has a particular shape. The shape invariant for\nSparseTensor and IndexedSlices are treated specially as follows:\n\na) If a loop variable is a SparseTensor, the shape invariant must be\nTensorShape([r]) where r is the rank of the dense tensor represented\nby the sparse tensor. It means the shapes of the three tensors of the\nSparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here\nis the shape of the SparseTensor.dense_shape property. It must be the shape of\na vector.\n\nb) If a loop variable is an IndexedSlices, the shape invariant must be\na shape invariant of the values tensor of the IndexedSlices. It means\nthe shapes of the three tensors of the IndexedSlices are (shape, [shape[0]],\n[shape.ndims]).\n\n`while_loop` implements non-strict semantics, enabling multiple iterations\nto run in parallel. The maximum number of parallel iterations can be\ncontrolled by `parallel_iterations`, which gives users some control over\nmemory consumption and execution order. For correct programs, `while_loop`\nshould return the same result for any parallel_iterations > 0.\n\nFor training, TensorFlow stores the tensors that are produced in the\nforward inference and are needed in back propagation. These tensors are a\nmain source of memory consumption and often cause OOM errors when training\non GPUs. When the flag swap_memory is true, we swap out these tensors from\nGPU to CPU. This for example allows us to train RNN models with very long\nsequences and large batches.\n\nArgs:\n cond: A callable that represents the termination condition of the loop.\n body: A callable that represents the loop body.\n loop_vars: A (possibly nested) tuple, namedtuple or list of numpy array,\n `Tensor`, and `TensorArray` objects.\n shape_invariants: The shape invariants for the loop variables.\n parallel_iterations: The number of iterations allowed to run in parallel. It\n must be a positive integer.\n back_prop: (optional) Deprecated. False disables support for back\n propagation. Prefer using `tf.stop_gradient` instead.\n swap_memory: Whether GPU-CPU memory swap is enabled for this loop.\n maximum_iterations: Optional maximum number of iterations of the while loop\n to run. If provided, the `cond` output is AND-ed with an additional\n condition ensuring the number of iterations executed is no greater than\n `maximum_iterations`.\n name: Optional name prefix for the returned tensors.\n\nReturns:\n The output tensors for the loop variables after the loop. The return value\n has the same structure as `loop_vars`.\n\nRaises:\n TypeError: if `cond` or `body` is not callable.\n ValueError: if `loop_vars` is empty.\n\nExample:\n\n```python\ni = tf.constant(0)\nc = lambda i: tf.less(i, 10)\nb = lambda i: (tf.add(i, 1), )\nr = tf.while_loop(c, b, [i])\n```\n\nExample with nesting and a namedtuple:\n\n```python\nimport collections\nPair = collections.namedtuple('Pair', 'j, k')\nijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))\nc = lambda i, p: i < 10\nb = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))\nijk_final = tf.while_loop(c, b, ijk_0)\n```\n\nExample using shape_invariants:\n\n```python\ni0 = tf.constant(0)\nm0 = tf.ones([2, 2])\nc = lambda i, m: i < 10\nb = lambda i, m: [i+1, tf.concat([m, m], axis=0)]\ntf.while_loop(\n c, b, loop_vars=[i0, m0],\n shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])\n```\n\nExample which demonstrates non-strict semantics: In the following\nexample, the final value of the counter `i` does not depend on `x`. So\nthe `while_loop` can increment the counter parallel to updates of `x`.\nHowever, because the loop counter at one loop iteration depends\non the value at the previous iteration, the loop counter itself cannot\nbe incremented in parallel. Hence if we just want the final value of the\ncounter (which we print on the line `print(sess.run(i))`), then\n`x` will never be incremented, but the counter will be updated on a\nsingle thread. Conversely, if we want the value of the output (which we\nprint on the line `print(sess.run(out).shape)`), then the counter may be\nincremented on its own thread, while `x` can be incremented in\nparallel on a separate thread. In the extreme case, it is conceivable\nthat the thread incrementing the counter runs until completion before\n`x` is incremented even a single time. The only thing that can never\nhappen is that the thread updating `x` can never get ahead of the\ncounter thread because the thread incrementing `x` depends on the value\nof the counter.\n\n```python\nimport tensorflow as tf\n\nn = 10000\nx = tf.constant(list(range(n)))\nc = lambda i, x: i < n\nb = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1,\n[i], \"x:\"))\ni, out = tf.while_loop(c, b, (0, x))\nwith tf.compat.v1.Session() as sess:\n print(sess.run(i)) # prints [0] ... [9999]\n\n # The following line may increment the counter and x in parallel.\n # The counter thread may get ahead of the other thread, but not the\n # other way around. So you may see things like\n # [9996] x:[9987]\n # meaning that the counter thread is on iteration 9996,\n # while the other thread is on iteration 9987\n print(sess.run(out).shape)\n```", "desc": "Repeat `body` while the condition `cond` is true. (deprecated argument values)", "type": "API"}, {"name": "tf.xla", "docs": "Public API for tf.xla namespace.\n", "desc": "Public API for tf.xla namespace.", "type": "API"}, {"name": "tf.xla.experimental", "docs": "Public API for tf.xla.experimental namespace.\n", "desc": "Public API for tf.xla.experimental namespace.", "type": "API"}, {"name": "tf.xla.experimental.compile", "docs": "Builds an operator that compiles and runs `computation` with XLA. (deprecated)\n\nDeprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.\nInstructions for updating:\nxla.experimental.compile is deprecated. Consider using tf.function(jit_compile=True)\n\nNOTE: In eager mode, `computation` will have `@tf.function` semantics.\n\nArgs:\n computation: A Python function that builds a computation to apply to the\n input. If the function takes n inputs, 'inputs' should be a list of n\n tensors.\n\n `computation` may return a list of operations and tensors. Tensors must\n come before operations in the returned list. The return value of\n `compile` is a list of tensors corresponding to the tensors from the\n output of `computation`.\n\n All `Operation`s returned from `computation` will be executed when\n evaluating any of the returned output tensors.\n inputs: A list of inputs or `None` (equivalent to an empty list). Each input\n can be a nested structure containing values that are convertible to\n tensors. Note that passing an N-dimension list of compatible values will\n result in a N-dimension list of scalar tensors rather than a single Rank-N\n tensors. If you need different behavior, convert part of inputs to tensors\n with `tf.convert_to_tensor`.\n\nReturns:\n Same data structure as if computation(*inputs) is called directly with some\n exceptions for correctness. Exceptions include:\n 1) None output: a NoOp would be returned which control-depends on\n computation.\n 2) Single value output: A tuple containing the value would be returned.\n 3) Operation-only outputs: a NoOp would be returned which\n control-depends on computation.\n TODO(b/121383831): Investigate into removing these special cases.\n\nRaises:\n RuntimeError: if called when eager execution is enabled.\n\nKnown issues:\n When a tf.random operation is built with XLA, the implementation doesn't\n pass the user provided seed to the XLA compiler. As such, the XLA compiler\n generates a random number and uses it as a seed when compiling the\n operation. This implementation causes a violation of the Tensorflow\n defined semantics in two aspects. First, changing the value of the user\n defined seed doesn't change the numbers generated by the operation.\n Second, when a seed is not specified, running the program multiple times\n will generate the same numbers.", "desc": "Builds an operator that compiles and runs `computation` with XLA. (deprecated)", "type": "API"}, {"name": "tf.xla.experimental.jit_scope", "docs": "Enable or disable JIT compilation of operators within the scope.\n\n NOTE: This is an experimental feature.\n\n The compilation is a hint and only supported on a best-effort basis.\n\n Example usage:\n\n ```python\n with tf.xla.experimental.jit_scope():\n c = tf.matmul(a, b) # compiled\n with tf.xla.experimental.jit_scope(compile_ops=False):\n d = tf.matmul(a, c) # not compiled\n with tf.xla.experimental.jit_scope(\n compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):\n e = tf.matmul(a, b) + d # matmul is compiled, the addition is not.\n ```\n\n Example of `separate_compiled_gradients`:\n\n ```python\n # In the example below, the computations for f, g and h will all be compiled\n # in separate scopes.\n with tf.xla.experimental.jit_scope(\n separate_compiled_gradients=True):\n f = tf.matmul(a, b)\n g = tf.gradients([f], [a, b], name='mygrads1')\n h = tf.gradients([f], [a, b], name='mygrads2')\n ```\n\n Ops that are not in the scope may be clustered and compiled with ops in\n the scope with `compile_ops=True`, while the ops in the scope with\n `compile_ops=False` will never be compiled.\n\n For example:\n\n ```python\n # In the example below, x and loss may be clustered and compiled together,\n # while y will not be compiled.\n with tf.xla.experimental.jit_scope():\n x = tf.matmul(a, b)\n with tf.xla.experimental.jit_scope(compile_ops=False):\n y = tf.matmul(c, d)\n loss = x + y\n ```\n\n If you want to only compile the ops in the scope with `compile_ops=True`,\n consider adding an outer `jit_scope(compile_ops=False)`:\n\n ```python\n # In the example below, only x will be compiled.\n with tf.xla.experimental.jit_scope(compile_ops=False):\n with tf.xla.experimental.jit_scope():\n x = tf.matmul(a, b)\n y = tf.matmul(c, d)\n loss = x + y\n ```\n\n Args:\n compile_ops: Whether to enable or disable compilation in the scope.\n Either a Python bool, or a callable that accepts the parameter\n `node_def` and returns a python bool.\n separate_compiled_gradients: If true put each gradient subgraph into a\n separate compilation scope. This gives fine-grained control over which\n portions of the graph will be compiled as a single unit. Compiling\n gradients separately may yield better performance for some graphs.\n The scope is named based on the scope of the forward computation as well\n as the name of the gradients. As a result, the gradients will be compiled\n in a scope that is separate from both the forward computation, and from\n other gradients.\n Raises:\n RuntimeError: if called when eager execution is enabled.\n Yields:\n The current scope, enabling or disabling compilation.\n ", "desc": "Enable or disable JIT compilation of operators within the scope.", "type": "API"}, {"name": "tf.zeros", "docs": "Creates a tensor with all elements set to zero.\n\n See also `tf.zeros_like`, `tf.ones`, `tf.fill`, `tf.eye`.\n\n This operation returns a tensor of type `dtype` with shape `shape` and\n all elements set to zero.\n\n >>> tf.zeros([3, 4], tf.int32)\n \n\n Args:\n shape: A `list` of integers, a `tuple` of integers, or\n a 1-D `Tensor` of type `int32`.\n dtype: The DType of an element in the resulting `Tensor`.\n name: Optional string. A name for the operation.\n\n Returns:\n A `Tensor` with all elements set to zero.\n ", "desc": "Creates a tensor with all elements set to zero.", "type": "API"}, {"name": "tf.zeros_initializer", "docs": "Initializer that generates tensors initialized to 0.\n\n Initializers allow you to pre-specify an initialization strategy, encoded in\n the Initializer object, without knowing the shape and dtype of the variable\n being initialized.\n\n Examples:\n\n >>> def make_variables(k, initializer):\n ... return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),\n ... tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))\n >>> v1, v2 = make_variables(3, tf.zeros_initializer())\n >>> v1\n \n >>> v2\n \n >>> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.))\n (, >> tensor = tf.constant([[1, 2, 3], [4, 5, 6]])\n >>> tf.zeros_like(tensor)\n \n\n >>> tf.zeros_like(tensor, dtype=tf.float32)\n \n\n >>> tf.zeros_like([[1, 2, 3], [4, 5, 6]])\n \n\n Args:\n input: A `Tensor` or array-like object.\n dtype: A type for the returned `Tensor`. Must be `float16`, `float32`,\n `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`,\n `complex64`, `complex128`, `bool` or `string` (optional).\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` with all elements set to zero.\n ", "desc": "Creates a tensor with all elements set to zero.", "type": "API"}] \ No newline at end of file diff --git a/src/router.ts b/src/router.ts index a86e18f..49aec45 100644 --- a/src/router.ts +++ b/src/router.ts @@ -1,10 +1,12 @@ import Home from "@/views/Home.vue"; import About from "@/views/About.vue"; +import PrimarySymbol from "@/views/PrimarySymbol.vue"; import {createRouter, createWebHistory, RouteRecordRaw} from "vue-router"; const routes: Array = [ {path: '/', name: 'Home', component: Home}, - {path: '/about', name: 'About', component: About} + {path: '/about', name: 'About', component: About}, + {path: '/PrimarySymbol', name: 'PrimarySymbol', component: PrimarySymbol} ] const router = createRouter({ diff --git a/src/views/PrimarySymbol.vue b/src/views/PrimarySymbol.vue index e69de29..d57f94d 100644 --- a/src/views/PrimarySymbol.vue +++ b/src/views/PrimarySymbol.vue @@ -0,0 +1,98 @@ + + + + + + + \ No newline at end of file